Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - JeGX

Pages: [1] 2 3 ... 78
In this, the fourth post in the Windows Command-Line series, we'll discuss the new Windows Pseudo Console (ConPTY) infrastructure and API - why we built it, what it's for, how it works, how to use it, and more.

In the previous post in this series, we started to explore the internals of the Windows Console and Windows' Command-Line infrastructure. We also discussed many of Console's strengths and outlined its key weaknesses.

One of those weaknesses is that Windows tries to be "helpful" but gets in the way of alternative and 3rd party Console developers, service developers, etc. When building a Console or service, developers need to be able to access/supply the communication pipes through which their Terminal/service communicates with command-line applications. In the *NIX world, this isn't a problem because *NIX provides a "Pseudo Terminal" (PTY) infrastructure which makes it easy to build the communication plumbing for a Console or service, but Windows does not ...

... until now!

The new Win32 ConPTY API (formal docs to follow soon) is now available in recent Windows 10 Insider builds and corresponding Windows 10 Insider Preview SDK, and will ship in the next major release of Windows 10 (due sometime in fall/winter 2018).
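For readers unfamiliar with the *NIX side being contrasted here: Python's standard `pty` module exposes that pseudo-terminal plumbing directly, which makes the concept easy to demo. A POSIX-only sketch (the strings are illustrative, and this shows the *NIX PTY model, not the new ConPTY API):

```python
import os
import pty

# On POSIX systems, openpty() returns a (master, slave) pair of file
# descriptors: a terminal or service holds the master end, while the
# command-line application is attached to the slave end.
master_fd, slave_fd = pty.openpty()

# Anything the "application" writes to the slave end shows up on the
# master end, where the terminal can read and render it. (The line
# discipline may translate "\n" to "\r\n" on the way through.)
os.write(slave_fd, b"hello from the app\n")
output = os.read(master_fd, 1024)

os.close(master_fd)
os.close(slave_fd)
```

ConPTY gives Windows developers an equivalent pair of communication pipes, so third-party terminals and services no longer have to fight the Console's built-in plumbing.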


3D-Tech News Around The Web / Re: Vulkan Hardware Capability Viewer 1.7
« on: August 17, 2018, 03:04:10 PM »

Download zone updated:

3D-Tech News Around The Web / Dissecting a Shader Quine
« on: August 17, 2018, 02:47:58 PM »
I'm not normally very interested in quines, but today I saw one that really made me sit down and study it carefully. It was a quine in the form of a shadertoy.

For those that might be wondering, the impressive thing about this is that GLSL doesn't really have a concept of text, or fonts, or anything that could be used to put human-readable messages on the screen. The output of a pixel shader is just a color, and that's it. So how does this thing manage to display its own source code and be so compact? That's what I set out to learn.
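The usual trick behind such shaders is to pack tiny glyph bitmaps into numeric constants and decode one bit per pixel, since numbers are all a pixel shader has to work with. A Python sketch of that idea (the 3x5 font and glyph set here are made up for illustration, not taken from the shadertoy):

```python
# Each glyph is a 3x5 bitmap packed into a single 15-bit integer,
# row by row, top row in the most significant bits -- the same trick
# shader quines use to smuggle "text" into a language with no strings.
GLYPHS = {
    "H": 0b101_101_111_101_101,
    "I": 0b111_010_010_010_111,
}

def pixel(glyph_bits, x, y):
    """Return 1 if pixel (x, y) of the 3x5 glyph is lit, else 0.
    (0, 0) is the top-left corner; bit 14 holds that corner."""
    bit_index = (4 - y) * 3 + (2 - x)
    return (glyph_bits >> bit_index) & 1

def render(text):
    """ASCII-art rendering, standing in for a shader's per-pixel test."""
    rows = []
    for y in range(5):
        rows.append(" ".join(
            "".join("#" if pixel(GLYPHS[c], x, y) else "." for x in range(3))
            for c in text
        ))
    return "\n".join(rows)

print(render("HI"))
```

A fragment shader does the same lookup per fragment: map the pixel coordinate to a (glyph, x, y) triple, shift, mask, and output white or black.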


3D-Tech News Around The Web / MSI GeForce RTX 2080 Ti (GAMING X TRIO)
« on: August 17, 2018, 02:41:13 PM »
VideoCardz got some pictures of the upcoming Turing-based GeForce graphics card.

The SLI connector has been replaced by the new NVLink connector that offers more bandwidth.


In celebration of the launch of @nvidia Turing ray tracing hardware, I am making my three ray tracing books available as free PDFs. I have donated half the money people have sent to @hackthehood, a really neat organization.

- Ray-Tracing e-books @ Geeks3D
- Ray-Tracing e-books @ Google drive

The source code for the three e-books is available here:

3D-Tech News Around The Web / Beginner's Guide to Game AI
« on: August 14, 2018, 01:37:14 PM »
This article will introduce you to a range of introductory concepts used in artificial intelligence for games (or ‘Game AI’ for short) so that you can understand what tools are available for approaching your AI problems, how they work together, and how you might start to implement them in the language or engine of your choice.

We’re going to assume you have a basic knowledge of video games, and some grasp on mathematical concepts like geometry, trigonometry, etc. Most code examples will be in pseudo-code, so no specific programming language knowledge should be required.
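As a taste of the kind of tool the article covers, here is a minimal finite state machine for a hypothetical guard NPC, in Python rather than the article's pseudo-code. The states and thresholds are invented for illustration:

```python
# A guard NPC with three states and simple transition rules.
# update() is called once per game tick with fresh world data.
class Guard:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, health):
        if self.state == "patrol":
            if sees_player:
                self.state = "chase"
        elif self.state == "chase":
            if health < 25:
                self.state = "flee"       # too hurt to fight
            elif not sees_player:
                self.state = "patrol"     # lost the player
        elif self.state == "flee":
            if health >= 25:
                self.state = "patrol"     # recovered, resume patrol
        return self.state
```

Each state would normally also carry behavior (a patrol route, a chase steering routine); the FSM only decides which behavior runs.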


Real-time ray tracing represents a new milestone in gaming graphics. Game developers have relied on rasterization techniques which looked very, very good. However, rasterization good enough to achieve near-realism requires a substantial time investment on the part of developers and artists. NVIDIA RTX technology combined with Microsoft’s DXR ray tracing extension for DirectX 12 will enable greater realism while simplifying the development pipeline.

Game programmers eager to try out ray tracing can begin with the DXR tutorials developed by NVIDIA to assist developers new to ray tracing concepts. The tutorial is available in two parts. Part 1 lays the groundwork, with information on how to set up Windows 10 and your programming environment to get started. You’ll set up your first trial application, based on an existing DirectX 12 example.

Part 2 gets into the real meat of the tutorial. You’ll take the example set up in part 1 and add the necessary frameworks to enable the application to use ray tracing. The tutorial dives into how to turn an application using standard rasterization into an application which can either rasterize or ray trace a single triangle. While the application is simple, you’ll have the groundwork in place to begin adding ray tracing to more complex applications.
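DXR and the RT hardware hide the traversal and intersection work, but the core operation performed against that single triangle can be sketched in plain Python. This is a standard Möller-Trumbore ray/triangle test, not code from the tutorial:

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller-Trumbore intersection test.
    Returns the distance t along the ray, or None on a miss."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv_det         # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None   # reject hits behind the origin
```

In a DXR application this test runs in hardware (or driver fallback) during `TraceRay`, and your intersection/closest-hit shaders consume the resulting `t` and barycentrics.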


3D-Tech News Around The Web / Vulkan API specifications 1.1.83 released
« on: August 14, 2018, 08:51:52 AM »
Change log for August 13, 2018 Vulkan 1.1.83 spec update:

  * Update release number to 83.

Public Issues:

  * Use [%inline] directive for all SVGs to reduce file size (public pull
    request 734).
  * Convert XML `value` aliases into \<alias> tags (public pull request
  * Fix metadoc script showing non-selected extensions (public pull request
  * Reapply public pull request 742 to make
    graphics pipeline (public pull request 749).
  * Fix numerous typos related to accidental duplication of words (public
    pull request 760).
  * Fix `vk.xml` contact typos (public pull request 761).

Internal Issues:

  * Add images to the <<Standard sample locations>> table (internal issue
  * Add a definition of "`Inherited from`" precision in the
    <<spirvenv-precision-operation, Precision and Operation of SPIR-V
    Instructions>> section (internal issue 1314).
  * Clarify that both built-in and user-defined variables count against the
    location limits for shader interfaces in the
    <<interfaces-iointerfaces-locations, Location Assignment>> section
    (internal issue 1316).
  * Merge "`required`" capabilities into the <<spirvenv-capabilities-table,
    list of optional SPIR-V capabilities>> (internal issue 1320).
  * Relax the layout matching rules of descriptors referring to only a
    single aspect of a depth/stencil image, by reference to the new
    <<resources-image-layouts-matching-rule, Image Layout Matching Rules>>
    section (internal issue 1346).
  * Revert extension metadoc generator warning about name mismatches to a
    diagnostic, due to annoying warnings in build output for conscious
    choices we've made (internal issue 1351).

Other Issues:

  * Reserve bits for pending vendor extensions.
  * Make Vulkan consistent with SPIR-V regarding code:DepthReplacing and
    code:FragDepth in the <<interfaces-builtin-variables, Built-In
    Variables>> section.
  * Add missing ChangeLog entries for the previous three spec updates.


3D-Tech News Around The Web / (SIGGRAPH 2018) NVIDIA Turing Architecture
« on: August 14, 2018, 08:48:08 AM »
SIGGRAPH — NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing™ GPU architecture.

The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines — along with more powerful compute for simulation and enhanced rasterization — usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.

The company also unveiled its initial Turing-based products — the NVIDIA® Quadro® RTX™ 8000, Quadro RTX 6000 and Quadro RTX 5000 GPUs — which will revolutionize the work of some 50 million designers and artists across multiple industries.

“Turing is NVIDIA’s most important innovation in computer graphics in more than a decade,” said Jensen Huang, founder and CEO of NVIDIA, speaking at the start of the annual SIGGRAPH conference. “Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry.”

NVIDIA’s eighth-generation GPU architecture, Turing enables the world’s first ray-tracing GPU and is the result of more than 10,000 engineering-years of effort. By using Turing’s hybrid rendering capabilities, applications can simulate the physical world at 6x the speed of the previous Pascal™ generation.

To help developers take full advantage of these capabilities, NVIDIA has enhanced its RTX development platform with new AI, ray-tracing and simulation SDKs. It also announced that key graphics applications addressing millions of designers, artists and scientists are planning to take advantage of Turing features through the RTX development platform.

“This is a significant moment in the history of computer graphics,” said Jon Peddie, CEO of analyst firm JPR. “NVIDIA is delivering real-time ray tracing five years before we had thought possible.”

Real-Time Ray Tracing Accelerated by RT Cores
The Turing architecture is armed with dedicated ray-tracing processors called RT Cores, which accelerate the computation of how light and sound travel in 3D environments at up to 10 GigaRays a second. Turing accelerates real-time ray tracing operations by up to 25x that of the previous Pascal generation, and GPU nodes can be used for final-frame rendering for film effects at more than 30x the speed of CPU nodes.

“Cinesite is proud to partner with Autodesk and NVIDIA to bring Arnold to the GPU, but we never expected to see results this dramatic,” said Michele Sciolette, CTO of Cinesite. “This means we can iterate faster, more frequently and with higher quality settings. This will completely change how our artists work.”

AI Accelerated by Powerful Tensor Cores
The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations a second.

This level of performance powers AI-enhanced features for creating applications with powerful new capabilities. These include DLAA — deep learning anti-aliasing, which is a breakthrough in high-quality motion image generation — denoising, resolution scaling and video re-timing.

These features are part of the NVIDIA NGX™ software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.

Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second.

Developers can take advantage of NVIDIA’s CUDA 10, FleX and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments and special effects.

Quadro GPUs based on Turing will be initially available in the fourth quarter.


3D-Tech News Around The Web / NVIDIA Nsight Graphics 2018.4 released
« on: August 14, 2018, 08:40:43 AM »
NVIDIA Nsight Graphics is a standalone developer tool that enables you to debug, profile, and export frames built with Direct3D (11,12), Vulkan, OpenGL, OpenVR, and the Oculus SDK.

Today, NVIDIA announced Nsight Graphics 2018.4, the first public release of GPU Trace. This release also adds D3D12 Pixel history, supports NVIDIA’s Vulkan ray tracing extension, completes support for the D3D12 RS3 SDK, and improves performance for D3D11 and Vulkan debugging and profiling. Additionally, with this release, the Nsight family of tools is being re-versioned to a year dot release number versioning scheme.


Filament is a physically based rendering (PBR) engine for Android, Linux, macOS and Windows. This rendering engine was designed to be as small as possible and as efficient as possible on Android. The goal of Filament is to offer a set of tools and APIs for Android developers that will enable them to create high quality 2D and 3D rendering with ease.

- Native C++ API for Android, Linux, macOS and Windows
- Java/JNI API for Android, Linux, macOS and Windows

- OpenGL 4.1+ for Linux, macOS and Windows
- OpenGL ES 3.0+ for Android
- Vulkan 1.0 for Android, Linux, macOS (with MoltenVk) and Windows

- Clustered forward renderer
- Cook-Torrance microfacet specular BRDF
- Lambertian diffuse BRDF
- HDR/linear lighting
- Metallic workflow
- Clear coat
- Anisotropic lighting
- Approximated translucent (subsurface) materials (direct and indirect lighting)
- Cloth shading
- Normal mapping & ambient occlusion mapping
- Image-based lighting
- Physically-based camera (shutter speed, sensitivity and aperture)
- Physical light units
- Point light, spot light and directional light
- ACES-like tone-mapping
- Temporal dithering
- Dynamic resolution (on Android)
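Filament's physically-based camera maps shutter speed, sensitivity and aperture to an exposure value. A hedged Python sketch of the usual EV100 formulation (the function names and the 1.2 calibration constant are common conventions, not taken from Filament's source):

```python
import math

def ev100(aperture, shutter_time, iso):
    """Exposure value at ISO 100, from aperture (f-number),
    shutter time (seconds) and sensitivity (ISO)."""
    return math.log2((aperture ** 2) / shutter_time * 100.0 / iso)

def exposure_scale(aperture, shutter_time, iso):
    """Scale factor applied to scene luminance before tone mapping;
    1.2 is a typical lens/sensor calibration constant."""
    return 1.0 / (1.2 * 2.0 ** ev100(aperture, shutter_time, iso))

# "Sunny 16": f/16 at 1/100 s, ISO 100 gives EV100 of about 14.6.
sunny_16 = ev100(16.0, 0.01, 100.0)
```

Physical light units (lumens, candela) on the light sources then combine with this exposure to produce sensible on-screen brightness without hand-tuned multipliers.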


3D-Tech News Around The Web / Compute shaders: Introduction and more
« on: August 06, 2018, 10:57:55 AM »
A couple of months ago I went to the Munich GameCamp -- a bar camp where anyone can propose a talk, after which a quick vote decides which talks get accepted. I'd been asking around for ideas, and one developer mentioned I might want to cover compute shaders. So I went in without much hope of attracting many people, but I ended up with an overcrowded room -- roughly one quarter of all the game camp participants -- rambling about compute shaders for roughly an hour. Afterwards, the main question I got was: "Where can I read about this?", and I couldn't quite name a good introduction to compute shaders (there's Ryg's "A trip through the graphics pipeline", but that's already well past an introduction).


3D-Tech News Around The Web / Rules of optimization (Humus)
« on: August 06, 2018, 10:55:30 AM »
Rules of optimization:
1 - Design for performance from day 1
2 - Profile often
3 - Be vigilant on performance regressions
4 - Understand the data
5 - Understand the HW
6 - Help the compiler
7 - Verify your assumptions
8 - Performance is everyone's responsibility


What you need is a performance culture. Understand that performance is a core feature of your product. Poor performance is poor user experience, or in the case of games, possibly unplayable and unshippable.

When you design new systems, you need to think about performance from the start. Yes, you can hack away at prototypes and proof-of-concept implementations without being overly concerned about micro-optimizations. You can run with it for a while and get a feel for how things hold up. It’s fine, you don’t have all the answers at this point. But put some thought into what sort of workload it should be able to handle once it goes into production. Does it run fine with 100 items? How about 1,000? A million? Is it conceivable that there will be a million items? Don’t just integrate a prototype into mainline without first having thought about the scale it will run at eventually.

The idea isn’t that you should optimize everything down to the last SIMD instruction and last byte of memory in the first pass. But you should prepare your solution to be able to operate at the intended scale. If the code will only ever need a handful of objects, don’t obsess over performance. Will there be hundreds? See if you can simplify your math. Maybe group things sensibly. Will there be thousands? You should probably make sure it operates in batches, think about memory consumption, access patterns and bandwidth, perhaps separate active and inactive sets, or use hierarchical subdivision. Tens of thousands? Time to think about threading, perhaps? Millions? Start counting cycles, and you can essentially not have cache misses anymore.
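The "separate active and inactive sets" advice can be made concrete with a toy particle pool (the class, names and numbers are invented for illustration, not from the Humus post):

```python
# Instead of iterating every slot and testing an "alive" flag, keep
# live particles packed at the front of the array so the per-frame
# update touches only the active set.
class ParticlePool:
    def __init__(self, capacity):
        self.lifetimes = [0.0] * capacity
        self.active = 0                      # slots [0, active) are alive

    def spawn(self, lifetime):
        if self.active < len(self.lifetimes):
            self.lifetimes[self.active] = lifetime
            self.active += 1

    def update(self, dt):
        i = 0
        while i < self.active:
            self.lifetimes[i] -= dt
            if self.lifetimes[i] <= 0.0:
                # swap-remove: move the last active particle into this
                # slot and shrink the active range; re-process slot i
                self.active -= 1
                self.lifetimes[i] = self.lifetimes[self.active]
            else:
                i += 1
```

The update cost now scales with how many particles are alive, not with pool capacity, and the packed layout keeps memory access contiguous.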


3D-Tech News Around The Web / NVIDIA GeForce GTX 2080 / GTX 2070
« on: August 06, 2018, 09:17:59 AM »
Maxsun is a Chinese computer hardware maker, and the following picture might be the first picture of the upcoming GTX 2080...


3D-Tech News Around The Web / Pathtraced Depth of Field and Bokeh
« on: July 06, 2018, 10:44:19 AM »
How to implement depth of field and bokeh in a path tracer.
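The core idea is the thin-lens model: jitter each camera ray's origin across the lens aperture and aim it at the point where the original pinhole ray crosses the focal plane, so that plane stays sharp while everything else blurs into bokeh. A Python sketch (the function name and camera convention are assumptions, not from the article):

```python
import random

def dof_ray(pixel_dir, aperture_radius, focal_distance, rng):
    """Turn a pinhole camera ray into a thin-lens ray.
    Camera sits at the origin looking down +z; pixel_dir is the
    pinhole direction with z == 1."""
    # Point the pinhole ray would hit on the focal plane.
    focus = [c * focal_distance for c in pixel_dir]

    # Sample a point on the unit disk by rejection sampling.
    while True:
        lx = rng.uniform(-1.0, 1.0)
        ly = rng.uniform(-1.0, 1.0)
        if lx * lx + ly * ly <= 1.0:
            break

    # New origin on the lens; new direction still passes through focus.
    origin = [lx * aperture_radius, ly * aperture_radius, 0.0]
    direction = [f - o for f, o in zip(focus, origin)]
    return origin, direction
```

Averaging many such rays per pixel converges to the blur; the bokeh shape is simply the shape you sample the lens with (a disk here, a hexagon for a six-blade iris).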

3D-Tech News Around The Web / Understanding GPU context rolls
« on: July 06, 2018, 10:37:17 AM »
If you’ve ever heard the term “context roll” in the context of AMD GPUs — I’ll do that a lot in this post, sorry in advance — chances are you have an intuitive instinct for what the term means. But I’d bet you’re probably missing some of the detail about the actual GPU- and driver-side mechanics, maybe aren’t sure what “roll” means, or maybe you’re even thinking about a context at the wrong conceptual level.

Regardless, let me clear it up so that you know definitively what a context roll on our GPUs is, how they apply to the pipeline and how they’re managed, and what you can do to analyse them and find out if they’re a limiting factor in the performance of your game or application. Note that everything I write here applies to modern AMD Radeon GPUs based on GCN.


GeeXLab - english forum / Loops (099 + 104)
« on: July 06, 2018, 10:12:06 AM »
Demo updated to handle several loops.

Article + download links:

This demo shows how a NVIDIA GPU draws a quad by visualizing some GLSL built-in variables: gl_ThreadInWarpNV, gl_WarpIDNV and gl_SMIDNV (GL_NV_shader_thread_group support is required).

The article also discusses SMs, warps, threads and how the number of GPU cores can be computed.
Article + download links:
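The core-count arithmetic the article walks through is simply the number of SMs times the cores per SM. A sketch with GeForce GTX 1080 (Pascal) figures, chosen here as an example rather than taken from the article:

```python
# Total CUDA cores = streaming multiprocessors (SMs) x cores per SM.
# gl_SMIDNV gives a shader its SM index, which is how the demo can
# count SMs at runtime; the cores-per-SM figure comes from the
# architecture (128 for Pascal GeForce parts).
sm_count = 20            # GTX 1080 has 20 SMs
cores_per_sm = 128       # Pascal: 128 CUDA cores per SM
cuda_cores = sm_count * cores_per_sm
```

A warp is 32 threads on NVIDIA hardware, which is what `gl_ThreadInWarpNV` and `gl_WarpIDNV` let the demo visualize.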
