Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - JeGX

1
Quote
In this, the fourth post in the Windows Command-Line series, we'll discuss the new Windows Pseudo Console (ConPTY) infrastructure and API - why we built it, what it's for, how it works, how to use it, and more.

In the previous post in this series, we started to explore the internals of the Windows Console and Windows' Command-Line infrastructure. We also discussed many of Console's strengths and outlined its key weaknesses.

One of those weaknesses is that Windows tries to be "helpful" but gets in the way of alternative and 3rd party Console developers, service developers, etc. When building a Console or service, developers need to be able to access/supply the communication pipes through which their Terminal/service communicates with command-line applications. In the *NIX world, this isn't a problem because *NIX provides a "Pseudo Terminal" (PTY) infrastructure which makes it easy to build the communication plumbing for a Console or service, but Windows does not ...

... until now!


Quote
The new Win32 ConPTY API (formal docs to follow soon) is now available in recent Windows 10 Insider builds and corresponding Windows 10 Insider Preview SDK, and will ship in the next major release of Windows 10 (due sometime in fall/winter 2018).

Link: https://blogs.msdn.microsoft.com/commandline/2018/08/02/windows-command-line-introducing-the-windows-pseudo-console-conpty/
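
To get a feel for the API, here is a minimal C++ sketch based on the description in the post (it assumes a Windows 10 SDK recent enough to expose the ConPTY functions; error handling and the child-process plumbing are omitted):

Code:
// Minimal ConPTY sketch (assumes a Windows 10 SDK recent enough to expose the ConPTY API).
#include <windows.h>

int main()
{
    // Pipes the terminal uses to talk to the pseudo console.
    HANDLE inRead = nullptr,  inWrite = nullptr;   // terminal -> ConPTY (user input)
    HANDLE outRead = nullptr, outWrite = nullptr;  // ConPTY -> terminal (VT output)
    CreatePipe(&inRead, &inWrite, nullptr, 0);
    CreatePipe(&outRead, &outWrite, nullptr, 0);

    // The pseudo console reads input from inRead and writes output to outWrite.
    HPCON hPC = nullptr;
    COORD size = { 80, 25 };
    if (FAILED(CreatePseudoConsole(size, inRead, outWrite, 0, &hPC)))
        return 1;

    // ... attach hPC to a child process via PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE,
    //     then pump data: write keystrokes to inWrite, read VT sequences from outRead ...

    ClosePseudoConsole(hPC);
    return 0;
}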

2
3D-Tech News Around The Web / Dissecting a Shader Quine
« on: August 17, 2018, 02:47:58 PM »
Quote
I'm not normally very interested in quines, but today I saw one that really made me sit down and study it carefully. It was a quine in the form of a shadertoy.

For those that might be wondering, the impressive thing about this is that GLSL doesn't really have a concept of text, or fonts, or anything that could be used to put human-readable messages on the screen. The output of a pixel shader is just a color, and that's it. So how does this thing manage to display its own source code and be so compact? That's what I set out to learn.

Links:
- https://gpfault.net/posts/shader-quine.txt.html
- https://www.shadertoy.com/view/llcyD2
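
The usual way such shaders get text on screen, and presumably part of what the article dissects, is to pack tiny bitmap glyphs into integer constants and test one bit per pixel. A hedged illustration of that general trick (not the shadertoy's actual code), written as a GLSL fragment shader embedded in a C++ string:

Code:
// Illustrative only: render the letter "A" from a 3x5 bitmap packed into one integer constant.
static const char* kGlyphDemoFS = R"(#version 330 core
out vec4 fragColor;

const int GLYPH_A = 11245;  // 3x5 bitmap of 'A', one bit per texel, row 0 at the bottom

float glyph(int bits, vec2 uv)  // uv in [0,1)^2 over the glyph cell
{
    ivec2 cell = ivec2(floor(uv * vec2(3.0, 5.0)));  // which of the 3x5 texels
    int bit = cell.y * 3 + cell.x;                   // row-major bit index
    return float((bits >> bit) & 1);
}

void main()
{
    vec2 uv = gl_FragCoord.xy / vec2(60.0, 100.0);   // one big glyph in the lower-left corner
    float ink = (uv.x < 1.0 && uv.y < 1.0) ? glyph(GLYPH_A, uv) : 0.0;
    fragColor = vec4(vec3(ink), 1.0);
}
)";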



3
Quote
In celebration of the launch of @nvidia Turing ray tracing hardware, I am making my three ray tracing books available as free PDFs. I have donated half the money people have sent to @hackthehood, a really neat organization.
source

Download:
- Ray-Tracing e-books @ Geeks3D
- Ray-Tracing e-books @ Google drive


The source code for the three e-books is available here:
- https://github.com/petershirley/raytracinginoneweekend
- https://github.com/petershirley/raytracingthenextweek
- https://github.com/petershirley/raytracingtherestofyourlife
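
To give a flavor of the material, here is a small C++ sketch of the kind of ray/sphere intersection test the first book builds everything on (my own paraphrase of the classic formula, not code taken from the repositories):

Code:
// Ray/sphere intersection in the style of "Ray Tracing in One Weekend" (paraphrased, not the book's code).
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns the ray parameter t of the nearest hit, or -1.0 if the ray misses the sphere.
double hit_sphere(const Vec3& center, double radius, const Vec3& origin, const Vec3& dir)
{
    Vec3 oc = sub(origin, center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double discriminant = b * b - 4.0 * a * c;
    if (discriminant < 0.0)
        return -1.0;                                   // no real root: the ray misses
    return (-b - std::sqrt(discriminant)) / (2.0 * a); // nearest of the two intersections
}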



4
3D-Tech News Around The Web / Beginner's Guide to Game AI
« on: August 14, 2018, 01:37:14 PM »
Quote
This article will introduce you to a range of introductory concepts used in artificial intelligence for games (or ‘Game AI’ for short) so that you can understand what tools are available for approaching your AI problems, how they work together, and how you might start to implement them in the language or engine of your choice.

We’re going to assume you have a basic knowledge of video games, and some grasp on mathematical concepts like geometry, trigonometry, etc. Most code examples will be in pseudo-code, so no specific programming language knowledge should be required.

Link: https://www.gamedev.net/articles/programming/artificial-intelligence/the-total-beginners-guide-to-game-ai-r4942/
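
As an example of the kind of building block such guides typically cover, here is a tiny C++ sketch of the classic "seek" steering behavior (illustrative only, not code from the article):

Code:
// Classic "seek" steering behavior: steer toward a target at maximum speed (illustrative sketch).
#include <cmath>

struct Vec2 { float x, y; };

Vec2 seek(const Vec2& position, const Vec2& velocity, const Vec2& target, float maxSpeed)
{
    // Desired velocity points straight at the target, at full speed.
    Vec2 desired { target.x - position.x, target.y - position.y };
    float len = std::sqrt(desired.x * desired.x + desired.y * desired.y);
    if (len > 0.0f) {
        desired.x = desired.x / len * maxSpeed;
        desired.y = desired.y / len * maxSpeed;
    }
    // The steering force is the difference between desired and current velocity.
    return { desired.x - velocity.x, desired.y - velocity.y };
}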

5
Quote
Real-time ray tracing represents a new milestone in gaming graphics. Game developers have relied on rasterization techniques which looked very, very good. However, rasterization good enough to achieve near-realism requires a substantial time investment on the part of developers and artists. NVIDIA RTX technology combined with Microsoft’s DXR ray tracing extension for DirectX 12 will enable greater realism while simplifying the development pipeline.

Game programmers eager to try out ray tracing can begin with the DXR tutorials developed by NVIDIA to assist developers new to ray tracing concepts. The tutorial is available in two parts. Part 1 lays the groundwork, with information on how to set up Windows 10 and your programming environment to get started. You’ll set up your first trial application, based on an existing DirectX 12 example.

Part 2 gets into the real meat of the tutorial. You’ll take the example set up in part 1 and add the necessary frameworks to enable the application to use ray tracing. The tutorial dives into how to turn an application using standard rasterization into an application which can either rasterize or ray trace a single triangle. While the application is simple, you’ll have the groundwork in place to begin adding ray tracing to more complex applications.

Links
https://news.developer.nvidia.com/dx12-raytracing-tutorials/
https://developer.nvidia.com/rtx/raytracing/dxr/DX12-Raytracing-tutorial-Part-1
https://developer.nvidia.com/rtx/raytracing/dxr/DX12-Raytracing-tutorial-Part-2
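
Before starting the tutorials it is worth confirming that your device and SDK expose DXR at all. A small C++ sketch of that check (assuming an already created ID3D12Device and a Windows 10 SDK recent enough to define the DXR types):

Code:
// Query DirectX Raytracing (DXR) support on an existing D3D12 device (sketch; error handling omitted).
#include <d3d12.h>

bool SupportsRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}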




6
3D-Tech News Around The Web / (SIGGRAPH 2018) NVIDIA Turing Architecture
« on: August 14, 2018, 08:48:08 AM »
Quote
SIGGRAPH — NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing™ GPU architecture.

The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines — along with more powerful compute for simulation and enhanced rasterization — usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.

The company also unveiled its initial Turing-based products — the NVIDIA® Quadro® RTX™ 8000, Quadro RTX 6000 and Quadro RTX 5000 GPUs — which will revolutionize the work of some 50 million designers and artists across multiple industries.

“Turing is NVIDIA’s most important innovation in computer graphics in more than a decade,” said Jensen Huang, founder and CEO of NVIDIA, speaking at the start of the annual SIGGRAPH conference. “Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry.”

NVIDIA’s eighth-generation GPU architecture, Turing enables the world’s first ray-tracing GPU and is the result of more than 10,000 engineering-years of effort. By using Turing’s hybrid rendering capabilities, applications can simulate the physical world at 6x the speed of the previous Pascal™ generation.

To help developers take full advantage of these capabilities, NVIDIA has enhanced its RTX development platform with new AI, ray-tracing and simulation SDKs. It also announced that key graphics applications addressing millions of designers, artists and scientists are planning to take advantage of Turing features through the RTX development platform.

“This is a significant moment in the history of computer graphics,” said Jon Peddie, CEO of analyst firm JPR. “NVIDIA is delivering real-time ray tracing five years before we had thought possible.”


Real-Time Ray Tracing Accelerated by RT Cores
The Turing architecture is armed with dedicated ray-tracing processors called RT Cores, which accelerate the computation of how light and sound travel in 3D environments at up to 10 GigaRays a second. Turing accelerates real-time ray tracing operations by up to 25x that of the previous Pascal generation, and GPU nodes can be used for final-frame rendering for film effects at more than 30x the speed of CPU nodes.

“Cinesite is proud to partner with Autodesk and NVIDIA to bring Arnold to the GPU, but we never expected to see results this dramatic,” said Michele Sciolette, CTO of Cinesite. “This means we can iterate faster, more frequently and with higher quality settings. This will completely change how our artists work.”

AI Accelerated by Powerful Tensor Cores
The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations a second.

This level of performance powers AI-enhanced features for creating applications with powerful new capabilities. These include DLAA — deep learning anti-aliasing, which is a breakthrough in high-quality motion image generation — denoising, resolution scaling and video re-timing.

These features are part of the NVIDIA NGX™ software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.

Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second.

Developers can take advantage of NVIDIA’s CUDA 10, FleX and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments and special effects.

Availability
Quadro GPUs based on Turing will be initially available in the fourth quarter.


Links:
- https://nvidianews.nvidia.com/news/nvidia-reinvents-computer-graphics-with-turing-architecture
- https://www.nvidia.com/en-us/design-visualization/technologies/turing-architecture/
- https://www.nvidia.com/en-us/design-visualization/quadro-desktop-gpus/






7
3D-Tech News Around The Web / NVIDIA Nsight Graphics 2018.4 released
« on: August 14, 2018, 08:40:43 AM »
NVIDIA Nsight Graphics is a standalone developer tool that enables you to debug, profile, and export frames built with Direct3D (11,12), Vulkan, OpenGL, OpenVR, and the Oculus SDK.

Quote
Today, NVIDIA announced Nsight Graphics 2018.4, the first public release of GPU Trace. This release also adds D3D12 Pixel history, supports NVIDIA’s Vulkan ray tracing extension, completes support for the D3D12 RS3 SDK, and improves performance for D3D11 and Vulkan debugging and profiling. Additionally, with this release, the Nsight family of tools is being re-versioned to a year dot release number versioning scheme.

Links:
- https://developer.nvidia.com/nsight-graphics
- https://news.developer.nvidia.com/nvidia-announces-nsight-graphics-2018-4/
- https://www.youtube.com/watch?v=aOBU-8P8P8Y





9
Filament is a physically based rendering (PBR) engine for Android, Linux, macOS and Windows. This rendering engine was designed to be as small as possible and as efficient as possible on Android. The goal of Filament is to offer a set of tools and APIs for Android developers that will enable them to create high quality 2D and 3D rendering with ease.

APIs:
- Native C++ API for Android, Linux, macOS and Windows
- Java/JNI API for Android, Linux, macOS and Windows


Backends:
- OpenGL 4.1+ for Linux, macOS and Windows
- OpenGL ES 3.0+ for Android
- Vulkan 1.0 for Android, Linux, macOS (with MoltenVK) and Windows


Rendering
- Clustered forward renderer
- Cook-Torrance microfacet specular BRDF
- Lambertian diffuse BRDF
- HDR/linear lighting
- Metallic workflow
- Clear coat
- Anisotropic lighting
- Approximated translucent (subsurface) materials (direct and indirect lighting)
- Cloth shading
- Normal mapping & ambient occlusion mapping
- Image-based lighting
- Physically-based camera (shutter speed, sensitivity and aperture)
- Physical light units
- Point light, spot light and directional light
- ACES-like tone-mapping
- Temporal dithering
- FXAA or MSAA
- Dynamic resolution (on Android)


Links:
- https://github.com/google/filament
- https://google.github.io/filament/Filament.md.html
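
A rough sketch of what host-side setup looks like with the native C++ API, paraphrased from the project documentation (exact entry points may differ between releases; camera, materials and renderables are omitted):

Code:
// Minimal Filament host-side setup (sketch based on the project docs; details may vary by release).
#include <filament/Engine.h>
#include <filament/Renderer.h>
#include <filament/Scene.h>
#include <filament/SwapChain.h>
#include <filament/View.h>

void renderOneFrame(void* nativeWindow)
{
    filament::Engine*    engine    = filament::Engine::create();            // picks a default backend
    filament::SwapChain* swapChain = engine->createSwapChain(nativeWindow); // OS window/surface handle
    filament::Renderer*  renderer  = engine->createRenderer();
    filament::Scene*     scene     = engine->createScene();
    filament::View*      view      = engine->createView();
    view->setScene(scene);
    // (camera, viewport and renderable setup omitted for brevity)

    // beginFrame() may return false if the frame should be skipped.
    if (renderer->beginFrame(swapChain)) {
        renderer->render(view);
        renderer->endFrame();
    }

    filament::Engine::destroy(&engine); // destroys the engine and everything it created
}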



10
3D-Tech News Around The Web / Compute shaders: Introduction and more
« on: August 06, 2018, 10:57:55 AM »
Quote
A couple of months ago I went to the Munich GameCamp -- a bar camp where anyone can propose a talk, and then a quick vote is cast on which talks get accepted. I've been asking around for some ideas, and one developer mentioned I might want to cover compute shaders. So I went in, without much hope of attracting many people, but I ended up with an overcrowded room with roughly one quarter of all the game camp participants, rambling about compute shaders for roughly an hour. Afterwards, the main question I got was: "Where can I read about this?", and I couldn't quite nail a good introduction to compute shaders (there's Ryg's "a trip through the graphics pipeline", but that's already well past an introduction.)

Links:
1 - https://anteru.net/blog/2018/intro-to-compute-shaders
2 - https://anteru.net/blog/2018/more-compute-shaders
3 - https://anteru.net/blog/2018/even-more-compute-shaders/ 
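
If you want something hands-on next to the articles, a minimal OpenGL compute-shader dispatch looks roughly like this (a sketch assuming a GL 4.3+ context is current and a loader such as GLEW is initialized; the setup below is mine, not the articles'):

Code:
// Compile and dispatch a trivial GLSL compute shader (OpenGL 4.3+; context and loader assumed ready).
#include <GL/glew.h>

static const char* kComputeSrc = R"(#version 430
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer Data { float values[]; };
void main() {
    uint i = gl_GlobalInvocationID.x;
    values[i] = values[i] * 2.0;   // double every element
}
)";

GLuint buildAndDispatch(GLuint ssbo, GLuint numGroups)
{
    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &kComputeSrc, nullptr);
    glCompileShader(shader);                      // compile/link status checks omitted for brevity

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glDeleteShader(shader);

    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo); // bind the SSBO to binding point 0
    glUseProgram(program);
    glDispatchCompute(numGroups, 1, 1);           // each workgroup runs 64 invocations here
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    return program;
}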


11
3D-Tech News Around The Web / Rules of optimization (Humus)
« on: August 06, 2018, 10:55:30 AM »
Quote
Rules of optimization:
1 - Design for performance from day 1
2 - Profile often
3 - Be vigilant on performance regressions
4 - Understand the data
5 - Understand the HW
6 - Help the compiler
7 - Verify your assumptions
8 - Performance is everyone's responsibility

...

What you need is a performance culture. Understand that performance is a core feature of your product. Poor performance is poor user experience, or in the case of games, possibly unplayable and unshippable.

When you design new systems, you need to think about performance from the start. Yes, you can hack away at prototypes and proof of concept implementations without being overly concerned about micro-optimizations. You can run with it for a while and get a feel for how things hold up. It’s fine, you don’t have all the answers at this point. But put some thought into what sort of workload it should be able to handle once it goes into production. Does it run fine with 100 items? How about 1,000? A million? Is it conceivable that there will be a million items? Don’t just integrate a prototype into mainline without first having thought about the scale it will run at eventually.

The idea isn’t that you should optimize everything down to the last SIMD instruction and last byte of memory in the first pass. But you should prepare your solution to be able to operate at the intended scale. If the code will only ever need a handful of objects, don’t obsess over performance. Will there be hundreds? See if you can simplify your math. Maybe group things sensibly. Will there be thousands? You should probably make sure it operates in batches, think about memory consumption, access pattern and bandwidth, perhaps separate active and inactive sets, or hierarchical subdivision. Tens of thousands? Time to think about threading perhaps? Millions? Start counting cycles and you can essentially not have cache misses anymore.

Link: http://www.humus.name/index.php?page=News&ID=383

12
3D-Tech News Around The Web / NVIDIA GeForce GTX 2080 / GTX 2070
« on: August 06, 2018, 09:17:59 AM »
Maxsun is a Chinese computer hardware maker, and the following picture might be the first shot of the upcoming GTX 2080...

source: http://tieba.baidu.com/p/5827448822?red_tag=d1613003542
via: https://videocardz.com/newz/maxsun-teases-geforce-gtx-2080-icraft-graphics-card



13
3D-Tech News Around The Web / Pathtraced Depth of Field and Bokeh
« on: July 06, 2018, 10:44:19 AM »
How to implement depth of field and bokeh in a path tracer.

https://blog.demofox.org/2018/07/04/pathtraced-depth-of-field-bokeh/
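
The core idea behind path-traced depth of field is the thin-lens model: jitter each ray's origin over a disc-shaped aperture and aim it at the point the pinhole ray would hit on the focal plane; the shape you sample the aperture with is what gives bokeh its shape. A hedged C++ sketch of that ray generation (my paraphrase, not the blog's code):

Code:
// Thin-lens camera ray for depth of field (sketch). Sampling a non-circular aperture
// (hexagon, star, ...) is what produces differently shaped bokeh.
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 mul(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
static double length3(Vec3 a)     { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

struct Ray { Vec3 origin, dir; };

// pinholeDir: normalized direction the ideal pinhole camera would use for this pixel.
Ray thinLensRay(Vec3 camPos, Vec3 camRight, Vec3 camUp, Vec3 pinholeDir,
                double apertureRadius, double focalDistance, std::mt19937& rng)
{
    // The point the pinhole ray would hit on the focal plane; all lens samples aim at it.
    Vec3 focalPoint = add(camPos, mul(pinholeDir, focalDistance));

    // Uniform random point on the unit disc (rejection sampling), scaled by the aperture.
    std::uniform_real_distribution<double> u(-1.0, 1.0);
    double dx, dy;
    do { dx = u(rng); dy = u(rng); } while (dx * dx + dy * dy > 1.0);

    Vec3 origin = add(camPos, add(mul(camRight, dx * apertureRadius),
                                  mul(camUp,    dy * apertureRadius)));
    Vec3 dir = sub(focalPoint, origin);
    return { origin, mul(dir, 1.0 / length3(dir)) };
}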




14
3D-Tech News Around The Web / Understanding GPU context rolls
« on: July 06, 2018, 10:37:17 AM »
Quote
If you’ve ever heard the term “context roll” in the context of AMD GPUs — I’ll do that a lot in this post, sorry in advance — chances are you maybe have an intuitive instinct for what the term means. But I’d bet you’re probably missing some of the detail about the actual GPU- and driver-side mechanics, maybe aren’t sure what “roll” means, or maybe you’re even thinking about a context at the wrong conceptual level.

Regardless, let me clear it up so that you know definitively what a context roll on our GPUs is, how they apply to the pipeline and how they’re managed, and what you can do to analyse them and find out if they’re a limiting factor in the performance of your game or application. Note that everything I write here applies to modern AMD Radeon GPUs based on GCN.

Link: https://gpuopen.com/understanding-gpu-context-rolls/

15
This demo shows how an NVIDIA GPU draws a quad by visualizing some GLSL built-in variables: gl_ThreadInWarpNV, gl_WarpIDNV and gl_SMIDNV (GL_NV_shader_thread_group support is required).

The article also discusses SMs, warps and threads, and how the number of GPU cores can be computed.
 
Article + downloads links:
https://www.geeks3d.com/hacklab/20180705/demo-visualizing-nvidia-gl_threadinwarpnv-gl_warpidnv-and-gl_smidnv-gl_nv_shader_thread_group/
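
As an illustration of the idea (not the demo's exact shader), a fragment shader that maps the warp, SM and thread indices to color channels might look like this, shown here as a GLSL source string in C++:

Code:
// Illustrative fragment shader visualizing GL_NV_shader_thread_group built-ins (not the demo's shader).
static const char* kWarpVizFS = R"(#version 450
#extension GL_NV_shader_thread_group : require
out vec4 fragColor;
void main()
{
    // Normalize each built-in by the matching constant so it fits into a 0..1 color channel.
    float warp   = float(gl_WarpIDNV)       / float(gl_WarpsPerSMNV);
    float sm     = float(gl_SMIDNV)         / float(gl_SMCountNV);
    float thread = float(gl_ThreadInWarpNV) / float(gl_WarpSizeNV);
    fragColor = vec4(warp, sm, thread, 1.0);
}
)";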




16
Quote
Motivated by efficient GPU procedural texturing of the sphere, we describe several approximately equal-area cube-to-sphere projections. The projections not only provide low-distortion UV mapping, but also enable efficient generation of jittered point sets with O(1) nearest neighbor lookup. We provide GLSL implementations of the projections, with several real-time procedural texturing examples. Our numerical results summarize the various methods’ ability to preserve projected areas as well as their performance on both integrated and discrete GPUs. More broadly, the overall cube-to-sphere approach provides an underexplored avenue for adopting existing 2D grid-based methods to the sphere. As an example, we showcase fast Poisson disk sampling.

Links:
- http://www.jcgt.org/published/0007/02/01/paper-lowres.pdf
- http://www.jcgt.org/published/0007/02/01/
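
For context, the simplest cube-to-sphere mapping just pushes a point on a cube face radially onto the sphere; it is easy but far from equal-area, which is what the paper's projections aim to improve on. A small C++ sketch of that baseline (not one of the paper's methods):

Code:
// Baseline radial cube-to-sphere projection: push a point on the +Z cube face onto the unit sphere.
// This is NOT one of the paper's approximately equal-area projections, just the naive mapping.
#include <cmath>

struct Vec3 { double x, y, z; };

// u, v in [-1, 1] are coordinates on the +Z face of the unit cube.
Vec3 cubeFaceToSphere(double u, double v)
{
    Vec3 p { u, v, 1.0 };                                    // point on the cube face
    double invLen = 1.0 / std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z);
    return { p.x * invLen, p.y * invLen, p.z * invLen };     // radially project onto the sphere
}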


17
3D-Tech News Around The Web / OpenSubdiv 3.3.2 released
« on: June 29, 2018, 07:38:36 PM »
Quote
OpenSubdiv is a set of open source libraries that implement high performance subdivision surface (subdiv) evaluation on massively parallel CPU and GPU architectures. This codepath is optimized for drawing deforming subdivs with static topology at interactive framerates. The resulting limit surface matches Pixar's RenderMan to numerical precision.

OpenSubdiv is covered by the Apache license, and is free to use for commercial or non-commercial use. This is the same code that Pixar uses internally for animated film production. Our intent is to encourage high performance accurate subdiv drawing by giving away the "good stuff".

Links:
- https://github.com/PixarAnimationStudios/OpenSubdiv
- http://graphics.pixar.com/opensubdiv/docs/release_notes.html#release-3-3-2
- http://graphics.pixar.com/opensubdiv/docs/intro.html



18
3D-Tech News Around The Web / CPU RayTracer
« on: June 29, 2018, 07:30:31 PM »
Quote
Quick path tracer project written in C++

Features

- Lambert brdf for diffuse
- Cook-Torrance microfacet brdf for specular
- Uses OpenMP for multithreading
- Single-bounce atmospheric scattering model based on Elek
- Firefly reduction by limiting the roughness as the path bounces around
- Improved importance sampling for microfacet brdf
- Anti-aliasing
- Depth of field

Link: https://github.com/rorydriscoll/RayTracer


19
3D-Tech News Around The Web / Milton: infinite canvas paint program
« on: June 27, 2018, 10:32:37 AM »
Milton is an open source application that lets you paint, just paint, with an infinite level of detail thanks to an infinite canvas. Milton is an OpenGL app and is available for Windows and Linux.

Quote
There are no pixels; you can paint with (almost) infinite detail. It feels raster-based, but it works with vectors. It is not an image editor. It is not a vector graphics editor. It is a program that lets you draw, sketch and paint. There is no save button: your work is persistent, with unlimited undo.

Links:
- Downloads: https://github.com/serge-rgb/milton/releases
- https://github.com/serge-rgb/milton
- https://www.miltonpaint.com/


20
3D-Tech News Around The Web / Moebius Registration
« on: June 27, 2018, 10:16:35 AM »
Quote
This distribution contains code for constructing and registering conformal spherical parametrizations of water-tight, genus-zero surfaces. Specifically, it provides implementations for:

- Computing a conformal parametrization over the sphere
- Centering the parametrization with respect to Möbius inversions
- Tessellating the conformal parametrization to a regular equirectangular grid
- Performing fast spherical correlation to find the rotation/reflections that best align two centered parametrizations
- Using the registered parametrizations to compute dense correspondences from a source mesh to a target

Link: https://github.com/mkazhdan/MoebiusRegistration



Quote
Conformal parameterizations over the sphere provide high-quality maps between genus zero surfaces, and are essential for applications such as data transfer and comparative shape analysis. However, such maps are not unique: to define correspondence between two surfaces, one must find the Möbius transformation that best aligns two parameterizations—akin to picking a translation and rotation in rigid registration problems. We describe a simple procedure that canonically centers and rotationally aligns two spherical maps. Centering is implemented via elementary operations on triangle meshes in R3, and minimizes area distortion. Alignment is achieved using the FFT over the group of rotations. We examine this procedure in the context of spherical conformal parameterization, orbifold maps, non-rigid symmetry detection, and dense point-to-point surface correspondence.

Link: http://www.cs.jhu.edu/~misha/MyPapers/SGP18.pdf


