Show Posts


Messages - JeGX

Pages: [1] 2 3 ... 71
As announced today at the Game Developers Conference by CRC Press / Taylor & Francis Group (booth 2104, South Hall – I’m told there’s a discount code to be had), we’re indeed finally putting out a new edition of Real-Time Rendering. It should be out by SIGGRAPH if all goes well. Tomas, Naty, and I have been working on this edition since August 2016. We realized that, given how much has changed in area lighting, global illumination, and volume rendering, we could use some help, so we asked Angelo Pesce, Michał Iwanicki, and Sébastien Hillaire to join us, which they all kindly and eagerly did. Their contributions both considerably improved the book and got it done.

List of chapters:

1 Introduction
2 The Graphics Rendering Pipeline
3 The Graphics Processing Unit
4 Transforms
5 Shading Basics
6 Texturing
7 Shadows
8 Light and Color
9 Physically-Based Shading
10 Local Illumination
11 Global Illumination
12 Image-Space Effects
13 Beyond Polygons
14 Volumetric and Translucency Rendering
15 Non-Photorealistic Rendering
16 Polygonal Techniques
17 Curves and Curved Surfaces
18 Pipeline Optimization
19 Acceleration Algorithms
20 Efficient Shading
21 Virtual and Augmented Reality
22 Intersection Test Methods
23 Graphics Hardware
24 The Future


3D-Tech News Around The Web / What’s New in Radeon GPU Profiler 1.2
« on: March 22, 2018, 11:20:30 AM »
Analyze. Adjust. Accelerate. Radeon™ GPU Profiler is the first PC graphics tool that allows for low-level, built-in hardware thread tracing and provides detailed timing and occupancy information on Radeon™ GPUs.

The release of Radeon™ GPU Profiler 1.2 brings exciting new features including:
- RenderDoc interop
- detailed barrier codes
- improved frame overview


AMD is announcing Radeon ProRender support for real-time GPU acceleration of ray tracing techniques mixed with traditional rasterization-based rendering. This new process fuses the speed of rasterization with the physically-based realism that users of Radeon ProRender expect for their workflows. At a high level, the process achieves these results by using rasterization to draw basic structures and surfaces before a ray tracing pass is used to compute advanced light effects like reflections, shadows, and transparency. The flexibility of the process allows users to decide when these advanced light effects are actually necessary and add noticeable new dimensions of realism to their renders.


I will upload the current version of the micro Lua framework to the GeeXLab download page asap.

3D-Tech News Around The Web / Nmap 7.70 released
« on: March 22, 2018, 10:29:25 AM »

Nmap 7.70 released! Better service and OS detection, 9 new NSE scripts, new Npcap, and much more.

We're excited to make our first Nmap release of 2018: version 7.70! It includes hundreds of new OS and service fingerprints, 9 new NSE scripts (for a total of 588), a much-improved version of our Npcap Windows packet capturing library/driver, and service detection improvements to make -sV faster and more accurate. And those are just a few of the dozens of improvements described below.

- Download Nmap 7.70 (Windows)
- Nmap 7.70 announcement
- Nmap changelog
- Nmap homepage

Nvidia’s flagship Titan V graphics cards may have hardware gremlins causing them to spit out different answers to repeated complex calculations under certain conditions, according to computer scientists.

The Titan V is the Silicon Valley giant's most powerful GPU board available to date, and is built on Nvidia's Volta technology. Gamers and casual users will not notice any errors or issues; however, folks running scientific software may encounter the glitches.

One engineer told The Register that when he tried to run identical simulations of an interaction between a protein and enzyme on Nvidia’s Titan V cards, the results varied. After repeated tests on four of the top-of-the-line GPUs, he found two gave numerical errors about 10 per cent of the time. These tests should produce the same output values every time. On previous generations of Nvidia hardware, that generally was the case. On the Titan V, not so, we're told.
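The kind of run-to-run variation described above is easy to confuse with the benign variation that any parallel floating-point code exhibits: floating-point addition is not associative, so a parallel reduction that changes summation order can legitimately produce slightly different results for identical inputs. A quick Python illustration of that (unrelated to any specific hardware fault):

```python
# Floating-point addition is not associative: the order in which a
# parallel reduction combines terms can change the final bits.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)  # False on IEEE-754 doubles
print(left, right)    # 0.6000000000000001 0.6
```

What makes the Titan V reports notable is that, per the article, previous-generation cards gave bit-identical results for the same workload while the Titan V did not.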


For the last thirty years, almost all games have used the same general technique—rasterization—to render images on screen.  While the internal representation of the game world is maintained as three dimensions, rasterization ultimately operates in two dimensions (the plane of the screen), with 3D primitives mapped onto it through transformation matrices.  Through approaches like z-buffering and occlusion culling, games have historically strived to minimize the number of spurious pixels rendered, as normally they do not contribute to the final frame.  And in a perfect world, the pixels rendered would be exactly those that are directly visible from the camera.

Through the first few years of the new millennium, this approach was sufficient.  Normal and parallax mapping continued to add layers of realism to 3D games, and GPUs provided the ongoing improvements to bandwidth and processing power needed to deliver them.  It wasn’t long, however, before games began using techniques that were incompatible with these optimizations.  Shadow mapping allowed off-screen objects to contribute to on-screen pixels, and environment mapping required a complete spherical representation of the world.  Today, techniques such as screen-space reflection and global illumination are pushing rasterization to its limits, with SSR, for example, being solved with level design tricks, and GI being solved in some cases by processing a full 3D representation of the world using async compute.  In the future, the utilization of full-world 3D data for rendering techniques will only increase.

Today, we are introducing a feature to DirectX 12 that will bridge the gap between the rasterization techniques employed by games today, and the full 3D effects of tomorrow.  This feature is DirectX Raytracing.  By allowing traversal of a full 3D representation of the game world, DirectX Raytracing allows current rendering techniques such as SSR to naturally and efficiently fill the gaps left by rasterization, and opens the door to an entirely new class of techniques that have never been achieved in a real-time game.


DirectX Raytracing (DXR) is a new feature in DirectX 12 that opens the door to a new class of real-time graphics techniques for games.

We were thrilled to join Microsoft onstage for the announcement, which we followed with a presentation of our own work in developing practical real-time applications for this exciting new tech.

Rendering accurate reflections in real-time is difficult. There are many challenges and limitations when using the existing methods.
For the past few months, we've been exploring ways of combining DirectX Raytracing with existing methods to solve some of these challenges.
While much of our presentation went deep into the math for our solution, I would like to show you some examples of our new technique in action.

Practical real-time raytracing for games
Raytracing is not a new technique, but until recently it has been too computationally demanding to use in real-time games.
With modern GPUs, it's now possible to use rasterization for most of the rendering and a smaller amount of raytracing to enhance shadows, reflections, and other effects that are difficult to achieve with traditional techniques.
Our DXR tech demo runs in real-time on current GPU hardware and, because it builds on existing methods, it was relatively easy to implement into our DirectX 12 game engine.
We are proud to be one of the first developers chosen to work with DirectX Raytracing, and we are excited about the opportunities for this new API.
I am happy to announce that we will be using DirectX Raytracing in a new 3DMark benchmark test that we hope to release towards the end of the year.
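The core math behind ray-traced reflections like Futuremark describes is the classic mirror-reflection formula: a view ray with direction d hitting a surface with unit normal n bounces off in direction r = d − 2(d·n)n. A minimal Python sketch of that formula (an illustration only, not Futuremark's actual implementation):

```python
def reflect(d, n):
    # Mirror-reflect direction d about unit normal n: r = d - 2*(d.n)*n
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray travelling diagonally down onto a floor with an up-facing normal
# bounces back up at the same angle.
r = reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0))
print(r)  # (1.0, 1.0, 0.0)
```

In a DXR-style renderer, rays like this are spawned from rasterized G-buffer surfaces and traced against the full scene, which is what lets reflections include off-screen and dynamic objects that screen-space methods miss.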

- DirectX Raytracing tech demo @ youtube

Fact Sheet and FAQ

What is DirectX Raytracing?
DirectX Raytracing is a new feature in DirectX 12 that bridges the gap between today’s rasterization techniques and the full 3D effects of tomorrow. It opens the door to a new class of real-time graphics techniques for games. Find out more from Microsoft’s DirectX Developer Blog. 

What did Futuremark present at GDC?
Futuremark and Microsoft presented a joint session at GDC called "New Techniques for Accurate Real-Time Reflections." It was the first in a series of advanced graphics tutorials for graphics engineers, technical leads, and advanced technical artists.
In our talk, we presented the first practical real-time applications for DirectX Raytracing. We showed and explained a new technique that combines DirectX Raytracing (DXR) with existing methods to improve the quality and accuracy of real-time reflections in games.

What are the advantages of this new technique for reflections?
Our reflection technique uses DXR to enhance commonly used reflection techniques and to solve cases that couldn’t be handled previously, such as reflections of dynamic objects outside the main camera view, reflections on non-planar surfaces, and producing perspective-correct reflections for non-trivially shaped spaces.

What about performance?
Our demo runs in real-time on current GPU hardware. Raytracing is used selectively to enhance reflections that are difficult to achieve with traditional techniques. 

Which game engine are you using?
As with all our products, we use our own engine. Our DXR demo uses a modified version of the DirectX 12 engine we used for 3DMark Time Spy.

Can your technique be implemented in other game engines?
Yes. Our technique builds on existing techniques which are well known to game developers. It would be relatively straightforward to implement in modern game engines.

Does your demo use NVIDIA RTX or AMD’s raytracing solution?
No. Our raytracing demo uses Microsoft’s DirectX Raytracing (DXR) API.

GeeXLab - english forum / GeeXLab released
« on: March 21, 2018, 12:07:38 PM »
GeeXLab has been released for Windows 64-bit only. I will update the documentation with the new functions asap.


Version - 2018.03.21
+ added set_view_matrix_4x4() and set_projection_matrix_4x4() to gh_camera lib (lua / python).
+ added uniform_4x4f() to gh_gpu_program (lua / python).
+ added color_edit_4f_v2() and color_picker_4f_v2() to gh_imgui lib (lua / python).
+ added tree_node_leaf_v2() to gh_imgui lib (lua / python).
+ added set_cur_font_display_offset() and get_cur_font_display_offset() to gh_imgui lib (lua / python).
+ added project_3d_to_2d_v1() and project_3d_to_2d_v2() to gh_utils (lua / python).

Version - 2018.02.22
+ added joint_revolute_set_angular_limits() to gh_physx3 lib (lua)
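For readers curious what a 3D-to-2D projection helper like the new project_3d_to_2d functions typically does under the hood, here is a plain Python sketch of the standard pipeline (multiply a 4x4 matrix by the homogeneous point, perspective-divide to NDC, map to pixel coordinates). This is an illustration of the general technique only; the function name, parameters, and conventions here are assumptions, not the actual gh_utils API:

```python
def project_3d_to_2d(p, mvp, width, height):
    # Multiply a 4x4 row-major matrix by the homogeneous point (x, y, z, 1).
    x, y, z = p
    v = [sum(mvp[r][c] * pc for c, pc in enumerate((x, y, z, 1.0)))
         for r in range(4)]
    # Perspective divide to normalized device coordinates,
    # then map [-1, 1] to pixel coordinates (y flipped for screen space).
    ndc_x, ndc_y = v[0] / v[3], v[1] / v[3]
    return ((ndc_x * 0.5 + 0.5) * width,
            (1.0 - (ndc_y * 0.5 + 0.5)) * height)

# With an identity matrix, the origin lands in the center of the screen.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(project_3d_to_2d((0.0, 0.0, 0.0), identity, 800, 600))  # (400.0, 300.0)
```

Check the GeeXLab documentation for the real signatures once it is updated.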

GeeXLab - english forum / Re: horizontal scrollbar ImGui
« on: March 20, 2018, 07:09:30 PM »
I found a way to enable the horizontal scrollbar. Just define a new constant, window_horizontal_scrollbar, with value 2048 and use it in the window flags:

Code: [Select]
-- ImGui window flag values used by GeeXLab
local pos_size_flag_always = 1
local window_no_save_settings = 256
local window_horizontal_scrollbar = 2048

-- Combine flags with the bitwise OR operator (Lua 5.3+)
local window_flags = window_no_save_settings | window_horizontal_scrollbar

local is_open = gh_imgui.window_begin("myWindow", 200, 400, 0, 0, window_flags, pos_size_flag_always, pos_size_flag_always)

By default, the vertical scrollbar is enabled and the horizontal scrollbar is disabled.
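These window flags are distinct powers of two, so OR-ing them simply sets independent bits (adding distinct flags gives the same result on Lua versions without the `|` operator). A quick Python check of the values used above:

```python
# Flag values from the snippet above; each is a distinct power of two.
window_no_save_settings = 256       # 2**8
window_horizontal_scrollbar = 2048  # 2**11

# Bitwise OR of distinct power-of-two flags sets independent bits.
window_flags = window_no_save_settings | window_horizontal_scrollbar
print(window_flags)                                       # 2304
print((window_flags & window_horizontal_scrollbar) != 0)  # True
```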

GeeXLab - english forum / Re: horizontal scrollbar ImGui
« on: March 20, 2018, 03:48:18 PM »
Yes, it's weird indeed. Vertical and horizontal scrollbars are enabled. I will look at the ImGui functions; maybe it's a bug in GeeXLab. I'll let you know.

The GeeXLab codes for shift and ctrl:


KC_LEFT_CTRL    = 29

Other codes are available here: libs/lua/keyboard_codes.lua

ASRock, a well-known motherboard maker, has published a teaser about its first graphics card. The following hashtags have been used in this tweet:

#ASRock #PhantomGaming #PG #Phantom #Gaming #FAST #MYSTERIOUS #UNPREDICTABLE

- Teaser @ youtube
- source

3D-Tech News Around The Web / Vulkan Subgroup Tutorial
« on: March 16, 2018, 01:14:18 PM »
Subgroups are an important new feature in Vulkan 1.1 because they enable highly-efficient sharing and manipulation of data between multiple tasks running in parallel on a GPU. In this tutorial, we will cover how to use the new subgroup functionality.

Modern heterogeneous hardware like GPUs gain performance by using parallel hardware and exposing a parallel programming model to target this hardware. When a user wants to run N parallel tasks for their algorithm, a GPU would divide this N-sized workload between the compute units of that GPU. Each compute unit of the GPU is then capable of running one or more of these parallel tasks concurrently. In Vulkan, we refer to the data that runs on a single compute unit of a GPU as the local workgroup, and an individual parallel task as an invocation.

Vulkan 1.0 already exposes a method to share data between the invocations in a local workgroup via shared memory, which is exposed only in compute shaders. Shared memory allows for invocations within the local workgroup to share some data via memory that is faster to access than reading and writing to buffer memory, providing a mechanism to share data in a performance sensitive context.

Vulkan 1.1 goes further and introduces a mechanism to share data between the invocations that run in parallel on a single compute unit. These concurrently running invocations are named the subgroup. This subgroup allows for the sharing of data between a much smaller set of invocations than the local workgroup could, but at a significantly higher performance.

While shared memory is only available in compute shaders, sharing data via subgroup operations is allowed in all shader stages via optionally supported stages as we'll explain below.
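To make the subgroup idea concrete, here is a plain Python simulation of the semantics of a subgroup reduction (in the spirit of GLSL's subgroupAdd): each invocation receives the sum of the values held by all invocations in its own subgroup. The workgroup and subgroup sizes below are arbitrary illustration values:

```python
def subgroup_add(workgroup_values, subgroup_size):
    # Simulate a subgroup-wide additive reduction: every invocation in a
    # subgroup receives the sum over that subgroup only.
    out = []
    for start in range(0, len(workgroup_values), subgroup_size):
        sub = workgroup_values[start:start + subgroup_size]
        out.extend([sum(sub)] * len(sub))
    return out

# A local workgroup of 8 invocations split into two subgroups of 4.
print(subgroup_add([1, 2, 3, 4, 5, 6, 7, 8], 4))
# [10, 10, 10, 10, 26, 26, 26, 26]
```

On real hardware this exchange happens in registers between concurrently running invocations, which is why it is so much faster than going through shared or buffer memory.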


The Raspberry Pi team has updated the Raspbian operating system for the launch of the new Raspberry Pi 3 Model B+. This new version of Raspbian improves the support of different screen resolutions (large, medium and small screens) and brings a new option for supporting high-resolution screens: pixel doubling.

Enabling pixel doubling simply draws every pixel in the desktop as a 2×2 block of pixels on the screen, making everything exactly twice the size and resulting in a usable desktop on, for example, a MacBook Pro’s Retina display. We’ve included the option on the version of the desktop for the Pi as well, because we know that some people use their Pi with large-screen HDMI TVs.

As pixel doubling magnifies everything on the screen by a factor of two, it’s also a useful option for people with visual impairments.
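The operation itself is just a nearest-neighbor 2x upscale. A minimal Python sketch of the idea, treating an image as a list of pixel rows (an illustration, not the Raspbian implementation):

```python
def pixel_double(image):
    # Draw every source pixel as a 2x2 block: duplicate each pixel within
    # a row, then duplicate the whole row.
    out = []
    for row in image:
        doubled = [p for p in row for _ in range(2)]
        out.append(doubled)
        out.append(list(doubled))
    return out

print(pixel_double([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Because every logical pixel maps to an exact 2x2 block, text and UI elements stay crisp; nothing is interpolated.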


3D-Tech News Around The Web / Unreal Engine 4.19 released
« on: March 15, 2018, 08:56:20 AM »
Unreal Engine 4.19 enables you to step inside the creative process - the tools become almost transparent so you can spend more time creating. Improvements to rendering, physics, Landscape terrain, and many more systems mean you can build worlds that run better than ever before. The quality of life for the developers using our tools is always top of mind, so we continue to look at areas we can improve to put the power into the developers' hands.

Whether you are creating games, linear media, architectural visualizations, or product design tools, Unreal Engine 4.19 enables you to know exactly what the finished product will look like every step of the way. The new Live Link plugin seamlessly blends workflows between external content creation tools and Unreal Engine so you can see updates as you make changes to source content. And with the continued improvements to Sequencer, you can be the director with even more control of your scenes in real time.

When it comes to bringing the worlds in your imagination to life, the sky's the limit. Create breathtaking vistas in large, open worlds thanks to Landscape rendering optimizations. With the new Dynamic Resolution feature that adjusts the resolution as needed to achieve desired frame rates, those worlds will run smoother than ever before on PlayStation 4 and Xbox One.

It wouldn't be a complete release without a mountain of workflow and usability improvements, and Unreal Engine 4.19 does not disappoint in this respect. Working with Material layers and parameters is easier and more intuitive. Features for debugging Blueprints are more powerful with the ability to step over, step into, and step out. You can now save content folders as favorites. Animation tools have been improved with pinnable commands, ability to have multiple viewports, and lots more.


Thanks for your work!
I added a new key.lua file in the libs/lua/ folder. It will be shipped in the next update  ;)

3D-Tech News Around The Web / MagicaVoxel 0.99.1 released
« on: March 13, 2018, 04:03:32 PM »
MagicaVoxel is a free lightweight 8-bit voxel art editor and interactive path tracing renderer.


0.99.1 - 3/12/2018

    Renderer (hidden menu)
        Atmospheric Scattering Skydome : Rayleigh/Mie scattering
        Bladed Bokeh : for large depth of field
        Stretched Bloom Filter
        Grids : can change Spacing, Width, and Color
        Field of View (FOV) : range changed to 1-360
        Fixed some bugs : e.g. Bloom dark points
        More options are saved into the file; the format has changed as well
        Align Objects in Editor
        New object uses the last model size
        Fixed importing files with unicode paths
        Added default export and snapshot folders in config




