
Messages - JeGX

Pages: 1 [2] 3 4 ... 68
3D-Tech News Around The Web / meshoptimizer: Mesh optimization library
« on: January 12, 2018, 03:21:05 PM »
meshoptimizer is a mesh optimization library that makes indexed meshes more GPU-friendly.

When GPU renders triangle meshes, various stages of the GPU pipeline have to process vertex and index data. The efficiency of these stages depends on the data you feed to them; this library provides algorithms to help optimize meshes for these stages, as well as algorithms to reduce the mesh complexity and storage overhead.

The library provides a C and C++ interface for all algorithms; you can use it from C/C++ or from other languages via FFI (such as P/Invoke).

- Latest version 0.7:

NVIDIA released GeForce 390.65 two days ago with security updates against Spectre-based attacks.

NVIDIA is providing an initial security update to mitigate aspects of Google Project Zero’s January 3, 2018 publication of novel information disclosure attacks that combine CPU speculative execution with known side channels.

The vulnerability has three known variants:

- Variant 1 (CVE-2017-5753): Mitigations are provided with the security update included in this bulletin. NVIDIA expects to work together with its ecosystem partners on future updates to further strengthen mitigations.

- Variant 2 (CVE-2017-5715): NVIDIA’s initial analysis indicates that the NVIDIA GPU Display Driver is potentially affected by this variant. NVIDIA expects to work together with its ecosystem partners on future updates for this variant.

- Variant 3 (CVE-2017-5754): At this time, NVIDIA has no reason to believe that the NVIDIA GPU Display Driver is vulnerable to this variant.

More information:

NVIDIA wants to put a 65-inch gaming display on your desk!

NVIDIA Supersizes PC Gaming with New Breed of Big Format Gaming Displays

PC gaming today makes the leap to a giant screen, with NVIDIA’s introduction of big format gaming displays, or BFGDs™.

Created in conjunction with NVIDIA hardware partners Acer, ASUS and HP, BFGDs integrate a high-end 65-inch, 4K 120Hz HDR display with NVIDIA® G-SYNC® technology together with NVIDIA SHIELD™, the world’s most advanced streaming device. The combination delivers a buttery-smooth gaming experience and your favorite media streaming applications — all on a giant screen.

“PC gamers expect high performance and instant response times, but, until now, they’ve been largely limited to traditional desktop displays,” said Matt Wuebbling, head of GeForce marketing at NVIDIA. “BFGDs change that. With NVIDIA’s latest technology built into these new displays, PC gamers can now experience their favorite titles in all the low-latency glory they deserve.”

At the heart of BFGDs is the latest G-SYNC HDR technology that synchronizes the display’s 120Hz refresh rate to that of the game at every moment in time. This G-SYNC Variable Refresh Rate technology delivers a highly responsive, smooth, tear-free, immersive gaming experience unmatched by any display of this size.

Additionally, the 4K HDR display features a full-array direct backlight, 1,000-nit peak luminance and DCI-P3 color gamut for the ultimate in visual quality.

Ultra-Low Latency Gaming
Nothing is more important to gamers than responsive gameplay. G-SYNC technology brings the ultra-low latency found in G-SYNC desktop gaming monitors to the BFGD when gaming directly on the PC, on Android™, on a console, or when using NVIDIA GameStream™ technology from a desktop or laptop gaming PC.

Big Screen Streaming
The integration of the Android TV™-based SHIELD into BFGDs allows gamers to easily switch between gaming and other forms of entertainment. The bundled SHIELD remote and game controller allow for easy navigation and access to all of the world’s biggest streaming apps, including Netflix, Amazon Video, YouTube™ and Hulu.

With support for the Google Assistant, the entire experience can be controlled simply by using your voice. G-SYNC HDR technology also supports video playback at native framerates, including popular 23.976, 24 and 25 FPS formats. This matches the screen’s refresh rate to the video source’s actual frame rate, eliminating interpolation and presenting the video content as it was intended to be viewed by the director.

Availability and Pricing
BFGDs are available for hands-on demos at CES at the NVIDIA gaming suite and ASUS ROG showcase room at the Wynn Las Vegas by appointment only, and in the HP booth at the Pepcom and Showstoppers press events on Monday and Tuesday evening, respectively. General availability is expected this summer when pricing and further specifications will be announced.

- Press release @ NVIDIA

- NVIDIA unveils 65-inch 4K 'Big Format Gaming Displays' with G-SYNC

- The ROG Swift PG65 Big Format Gaming Display brings 120Hz NVIDIA® G-SYNC to a huge 65-inch screen

GeeXLab - english forum / GeeXLab 0.20.x.x released
« on: January 09, 2018, 08:35:31 PM »
GeeXLab 0.20.x.x has been released today. Main new features:
- PhysX 3.4.1 + GRB (Windows + Linux)
- ASUS Tinker Board support


Release notes:

GeeXLab - english forum / Re: GeeXLab 0.19.x.x released
« on: January 09, 2018, 05:35:44 PM »
Sorry, there is no offline doc available.

But you can read these short articles to get an idea of how GeeXLab works:


You can also test and modify the learn code samples available in the full code sample pack (in the learn/ folder).

And the reference guide for all functions:

3D-Tech News Around The Web / AMD CPU/GPU Roadmap 2018
« on: January 09, 2018, 04:13:09 PM »
Roadmap for 2018:
- CPU: Ryzen desktop APU (CPU + GPU), Ryzen 2nd generation CPU
- GPU: Vega 64/56 for desktop and Radeon Vega for notebook

Roadmap for 2020:


Intel unveiled its new processors that pack a Radeon RX Vega GPU (the pGPU), an Intel HD 630 GPU (the iGPU) and a 4C/8T CPU on the same chip.

Intel Core i7-8809G
- CPU: Kaby Lake, 4C/8T, 3.1GHz (base) / 4.2GHz (boost)
- pGPU: Radeon RX Vega M GH, 24 CUs (1536 shader cores)
- pGPU clock speed: 1063MHz (base) / 1190MHz (boost)
- pGPU memory: 4GB HBM2, 1024-bit
- iGPU: Intel HD 630
- Package TDP: 100W

Intel Core i7-8709G
- CPU: Kaby Lake, 4C/8T, 3.1GHz (base) / 4.1GHz (boost)
- pGPU: Radeon RX Vega M GH, 24 CUs (1536 shader cores)
- pGPU clock speed: 1063MHz (base) / 1190MHz (boost)
- pGPU memory: 4GB HBM2, 1024-bit
- iGPU: Intel HD 630
- Package TDP: 100W

Intel Core i7-8706G
- CPU: Kaby Lake, 4C/8T, 3.1GHz (base) / 4.1GHz (boost)
- pGPU: Radeon RX Vega M GL, 20 CUs (1280 shader cores)
- pGPU clock speed: 931MHz (base) / 1101MHz (boost)
- pGPU memory: 4GB HBM2, 1024-bit
- iGPU: Intel HD 630
- Package TDP: 65W

Intel Core i5-8305G
- CPU: Kaby Lake, 4C/8T, 2.8GHz (base) / 3.8GHz (boost)
- pGPU: Radeon RX Vega M GL, 20 CUs (1280 shader cores)
- pGPU clock speed: 931MHz (base) / 1101MHz (boost)
- pGPU memory: 4GB HBM2, 1024-bit
- iGPU: Intel HD 630
- Package TDP: 65W


GeeXLab - english forum / Re: simple Lua framework over GeeXLab Lua API
« on: January 06, 2018, 07:33:26 PM »
Sorry for the delay, I've been working on the new version of GeeXLab and as usual it took a lot of time. I will add the small framework in the code sample...
And happy new year too!

3D-Tech News Around The Web / Vulkan API specifications 1.0.67 released
« on: January 06, 2018, 07:30:18 PM »
Change log for January 5, 2018 Vulkan 1.0.67 spec update:

  * Bump API patch number and header version number to 67 for this update.
  * Update copyright dates to 2018

Github Issues:

  * Fix texture lookup functions in `GL_KHR_vulkan_glsl` specification
    (public pull request 363).
  * Clarify the state that waited-on semaphores are left in when a call to
    flink:vkQueuePresentKHR fails (public issue 572).
  * Clean up descriptions of slink:VkObjectTablePushConstantEntryNVX and
    slink:VkObjectTableDescriptorSetEntryNVX (public issue 583).
  * Remove redundant flink:vkCmdSetDiscardRectangleEXT valid usage
    statements (public pull 586).
  * Make dynamic state array length valid usage statements implicit for
    flink:vkCmdSetViewportWScalingNV, flink:vkCmdSetDiscardRectangleEXT, and
    flink:vkCmdSetViewport (public pull 589).
  * Clarify meaning of window extent (0,0) in slink:VkSwapchainKHR for the
    Windows and X11 platforms, in their respective extensions (public issue
  * Allow flink:vkGetPastPresentationTimingGOOGLE to return
    ename:VK_INCOMPLETE (public issue 604).
  * Add synchronization valid usage statements to flink:vkAcquireNextImage
    (public pull 611).
  * Fix some broken external links and internal xrefs (public pull 613).
  * Clean up slink:VkViewport valid usage statements in the presence or
    absence of relevant extensions (public pull 623).
  * Remove
    token from VK_KHR_maintenance2 from the non-extension VU path for
    slink:VkGraphicsPipelineCreateInfo (public issue 628).
  * Miscellaneous minor markup fixes - extension name strings (public pull
    631), Notes (pull 633), queue names emitted by generator scripts (pull
    634), block formatting in slink:VkDescriptorUpdateTemplateEntryKHR (pull
    641), quotes and apostrophes (pull 643).
  * Miscellaneous minor grammar fixes (public pull 644).
  * Fix markup macros so usage like ptext:*Src* works (public pull 647).

Internal Issues:

  * Clarify in the `VK_KHR_surface` and `VK_KHR_swapchain` extensions that
    parameter combinations which aren't supported for normal images are also
    unsupported for presentable images, even if the parameter values are
    individually supported as reported by the surface capability queries
    (internal issue 1029).
  * Fixed XML typo in the valid value field of the pname:sType member of
    slink:VkPhysicalDeviceExternalMemoryHostPropertiesEXT (internal issue

Other Issues:

  * Add memory semantics validity rules to the <<spirvenv-module-validation,
    Validation Rules within a Module>> section of the SPIR-V environment
    appendix, and specify that sequential consistency is not supported.
    This forbids certain cases like "`Load+Release`" that we don't expect to
    ever be meaningful.
  * Document mapping of OpenGL Shading Language barriers to SPIR-V scope and
    semantics in the `GL_KHR_vulkan_glsl` specification.

New Extensions:

  * `VK_EXT_conservative_rasterization`


Geeks3D's GPU Tools / Re: GPU Caps Viewer 1.37.0 released
« on: January 03, 2018, 08:00:38 PM »
CPU OpenCL demos are supported by AMD and Intel only: NVIDIA does not provide a CPU OpenCL runtime and supports only GPU OpenCL. You can try installing the AMD OpenCL SDK on your system to get software (CPU) OpenCL.

Snelly is a system for physically-based SDF (signed distance field) pathtracing in a web browser.

A Snelly scene consists of 3d objects defined by a mathematical signed distance function (SDF) written in GLSL code. The SDF gives the distance to the surface from any given point in space, where the distance is positive in the exterior of the shape and negative in the interior (and of course zero on the surface). In each scene there can (currently) only exist three such specified objects, with different rendered material properties: a Metal, a Dielectric, and a general purpose plastic-like Surface ("uber" material). These three materials can freely intersect and embed one another.

It is generally quite challenging to find SDF functions which correspond to interesting shapes. We provide some example scenes (and this library of sample scenes will be added to over time). A lot of interesting examples and resources can be found on the web, for example at Shadertoy. Fractal surfaces in particular are quite easy to define as SDFs, as described, for example, here.


3D-Tech News Around The Web / Reverse Z Cheat Sheet
« on: January 03, 2018, 07:53:08 PM »
Just recently I started looking into ways to optimize depth buffer precision for large draw distances, and one specific approach caught my eye again: a technique now commonly referred to as Reverse Z. While it comes with just a few minor changes, the results can be quite considerable. So considerable, in fact, that everyone should just go ahead and use it.

The general idea itself is actually very simple: instead of mapping the interval between the near and far plane [zn,zf] to [0,1], a special projection matrix is constructed in a way that it is being mapped to [1,0] instead.

Why this actually increases the depth buffer precision is not directly obvious, but I will also not go into detail here. I’ve added some references to articles on this topic at the end of this post.


Here are some details, posted on the Intel India website (now removed), about the Intel Core i7 8809G, a quad-core CPU (4C/8T) that comes with two GPUs: an Intel HD 630 (iGPU) and a Radeon Vega (pGPU):


3D-Tech News Around The Web / Re: (Shadertoy) Turn n' Burn
« on: December 12, 2017, 08:14:01 PM »
I ported this demo to GeeXLab a few days ago. It's available in the full code sample pack here:


The demo needs several seconds (around 10 sec) before displaying the first frame...

3D-Tech News Around The Web / Re: HWiNFO32 + HWiNFO64 v5.70
« on: December 12, 2017, 08:00:17 PM »

Geeks3D's GPU Tools / Re: FurMark 1.19.1 released
« on: December 11, 2017, 07:33:20 PM »
Here's the problem in detail.

I have a laptop with an i7-8700K CPU, an NVIDIA GTX 1070 graphics card and Windows 10.
FurMark v1.19.1.0 shows me 140 FPS and 99% GPU usage when running the stress test.
But after I start a CPU stress test at the same time (the GPU burner built into the FurMark app, or Prime95),
the FPS and GPU usage immediately drop down to 108 FPS and 65% GPU usage.
In the meantime, no limit reasons are reported in FurMark.
But it comes back to normal, as when running FurMark alone, if I reset the CPU affinity in the device manager.

Yes, there is a minor thread affinity issue in the current FurMark, but it will be fixed in the next release.

Geeks3D's GPU Tools / Re: FurMark 1.19.0 released
« on: December 11, 2017, 07:24:18 PM »
Any plans in including artifact scanner into the new releases like ROG FurMark has? I would love this.

Currently, the artifact scanner feature is only for graphics cards makers (ASUS, EVGA, MSI, etc.). Maybe in the future...

Geeks3D's GPU Tools / Re: FurMark 1.19.1 released
« on: December 11, 2017, 07:22:12 PM »
Why is there no database in FurMark 1.19.1 for 4K resolution? I have a very decent score of 7555 on my RX Vega 56 and would like to know how that stands comparatively to other 4K resolution scores.

There is a link at the bottom of the FurMark scores page, but most of the scores seem fake. I just re-benchmarked my GeForce GTX 1080 and got a score of 2790 points (47 FPS) for preset P2160 (3840x2160). I haven't tested the RX Vega 56, but 7555 seems a very high score for a single Radeon. Maybe it's a CrossFire score?  For preset P1080, I get 7295 points (122 FPS) with the GTX 1080.

I will update this page with 4K scores as soon as possible.
