Recent Posts

91
Quote
Particle-based methods like smoothed particle hydrodynamics (SPH) are increasingly adopted for large-scale fluid simulation in interactive computer graphics. However, surface rendering for such dynamic particle sets is challenging: current methods either produce low-quality results, or they are time consuming. In this paper, we introduce a novel approach to render high-quality fluid surfaces in screen space. Our method combines the techniques of particle splatting, ray-casting, and surface-normal estimation. We apply particle splatting to accelerate the ray-casting process, estimate the surface normal using principal component analysis (PCA), and use a GPU-based technique to further accelerate our method. Our method can produce high-quality smooth surfaces while preserving thin and sharp details of large-scale fluids. The computation and memory cost of our rendering step depends only on the image resolution. These advantages make our method well suited for previewing or rendering hundreds of millions of particles interactively. We demonstrate the efficiency and effectiveness of our method by rendering various fluid scenarios with different-sized particle sets.
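
The PCA step is easy to sketch: the normal at a surface point is the eigenvector of the neighborhood's covariance matrix with the smallest eigenvalue. Below is a minimal CPU-side illustration of that idea (an assumption for clarity, not the paper's GPU implementation; all helper names are ours):

Code:
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3  normalize(Vec3 v)     { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

// Estimate a surface normal from neighboring particle positions with PCA.
// The normal is the covariance eigenvector with the smallest eigenvalue; we
// find the two dominant eigenvectors by power iteration and take their cross
// product. Assumes a non-empty, non-degenerate neighborhood.
Vec3 EstimateNormalPCA(const std::vector<Vec3>& neighbors)
{
    // Centroid of the neighborhood.
    Vec3 c{0, 0, 0};
    for (const Vec3& p : neighbors) { c.x += p.x; c.y += p.y; c.z += p.z; }
    const float inv = 1.0f / (float)neighbors.size();
    c = {c.x * inv, c.y * inv, c.z * inv};

    // 3x3 covariance matrix (symmetric).
    float C[3][3] = {};
    for (const Vec3& p : neighbors) {
        const Vec3 d = sub(p, c);
        const float v[3] = {d.x, d.y, d.z};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                C[i][j] += v[i] * v[j];
    }

    // Power iteration; 'ortho' (if given) deflates an already-found eigenvector.
    auto powerIter = [&C](Vec3 v, const Vec3* ortho) {
        for (int it = 0; it < 32; ++it) {
            if (ortho) {
                const float d = dot(v, *ortho);
                v = sub(v, {ortho->x * d, ortho->y * d, ortho->z * d});
            }
            v = normalize({C[0][0]*v.x + C[0][1]*v.y + C[0][2]*v.z,
                           C[1][0]*v.x + C[1][1]*v.y + C[1][2]*v.z,
                           C[2][0]*v.x + C[2][1]*v.y + C[2][2]*v.z});
        }
        return v;
    };

    const Vec3 e1 = powerIter({1, 0, 0}, nullptr); // largest-eigenvalue axis
    const Vec3 e2 = powerIter({0, 1, 0}, &e1);     // second axis
    return cross(e1, e2);                          // smallest axis => normal
}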

Links:
- http://www.jcgt.org/published/0007/01/02/
- http://www.jcgt.org/published/0007/01/02/paper.pdf
- http://www.jcgt.org/published/0007/01/02/paper-lowres.pdf
92
Quote
New-generation, explicit graphics APIs (Vulkan and DirectX 12) are more efficient and involve less CPU overhead. Part of the reason is that they don't check most errors. In old APIs (Direct3D 9, OpenGL) every function call was validated internally and returned a success or failure code, while a driver crash indicated a bug in the driver code. The new APIs, on the other hand, rely on the developer doing the right thing. Of course some functions still return an error code (especially those that allocate memory or create a resource), but those that record commands into a command buffer just return void. If you do something illegal, you can expect undefined behavior. You can use the Validation Layers / Debug Layer to do some checks, but otherwise everything may work fine on some GPUs, you may get incorrect results, or you may experience a driver crash or timeout (called a "TDR"). The good thing is that (contrary to old Windows XP) a crash inside the graphics driver doesn't cause a "blue screen of death" or a machine restart. The system just restarts the graphics hardware and driver, while your program receives the VK_ERROR_DEVICE_LOST code from one of the functions, like vkQueueSubmit. Unfortunately, you then don't know which specific draw call or other command caused the crash.
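
In Vulkan, that failure typically surfaces at submit time. A minimal sketch of handling it (vkQueueSubmit is the real Vulkan call; the HandleDeviceLost recovery routine is hypothetical):

Code:
#include <vulkan/vulkan.h>

void HandleDeviceLost(); // hypothetical recovery routine (assumption)

void SubmitFrame(VkQueue queue, const VkSubmitInfo& submitInfo, VkFence fence)
{
    // If the driver crashed or timed out (TDR), the OS restarts it and
    // Vulkan reports the loss here (or from a later vkWaitForFences).
    VkResult res = vkQueueSubmit(queue, 1, &submitInfo, fence);
    if (res == VK_ERROR_DEVICE_LOST)
    {
        // The device is gone: every object created from it must be
        // destroyed and recreated. Without extra instrumentation we
        // can't tell which command caused the crash.
        HandleDeviceLost();
    }
}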

NVIDIA proposed a solution for that: the NVIDIA Aftermath library. It lets you (among other things) record commands that write custom "marker" data to a buffer that survives a driver crash, so you can later read it and see which command was successfully executed last. Unfortunately, this library works only with NVIDIA graphics cards and only in D3D11 and D3D12.

I was looking for a similar solution for Vulkan. When I saw that Vulkan can "import" external memory, I thought that maybe I could use the vkCmdFillBuffer function to write an immediate value to such a buffer and implement the same logic this way. I then started experimenting with extensions: VK_KHR_get_physical_device_properties_2, VK_KHR_external_memory_capabilities, VK_KHR_external_memory, VK_KHR_external_memory_win32, VK_KHR_dedicated_allocation. I was basically trying to somehow allocate a piece of system memory and import it into Vulkan to write to it as a Vulkan buffer. I tried many things: CreateFileMapping + MapViewOfFile, HeapCreate + HeapAlloc and other ways, with various flags, but nothing worked for me. I also couldn't find any description or sample code showing how these extensions could be used on Windows to import system memory as a Vulkan buffer.

Everything changed when I learned that creating normal device memory and a buffer inside Vulkan is enough! They survive a driver crash, so their contents can be read later via a mapped pointer. No extensions required. I don't think this is guaranteed by the specification, but it seems to work on both AMD and NVIDIA cards.
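
A minimal sketch of the resulting technique (just the core idea under the assumptions above, not the actual API of the VulkanAfterCrash.h library linked below; buffer creation and synchronization are omitted):

Code:
#include <vulkan/vulkan.h>
#include <cstdint>

// 'markerBuffer' is assumed to be a VkBuffer bound to HOST_VISIBLE |
// HOST_COHERENT device memory that stays persistently mapped at 'mapped'.

// Recording: after each interesting command, write an increasing marker
// value into the buffer with an immediate-value fill.
void WriteMarker(VkCommandBuffer cmd, VkBuffer markerBuffer, uint32_t value)
{
    // Writes 4 bytes at offset 0. A real implementation could use one
    // slot per pass or draw instead of a single overwritten value.
    vkCmdFillBuffer(cmd, markerBuffer, 0, sizeof(uint32_t), value);
}

// After VK_ERROR_DEVICE_LOST: the memory contents survive the crash, so
// the last marker the GPU managed to write tells us roughly how far
// execution got before the hang.
uint32_t ReadLastMarker(const void* mapped)
{
    return *static_cast<const uint32_t*>(mapped);
}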

...

Links:
- http://asawicki.info/news_1677_debugging_vulkan_driver_crash_-_equivalent_of_nvidia_aftermath.html
- https://github.com/sawickiap/MISC/blob/master/VulkanAfterCrash.h
93
Intel has released a new graphics driver:

Quote
- DRIVER VERSION: 15.65.5.4982
- Windows Driver Store Version 23.20.16.4982

HIGHLIGHTS:
- Surviving Mars* Launch Driver
- Ni No Kuni II Revenant Kingdom* Launch Driver

- Download v4982 win64 @ Geeks3D
- Download @ Intel
- Release notes

The release notes say this driver exposes Vulkan 1.0.68, but according to GPU Caps Viewer, the actual Vulkan version is 1.0.66.
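
One way to check which version a driver actually reports (this is the standard Vulkan query, roughly what GPU Caps Viewer displays; instance creation and device enumeration are omitted):

Code:
#include <vulkan/vulkan.h>
#include <cstdio>

void PrintVulkanVersion(VkPhysicalDevice physicalDevice)
{
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(physicalDevice, &props);

    // apiVersion is the Vulkan version the driver implements, packed
    // into a single uint32_t.
    std::printf("Vulkan %u.%u.%u\n",
                VK_VERSION_MAJOR(props.apiVersion),
                VK_VERSION_MINOR(props.apiVersion),
                VK_VERSION_PATCH(props.apiVersion));
}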

This driver exposes the same OpenGL extensions and Vulkan features as v4944.
94
GeeXLab - english forum / Re: GeeXLab 0.20.x.x released
« Last post by JeGX on March 30, 2018, 11:58:03 AM »
I installed a fresh version of Tinker OS 2.0.5 and I tested GeeXLab 0.20.x for Tinker OS. I also launched glxgears. Both tools work fine in OpenGL (CPU mode only!). There is still the message "unable to load driver: rockchip_dri.so" but according to this reply, it's normal:
Quote
This error message is caused by the userspace application trying to use the OpenGL API.

On the Tinker Board, only OpenGL ES is hardware accelerated. The regular OpenGL runs in software mode only.
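
In practice, this means a hardware-accelerated app on the Tinker Board should create an OpenGL ES context, e.g. through EGL rather than GLX. A minimal sketch (real EGL calls; surface and context creation are omitted):

Code:
#include <EGL/egl.h>

bool BindGLES()
{
    // Use EGL to reach the hardware-accelerated GLES driver instead of
    // desktop OpenGL, which falls back to Mesa's software path here.
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, nullptr, nullptr))
        return false;

    // Request the OpenGL ES API for subsequent context creation.
    return eglBindAPI(EGL_OPENGL_ES_API) == EGL_TRUE;
}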


95
GeeXLab - english forum / Re: GeeXLab 0.20.x.x released
« Last post by Noobz347 on March 29, 2018, 08:41:57 PM »
Quote
Looks like something has changed in the OpenGL support in the latest Tinker OS 2.0.5. The last time I tested GeeXLab on Tinker OS, it was Tinker OS 2.0.4. I will install the new Tinker OS 2.0.5 and let you know ASAP.

Thank You!   :)
96
GeeXLab - english forum / Re: GeeXLab 0.20.x.x released
« Last post by JeGX on March 29, 2018, 08:33:02 PM »
Looks like something has changed in the OpenGL support in the latest Tinker OS 2.0.5. The last time I tested GeeXLab on Tinker OS, it was Tinker OS 2.0.4. I will install the new Tinker OS 2.0.5 and let you know ASAP.
97
GeeXLab - english forum / Re: GeeXLab 0.20.x.x released
« Last post by Noobz347 on March 29, 2018, 08:08:14 PM »
Hello,

I am attempting to install support for OpenGL 2.1 specifically.  I am installing and running this package:


File: GeeXLab
Version: 0.20.0.0
Added on: 2018.01.09
Platform: ASUS Tinker Board / TinkerOS 32-bit
Description: 3D programming with Lua, Python and GLSL (OpenGL 2.1 / 3.0)


What I get from the terminal window is this:


linaro@tinkerboard:~/Downloads/GeeXLab_FREE_tinkeros_gl$ dir
EULA.txt     _scene_init_log.txt  demos   gxlerror.xml  libs
GeeXLab        conf.xml          dylibs   gxlstart.xml  opengl21-test
README.txt     demo-shadertoy.sh    fonts   imgui.ini
_geexlab_log.txt  demo.sh          gl21.sh   init0.xml

linaro@tinkerboard:~/Downloads/GeeXLab_FREE_tinkeros_gl$ ./gl21.sh

libGL error: MESA-LOADER: failed to retrieve device information
libGL error: unable to load driver: rockchip_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: rockchip
libGL error: MESA-LOADER: failed to retrieve device information
libGL error: unable to load driver: rockchip_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: rockchip

linaro@tinkerboard:~/Downloads/GeeXLab_FREE_tinkeros_gl$

I've tried the 0.21.x version for ES and it works without issue, but the OpenGL version doesn't appear to function. I'm running the official TinkerOS v2.0.5 dated 2018/02/22. Any help you could provide is appreciated.

Thank You
98
 
  • Added support for the following GPUs:
    • Quadro GV100
  • Updated the driver to prevent G-SYNC from being enabled when a Quadro Sync board is installed. G-SYNC and Quadro Sync were always mutually incompatible features, and this change makes it easier to use G-SYNC capable monitors on Quadro Sync configurations, as it is now no longer necessary to manually disable G-SYNC.
  • Further improved the fix for occasional flicker when using the X driver's composition pipeline.  This was mostly fixed in 390.42, but now the fix should be more complete.
  • Improved compatibility with recent Linux kernels.
  • Fixed a string concatenation bug that caused libGL to accidentally try to create the directory "$HOME.nv" rather than "$HOME/.nv" in some cases where /tmp isn't accessible. (See the sketch after this list.)
  • Increased the version numbers of the GLVND libGL, libGLESv1_CM, libGLESv2, and libEGL libraries, to prevent concurrently installed non-GLVND libraries from taking precedence in the dynamic linker cache.
  • Fixed a bug which could cause X servers that export a Video Driver ABI earlier than 0.8 to crash when running X11 applications which call XRenderAddTraps().
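
The "$HOME.nv" item above is a classic path-joining mistake; an illustrative reconstruction of that class of bug (assumed for illustration, not NVIDIA's actual code):

Code:
#include <cstdio>
#include <cstdlib>

// Builds a per-user cache directory path from $HOME.
void BuildNvDir(char* out, size_t size)
{
    const char* home = std::getenv("HOME");
    if (!home) home = "";

    // Buggy concatenation: yields "$HOME.nv" (path separator missing).
    // std::snprintf(out, size, "%s.nv", home);

    // Fixed concatenation: yields "$HOME/.nv".
    std::snprintf(out, size, "%s/.nv", home);
}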
  http://de.download.nvidia.com/XFree86/Linux-x86_64/390.48/NVIDIA-Linux-x86_64-390.48.run
  http://it.download.nvidia.com/XFree86/Linux-x86/390.48/NVIDIA-Linux-x86-390.48.run
  http://us.download.nvidia.com/XFree86/FreeBSD-x86_64/390.48/NVIDIA-FreeBSD-x86_64-390.48.tar.gz
  http://fr.download.nvidia.com/solaris/390.48/NVIDIA-Solaris-x86-390.48.run
  http://cn.download.nvidia.com/XFree86/Linux-x86-ARM/390.48/NVIDIA-Linux-armv7l-gnueabihf-390.48.run
  http://jp.download.nvidia.com/XFree86/FreeBSD-x86/390.48/NVIDIA-FreeBSD-x86-390.48.tar.gz
 
99
3D-Tech News Around The Web / (GDC2018) Terrain Rendering in Far Cry 5
« Last post by JeGX on March 28, 2018, 06:30:31 PM »
Here is a 155-page PDF presentation of the techniques used in Far Cry 5 to render its terrain:

Quote
What I am covering will be divided into 6 sections.

First I’ll discuss how we render the terrain heightfield geometry.
I’ll start with the classic approaches that you may be familiar with.
And then I’ll discuss how we ported parts of this to a GPU pipeline.

Second I’ll discuss how we shade the resulting terrain mesh.

Then I’ll discuss shading specializations and optimizations that we made for cliffs.

After that I’ll discuss how we combined our base heightfield with additional unique geometry.

This will lead on to covering how we shade all our terrain geometry inputs in a single screen space pass.

Finally I’ll talk about how we used the terrain data on the GPU to enhance the rendering of other assets such as trees, grass and rocks.

Link: https://drive.google.com/file/d/1H6ouhi96pLg8WDlwXSGHFupPyZDv2MF6/view


100
ASRock's #MYSTERIOUS and #UNPREDICTABLE card is a Radeon RX 580 (with a dual-fan VGA cooler). According to the source, the ASRock RX 500 series should launch in Q2 2018.

source | via