Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - JeGX

Pages: 1 2 [3] 4 5 ... 63
41
Quote
The content is running in real time in 4K. The frame rate depends on the settings used, and with some optimization it can run very fast on a single Titan or 1080 Ti (around 45 FPS, or perhaps I can hit 60). In SLI it is no problem to hit 60, and the same goes for playing it in HD on regular cards. I am using Unreal Engine and NVIDIA's VXGI solution for real-time lighting.

Youtube video: https://www.youtube.com/watch?v=bXouFfqSfxg

The setup used to render this demo:
- Dual GTX 1080 Ti in SLI
- Intel Core i7 5960X
- Asus X99-E WS
- Corsair 64GB DDR4 2400MHz
- Acer 28" Predator XB281HK G-Sync 4K


More information and high resolution screenshots: http://overview.artbyrens.com








44
Geeks3D's GPU Tools / GPU Caps Viewer 1.35.x released
« on: July 28, 2017, 02:11:53 PM »
GPU Caps Viewer 1.35.0.1 is available.

More information and download links:
http://www.geeks3d.com/20170728/gpu-caps-viewer-1-35-x-released/



45
The Radeon GPU Profiler (RGP) is a ground-breaking low-level optimization tool from AMD. It provides detailed timing information on Radeon GPUs using custom, built-in hardware thread tracing, allowing developers deep inspection of GPU workloads.

This unique tool generates easy-to-understand visualizations of how your DirectX® 12 and Vulkan® games interact with the GPU at the hardware level. Profiling a game is a quick and simple process using the Radeon Developer Panel and the public Crimson driver.

Supported GPUs:
- AMD Radeon R9 Fury and Nano series 
- AMD Radeon RX 400, RX 500 series

Links:
- Radeon-GPUProfiler @ github
- Radeon GPU Profiler DOWNLOADS


46
3D-Tech News Around The Web / Where do I start graphics programming?
« on: July 28, 2017, 08:21:44 AM »
Quote
Sometimes I get this question. And since it happens frequently enough, I decided I should write it publicly so that others can benefit, and also so I don't have to repeat myself every time.

In case you don’t know me, I’m self-taught. I started doing graphics programming at age 14. My first experience was Visual C++ 6.0; I dived straight into C, C++ and assembly and came out just fine. I learnt the basics by reading a random PDF I found back then, “Aprenda C++ como si estuviera en primero” (in Spanish), which was a very simple 87-page introduction to the language: what a variable is, what an enum is, how to do a basic hello world.

Then I learnt a lot by trial and error. I started with several open source Glide Wrappers which were popular at the time. I used the debugger a lot to see line by line how the variables evolved. I would often remove some snippet of code to see what would happen if I took it out. I also played a lot with XviD’s source code.

Back then I had the DirectX SDK 7.0 samples and documentation to play with and I learnt a lot from them. In particular, the mathematics of Direct3D lighting fascinated me. I ended up writing my own TnL C library in SSE assembly. It wasn’t very useful and I haven’t really updated it in more than a decade, but it helped a lot in laying the foundations for when I came into contact with Vertex & Pixel Shaders. I was shocked that what took me an entire summer vacation and lots of assembly instructions (e.g. a matrix multiplication) could be done in one line of vertex shader code.

...

Link: http://www.yosoygames.com.ar/wp/2017/07/where-do-i-start-graphics-programming/

47
3D-Tech News Around The Web / Vulkan Synchronization Examples
« on: July 28, 2017, 08:19:24 AM »
Lots of synchronization examples are available here:

https://github.com/KhronosGroup/Vulkan-Docs/wiki/Synchronization-Examples

Quote
Synchronization in Vulkan can be confusing. It takes a lot of time to understand, and even then it's easy to trip up on small details. Most common uses of Vulkan synchronization can be boiled down to a handful of use cases, though, and this page lists a number of examples. This may be expanded in the future to include other questions about synchronization.


Swapchain Image Acquire and Present:
Code: [Select]
VkAttachmentReference attachmentReference = {
    .attachment = 0,
    .layout     = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};

// Subpass containing first draw
VkSubpassDescription subpass = {
    ...
    .colorAttachmentCount = 1,
    .pColorAttachments = &attachmentReference,
    ...};

/* Only need a dependency coming in to ensure that the first
   layout transition happens at the right time.
   Second external dependency is implied by having a different
   finalLayout and subpass layout. */
VkSubpassDependency dependency = {
    .srcSubpass = VK_SUBPASS_EXTERNAL,
    .dstSubpass = 0,
    .srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .srcAccessMask = 0,
    .dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
    .dependencyFlags = 0};

VkAttachmentDescription attachmentDescription = {
    ...
    .loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
    .storeOp = VK_ATTACHMENT_STORE_OP_STORE,
    ...
    .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
    .finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};

VkRenderPassCreateInfo renderPassCreateInfo = {
    ...
    .attachmentCount = 1,
    .pAttachments    = &attachmentDescription,
    .subpassCount    = 1,
    .pSubpasses      = &subpass,
    .dependencyCount = 1,
    .pDependencies   = &dependency};

vkCreateRenderPass(...);

...

vkAcquireNextImageKHR(
    ...
    acquireSemaphore,   //semaphore
    ...
    &imageIndex);       //image index

VkPipelineStageFlags waitDstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
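// The submit below waits on acquireSemaphore at this stage (color attachment
// output), so rendering cannot write to the swapchain image before it has
// been acquired; graphicsSemaphore is signaled for the present that follows.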

VkSubmitInfo submitInfo = {
    ...
    .waitSemaphoreCount = 1,
    .pWaitSemaphores = &acquireSemaphore,
    .pWaitDstStageMask = &waitDstStageMask,
    ...
    .signalSemaphoreCount = 1,
    .pSignalSemaphores = &graphicsSemaphore};

vkQueueSubmit(..., &submitInfo, ...);

VkPresentInfoKHR presentInfo = {
    .waitSemaphoreCount = 1,
    .pWaitSemaphores = &graphicsSemaphore,
    ...};

vkQueuePresentKHR(..., &presentInfo);

48
Geeks3D's GPU Tools / Re: GPU Caps Viewer 1.34.x.x released
« on: July 28, 2017, 07:19:59 AM »
Wait for the new GPU Caps Viewer 1.35. I have fixed many small bugs in the Vulkan code and it should run better with the latest AMD Crimson drivers.

49
3D-Tech News Around The Web / Linux Graphics Demystified
« on: July 26, 2017, 07:16:53 PM »
A 49-page PDF about graphics on Linux is available here:
https://keyj.emphy.de/files/linuxgraphics_en.pdf



50
Geeks3D's GPU Tools / GPU Caps Viewer 1.34.4.0 released
« on: June 29, 2017, 08:21:24 PM »
GPU Caps Viewer 1.34.4.0 has been released and comes with better Vulkan API support.

More information + downloads:

http://www.geeks3d.com/20170327/gpu-caps-viewer-1-34-0-released/#20170629


51
GeeXLab - english forum / GeeXLab 0.16.0.3 released
« on: June 29, 2017, 04:35:45 PM »
A minor update of GeeXLab has been released for Windows 32-bit and Windows 64-bit platforms.

http://www.geeks3d.com/geexlab/downloads/


Quote
Version 0.16.0.3 - 2017.06.29
-----------------------------
! minor bugfixes and improvements in the Vulkan plugin.

52
Geeks3D's GPU Tools / Re: FurMark 1.19.0 released
« on: June 29, 2017, 12:59:13 PM »
I have no particular idea. FurMark 1.19.0 has exactly the same stress test module as FurMark 1.18.2. The background image has been updated, but I don't think it is the source of the additional stress.

53
3D-Tech News Around The Web / Re: MSI Kombustor v3.5.2.1
« on: June 29, 2017, 12:56:29 PM »
Is it a joke? Kombustor 3.5.2.1 was released two years ago. MSI marketing... ;D

54
GeeXLab - forum en français / Re: Updater un fragment shader
« on: June 18, 2017, 04:50:23 PM »
The new GeeXLab with live-coding of GLSL shaders from a file is available:
http://www.geeks3d.com/hacklab/20170618/geexlab-0-16-0-2-released-for-windows-and-linux-64-bit/


How to use the new live-coding technique:
http://www.geeks3d.com/hacklab/20170618/new-way-to-live-code-glsl-shaders-in-geexlab-0-16/


I have not done extensive testing of this new way of live-coding; I hope there will not be too many bugs.

55
GeeXLab - english forum / New way to live-code a GLSL shader
« on: June 18, 2017, 04:46:22 PM »
GeeXLab 0.16+ comes with a new, cross-platform way to live-code GLSL shaders:

http://www.geeks3d.com/hacklab/20170618/new-way-to-live-code-glsl-shaders-in-geexlab-0-16/



56
GeeXLab - english forum / GeeXLab 0.16.x released
« on: June 18, 2017, 04:41:43 PM »
A new version of GeeXLab is available for Windows 64-bit and Linux 64-bit.

Release highlights:
http://www.geeks3d.com/hacklab/20170618/geexlab-0-16-0-2-released-for-windows-and-linux-64-bit/

Quote
Version 0.16.0.2 - 2017.06.17
-----------------------------
+ added a new command line param to select the GPU for Vulkan (or Direct3D 12)
  demos: /gpu_index=x (see the example after this changelog)
! [WINDOWS] Vulkan plugin recompiled with latest Vulkan API headers (v1.0.51).
- [WINDOWS] removed from Tools menu entries related to network live updaters.
  Network live updaters tools are still available but in the nettools/ folder.
! [WINDOWS] GPU Shark utility moved to the gpushark/ sub-folder. For some very
  obscure reason, GPU Shark no longer works in the root folder of GeeXLab,
  but it works fine from the gpushark/ sub-folder.
* fixed a minor bug in the Lua kx framework for Vulkan.
+ added support of live-coding of GLSL shaders by editing the source
  code of a shader with any text editor.
+ added delay between two reloads of script files for live-coding of scripts.
! improved the performance of the Vulkan renderer.
! improved the support of AMD GPUs in the Vulkan renderer.
* fixed a bug in the Vulkan renderer in the creation of swapchain that led
  to a crash on NVIDIA GPUs.
+ added an automatic call to wait_for_gpu after all INIT scripts.
  Useful with Vulkan or Direct3D 12 renderers.
+ added support of render targets in the Vulkan renderer.
! updated gh_texture.create_2d_from_rt() internals.
! [WINDOWS] updated the GPU monitoring plugin with NVIDIA GeForce GT 1030.
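
A minimal usage sketch of the new /gpu_index parameter (the index value and the combination with /demofile are just illustrative, following the command line syntax used elsewhere in this thread):

Code: [Select]
$ GeeXLab /gpu_index=0 /demofile="..."

Here 0 is assumed to select the first GPU enumerated by the Vulkan (or Direct3D 12) plugin.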

57
GeeXLab - forum en français / Re: Updater un fragment shader
« on: June 16, 2017, 12:02:10 PM »
Actually, GeeXLab does not directly handle window resizing. On a window resize, GeeXLab calls a script of type SIZE and it is up to the demo developer to put the required code in it. So everything depends on the effect you want, and on whether you use an orthographic camera (as in your example) or a perspective camera. If the script is empty, the resize will not be handled at all.

58
GeeXLab - forum en français / Re: Updater un fragment shader
« on: June 15, 2017, 01:34:37 PM »
I would have been surprised if that thing worked properly! I will take a look at the rotten code of the network live-coding interface one of these days.

Never mind: I am currently adding live-coding support for shaders from a text editor and it is starting to work. As soon as it is solid in the Windows version, I will compile a new Linux build. Maybe for this weekend :P

Quote
While I'm at it, a question: if a frame script is called with "update_from_file_every_frame="1"" and the demo runs at 60 FPS, does that mean 60 disk accesses per second? In that case I guess it would be better to put the frame script on a ramdrive before opening it with GeeXLab and a text editor, so that both point to RAM instead of the disk. Otherwise, an option like "number_of_frames_between_updates="60"" or "number_of_seconds_between_updates="1"" could be added, with the possibility to change the integer to avoid too many disk accesses.

Yes, in the current version it is rather brute-force: for a scene running at 60 FPS, the script is reloaded 60 times per second. I am going to add a small configurable delay between two reloads.

59
GeeXLab - forum en français / Re: Updater un fragment shader
« on: June 13, 2017, 10:00:17 PM »
I will look at live-coding shaders from a text editor ASAP! I like this simple and cross-platform way of tweaking shaders; that is actually why I implemented live-coding of scripts from files.

For now, to live-code shaders on Linux there is another solution: use the network live-coding tools with Wine. I tested this technique in the early days of GeeXLab and, at the time, it worked well. In the GeeXLab/LiveUpdaters_Wine/ directory you will find the network_live_updater_glsl.exe binary.

If I remember correctly, you first launch GeeXLab with the option to start the live-coding TCP server:

Code: [Select]
$ GeeXLab /start_tcpip_server /demofile="..."

Then you launch network_live_updater_glsl.exe with Wine:

Code: [Select]
$ wine ./network_live_updater_glsl.exe

Now you can click on "Connect to GeeXLab" and normally, from there, you get the list of all the GPU programs. You select a GPU program and you can live-code it.

I just did a quick test on Windows and it works. Let me know how it goes on Linux...

60
Boost.Compute is a GPU/parallel-computing library for C++ based on OpenCL.

The core library is a thin C++ wrapper over the OpenCL API and provides access to compute devices, contexts, command queues and memory buffers.

On top of the core library is a generic, STL-like interface providing common algorithms (e.g. transform(), accumulate(), sort()) along with common containers (e.g. vector<T>, flat_set<T>). It also features a number of extensions including parallel-computing algorithms (e.g. exclusive_scan(), scatter(), reduce()) and a number of fancy iterators (e.g. transform_iterator<>, permutation_iterator<>, zip_iterator<>).
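
As a quick sketch of this STL-like interface, here is how a user-defined function could be applied on the device with transform() (the square function and the vector size are purely illustrative; the setup mirrors the code sample further below):

Code: [Select]
#include <vector>
#include <algorithm>
#include <cstdlib>
#include <boost/compute.hpp>

namespace compute = boost::compute;

// user-defined function, compiled to OpenCL and executed on the device
BOOST_COMPUTE_FUNCTION(float, square, (float x),
{
    return x * x;
});

int main()
{
    // default compute device, context and command queue
    compute::device gpu = compute::system::default_device();
    compute::context ctx(gpu);
    compute::command_queue queue(ctx, gpu);

    // random data on the host
    std::vector<float> host_vector(1000000);
    std::generate(host_vector.begin(), host_vector.end(), rand);

    // copy the data to the device
    compute::vector<float> device_vector(host_vector.size(), ctx);
    compute::copy(
        host_vector.begin(), host_vector.end(), device_vector.begin(), queue
    );

    // square each element on the device
    compute::transform(
        device_vector.begin(), device_vector.end(),
        device_vector.begin(), square, queue
    );

    // copy the results back to the host
    compute::copy(
        device_vector.begin(), device_vector.end(), host_vector.begin(), queue
    );

    return 0;
}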

Links:
- https://github.com/boostorg/compute/
- http://boostorg.github.io/compute/


The development branch brings full support for OpenCL 2.1:
https://github.com/boostorg/compute/tree/develop



Code sample:
Code: [Select]
#include <vector>
#include <algorithm>
#include <boost/compute.hpp>

namespace compute = boost::compute;

int main()
{
    // get the default compute device
    compute::device gpu = compute::system::default_device();

    // create a compute context and command queue
    compute::context ctx(gpu);
    compute::command_queue queue(ctx, gpu);

    // generate random numbers on the host
    std::vector<float> host_vector(1000000);
    std::generate(host_vector.begin(), host_vector.end(), rand);

    // create vector on the device
    compute::vector<float> device_vector(1000000, ctx);

    // copy data to the device
    compute::copy(
        host_vector.begin(), host_vector.end(), device_vector.begin(), queue
    );

    // sort data on the device
    compute::sort(
        device_vector.begin(), device_vector.end(), queue
    );

    // copy data back to the host
    compute::copy(
        device_vector.begin(), device_vector.end(), host_vector.begin(), queue
    );

    return 0;
}
