Author Topic: Lonely Sea Shell vs. NVIDIA compiler

Stefan

Lonely Sea Shell vs. NVIDIA compiler
« on: January 31, 2016, 08:23:43 AM »
Lonely Sea Shell has been ported to GeeXLab.
It runs fine on Intel and ATI, but the NVIDIA compiler fails with error C1103: too few parameters in function call.

Work-around: replace float r = length(target-source); with:
Code: [Select]
#ifdef GL_NV_command_list // check for NVIDIA, patch would fail with ATI and INTEL
float r = length(0.0, target-source); // ignore warning C7011: implicit cast from "float" to "vec3"
#else
float r = length(target-source); // original code compiles fine with ATI and Intel
#endif
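
An alternative (untested, just a sketch) would be to avoid the built-in length() entirely and compute the length by hand, which should behave the same on all three vendors. safe_length is a made-up helper name:

Code: [Select]
// untested sketch: sidestep the built-in length() entirely
float safe_length(vec3 v) // hypothetical helper, vendor-independent
{
    return sqrt(dot(v, v)); // same result as length(v)
}

// then in the shader body:
// float r = safe_length(target - source);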

I wonder why it runs on shadertoy.com with NVIDIA but not in GeeXLab?

I also added vendor-dependent overlay text:

Code: [Select]
-- detect the vendor via a vendor-specific OpenGL extension
local ret = gh_renderer.check_opengl_extension("GL_NV_command_list")
if (ret == 1) then
  gfx.write_text(10, 160, 0, 1, 0, 1, "patched for NVIDIA")
end

ret = gh_renderer.check_opengl_extension("GL_AMD_debug_output")
if (ret == 1) then
  gfx.write_text(10, 160, 0, 1, 0, 1, "AMD works fine")
end

ret = gh_renderer.check_opengl_extension("GL_INTEL_performance_query")
if (ret == 1) then
  gfx.write_text(10, 160, 0, 1, 0, 1, "Intel works fine")
end