Geeks3D Forums

GeeXLab => GeeXLab - english forum => Topic started by: Stefan on February 11, 2016, 10:31:41 PM

Title: High float precision NVIDIA vs. INTEL
Post by: Stefan on February 11, 2016, 10:31:41 PM
Shadertoy demo Seascape - fix white pixels (https://www.shadertoy.com/view/MdGGzy) (ported to GeeXLab (https://drive.google.com/file/d/0BykQ4pHxfGQWcDVQWE50SDRtZUk/view)) is supposed to fix the graphical corruption that appears after a few hours in the original demo (https://www.shadertoy.com/view/Ms2SD1), apparently caused by floating-point precision errors.

If you push it to the limit you can still trip up the FPU, and NVIDIA runs out of precision much earlier than Intel:

Code:
// highp float in a fragment shader should cover roughly -/+ 2 to the power of 127 (IEEE 754 single precision)
// max. NVIDIA
// float SEA_TIME = iGlobalTime + pow(2.0, 118.0); // OK with bad visuals
// float SEA_TIME = iGlobalTime + pow(2.0, 119.0); // blank screen

// max. INTEL
// float SEA_TIME = iGlobalTime + pow(2.0, 127.0); // OK with bad visuals
// float SEA_TIME = iGlobalTime + pow(2.0, 128.0); // "end of the world"