Author Topic: High float precision NVIDIA vs. INTEL  (Read 3754 times)

Stefan

  • Global Moderator
  • Hero Member
  • Posts: 4370
High float precision NVIDIA vs. INTEL
« on: February 11, 2016, 10:31:41 PM »
The Shadertoy demo Seascape - fix white pixels (Ported to GeeXLab) is supposed to fix the graphical corruption that appears after a few hours of runtime in the original demo, apparently caused by floating-point precision errors.

Even with that fix, if you push the constants to their limits you can still trip up the FPU, and NVIDIA's usable range turns out to be much narrower than Intel's:

Code: [Select]
// High float (highp) precision in a fragment shader should cover roughly +/- 2^127
// (IEEE 754 single precision).

// max. NVIDIA
// float SEA_TIME = iGlobalTime + pow(2.0, 118.0); // OK, but with bad visuals
// float SEA_TIME = iGlobalTime + pow(2.0, 119.0); // blank screen

// max. INTEL
// float SEA_TIME = iGlobalTime + pow(2.0, 127.0); // OK, but with bad visuals
// float SEA_TIME = iGlobalTime + pow(2.0, 128.0); // "end of the world" (overflows to infinity)
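
Why "OK, but with bad visuals"? At a magnitude of 2^118 the gap between two adjacent single-precision floats (one ULP) is 2^(118-23) = 2^95, so adding iGlobalTime to such a constant is completely absorbed by rounding: the sea still renders, but every time-dependent term downstream has lost all of its fractional precision.

To probe where a given GPU gives up, here is a minimal standalone fragment shader; this is my own sketch, not part of the Seascape demo, and it assumes only Shadertoy's standard iGlobalTime uniform and mainImage() entry point. Raise EXPONENT until the output stops making sense:

Code: [Select]
// Probe sketch (not from the Seascape demo) for highp float limits.
const float EXPONENT = 118.0; // raise this until your GPU gives up

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    float offset = pow(2.0, EXPONENT); // huge constant, as in the tests above
    float t = iGlobalTime + offset;    // absorbed by rounding once 1 ULP of offset > elapsed time

    if (t > offset) {
        fragColor = vec4(0.0, 1.0, 0.0, 1.0); // green: the time still registers
    } else if (t == offset) {
        fragColor = vec4(1.0, 0.0, 0.0, 1.0); // red: time absorbed (or offset itself overflowed to infinity)
    } else {
        fragColor = vec4(0.0, 0.0, 1.0, 1.0); // blue: NaN or other garbage (all NaN comparisons are false)
    }
}

With small exponents (around 20) the screen goes green as soon as the elapsed time exceeds one ULP of the offset; at 118 it is red immediately, which matches the frozen-but-rendered "bad visuals" above. What NVIDIA and Intel disagree on is how far the exponent can climb before the shader stops producing anything at all.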