FurMark 1.20.1.0 / 1.20.2.0 incorrectly display the temperature; 1.20.3.0: new problem (RX 480)

Started by entertainm30, January 14, 2019, 02:51:28 PM


entertainm30

The program incorrectly displays the graphics card temperature during the test. The program log is attached below.
How can this be fixed? Video card driver version: Adrenalin 2019 19.1.1

<->[NORMAL]#00000001#Mon Jan 14 16:40:07 2019#FurMark 1.20.1.0 is starting up...
<->[NORMAL]#00000002#Mon Jan 14 16:40:07 2019#FurMark folder: G:\Program Files (x86)\Geeks3D\Benchmarks\FurMark\
<->[NORMAL]#00000003#Mon Jan 14 16:40:07 2019#OpenGL renderer (main graphics card): AMD Radeon RX 580
<->[NORMAL]#00000004#Mon Jan 14 16:40:07 2019#OpenGL version detected: 4.6
<->[NORMAL]#00000005#Mon Jan 14 16:40:07 2019#OpenGL max viewport size: 16384X16384 pixels
<->[NORMAL]#00000006#Mon Jan 14 16:40:07 2019#Device ID string (main graphics card): PCI\VEN_1002&DEV_67DF&SUBSYS_E347174B&REV_C7
<->[NORMAL]#00000007#Mon Jan 14 16:40:07 2019#CPU/System info:
<->[NORMAL]#00000008#Mon Jan 14 16:40:07 2019#- CPU: AMD FX(tm)-8350 Eight-Core Processor           
<->[NORMAL]#00000009#Mon Jan 14 16:40:08 2019#- CPU speed: 4000 MHz
<->[NORMAL]#00000010#Mon Jan 14 16:40:08 2019#- System memory: 8148 MB
<->[NORMAL]#00000011#Mon Jan 14 16:40:08 2019#- OS: Windows 7 64-bit build 7601 [Service Pack 1]
<->[NORMAL]#00000012#Mon Jan 14 16:40:08 2019#Detecting GPUs...
<->[NORMAL]#00000013#Mon Jan 14 16:40:08 2019#Found 1 GPUs.
<->[NORMAL]#00000014#Mon Jan 14 16:40:08 2019#- GPU 1:  AMD Radeon RX 480
<->[NORMAL]#00000015#Mon Jan 14 16:40:08 2019#  - Cores: 2304
<->[NORMAL]#00000016#Mon Jan 14 16:40:08 2019#  - Memory size: 8192MB
<->[NORMAL]#00000017#Mon Jan 14 16:40:08 2019#  - Memory type: GDDR5
<->[NORMAL]#00000018#Mon Jan 14 16:40:08 2019#  - TDP: 150W
<->[NORMAL]#00000019#Mon Jan 14 16:40:08 2019#  - Bus ID: 1
<->[NORMAL]#00000020#Mon Jan 14 16:40:08 2019#  - Bios: 015.050.000.000
<->[NORMAL]#00000021#Mon Jan 14 16:40:08 2019#  - Graphics driver: 25.20.15011.1004
<->[NORMAL]#00000022#Mon Jan 14 16:40:08 2019#  - GPU temperature: 50.0°C
<->[NORMAL]#00000023#Mon Jan 14 16:40:08 2019#WARNING: true GPU core and memory clocks for GPU 1 are not available on this platform. Only Pstate clocks will be displayed.
<->[NORMAL]#00000024#Mon Jan 14 16:40:08 2019#  - PState 0 - GPU clock: 300MHz - Memory clock: 300MHz - GPU voltage: 0.800V
<->[NORMAL]#00000025#Mon Jan 14 16:40:08 2019#  - PState 0 - GPU clock: 608MHz - Memory clock: 2000MHz - GPU voltage: 0.818V
<->[NORMAL]#00000026#Mon Jan 14 16:40:08 2019#  - PState 0 - GPU clock: 930MHz - Memory clock: 2000MHz - GPU voltage: 0.831V
<->[NORMAL]#00000027#Mon Jan 14 16:40:08 2019#  - PState 0 - GPU clock: 1097MHz - Memory clock: 2000MHz - GPU voltage: 0.962V
<->[NORMAL]#00000028#Mon Jan 14 16:40:08 2019#  - PState 0 - GPU clock: 1165MHz - Memory clock: 2000MHz - GPU voltage: 1.025V
<->[NORMAL]#00000029#Mon Jan 14 16:40:08 2019#  - PState 0 - GPU clock: 1211MHz - Memory clock: 2000MHz - GPU voltage: 1.075V
<->[NORMAL]#00000030#Mon Jan 14 16:40:08 2019#  - PState 0 - GPU clock: 1256MHz - Memory clock: 2000MHz - GPU voltage: 1.125V
<->[NORMAL]#00000031#Mon Jan 14 16:40:08 2019#  - PState 0 - GPU clock: 1342MHz - Memory clock: 2000MHz - GPU voltage: 1.150V
<->[NORMAL]#00000032#Mon Jan 14 16:40:08 2019#  - Fan speed: 0%
<->[NORMAL]#00000033#Mon Jan 14 16:40:08 2019#no multi-GPU support (1 physical GPUs)
<->[NORMAL]#00000034#Mon Jan 14 16:40:08 2019#GPU monitoring thread started up ok.
<->[NORMAL]#00000035#Mon Jan 14 16:40:08 2019#oZone3D Engine is starting up - kernel build: [v3.4.0 - Aug 22 2016 @ 16:36:12] - codename: Zhadanas
<->[NORMAL]#00000036#Mon Jan 14 16:40:08 2019#oZone3D initialization in progress...
<->[NORMAL]#00000037#Mon Jan 14 16:40:08 2019#oZone3D - OpenGL renderer creation ok.
<->[NORMAL]#00000038#Mon Jan 14 16:40:08 2019#o3RendererOpenGL INFO: OpenGL version: 4.6.13544 Compatibility Profile Context 25.20.15011.1004
<->[NORMAL]#00000039#Mon Jan 14 16:40:08 2019#o3RendererOpenGL INFO: OpenGL Shading Language (GLSL) version: 4.60
<->[NORMAL]#00000040#Mon Jan 14 16:40:08 2019#oZone3D initialization ok.
<->[NORMAL]#00000041#Mon Jan 14 16:40:08 2019#Checking for new version...
<->[NORMAL]#00000042#Mon Jan 14 16:40:09 2019#A new version is available online: 1.20.2.0
<->[NORMAL]#00000043#Mon Jan 14 16:45:22 2019#Render thread - affinity mask set to 0
<!>[WARNING]#00000044#Mon Jan 14 16:45:23 2019#GPU temperature spike detected: 35.000°C
<!>[WARNING]#00000045#Mon Jan 14 16:45:24 2019#GPU temperature spike detected: 35.000°C
<!>[WARNING]#00000046#Mon Jan 14 16:45:25 2019#GPU temperature spike detected: 35.000°C
<!>[WARNING]#00000047#Mon Jan 14 16:45:26 2019#GPU temperature spike detected: 35.000°C
<!>[WARNING]#00000048#Mon Jan 14 16:45:28 2019#GPU temperature spike detected: 35.000°C
<!>[WARNING]#00000049#Mon Jan 14 16:45:29 2019#GPU temperature spike detected: 34.000°C
<!>[WARNING]#00000050#Mon Jan 14 16:45:30 2019#GPU temperature spike detected: 35.000°C
<!>[WARNING]#00000051#Mon Jan 14 16:45:31 2019#GPU temperature spike detected: 35.000°C
<!>[WARNING]#00000052#Mon Jan 14 16:45:32 2019#GPU temperature spike detected: 35.000°C
<!>[WARNING]#00000053#Mon Jan 14 16:45:33 2019#GPU temperature spike detected: 35.000°C
<!>[WARNING]#00000054#Mon Jan 14 16:45:34 2019#GPU temperature spike detected: 35.000°C



JeGX

Always Trouble Inside...

Indeed, there is a problem. I tested with an RX 470 and a Vega 56.

- RX 470: on the first run of FurMark, the GPU temperature is correct. I fully closed FurMark, launched it again, and on the second run the GPU temp reading is wrong.

- Vega 56: on the first run of FurMark, the GPU temperature is properly read. On the second run the GPU temp is still ok, on the third run still ok...

Looks like the issue is only present on RX 400 (maybe RX 500 and HD 7000 too, I don't know). RX Vega GPUs are not impacted...

Last test: GPU Shark. I launched GPU Shark while FurMark displayed the wrong GPU temp, and the GPU temp in GPU Shark was correct.

It is likely a bug in FurMark, since GPU Shark reads the GPU temp correctly (I also checked with GPU-Z). Will try to fix it asap  :P
Thanks for your feedback.
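
For reference, below is a minimal sketch (in C) of how an external monitoring tool can read the GPU temperature of an AMD card through the public ADL (AMD Display Library) API exposed by the driver. It is only an illustration of the kind of call tools like GPU Shark or GPU-Z rely on, not FurMark's actual code; the fixed adapter index 0 is a simplifying assumption (a real tool enumerates the adapters first).

/*
 * Minimal sketch, for illustration only: reading an AMD GPU temperature
 * through the public ADL (AMD Display Library) API exposed by the driver.
 * Assumptions: adapter index 0 and thermal controller 0 are used directly
 * instead of enumerating adapters, and error handling is kept to a bare
 * minimum. This is NOT FurMark's actual code.
 */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Structure and callback type as defined in the public ADL SDK headers. */
typedef struct ADLTemperature {
    int iSize;         /* size of the structure, in bytes      */
    int iTemperature;  /* temperature in millidegrees Celsius  */
} ADLTemperature;

typedef void* (__stdcall *ADL_MAIN_MALLOC_CALLBACK)(int);
typedef int (*ADL_MAIN_CONTROL_CREATE)(ADL_MAIN_MALLOC_CALLBACK, int);
typedef int (*ADL_OVERDRIVE5_TEMPERATURE_GET)(int, int, ADLTemperature *);

/* Memory allocation callback required by ADL_Main_Control_Create. */
static void* __stdcall ADL_Malloc(int iSize) { return malloc(iSize); }

int main(void)
{
    /* The ADL entry points live in the driver's DLL. */
    HMODULE hADL = LoadLibraryA("atiadlxx.dll");      /* 64-bit */
    if (!hADL) hADL = LoadLibraryA("atiadlxy.dll");   /* 32-bit fallback */
    if (!hADL) { printf("ADL library not found\n"); return 1; }

    ADL_MAIN_CONTROL_CREATE Main_Control_Create =
        (ADL_MAIN_CONTROL_CREATE)GetProcAddress(hADL, "ADL_Main_Control_Create");
    ADL_OVERDRIVE5_TEMPERATURE_GET Temperature_Get =
        (ADL_OVERDRIVE5_TEMPERATURE_GET)GetProcAddress(hADL, "ADL_Overdrive5_Temperature_Get");
    if (!Main_Control_Create || !Temperature_Get) { FreeLibrary(hADL); return 1; }

    /* 1 = only enumerate adapters that are physically present and enabled. */
    Main_Control_Create(ADL_Malloc, 1);

    ADLTemperature temp = { sizeof(ADLTemperature), 0 };
    /* Adapter 0, thermal controller 0; a real tool enumerates adapters first. */
    if (Temperature_Get(0, 0, &temp) == 0)  /* 0 == ADL_OK */
        printf("GPU temperature: %.1f C\n", temp.iTemperature / 1000.0f);

    FreeLibrary(hADL);
    return 0;
}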

entertainm30

I'm glad I could help improve the program. We will wait for the fix :)

entertainm30

Hi. I tried the new version of the program. The error now manifests itself in a new way: the temperature instantly rises to 90 degrees, and GPU-Z also shows the wrong value. At the end of the test it just as instantly drops back to normal values. I only dared to run the test for 7 seconds, no longer. As proof, I attach the GPU-Z screenshot and the test result, as well as a fresh program log.

<->[NORMAL]#00000001#Tue Jan 15 22:41:55 2019#FurMark 1.20.3.0 is starting up...
<->[NORMAL]#00000002#Tue Jan 15 22:41:55 2019#FurMark folder: G:\Program Files (x86)\Geeks3D\Benchmarks\FurMark\
<->[NORMAL]#00000003#Tue Jan 15 22:41:56 2019#OpenGL renderer (main graphics card): AMD Radeon RX 590
<->[NORMAL]#00000004#Tue Jan 15 22:41:56 2019#OpenGL version detected: 4.6
<->[NORMAL]#00000005#Tue Jan 15 22:41:56 2019#OpenGL max viewport size: 16384X16384 pixels
<->[NORMAL]#00000006#Tue Jan 15 22:41:56 2019#Device ID string (main graphics card): PCI\VEN_1002&DEV_67DF&SUBSYS_E347174B&REV_C7
<->[NORMAL]#00000007#Tue Jan 15 22:41:56 2019#CPU/System info:
<->[NORMAL]#00000008#Tue Jan 15 22:41:56 2019#- CPU: AMD FX(tm)-8350 Eight-Core Processor           
<->[NORMAL]#00000009#Tue Jan 15 22:41:57 2019#- CPU speed: 4000 MHz
<->[NORMAL]#00000010#Tue Jan 15 22:41:57 2019#- System memory: 8148 MB
<->[NORMAL]#00000011#Tue Jan 15 22:41:57 2019#- OS: Windows 7 64-bit build 7601 [Service Pack 1]
<->[NORMAL]#00000012#Tue Jan 15 22:41:57 2019#Detecting GPUs...
<->[NORMAL]#00000013#Tue Jan 15 22:41:57 2019#Found 1 GPUs.
<->[NORMAL]#00000014#Tue Jan 15 22:41:57 2019#- GPU 1:  AMD Radeon RX 480
<->[NORMAL]#00000015#Tue Jan 15 22:41:57 2019#  - Cores: 2304
<->[NORMAL]#00000016#Tue Jan 15 22:41:57 2019#  - Memory size: 8192MB
<->[NORMAL]#00000017#Tue Jan 15 22:41:57 2019#  - Memory type: GDDR5
<->[NORMAL]#00000018#Tue Jan 15 22:41:57 2019#  - TDP: 150W
<->[NORMAL]#00000019#Tue Jan 15 22:41:57 2019#  - Bus ID: 1
<->[NORMAL]#00000020#Tue Jan 15 22:41:57 2019#  - Bios: 015.050.000.000/113-2E3470U.X5W (2016/09/21 02:29)
<->[NORMAL]#00000021#Tue Jan 15 22:41:57 2019#  - Graphics driver: 25.20.15011.1004
<->[NORMAL]#00000022#Tue Jan 15 22:41:57 2019#  - GPU temperature: 51.0°C
<->[NORMAL]#00000023#Tue Jan 15 22:41:57 2019#WARNING: true GPU core and memory clocks for GPU 1 are not available on this platform. Only Pstate clocks will be displayed.
<->[NORMAL]#00000024#Tue Jan 15 22:41:57 2019#  - PState 0 - GPU clock: 300MHz - Memory clock: 300MHz - GPU voltage: 0.800V
<->[NORMAL]#00000025#Tue Jan 15 22:41:57 2019#  - PState 0 - GPU clock: 608MHz - Memory clock: 2000MHz - GPU voltage: 0.818V
<->[NORMAL]#00000026#Tue Jan 15 22:41:57 2019#  - PState 0 - GPU clock: 930MHz - Memory clock: 2000MHz - GPU voltage: 0.831V
<->[NORMAL]#00000027#Tue Jan 15 22:41:57 2019#  - PState 0 - GPU clock: 1097MHz - Memory clock: 2000MHz - GPU voltage: 0.962V
<->[NORMAL]#00000028#Tue Jan 15 22:41:57 2019#  - PState 0 - GPU clock: 1165MHz - Memory clock: 2000MHz - GPU voltage: 1.025V
<->[NORMAL]#00000029#Tue Jan 15 22:41:57 2019#  - PState 0 - GPU clock: 1211MHz - Memory clock: 2000MHz - GPU voltage: 1.075V
<->[NORMAL]#00000030#Tue Jan 15 22:41:57 2019#  - PState 0 - GPU clock: 1256MHz - Memory clock: 2000MHz - GPU voltage: 1.125V
<->[NORMAL]#00000031#Tue Jan 15 22:41:57 2019#  - PState 0 - GPU clock: 1342MHz - Memory clock: 2000MHz - GPU voltage: 1.150V
<->[NORMAL]#00000032#Tue Jan 15 22:41:57 2019#  - Fan speed: 0%
<->[NORMAL]#00000033#Tue Jan 15 22:41:57 2019#no multi-GPU support (1 physical GPUs)
<->[NORMAL]#00000034#Tue Jan 15 22:41:57 2019#GPU monitoring thread started up ok.
<->[NORMAL]#00000035#Tue Jan 15 22:41:57 2019#oZone3D Engine is starting up - kernel build: [v3.4.0 - Dec 29 2018 @ 11:09:53] - codename: Zhadanas
<->[NORMAL]#00000036#Tue Jan 15 22:41:57 2019#oZone3D initialization in progress...
<->[NORMAL]#00000037#Tue Jan 15 22:41:57 2019#oZone3D - OpenGL renderer creation ok.
<->[NORMAL]#00000038#Tue Jan 15 22:41:57 2019#o3RendererOpenGL INFO: OpenGL version: 4.6.13544 Compatibility Profile Context 25.20.15011.1004
<->[NORMAL]#00000039#Tue Jan 15 22:41:57 2019#o3RendererOpenGL INFO: OpenGL Shading Language (GLSL) version: 4.60
<->[NORMAL]#00000040#Tue Jan 15 22:41:57 2019#oZone3D initialization ok.
<->[NORMAL]#00000041#Tue Jan 15 22:41:57 2019#Checking for new version...
<->[NORMAL]#00000042#Tue Jan 15 22:42:16 2019#Render thread - affinity mask set to 0
<->[NORMAL]#00000043#Tue Jan 15 22:44:33 2019#Render thread - affinity mask set to 0
<->[NORMAL]#00000044#Tue Jan 15 22:45:03 2019#Render thread - affinity mask set to 0
<->[NORMAL]#00000045#Tue Jan 15 22:45:53 2019#Render thread - affinity mask set to 0
<->[NORMAL]#00000046#Tue Jan 15 22:48:27 2019#Render thread - affinity mask set to 0

Test result:
https://ibb.co/yRSxyLg
GPU-Z screenshot:
https://ibb.co/QNCwQ5r

Or is that the real temperature? If so, it's really sad.  ???

I checked the temperature on the histogram in the AMD driver - indeed, the temperature rises to 85 degrees instantly and drops just as quickly after the test is completed. In my opinion, the test currently does not handle the clock frequencies of the RX 400 series graphics cards correctly. I am ready to take part in testing to solve this problem.

entertainm30

Hey. I have sorted out my video card (Sapphire RX 480 Nitro+). In general, the benchmark does not work correctly with the automatic driver settings for the video card (tested on Adrenalin 2019 18.2.2). In automatic mode the GPU clock immediately rises to the maximum possible value and is not properly reset, and the temperature sensors do not work correctly, so in automatic mode the fan speed rises very slowly. I found the following workaround (all further changes are made in the FurMark WattMan profile of the AMD Adrenalin driver):
1. Turn off Zero RPM.
2. Reduce the power limit for the FurMark profile by 7%.
3. Disable automatic mode for the fan speed/temperature control.
4. On the fan curve, set the leftmost point (30 degrees) to 20% fan speed.
5. Move the rightmost point of the fan curve from 85 degrees to 77 degrees and set the fan speed to 95%.
6. Place all intermediate points of the curve between the leftmost and the rightmost points.
With this profile, the graphics card temperature remains stable at 80 degrees during the test (see the sketch of the curve below).
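
For illustration only, here is a minimal sketch (in C) of the fan curve described in the steps above, assuming a straight line between the two endpoints (20% fan at 30 degrees, 95% fan at 77 degrees). The real curve is configured by hand in the WattMan UI; this just shows where the intermediate points could land.

/*
 * Minimal sketch, assuming the fan curve above is a straight line between its
 * two endpoints: 20% fan at 30 degrees and 95% fan at 77 degrees. The real
 * curve is set by hand in the WattMan UI; the values printed here are only
 * one possible placement for the intermediate points.
 */
#include <stdio.h>

/* Fan speed (%) for a given GPU temperature (degrees C), linearly interpolated
 * between (30, 20%) and (77, 95%) and clamped at the endpoints. */
static double fan_speed_percent(double temp_c)
{
    const double t_lo = 30.0, f_lo = 20.0;  /* leftmost point  */
    const double t_hi = 77.0, f_hi = 95.0;  /* rightmost point */

    if (temp_c <= t_lo) return f_lo;
    if (temp_c >= t_hi) return f_hi;
    return f_lo + (temp_c - t_lo) * (f_hi - f_lo) / (t_hi - t_lo);
}

int main(void)
{
    /* Print candidate intermediate points for the middle of the curve. */
    for (int t = 30; t <= 80; t += 10)
        printf("%3d C -> %5.1f %% fan\n", t, fan_speed_percent(t));
    return 0;
}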

JeGX

Thanks for your tests and feedback. That will probably help other owners of RX 400.