Recent Posts

81
English forum / Re: GPU buffers
« Last post by JeGX on February 23, 2015, 08:38:21 PM »
OK, I'll try to write a small code sample ASAP. Maybe there's a bug and I'll catch it then!
82
English forum / Re: GPU buffers
« Last post by Adik on February 23, 2015, 11:36:47 AM »
First of all, thanks for the reply, and let me say that I don't need this solved right away.

Second, I tried what you suggested, but with no success: sub_data_read_1ui returned 0 for every valid buffer index. The data in the SSBO should be fine, because when I use it for rendering, the rendered frame looks correct.

Maybe I overlooked something, so here is the code:
Code: [Select]
-- Map the SSBO for reading and dump the per-cell atom counts.
gh_gpu_buffer.map(counts_buffer, "GL_READ_ONLY")

io.output("counts_buffer.txt")
local total = 0

-- One 32-bit uint per cell of the 16x16x16 grid.
for i = 0, 16 * 16 * 16 - 1 do
  local buffer_offset_bytes = 4 * i
  local count = gh_gpu_buffer.sub_data_read_1ui(counts_buffer, buffer_offset_bytes)
  if count > 0 then
    io.write(count .. "\n")
  end
  total = total + count
end

io.write("Atoms = " .. total)
io.close()

gh_gpu_buffer.unmap(counts_buffer)

If you implement a working example, let me know and I will test it.
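
One more test I could run myself, assuming gh_gpu_buffer has a symmetric write function (sub_data_write_1ui is a guess on my part, it may not exist under that exact name), is a round trip that writes known values from Lua and reads them back:

Code: [Select]
-- Round-trip test: write known values from Lua, then read them back.
-- NOTE: sub_data_write_1ui is assumed by symmetry with
-- sub_data_read_1ui and is hypothetical here.
gh_gpu_buffer.map(counts_buffer, "GL_WRITE_ONLY")
for i = 0, 15 do
  gh_gpu_buffer.sub_data_write_1ui(counts_buffer, 4 * i, i + 1)
end
gh_gpu_buffer.unmap(counts_buffer)

gh_gpu_buffer.map(counts_buffer, "GL_READ_ONLY")
for i = 0, 15 do
  local v = gh_gpu_buffer.sub_data_read_1ui(counts_buffer, 4 * i)
  print("slot " .. i .. " = " .. v) -- expect 1..16 if readback works
end
gh_gpu_buffer.unmap(counts_buffer)

If even this round trip returns zeros, the readback path is the suspect; if it works, the compute shader writes are probably just not visible yet at the moment the buffer is mapped.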
83
"Gaming enthusiasts have been griping for months that Nvidia's GeForce GTX 970 graphics chip doesn't operate up to snuff, and now someone has taken the company to court over it.

...

Nvidia markets the chip as having 4GB of performance-boosting video RAM, but some users have complained the chip falters after using 3.5GB of that allocation."

http://www.pcworld.com/article/2887234/nvidia-hit-with-false-advertising-suit-over-gtx-970-performance.html
84
GpuTest / Re: Issue with GpuTest when running for 8 hours or more
« Last post by JeGX on February 20, 2015, 08:23:00 PM »
Thanks for this feedback. I ran very long tests (more than 100 hours!) on Windows several months ago and everything worked fine, but I never ran a long stress test on OS X. I'll try to do that test for the next release...
85
GpuTest / Re: GpuTest 0.7.0
« Last post by JeGX on February 20, 2015, 08:17:38 PM »
@jorgk: I tested on a MacBook Pro + Mavericks, and a test with a fixed duration works fine (with either the GeForce or the Intel GPU).

Maybe it's related to a recent update? Which version of OS X are you running?

I will release a new update of GpuTest shortly; maybe it will fix that mysterious bug.
86
3D-Tech News Around The Web / Nvidia gained at the expense of AMD and Intel
« Last post by gyg on February 20, 2015, 07:54:05 PM »
"GPU shipments down in fourth quarter of 2014.

Nvidia gained at the expense of AMD and Intel. Gaming is still the market’s bright spot.

The new market report on graphics processing units (GPUs) from Jon Peddie Research (JPR) shows total shipments were down in the fourth quarter of 2014 (ending December 31, 2014). Measured on a sequential basis, Nvidia shipments were up 3%, Intel dropped 4%, AMD slipped 7%."

http://gfxspeak.com/2015/02/20/shipments-fourth-quarter/
87
English forum / Re: GPU buffers
« Last post by JeGX on February 20, 2015, 07:19:31 PM »
I've never tested that kind of code, but a snippet with sub_data_read_1ui could look like this:

Code: [Select]
-- Map the SSBO, read one 32-bit uint at the given byte offset,
-- then unmap.
gh_gpu_buffer.map(ssbo, "GL_READ_ONLY")
local x = gh_gpu_buffer.sub_data_read_1ui(ssbo, offset)
gh_gpu_buffer.unmap(ssbo)

Let me know if you need more help.
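
And to read back a whole array, the same calls in a loop should do. Untested on my side, and I'm assuming the offset is expressed in bytes (4 bytes per uint):

Code: [Select]
-- Read back num_elems consecutive 32-bit uints from the SSBO.
-- Assumes the offset parameter of sub_data_read_1ui is in bytes.
local num_elems = 64
gh_gpu_buffer.map(ssbo, "GL_READ_ONLY")
for i = 0, num_elems - 1 do
  local x = gh_gpu_buffer.sub_data_read_1ui(ssbo, 4 * i)
  print("elem " .. i .. " = " .. x)
end
gh_gpu_buffer.unmap(ssbo)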

89
"A startup is betting more than half a billion dollars that it will dazzle you with its approach to creating 3-D imagery.
The technology could open new opportunities for the film, gaming, travel, and telecommunications industries."

http://www.technologyreview.com/featuredstory/534971/magic-leap/
90
3D-Tech News Around The Web / Smart Rendering for Virtual Reality
« Last post by gyg on February 20, 2015, 06:01:24 PM »
"Researchers from Intel have been working on new methods for improving the rendering speed for modern wide-angle head-mounted displays like the Oculus Rift and Google Cardboard. Their approach makes use of the fact that because of the relatively cheap and lightweight lenses the distortion astigmatism happens: only the center area can be perceived very sharp, while with increasing distance from it, the perception gets more and more blurred. So what happens if you don't spend the same amount of calculations and quality for all pixels? The blog entry gives hints to future rendering architectures and shows performance numbers."

http://blogs.intel.com/intellabs/2015/02/20/smart-rendering-virtual-reality/
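
To make the approach concrete, here is a toy Lua sketch (my own illustration of the principle, not Intel's actual algorithm): the shading rate falls off with distance from the lens center, matching the blur described above.

Code: [Select]
-- Toy sketch only: spend fewer shading samples on pixels far
-- from the lens center, where the HMD lens blurs the image anyway.
local function sample_rate(nx, ny)
  -- nx, ny: pixel position normalized to [-1, 1] per eye,
  -- with (0, 0) at the lens center
  local r = math.sqrt(nx * nx + ny * ny)
  if r < 0.3 then return 1.0 end   -- full quality in the sharp zone
  if r < 0.7 then return 0.5 end   -- half the samples mid-field
  return 0.25                      -- quarter quality at the edges
end

print(sample_rate(0.0, 0.0))  -- 1.0 at the center
print(sample_rate(0.5, 0.0))  -- 0.5 mid-field
print(sample_rate(0.9, 0.0))  -- 0.25 near the edge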