[GPU Computing] CUDA Experiment III: Video Mapping (2 Million Pixel)

When GPU computing meets digital art. Really cool!

This CUDA experiment maps a FULL-HD (1920×1080 @ 30 frames per second, MPEG-2 compression) video source into 3D space. Each frame is processed in real time on the GPU using CUDA. Each pixel in a frame (2,073,600 pixels per frame) is scaled by its luminance value and given the original color. The camera flight is controlled with a 3D space navigator in real time. The application is written in C# using the DirectX 11, CUDA.NET and DirectShow.NET libraries. Benchmarks: GPU load is about 85% (GTX 260), GPU memory controller load about 25%, and CPU load (i7-920) about 20%.
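
The post does not include the source, but the per-pixel mapping it describes is easy to sketch. The following is a minimal illustration, not the author's code, written as a plain CUDA C kernel rather than the C#/CUDA.NET used in the application: it reads one BGRA frame, computes each pixel's luminance, and emits a colored 3D point whose height is that luminance (one plausible reading of "scaled by its luminance value"). The buffer names, the Vertex layout and the luma weights are assumptions, and the Direct3D 11 interop that maps the vertex buffer for rendering is omitted.

    #include <cuda_runtime.h>

    struct Vertex {
        float4 pos;     // x, z = normalized pixel position, y = luminance
        float4 color;   // original pixel color (r, g, b, a)
    };

    // One thread per pixel: turn a BGRA frame into a grid of colored 3D points,
    // displacing each point vertically by its luminance.
    __global__ void frameToPointCloud(const uchar4* frame, Vertex* verts,
                                      int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        int idx = y * width + x;
        uchar4 p = frame[idx];                 // BGRA byte pixel

        float r = p.z / 255.0f;
        float g = p.y / 255.0f;
        float b = p.x / 255.0f;

        // Rec.601 luma weights (an assumption; the post only says "luminance")
        float lumi = 0.299f * r + 0.587f * g + 0.114f * b;

        verts[idx].pos   = make_float4(x / (float)width, lumi,
                                       y / (float)height, 1.0f);
        verts[idx].color = make_float4(r, g, b, 1.0f);
    }

    // Host-side launch for one FULL-HD frame.
    void mapFrame(const uchar4* d_frame, Vertex* d_verts)
    {
        dim3 block(16, 16);
        dim3 grid((1920 + block.x - 1) / block.x,
                  (1080 + block.y - 1) / block.y);
        frameToPointCloud<<<grid, block>>>(d_frame, d_verts, 1920, 1080);
    }

With a 16×16 block over a 1920×1080 grid, that is one thread per pixel, i.e. the same 2,073,600 points per frame mentioned above.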

[via]



14 thoughts on “[GPU Computing] CUDA Experiment III: Video Mapping (2 Million Pixel)”

  1. Psolord

    Hey geeky stuff! I like! 😀

    Maybe it’s time for someone to make a “Videosurf” game?

    Oh, I would surf my pr0n collection all day long! j/k lol

  2. anonymous

    This “Experiment” is one of the perfect examples where CUDA is absolutely the wrong technology.
    It is perfectly achievable using a 10-line vertex shader, which is more readable than CUDA and is platform independent.

    I got that link from a friend and laughed hard at the website, as it blows the implementation up like it’s the best thing since sliced bread. I then began to implement it using OpenGL/GLSL and achieved comparable results in less than one hour (without the texture streaming, which isn’t that hard either).

  3. Michal

    @anonymous

    Fuck off, maybe he just wanted to use CUDA because he likes it. What’s your problem?

    Great demo!

  4. anonymouse

    Hey anonymous, let’s see your awesome coding skills. Post your code somewhere so we can see it.

  5. anonymous

    > Fuck off, maybe he just wanted to use CUDA because he likes it. What’s your problem?

    It’s potentially slower, restricted to one platform, and the code is less
    readable and more complex.

    > Hey anonymous, let’s see your awesome coding skills […]

    My point was that you do _not_ need awesome coding skills. It is a trivial
    vertex shader. It is about using the right tool for the job, and not using
    proprietary APIs when it’s not required. CUDA offers you more control over
    memory and program execution on the graphics card, but in this case that is
    absolutely unnecessary.

    varying vec3 color;
    uniform sampler2D tex;
    void main() {
        // one vertex per pixel: sample the frame at this grid point's XZ position
        vec4 pos = gl_Vertex;
        vec4 col = texture2D(tex, pos.xz);
        // Rec.601 luma drives the vertical displacement
        float lumi = dot(vec3(0.299, 0.587, 0.114), col.rgb);
        pos.y = lumi;
        color = col.rgb;
        gl_Position = gl_ModelViewProjectionMatrix * pos;
    }

  6. DrBalthar

    I totally agree, using CUDA is pretty dumb, since it is actually slower, and staying in the same API doesn’t give you interop problems. So he could have done it all in Direct3D.

  7. enwgido

    #Anonymous:

    Are you blind?

    This isn’t a fucxxxx vertex shader program. It works at the pixel level of the video stream, treating each pixel like a voxel.

    It’s more complex, and your example code doesn’t produce the same effect.

  8. anonymous

    > Are you blind?

    Are you dumb? Did you actually have a look at the CUDA implementation?

    > This isn’t a fucxxxx vertex shader program. It works at the pixel level of the video stream, treating each pixel like a voxel.

    It uses a grid of 3D points with the resolution of the texture. Rendered from above, this exactly yields the texture image.

    > It’s more complex, and your example code doesn’t produce the same effect.

    Yes it does. Nanananananananana!

  9. Michal

    > It’s potentially slower, restricted to one platform…

    So, what? The point is to create something nice. Do not become a slave to technology… it is only technology. GLSL, CUDA, OpenCL… what’s the difference in a demo like this? It will be 1 ms slower? It won’t run on AMD? Oh my god, terrible… The most important thing is the idea and the imagination, not the technology. If he likes and knows CUDA, let him write in CUDA, and write your stupid comments somewhere else.

  10. anonymous

    I sure hope not everybody is as ignorant as you are. It is not about getting the last ms but about using the appropriate technology to solve a problem. This is especially true if you try to sell yourself and write pseudo-papers. Using CUDA in this case would cause less pain for the developer and less pain for the users.
    But I guess that CUDA is nice and shiny and the new buzzword, and is much more impressive than GLSL or HLSL.

  11. anonymous

    Whoops, obviously I meant “_NOT_ using CUDA in this case […]”

  12. Michal

    Personally I prefer OpenGL and OpenCL. You don’t have the right to write that I am ignorant. You do not know me. Everyone has a right to use whatever technology he/she likes.

  13. DrBalthar

    Well, it is the usual developer trap: when you have a big hammer in your hand, every problem suddenly looks like a nail.

  14. Павел

    And what? Just pixels in space… has the age of the Amiga returned? Nothing interesting here.

Comments are closed.