Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - JeGX

Pages: 1 2 [3] 4 5 ... 32
43
Acer has launched a new gaming laptop with a 4K display.

Main features:
- Intel Core i7-4710HQ
- 16 GB of DDR3 memory
- NVIDIA GeForce GTX 860M
- 256GB SSD + 1TB HDD
- 4K display (3840 x 2160 pixels)


Links:
- http://www.cnet.com/news/aspire-v-nitro-black-edition-from-acer-jumps-into-4k/
- http://www.tomshardware.co.uk/acer-v-nitro-black-edition,news-49178.html


44
English forum / User clipping planes
« on: November 03, 2014, 04:52:51 PM »
Support for user clipping planes has been added in GLSL Hacker 0.8.0+.  User clipping planes are very easy to use: you need a GLSL program that writes to gl_ClipDistance, and you need to enable one of the available user clipping planes with gh_renderer.enable_state("GL_CLIP_DISTANCE0").
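
For reference, here is what a minimal vertex shader that feeds gl_ClipDistance can look like (an illustrative sketch, not the demo's exact code; the uniform and attribute names are assumptions):
Code: [Select]
#version 150

uniform mat4 mvp;        // modelview-projection matrix (name is illustrative)
uniform vec4 clip_plane; // plane equation (a, b, c, d), here in object space

in vec4 position;

void main()
{
  gl_Position = mvp * position;
  // One clip distance per enabled GL_CLIP_DISTANCEi state:
  // the vertex is kept where the distance is >= 0 and clipped where it is < 0.
  gl_ClipDistance[0] = dot(position, clip_plane);
}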

A demo is available in the host_api/Clip_Planes/ folder of the code sample pack v2.25.0.

This demo is based on this article: http://github.prideout.net/clip-planes/




45
English forum / FreeType-GL plugin (TTF and OTF)
« on: November 03, 2014, 04:17:00 PM »
One of the coolest features of GLSL Hacker 0.8.0 is the support of FreeType-GL via a dedicated plugin. Thanks to this new plugin, GLSL Hacker can now load any TTF (TrueType Font) or OTF (OpenType Font) file. I also added a small Lua lib (gx_font.lua in the libs/lua/ folder of GLSL Hacker) to make things easier.


INIT script:
Code: [Select]
-- Load an otf file:
font = ftgl_load_font(demo_dir .. "data/BebasNeue.otf", 30)

FRAME script:
Code: [Select]
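-- All text rendering for this font goes between ftgl_begin() and ftgl_end():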
ftgl_begin(font)
ftgl_print(font, 10, 60, 1.0, 1.0, 0, 1.0, "GLSL Hacker rocks!")
ftgl_end(font)


Demos are available in the host_api/freetype-gl/ folder of the code sample pack v2.25.0.

The plugin is available for all platforms (Windows, OSX and Linux).










46
English forum / Omnidirectional Shadow Mapping
« on: November 03, 2014, 04:00:04 PM »
GLSL Hacker 0.8.0.0 supports shadow mapping with omni-directional lights.  Omni-directional shadow mapping is cool because a light can cast shadows in all directions.  Omni-directional shadow mapping (or cubic shadow mapping) relies on a cubemap and requires rendering the scene 6 times, once per cubemap face.

This demo is available in the host_api/Shadow_Mapping/Omnidirectional_Shadows/ folder of the code sample pack v2.25.0.
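
As an illustration of the principle (a sketch only, not the demo's actual shaders; the uniform/varying names are assumptions): the depth pass stores the light-to-fragment distance into each of the 6 cubemap faces, and the lighting pass then compares the fragment's distance to the light against the value fetched from the cubemap.
Code: [Select]
#version 150

uniform vec3 light_position; // omni light position in world space
uniform float light_radius;  // maximum light range, used to normalize the stored distance

in vec3 vertex_world_position; // forwarded by the vertex shader
out vec4 frag_color;

void main()
{
  // Depth pass: store the normalized light-to-fragment distance
  // in the cubemap face currently being rendered.
  float d = length(vertex_world_position - light_position) / light_radius;
  frag_color = vec4(d, d, d, 1.0);
}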







47
Quote
One thing we wanted to improve upon with GRID Autosport was our trackside environments, in particular the grass. The old system had served us well but it was time for an improvement so step up Rich Kettlewell, one of our programming wizards who set about the task of doing just that.

Below you’ll find a brief presentation which goes into how we achieved this and the effects it has on the game itself. The presentation was given at the Develop conference and while it was originally meant for game developers we hope you enjoy taking a look at what goes on behind the scenes.

PDF: http://blog.codemasters.com/wp-content/uploads/2014/10/Rendering-Fields-of-Grass-using-DirectX11-in-GRID-Autosport.pdf

Source: http://blog.codemasters.com/grid/10/rendering-fields-of-grass-in-grid-autosport/

48
English forum / OpenGL 4 Subroutines + OSX Yosemite + GTX 780
« on: October 22, 2014, 08:10:47 AM »
OS X 10.10 Yosemite + NVIDIA R343.01: not better!


49
Quote
October 20th, 2014 – The Khronos™ Group today announced the ratification and public release of the finalized OpenVX™ 1.0 specification, an open, royalty-free standard for cross platform acceleration of computer vision applications. OpenVX enables performance and power-optimized computer vision processing, especially important in embedded and real-time use cases such as face, body and gesture tracking, smart video surveillance, advanced driver assistance systems (ADAS), object and scene reconstruction, augmented reality, visual inspection, robotics and more. In addition to the OpenVX specification, Khronos has developed a full set of conformance tests and an Adopters Program that enables implementers to test their implementations and use the OpenVX trademark if conformant. Khronos plans to ship an open source, fully-conformant CPU-based implementation of OpenVX 1.0 before the end of 2014. The full OpenVX 1.0 specification and details about the OpenVX Adopters Program are available at www.khronos.org/openvx.

OpenVX defines a higher level of abstraction for execution and memory models than compute frameworks such as OpenCL™, enabling significant implementation innovation and efficient execution on a wide range of architectures while maintaining a consistent vision acceleration API for application portability. An OpenVX developer expresses a connected graph of vision nodes that an implementer can execute and optimize through a wide variety of techniques such as: acceleration on CPUs, GPUs, DSPs or dedicated hardware, compiler optimizations, node coalescing, and tiled execution to keep sections of processed images in local memories. This architectural agility enables OpenVX applications on a diversity of systems optimized for different levels of power and performance, including very battery-sensitive, vision-enabled, wearable displays.

“Increasingly powerful and efficient processors and image sensors are enabling engineers to incorporate visual intelligence into a wide range of systems and applications,” said Jeff Bier, founder of the Embedded Vision Alliance. “A key challenge for engineers is efficiently mapping complex algorithms onto the processor best suited to the application. OpenVX is an important step towards easing this challenge.”

The precisely defined specification and conformance tests for OpenVX make it ideal for deployment in production systems, where cross-vendor consistency and reliability are essential. OpenVX is complementary to the popular OpenCV open source vision library that is also used for application prototyping but is not so tightly defined and lacks OpenVX graph optimizations. Khronos has defined the VXU™ utility library to enable developers to call individual OpenVX nodes as standalone functions for efficient code migration from traditional vision libraries such as OpenCV. Finally, as with any Khronos specification, OpenVX is extensible to enable nodes to be defined and deployed to meet customer needs, ahead of being integrated into the core specification.

Press release:  https://www.khronos.org/news/press/khronos-finalizes-and-releases-openvx-1.0-specification-for-computer-vision

50
English forum / Very Simple OpenGL Extensions Viewer
« on: October 19, 2014, 05:58:29 PM »
Here is a small demo (in Lua) that shows how to list the OpenGL extensions exposed by the driver.

You can browse the extensions list using the following keys:
- PAGE_DOWN / PAGE_UP: move down / up in the list
- HOME: jump to the start of the list
- END: jump to the end of the list

The demo is available in the code sample pack, in the host_api/OpenGL_Extensions/ folder.



I haven't tested the demo on OSX and Linux but it should work. If it doesn't, let me know in this thread.

51
English forum / GPU PhysX on Linux
« on: October 18, 2014, 04:07:32 PM »
The new PhysX SDK version 3.3.2 adds GPU PhysX acceleration on Linux (see THIS NEWS).  I updated GLSL Hacker with this new SDK (GLSL Hacker v0.7.2.0) and added a new particle demo to the code sample pack:

host_api/PhysX3/Pool/demo_gl2_v1.xml

This is a simple particle/fluid demo that fills a pool with particles:



On Linux, you can start the demo with the command line:
Code: [Select]
$ ./GLSLHacker /demofile="path_to_code_sample_pack/host_api/PhysX3/Pool/demo_gl2_v1.xml"


So far I haven't managed to get GPU acceleration working on Linux (Mint 17 64-bit). I tested with the latest R331.104 and with R340.xx drivers. There is a cuInit failure (I installed the latest CUDA toolkit, v6.5.14). The same cuInit failure is also present with the PhysX SDK samples.

Here are some benchmark numbers with GPU PhysX (currently only under Windows) and CPU PhysX (Windows and Linux).

To force CPU PhysX, just edit the demo file (demo_gl2_v1.xml) and update line 182:
Code: [Select]
gpu_physx = 0


Benchmark settings: 6000 particles, 1280x720 windowed.

On Windows with a GeForce GTX 660 (R337.50) + Intel Core i5 2320 @ 3GHz:
- GPU PhysX: around 420 FPS
- CPU PhysX: around 150 FPS


On Linux Mint 17 64-bit, with a GeForce GTX 680 (R331.104) + AMD FX 6100 @ 3.3GHz:
- GPU PhysX: not available (cuInit failed)
- CPU PhysX: around 60 FPS (this CPU sucks!)

As soon as GPU PhysX is enabled, we should see a jump in FPS on Linux (> 100 FPS on my Linux box).


52
English forum / Re: OpenGL 4 Shader Subroutines - NVIDIA/OSX bug?
« on: October 18, 2014, 03:40:51 PM »
I will try to update the NV driver, but I don't know if the Quadro driver will be ok for the GT650M.

Will also try OS X 10.10 ASAP...

53
Quote
qu3e is a compact, light-weight and fast 3D physics engine in C++. It has been specifically created to be used in games. It is portable, with no external dependencies other than various standard C header files (such as cassert and cmath). qu3e is designed to have an extremely simple interface for creating and manipulating rigid bodies.

qu3e is of particular interest to those in need of a fast and simple 3D physics engine, without spending too much time learning about how the whole engine works. In order to keep things very simple and friendly for new users, only box collision is supported. No other shapes are supported (capsules and spheres may be added in the future if requested).

Since qu3e is written in C++, it is intended for users familiar with C++. The inner code of qu3e has quite a few comments and is a great place for users to learn the workings of a 3D physics engine.

qu3e stands for "cube", since the 3 looks slightly like the letter b and boxes (or cubes!) are the primary type of collision object.

link: https://github.com/RandyGaul/qu3e

54
English forum / OpenGL 4 Shader Subroutines - NVIDIA/OSX bug?
« on: October 17, 2014, 06:32:29 PM »
I struggled for many hours recently with subroutines and didn't manage to get them working under OS X with a GeForce GPU.
I added a new simple demo that shows a pqtorus rendered three times, each time with a different subroutine (texture, phong, phong+texture).

The demo is available in the code sample pack:
host_api/gl-400-arb-shader-subroutine/demo_gl4_v3.xml

The latest GLSL Hacker 0.7.2.0 is recommended.

The demo works fine on Windows. On OS X, the demo works fine with Intel and AMD GPUs but not with NVIDIA GPUs. With the GT 650M / OS X, the same subroutine is used to shade all meshes, and that's the bug. I suspect a driver bug on OS X with GeForce GPUs because the demo works fine everywhere else. But since it's an NVIDIA-related issue, my code could also be guilty...
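
For context, this is roughly what the GLSL 4.0 subroutine mechanism used by the demo looks like (an illustrative sketch, not the demo's actual shader; the function and varying names are assumptions). Three subroutine functions share the same signature, and the host selects which one is active per draw call:
Code: [Select]
#version 400

subroutine vec4 ShadingMode(vec3 N, vec3 L, vec2 uv);
subroutine uniform ShadingMode shade; // the host selects which subroutine is active

uniform sampler2D tex0;

subroutine(ShadingMode) vec4 shade_texture(vec3 N, vec3 L, vec2 uv)
{
  return texture(tex0, uv);
}

subroutine(ShadingMode) vec4 shade_phong(vec3 N, vec3 L, vec2 uv)
{
  // Simplified diffuse-only "phong" term for this sketch.
  return vec4(vec3(max(dot(N, L), 0.0)), 1.0);
}

subroutine(ShadingMode) vec4 shade_phong_texture(vec3 N, vec3 L, vec2 uv)
{
  float diffuse = max(dot(N, L), 0.0);
  return texture(tex0, uv) * vec4(vec3(diffuse), 1.0);
}

in vec3 normal;    // from the vertex shader
in vec3 light_dir; // from the vertex shader
in vec2 texcoord;  // from the vertex shader
out vec4 frag_color;

void main()
{
  frag_color = shade(normalize(normal), normalize(light_dir), texcoord);
}

On the GT 650M the symptom is as if the selected subroutine index were ignored: the same function is executed for all three meshes.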


OK - NVIDIA GeForce GTX 660 - R337.50 - Win7 64-bit


OK - Intel HD 4000 - OS X 10.9


OK - AMD Radeon HD 6870 - OS X 10.9 (Hackintosh)


ERROR - NVIDIA GeForce GT 650M (Macbook Retina Mid-2012) - OS X 10.9


55
English forum / GLSL Hacker 0.7.2.0 released
« on: October 17, 2014, 03:16:11 PM »
A new version of GLSL Hacker is ready, this time for all OSes (Windows, OSX and Linux) at the same time!

The PhysX plugin has been updated with the latest PhysX SDK v3.3.2.

Downloads: http://www.geeks3d.com/glslhacker/download.php

The code sample pack has been updated too: http://www.geeks3d.com/glslhacker/cs/

Quote
Version 0.7.2.0 - 2014.10.17
----------------------------
! improved the FBX plugin (Windows and OS X) for loading Autodesk FBX files.
! updated the PhysX 3 plugin (Windows, Linux and OS X) with
  latest PhysX SDK v3.3.2.
+ added a new OBJ loader (for testing purposes) and a way to change
  current OBJ loader with gh_model.set_current_3d_obj_loader(name):
  "ObjLoaderV1" or "ObjLoaderV2". Default is "ObjLoaderV1".

Full changelog is available here: http://www.geeks3d.com/glslhacker/changelog.php

56
3D-Tech News Around The Web / Qt 5.4 Beta Available
« on: October 17, 2014, 02:52:46 PM »
Quote
I am extremely happy to announce that Qt 5.4 Beta is now available for download. There are a lot of new and interesting things in Qt 5.4 and I will try to summarize the most important highlights in this blog post.

Qt 5.4 brings capability to dynamically select during the application startup whether to use ANGLE or OpenGL on Windows. It is possible to use either opengl32.dll or ANGLE’s OpenGL ES 2.0 implementation in Qt applications without the need for two separate builds of the binaries. This significantly simplifies the task of creating Qt Quick applications for Windows PCs. Dynamic GL switching is not yet enabled in the prebuilt Qt 5.4 Beta binaries. In addition to these, there is a large number of smaller improvements and bug fixes for the Windows port in Qt 5.4.

...

QOpenGLContext is now able to adopt existing native contexts (EGL, GLX, …). This allows interoperability between Qt and other frameworks, such as game engines.

Link: http://blog.qt.digia.com/blog/2014/10/17/qt-5-4-beta-available/

57
English forum / GLSL Hacker 0.7.1.5 released
« on: October 10, 2014, 05:43:15 PM »
A new dev version of GLSL Hacker is ready for Windows 64-bit only

You can download it from this page:
http://www.geeks3d.com/glslhacker/download.php

Full changelog:
http://www.geeks3d.com/glslhacker/changelog.php

Changelog:
Quote
Version 0.7.1.5 - 2014.10.10
----------------------------
! updated pixel format support for image2D writing (GL_ARB_shader_image_load_store)
  PF_U8_RGBA (rgba8 in GLSL) and PF_U8_R (r8 in GLSL) added (gh_texture.image_bind()).

58
English forum / GLSL Hacker 0.7.1.4 released
« on: October 10, 2014, 05:38:03 PM »
A new dev version of GLSL Hacker is ready for Windows 64-bit and OS X.

You can download it from this page:
http://www.geeks3d.com/glslhacker/download.php

Full changelog:
http://www.geeks3d.com/glslhacker/changelog.php

Changelog:
Quote
Version 0.7.1.4 - 2014.10.10
----------------------------
! improved shader subroutines support with several uniform subroutines per GPU program.
! improved GPU programs speed (better management of uniform variables).


Version 0.7.1.3 - 2014.10.05
----------------------------
+ added webcam_get_num() to the gh_utils lib (Lua/Python).
! improved the management of multiple webcams in the Windows version.


Version 0.7.1.2 - 2014.10.04
----------------------------
* bugfix: crash when ZOMBIE scripts are stored in separate file
  instead of in the XML.
+ added network_message_pop() to gh_utils.


59
English forum / Re: RGBA8 texture is unsupported in Compute shaders
« on: October 10, 2014, 05:29:16 PM »
It's not a bug and what you are doing is ok. It's just a pixel format for image2D that was not handled by GLSL Hacker. I added it in GLSL Hacker 0.7.1.5 (which should be downloadable in a few minutes).

Now you can use PF_U8_RGBA in Lua and layout(rgba8) in GLSL. Same thing with PF_U8_R and layout(r8). I added a new demo (demo_v02_rgba8.xml).
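
For reference, here is a minimal compute shader sketch that writes to an image2D declared with layout(rgba8). This is illustrative only (not the shader of demo_v02_rgba8.xml); the binding unit and variable names are assumptions, and the image is expected to be bound from Lua with gh_texture.image_bind() using the new PF_U8_RGBA pixel format:
Code: [Select]
#version 430

layout(local_size_x = 16, local_size_y = 16) in;

// Image unit 0 is an assumption; bind the texture to it from the host side.
layout(rgba8, binding = 0) writeonly uniform image2D output_image;

void main()
{
  ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
  // Write a simple UV gradient; channels are stored as normalized 8-bit values (rgba8).
  vec2 uv = vec2(coord) / vec2(imageSize(output_image));
  imageStore(output_image, coord, vec4(uv, 0.0, 1.0));
}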

I will add 16-bit channel texture support in one of the next versions.

Update: GLSL Hacker 0.7.1.5
http://www.geeks3d.com/forums/index.php/topic,3723.msg4470.html#msg4470


60
Quote
When it comes to AMD FirePro™ S-Series GPUs, there is never “too much of a good thing.” During my conversations at trade shows and conventions, customers have been asking for a server GPU solution that not only provides the speed to accomplish computational tasks, but they also want accuracy for precise results. Earlier this year, we announced the top-of-line AMD FirePro™ S9150 – the most powerful server GPU ever built for High Performance Computing measured by performance/watt. But customers also told us they want a product for mainstream, budget conscious projects that still delivers equally exceptional performance for HPC.

To meet that demand, we’ve expanded our server GPU portfolio. By infusing our most current iteration of the AMD GCN architecture into AMD FirePro™ server GPUs, we have been able to offer two products that deliver leading edge double-precision floating-point rates, and operate with tremendous efficiency.

The new AMD FirePro™ S9100 server GPU is supreme in its class, with high peak single- and double-precision floating point performance, including 2.11 TFLOPS peak double-precision and up to 9.4 GFLOPS per watt TDP double-precision performance. Compute-intensive workloads are where this card excels. With 2,560 stream processors and 12GB of GDDR5 memory, the AMD FirePro S9100 is equipped to handle supercomputing tasks. For those working in scientific compute, research, or structural analysis, it would be fair to say this GPU is a compute powerhouse. On top of the performance, the card’s maximum TDP comes in at 225W, and its passive cooling solution makes it ideal for deployment in server environments.

Complete story:
http://community.amd.com/community/amd-blogs/amd-business/blog/2014/10/06/why-amd-gpus-are-right-for-the-data-center-a-closer-look-at-amd-firepro-s9100
