« on: July 08, 2010, 05:09:57 PM »
Advance warning, UFO invasion expected at GDCeu'10
Last year, exhilarated by the energy at the Evoke demoparty, I put together a task scheduler suitable for demoscene intros, managing to fit graphics, music and parallelism inside a 16K executable (which makes me qualify as a scener-wannabe, I guess). nulstein, the resulting project, is documented in my "Do-it-yourself Game Task Scheduling" article, with full source code available. It's the kind of project that sits at the back of your mind and never quite leaves you in peace, so this year I couldn't do otherwise than come up with a follow-up and address the last remaining serial bit: submitting draw calls to DirectX.
Be there in Cologne, August 16-18, at GDC Europe and attend my session: "UFO invasion: DX11 and Multicore to the Rescue". I'll be showing how DX11 deferred contexts can be used to evade the draw-call submission tax, how TBB's new task groups make this easy, and how to update all entities in the game in parallel despite all the kinds of dependencies they may exhibit. And if you can't attend, don't despair: slides and full source code will be made available here after the show!
The TimelineFX Particle Editor is a tool for creating a whole host of particle effects and exporting them onto sprite sheets, image strips or a plain sequence of images. It is available on Windows and Mac.
Effects include explosions, smoke, fire, water, steam, bubbles, and pretty much anything else you can throw about the screen... Animations can be easily configured to loop seamlessly with just one click of the mouse, and with another click, you can make animations tile seamlessly too for use as animated textures.
Export the particle effects as static animations onto sprite sheets, animation strips or image sequences in PNG format, or if you develop for the iPhone or iPad, you can export using the PVRTC format (currently Mac only), which helps with performance on those devices. Plus, if you develop using the BlitzMax programming language, you can use all the effects directly via the TimelineFX module.
You can view some videos of the particle effects and a couple of tutorials on the TimelineFX YouTube channel here
Visit the TimelineFX website here and download a trial version to see what you think; the full version costs only £19.99. See a full list of effects libraries that are freely available to download here, all ready to go.
Here are a couple of screenshots from our unannounced PC/PS3 strategy game, which is currently in production.
Right now we are balancing game mechanics and adding the last remaining units to the game, so the end of the production stage is coming very close. The official announcement and more info will be available later this month.
The latest release of VLC, 1.1.0, which came out a week ago and has been downloaded half a dozen million times so far, added GPU acceleration for HD decoding under Linux and Windows. On Windows, as already stated, the code isn't working correctly with AMD Radeon cards.
We have therefore been working with AMD on this topic, and after joint work we are going to release a new version of VLC, versioned 1.1.1, that will work with the upcoming ATI Catalyst 10.7 driver. AMD provided us with a beta of this driver and we have successfully verified that GPU acceleration works.
Changes between 1.1.0 and 1.1.1:
* Support for the new capabilities: libvlc_adjust_Enable, libvlc_adjust_Contrast, libvlc_adjust_Brightness, libvlc_adjust_Hue, libvlc_adjust_Saturation, libvlc_adjust_Gamma
* Various fixes and crash preventions, especially when video functions were called early
* Fix h264 streaming in ts
Windows and Mac port:
* Fix mod files support
* Fix performance issues with GPU decoding via DxVA2 on ATI graphics cards (ATI Catalyst 10.7 is REQUIRED)
* Interface and crash fixes
eyeon unveils the next generation of its GPU supercomputing framework in Fusion® 6.1. This is not just an acceleration technology; it is an unprecedented productivity advancement.
Exploiting the power of low-cost GPU graphics cards with hundreds of cores, coupled with an expanded feature set, makes this release far more productive. The need for network rendering is greatly reduced, keeping the studio's infrastructure manageable and cost-effective.
3D scene importing via FBX has been greatly expanded, streamlining the pipeline between 3D animation and rendering so that the same assets work directly in Fusion. Produce passes and layers on the fly, directly on the GPU, at breathtaking speed. Cutting reliance on other applications and departments simplifies the production process.
Fusion 6.1 accelerates from the starting line with astounding GPU optimizations, local file caches and particle solution caching. The creative horizon expands with new tools for managing grain, color correction and metadata handling. Scripters will rejoice at the inclusion of native Python support, and everyone benefits from the many enhancements to particles.
Fusion 6.1 now supports OpenCL, which lets tools take advantage of the GPU in modern NVIDIA and ATI graphics cards to achieve blazing speed increases. How fast? We are seeing improvements of as much as 1000% on some of the most processor-intensive tools in Fusion (e.g. Defocus, Noise generators). Insert OpenCL code directly into Fuse tools to create your own GPU-accelerated tools.
OpenCL (Open Computing Language) is a framework that harnesses the massively parallel architecture of GPUs for general computing. Instead of using graphics cards only for 3D OpenGL rendering, more general processing can be achieved orders of magnitude faster than is possible with CPUs alone.
In recent years, the use of GP-GPUs for HPC has sparked quite a bit of interest. ... While there were more entries in the field (Cell and Larrabee), the GP-GPU market has turned into a two-horse race, with NVIDIA and AMD/ATI leading the way. From an HPC perspective, the GP-GPU can be considered a SIMD (Single Instruction, Multiple Data) parallel co-processor. Indeed, graphics processing is by its nature a SIMD process, and it makes sense to borrow the hardware for other SIMD applications like those in HPC.