Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - JeGX

Pages: 1 ... 18 19 [20] 21 22 ... 35
Geeks3D's GPU Tools / Scores comparative tables
« on: October 31, 2013, 05:18:34 PM »
Some comparative tables for GpuTest 0.6.0 benchmarks are available here:

Vivante Unveils Less than 1 mm2 OpenGL ES 2.0 GPU for Wearables and Internet of Things (IoT) Devices
07 October 2013

World’s tiniest GPU targets silicon vendors designing for the next billion smart devices

Sunnyvale, CA – October 7, 2013 – Vivante Corporation today announced an area-optimized version of its GC series OpenGL ES 2.0 GPU cores for wearable and other IoT devices. Featuring the industry’s tiniest GPU core, at less than one mm2, the GC 400 gives MCU/MPU silicon vendors a simple plug-in solution and complete IoT software stack that easily adds full-featured smartphone graphics capabilities to any product line.

Microchip, one of the leaders in MCU/MPU solutions and an important player in IoT, recently said, “During the past few years, user interface technology has evolved very rapidly, from touch keys to touchless interfaces, driven by the consumer market, but the same shift has happened in every type of application and market. The user interface is now a key product differentiator.”

IoT solutions ranging from home automation (HA), security, and appliances, to medical electronics and monitoring systems/sensors will require a GPU to create a compelling user interface (UI) and establish product differentiation. Vivante’s IoT GC400 core allows end products to be beautifully crafted and visually intelligent, all while providing a seamless user experience (UX) across devices.

“Cisco’s IBSG group predicts ‘there will be 25 billion devices connected to the internet by 2015 and 50 billion by 2020,’” said Wei-Jin Dai, Vivante President and CEO. “This huge number of interconnected smart devices represents an enormous opportunity for the GPU to make the impersonal device personable and offer a great user experience from the TV to the smartphone through to the breadth of IoT devices with screens of any size.”

As cited in a recent article by Bloomberg Technology, “Android is becoming the standard operating system for the ‘Internet of Things’—Silicon Valley’s voguish term for the expanding interconnectedness of smart devices, ranging from sensors in your shoe to jet engine monitors...Every screen variant, mobile chip, and sensor known to man has been tuned to work with Android.” Google’s Android has all the qualities, next-generation features, and roadmap needed to support the vast number of web-interconnected products imaginable. A key element of Android in IoT is the visual-centric nature of the OS, which requires a tiny, full-featured graphics processor to drive the device screen.

About Vivante IoT GPU Products
Optimized for Google Android, Windows Embedded, and other operating systems, Vivante’s IoT product portfolio includes performance-leading technologies in 3D graphics, CPC composition processors, vector graphics, and optional GPGPU cores. Vivante IoT cores leverage a unified driver architecture that is compatible with industry-standard application programming interfaces like OpenGL® ES 2.0, desktop OpenGL®, OpenCL®, OpenVG®, Microsoft® DirectX® 11, WebGL, Google Renderscript / FilterScript Compute, and other standard APIs.

Robust features built into the GC 400 IoT GPU include:

    World’s smallest licensable OpenGL ES 2.0 GPU core at less than 1 mm2 total silicon area (TSMC 28nm HPM process technology)
    Supports up to 720p @ 60 FPS with high-quality 32-bit color formats
    Supports all major IoT operating systems, APIs and middleware
    Accelerated composition processing for butter-smooth UIs
    Ultra-low power consumption to conserve battery power on the go
    Tiny software driver footprint for DDR-constrained and DDR-less configurations
    Real-time sensor fusion processing to reduce bandwidth and increase device intelligence
    Industrial temperature support from -40°C to 85°C
To learn more about Vivante’s IoT technology visit or contact

About Vivante Corporation
Smaller – Faster – Cooler: Vivante Corporation, a leader in multi-core GPU, OpenCL™, CPC Composition Engine and Vector Graphics IP solutions, provides the highest performance and lowest power characteristics across a range of Khronos™ Group API conformant standards based on the ScalarMorphic™ architecture. Vivante GPUs are integrated into customer silicon solutions in mass market products including smartphones, tablets, HDTVs, consumer electronics and embedded devices, running thousands of graphics applications across multiple operating systems and software platforms. Vivante is a privately held company headquartered in Sunnyvale, California, with additional R&D centers in Shanghai and Chengdu.

Gaijin Entertainment, the developers of War Thunder, recently claimed that the PS4’s GPU is 40% more powerful than the Xbox One’s. So we made sure to ask about the console’s GPU and whether it has expanded the number of things you can do on screen in the game.

“From a pure tech perspective it’s undeniable that the PS4 GPU will make it the most powerful console in the world, and for us that means we can turn on every visual flourish we want while keeping a smooth & responsive 60fps. We’re focused on every version running at 60fps, including Vita; frame rate is king.”

Origin PC CEO and co-founder Kevin Wasielewski told us that the company is no longer offering AMD GPUs in its systems and will solely provide NVIDIA graphics options. His reasons were initially vague: “This decision was based on a combination of many factors including customer experiences, GPU performance/drivers/stability, and requests from our support staff. Based on our 15+ years of experience building and selling award winning high-performance PCs, we strongly feel the best PC gaming experience is on NVIDIA GPUs,” he said.


What is K3D.js?

K3D is a JavaScript library for working with 3D meshes. K3D is not a rendering engine!

K3D.js can be used for different operations with 3D meshes:

    converting between different formats
    applying affine and other transformations
    generating primitive objects
    creating LODs

To use K3D, you should have a basic knowledge of 3D meshes, index and vertex coordinate arrays, etc.

3D-Tech News Around The Web / AMD Catalyst 13.10 LINUX Beta Driver
« on: October 01, 2013, 07:45:00 AM »
Download page:

Resolved Issues:

[383176] System hang up when startx after setting up an Eyefinity desktop
[384193] Permission issue with procfs on kernel 3.10
[373812] System hang observed while running disaster stress test on Ubuntu 12.10
[383109] Hang is observed when running Unigine on Linux
[383573] AC/DC switching is not automatically detected
[383075] Laptop backlight adjustment is broken
[383430] Glxtest failures observed in log file with forcing on Anti-Aliasing
[383372] Cairo-dock is broken
[378333] Severe desktop corruption is observed when enabled compiz in certain cases
[384509] glClientWaitSync is waiting even when timeout is 0
[382494] C4Engine get corruption with GL_ARB_texture_array enabled

3D-Tech News Around The Web / Learn 2D/3D graphics programming from scratch
« on: September 26, 2013, 09:27:32 PM »
Scratchapixel is the first complete interactive resource on the web for anyone (beginner or expert) who seeks to learn 2D and 3D computer graphics techniques from the ground up. Follow the link to find out more about who we are, what you will find here and why scratch-a-pixel is the right place for you to learn these techniques.

3D-Tech News Around The Web / Decyphering the Business Card Raytracer
« on: September 26, 2013, 09:13:07 PM »
I recently came across Paul Heckbert's business card raytracer. For those that have never heard of it: it is a very famous challenge in the Computer Graphics field, started on May 4th, 1984 via a post by Paul Heckbert (more about this in his article "A Minimal Ray Tracer" from the book Graphics Gems IV).

The goal was to produce the source code for a raytracer...that would fit on the back of a business card.

Code:
    #include <stdlib.h>   // card > aek.ppm
    #include <stdio.h>
    #include <math.h>
    typedef int i;typedef float f;struct v{
    f x,y,z;v operator+(v r){return v(x+r.x
    ,y+r.y,z+r.z);}v operator*(f r){return
    v(x*r,y*r,z*r);}f operator%(v r){return
    x*r.x+y*r.y+z*r.z;}v(){}v operator^(v r
    ){return v(y*r.z-z*r.y,z*r.x-x*r.z,x*r.
    y-y*r.x);}v(f a,f b,f c){x=a;y=b;z=c;}v
    operator!(){return*this*(1/sqrt(*this%*
    this));}};i G[]={247570,280596,280600,
    249748,18578,18577,231184,16,16};f R(){
    return(f)rand()/RAND_MAX;}i T(v o,v d,f
    &t,v&n){t=1e9;i m=0;f p=-o.z/d.z;if(.01
    <p)t=p,n=v(0,0,1),m=1;for(i k=19;k--;)
    for(i j=9;j--;)if(G[j]&1<<k){v p=o+v(-k
    ,0,-j-4);f b=p%d,c=p%p-1,q=b*b-c;if(q>0
    ){f s=-b-sqrt(q);if(s<t&&s>.01)t=s,n=!(
    p+d*t),m=2;}}return m;}v S(v o,v d){f t
    ;v n;i m=T(o,d,t,n);if(!m)return v(.7,
    .6,1)*pow(1-d.z,4);v h=o+d*t,l=!(v(9+R(
    ),9+R(),16)+h*-1),r=d+n*(n%d*-2);f b=l%
    n;if(b<0||T(h,l,t,n))b=0;f p=pow(l%r*(b
    >0),99);if(m&1){h=h*.2;return((i)(h.x+h
    .y)&1?v(3,1,1):v(3,3,3))*(b*.2+.1);}
    return v(p,p,p)+S(h,r)*.5;}i
    main(){printf("P6 512 512 255 ");v g=!v
    (-6,-16,0),a=!(v(0,0,1)^g)*.002,b=!(g^a
    )*.002,c=(a+b)*-256+g;for(i y=512;y--;)
    for(i x=512;x--;){v p(13,13,13);for(i r
    =64;r--;){v t=a*(R()-.5)*99+b*(R()-.5)*
    99;p=S(c+t,!(t*-1+(a*(R()+x)+b*(y+R())+
    g)*16))*3.5+p*.65;}printf("%c%c%c",(i)p
    .x,(i)p.y,(i)p.z);}}

c++ -O3 -o card card.cpp
./card > card.ppm

3D-Tech News Around The Web / AMD R9 290X GPU Specs Revealed
« on: September 19, 2013, 02:20:41 PM »
With an estimated die-area of 430 mm² (18% bigger than "Tahiti,") the chip physically features 2,816 stream processors (SPs) spread across 44 clusters with 64 SPs each (a 37.5% increase over "Tahiti"). The chip features four independent raster engines, compared to two independent ones on "Tahiti." This could translate into double the geometry processing muscle as "Tahiti," with four independent tessellation units. The memory interface of the chip is expected to be 384-bit wide, based on the GDDR5 specification. Given the way TMUs are arranged on chips based on this architecture, one can deduce 176 TMUs on the chip. The ROP count could be 32 or 48. The chip will feature hardware support for DirectX 11.2, including the much hyped shared resources (mega-texture) feature.

3D-Tech News Around The Web / Taodyne 3D links
« on: September 19, 2013, 11:00:50 AM »

- 3D movie playback:

- Point clouds for math equations:

- Real-time raytracing:

And for French speakers, a short video on virtual reality and its philosophical side (zzzz...):

3D-Tech News Around The Web / This is Linus...
« on: July 15, 2013, 04:34:23 PM »


What the F*CK, guys?

This piece-of-shit commit is marked for stable, but you clearly never
even test-compiled it, did you?

Because on x86-64 (which is the only place where the patch
matters), I don't see how you could have avoided this honking huge
warning otherwise:

  arch/x86/kernel/traps.c:74:1: warning: braces around scalar
initializer [enabled by default]
   gate_desc idt_table[NR_VECTORS] __page_aligned_data = { { { { 0, 0 } } }, };
  arch/x86/kernel/traps.c:74:1: warning: (near initialization for
‘idt_table[0].offset_low’) [enabled by default]
  arch/x86/kernel/traps.c:74:1: warning: braces around scalar
initializer [enabled by default]
  arch/x86/kernel/traps.c:74:1: warning: (near initialization for
‘idt_table[0].offset_low’) [enabled by default]
  arch/x86/kernel/traps.c:74:1: warning: excess elements in scalar
initializer [enabled by default]
  arch/x86/kernel/traps.c:74:1: warning: (near initialization for
‘idt_table[0].offset_low’) [enabled by default]

and I don't think this is compiler-specific, because that code is
crap. The declaration for gate_desc is very very different for 32-bit
and 64-bit x86 for whatever braindamaged reasons.

Seriously, WTF? I made the mistake of doing multiple merges
back-to-back with the intention of not doing a full allmodconfig build
in between them, and now I have to undo them all because this pull
request was full of unbelievable shit.

And why the hell was this marked for stable even *IF* it hadn't been
complete and utter tripe? It even has a comment in the commit message
about how this probably doesn't matter. So it's doubly crap: it's
*wrong*, and it didn't actually fix anything to begin with.

There aren't enough swear-words in the English language, so now I'll
have to call you perkeleen vittupää just to express my disgust and
frustration with this crap.


Apple’s Macintosh computers are notoriously locked down. In many cases, users can’t even install more RAM or replace a hard drive, let alone swap out a video card or add something extra, like pro audio and video processing hardware.
Frankly, most Mac users don’t care. Apple (rightly) pointed out for years that most customers never upgraded their Macs, so designing for expansion was a waste of space and money. Further, high-speed Thunderbolt ports now allow external add-ons to do some serious work just by plugging in. The days of lazy serial ports are long gone.
But for Mac users who do care, Apple’s Macintosh lineup gets more frustrating with every upgrade cycle – especially now that the venerable Mac Pro is losing its expansion slots (and much more) while assuming the shape of a humidifier.

Geeks3D's GPU Tools / GpuTest 0.5.0 released
« on: July 12, 2013, 01:10:26 PM »
GpuTest 0.5.0 has been released. More information and downloads are available HERE.

English forum / 3D surface plots
« on: July 03, 2013, 03:03:03 PM »

LibreOffice and AMD are working together to create a faster version of the office suite's spreadsheet that will make use of AMD's GPUs within its Heterogeneous System Architecture (HSA) based Accelerated Processing Units (APUs). The work is only just beginning though and there is no timescale for a production release of the software. AMD is joining the LibreOffice Advisory Board as part of the collaboration, sitting alongside Google, Intel, Red Hat, SUSE and the FSF, among others.
At its core, the aim of the work is to take the formulae of Calc spreadsheets, convert them into OpenCL, compile that OpenCL for the GPU and execute those formulae through the GPU. In a typical PC architecture, this would be rather complex because of the difficulty of feeding a large amount of data to the GPU through small memory apertures, but with AMD's HSA, the CPU and GPU have equal access to memory, resulting in an easier environment in which to GPU accelerate applications.

3D-Tech News Around The Web / Qt 5.1 released
« on: July 03, 2013, 01:57:17 PM »
I’m very happy to announce that Qt 5.1 is now available. It has taken us a little more than 6 months since we released Qt 5.0 at the end of last year. While we originally planned that Qt 5.1 would mainly focus on bug fixing and stability of Qt 5.0, we have actually achieved a lot more. The release contains a large amount of new functionality in addition to numerous smaller improvements and bug fixes. For more information, please have a look at our Qt 5.1 launch page.

The final version of Visual Studio 2013 will also include a few C99 features. Microsoft has long avoided supporting C99, the major update to C++'s predecessor that was standardized last millennium, claiming that there was little demand for it among Visual Studio users. This was true, but only to a point: many Windows developers weren't especially interested in C99 because they had no good tooling to support it. Open source developers, however, embraced the update, as it makes C a lot less awkward to work with.

After 2013 is released, a CTP will deliver a bunch more C++11 features, with C++14's generic lambdas and return type deduction likely to be included, along with a selection of C++11 features. The remaining C++11 and C++14 features will be implemented in subsequent releases (as will a couple of C++98 features that Visual Studio doesn't quite get right).

Full story:
