I’d just like to put in a word for the Source engine used in Half-Life 2 (and other Valve games). It had some of the best physics I’d ever seen, especially considering my dinky 1.6 GHz processor and ATI X300 graphics card.
I like Crayon Physics (http://www.kloonigames.com/crayon/).
It was the first physics game I played and I like it very much: it’s a good mix between cold physics and drawing!
Talking of GPUs, see what these Belgians have done: http://fastra.ua.ac.be/en/index.html
Size isn’t everything. The GTX 280 (1.4 billion transistors) is built on a 65nm process, the Penryn on 45nm. So that picture doesn’t really say much. This blurb from AnandTech is more enlightening:
“Intel’s Montecito processor (their dual core Itanium 2) weighs in at over 1.7 billion transistors, but the vast majority of this is L3 cache (over 1.5 billion transistors for 24MB of on die memory). In contrast, the vast majority of the transistors on NVIDIA’s GT200 chip are used for compute power.”
I think that the picture is a little misleading. The Penryn is fabbed on a much smaller process.
I’ve never done anything of the level of these simulations, but I’ve spent some time programming physics for a simple XNA game I’ve been working on.
Why just play with physics engines when you can write your own… assuming you don’t fear math too much.
whoops, beaten to the punch
Isn’t that a dual core chip in your comparison?
I played the demo of Trials 2, and it wasn’t very much fun. It just felt too constrained. For more fun, try Elastomania. http://www.elastomania.com/
Someone seriously needs to dust off that series, or at least just remake C2 for current-gen computers. Some of the most wonderful physics-engine-based moments emerged from that game!
Yet another physics game that I enjoy is free also.
It’s called “Phun”: http://phun.cs.umu.se/wiki
It’s 2D only, with only a few primitives, but it seems to be quite actively developed.
Thanks for the Fun-Motion link! Always great to hear that other people enjoy physics games as much as I do.
A few of the games mentioned in the comments don’t have Fun-Motion reviews yet (been a bit busy with Jetpack Brontosaurus). But the review faucet should be turning back on soon…
I want to point out that the Video Encoding comparison is quite inaccurate. First, they never stated which encoder they used; second, x264 is much faster than the results shown, not to mention the quality difference between different settings and encoders.
The true difference will be shown once the x264 CUDA version is out, although that is still many months away.
I have no doubt GPUs are extremely powerful, and doing encoding on the GPU means you could finally record your TV shows while still doing something useful with your computer, without the lag from intensive CPU usage.
Issue Number Two of the Intel Visual Computing Community is out and makes for a good additional read.
“Why just play with physics engines when you can write your own… assuming you don’t fear math too much.”
I don’t know if you’ve ever touched a physics engine, but they all require the same level of math as writing your own would. Besides that, writing extensive physics engines such as Havok will also require you to do SIMD optimizations manually.
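To give a sense of the math involved even at the very bottom of the stack, here’s a minimal sketch of a semi-implicit Euler integration step, the kind of update loop at the heart of a simple 2D engine. This is illustrative only (the names are mine, not from Havok or any other engine); real engines layer collision detection, constraint solving, and SIMD-friendly data layouts on top of this:

```python
# Minimal semi-implicit Euler integrator for 2D point masses.
# Illustrative sketch only -- real engines add broadphase/narrowphase
# collision detection and constraint solvers on top of this.

class Body:
    def __init__(self, x, y, vx=0.0, vy=0.0, mass=1.0):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy
        # Store inverse mass; 0.0 marks an immovable (static) body.
        self.inv_mass = 0.0 if mass == 0 else 1.0 / mass
        self.fx, self.fy = 0.0, 0.0  # force accumulated this frame

def step(bodies, dt, gravity=(0.0, -9.81)):
    for b in bodies:
        if b.inv_mass == 0.0:
            continue  # static bodies don't move
        # Semi-implicit Euler: update velocity first (a = F/m + g)...
        b.vx += (b.fx * b.inv_mass + gravity[0]) * dt
        b.vy += (b.fy * b.inv_mass + gravity[1]) * dt
        # ...then advance position using the *new* velocity.
        b.x += b.vx * dt
        b.y += b.vy * dt
        b.fx = b.fy = 0.0  # clear the force accumulator

ball = Body(0.0, 10.0)
for _ in range(60):           # simulate one second at 60 Hz
    step([ball], 1.0 / 60.0)
```

Updating velocity before position (rather than plain explicit Euler) costs nothing extra and is far more stable for oscillating systems like springs, which is why it shows up in so many game engines.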
“I just hope that widespread use of GPU’s encourages laptop hardware manufacturers to put real GPU’s in, instead of using the Intel’s main memory sharing rubbish chips that can’t compute their way out of a wet paper bag.”
There is hardly a problem with sharing memory; the real problem with the Intel chips is OpenGL support, or lack thereof.
It’s impressive to see the performance of these GPUs, but it does make me wonder a little about Intel’s strategy in this area.
Their idea is to create a 10+ core chip (http://arstechnica.com/articles/paedia/hardware/clearing-up-the-confusion-over-intels-larrabee.ars), each core being a “simple” x86 (along the lines of, but not actually, an Atom). Maybe I’m missing some of their magic, but I don’t see how even a 32-core chip could keep up with current NVIDIA hardware, much less what will be out in a year.
Is writing code for a GPU really so hard that Intel will have an advantage here?
Don’t leave out “Porrasturvat” and its successor “Somethingelsesturvat” … in English they’re called “Stair Dismount” and “Truck Dismount”. The goal of the game is to hurt the little crash test dummy as much as possible, either by pushing him downstairs or by accelerating his truck against a concrete wall. The dummy’s body parts that are in severe pain flash red for a brief moment. Other than that it’s pretty non-violent and a lot of fun!
I would like to be able to utilize the GPU for processing power in addition to the CPU for parallel processing tasks in C#. I hear about it being used for physics or complex math(folding) all the time, but does anyone know where I can find some information about how to offload some of my processing threads to the GPU from a multi-core CPU in a multithreaded application?
Some FPS games, such as Half-Life 2, have amazing physics these days. They make use of the Havok physics engine.
@Naren more info here on the H.264 GPU encode assist: