Another successful night at the Paris JUG, this time on Concurrency & Performance. For once we had a guest star: Kirk Pepperdine, a seasoned practitioner on projects with performance issues (he has worked on Cray machines, yes sir!). He attracted the largest crowd yet for the Paris JUG: more than 80 people!
To be honest, many of the points Kirk made went over my head. But here is the main take-away:
with ever more cores in a single processor, concurrency issues are only going to get worse, and we, as developers, simply have to understand them better
In a nutshell, the ‘Quake strategy’ no longer works (the Quake strategy: promise your client a 2x speed-up, spend 18 months playing Quake, then go and buy new hardware that delivers it for you).
While developers could until now ‘wait’ for new, faster hardware to appear, they will now have to keep up with the hardware guys by doubling their parallelism every 18 months! In fact, hardware with as many as 768 cores is already available. And the worst hit are database developers. The database has become the bottleneck of server applications, and locks (which require centralized coordination) are killing them: databases cannot easily take advantage of multiple cores.
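To illustrate the centralized-lock point (this is my own sketch, not an example from the talk): when every thread must pass through one shared lock, adding cores adds contention rather than throughput, whereas a lock-free primitive like `AtomicLong` lets each core make progress with a hardware compare-and-swap.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical illustration: a counter guarded by a single shared lock
// versus a lock-free AtomicLong. Both produce the same total, but the
// locked version serializes every increment through one point.
public class LockBottleneck {
    static long lockedCount = 0;
    static final Object lock = new Object();
    static final AtomicLong atomicCount = new AtomicLong();

    static final int THREADS = 4;
    static final int PER_THREAD = 100_000;

    public static void main(String[] args) throws InterruptedException {
        // Every core queues on the same monitor: the "centralized system".
        run(() -> {
            for (int i = 0; i < PER_THREAD; i++) {
                synchronized (lock) { lockedCount++; }
            }
        });
        // Lock-free: each increment is a hardware CAS, no central lock.
        run(() -> {
            for (int i = 0; i < PER_THREAD; i++) {
                atomicCount.incrementAndGet();
            }
        });
        System.out.println(lockedCount + " " + atomicCount.get());
    }

    static void run(Runnable task) throws InterruptedException {
        Thread[] ts = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) ts[i] = new Thread(task);
        for (Thread t : ts) t.start();
        for (Thread t : ts) t.join();
    }
}
```

A database lock manager is of course far more sophisticated than a `synchronized` block, but the scaling problem is the same shape: the more cores you add, the more time they spend waiting on the shared coordination point.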
Another point: more than ever, we need to test on the target hardware. The strategies processor manufacturers use to optimize processors and low-level caches differ enough that we will see significant performance differences between Intel, AMD and Sparc. So ‘write once, test anywhere’ is also going to get worse.
Amid all this gloom, there is actually some good news. Some smart people have come up with real solutions for clustering. Functional languages such as Scala and State-Driven Architecture are also options that should be investigated. And there is money to be made in data grid solutions for databases; witness the recent acquisition of Tangosol by Oracle.
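One reason functional languages help (again my own sketch, not Kirk's example): they favor immutable data, and an immutable object can be shared between threads with no locking at all, because no thread can ever observe it half-updated.

```java
// Hypothetical example: an immutable value object in the functional style.
// "Mutation" returns a new instance instead of changing this one, so the
// object can be published to other threads without any synchronization.
public final class Point {
    private final int x, y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public int x() { return x; }
    public int y() { return y; }

    // Returns a fresh Point; the original stays valid for concurrent readers.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```

In Scala this style is the default (immutable `case class`es, `val` bindings), which is part of why functional languages are an interesting answer to the multi-core problem.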
For more details: