ACCU
Russel Winder
At the ACCU Conference back in April Russel
gave one of the keynotes on the future of computer architectures in the face of
the stagnation in single-processor speeds.
The premise of Russel’s talk was similar to that of his keynote, but whereas previously the bias seemed to be towards hardware, there was a larger software and programming language element to this presentation, which made the trip equally worthwhile for us conference attendees. By the time I arrived Russel was in full swing, describing the multi-core revolution and how threads are merely a distraction on the way to the true goal of independent computation units that consist of a straight pairing of CPU and memory. This was well and truly rammed home by the continual reuse of one presentation slide showing two columns of CPU/RAM pairs connected by an “interconnect”. It has been apparent for some time that the chasm between CPU and memory performance is widening rapidly, and multi-core CPUs will only fan the flames. Russel contends that although the likes of Intel have a 48-core chip in the wings, the memory contention this introduces means that it’s possible 16 cores could well be the ceiling (in compute-bound scenarios).
On the software side the picture painted was even less rosy, as Russel pointed out that none of the major programming languages (e.g. Java, C++, Ruby, Python) have any natural support for the programming models of the future. Yes, there are libraries designed to make the task less painful, but you still have to get your feet wet to some degree. I’ve never quite been sure what the actual distinctions are between the Actor, CSP and DataFlow models, so I was pleased that Russel spent the time spelling this out. Of course, the code we write in our high-level languages still needs to execute within some runtime environment that is itself likely hosted by an Operating System. And the picture looks no better here either, as the key players are all monolithic architectures built on the inherent assumption that all memory is globally addressable. One answer, it seems, may be in the form of hypervisors and micro-kernels, where each computation unit runs its own (possibly different) OS. Naturally Russel was quick to point out that none of this is new, it’s just that most of us have managed to avoid it until now.
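To give a flavour of what these models have in common (this sketch is mine, not Russel’s): the unifying idea is that each computation unit owns its state outright and communicates only by passing messages, never through shared mutable memory. A minimal actor-style illustration in Python, with an invented summing worker:

```python
# Illustrative actor-style sketch: the worker's state is private and the
# only way to interact with it is to post messages to its inbox.
import threading
import queue

def summing_actor(inbox, outbox):
    """Accumulates the numbers it receives; replies with the total on 'done'."""
    total = 0  # private state - no other thread can touch it
    while True:
        msg = inbox.get()
        if msg == "done":
            outbox.put(total)
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=summing_actor, args=(inbox, outbox))
worker.start()

for n in range(1, 11):
    inbox.put(n)          # send a message; no locks, no shared variables

inbox.put("done")
result = outbox.get()     # the reply message: 1 + 2 + ... + 10 = 55
worker.join()
```

Roughly speaking, the models differ in the plumbing: actors address each other directly and queue messages asynchronously, CSP rendezvouses over named channels, and dataflow wires units together so each fires when its inputs arrive.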
What made this presentation an improvement on the keynote was the acknowledgement of how all this affects those outside the world of Super Computing. Yes, it all makes sense for the big number crunchers like meteorology and quantum physics, but how does this affect the man on the street whose PC spends the majority of its time waiting for user input? It probably won’t, at least for a traditional PC setup, but a move to a Thin Client model might provide the kind of catalyst whereby small chunks of processing would need to be farmed out, e.g. spell checking paragraphs in parallel. Interesting times lie ahead, that’s for sure.
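The spell-checking example can be sketched in a few lines: each paragraph is an independent unit of work with no shared state, so it can be handed to a separate worker. The toy dictionary and paragraphs below are invented purely for illustration:

```python
# Hedged sketch of farming out spell checking paragraph-by-paragraph.
from concurrent.futures import ProcessPoolExecutor

# A tiny stand-in dictionary for illustration only.
DICTIONARY = {"the", "cat", "sat", "on", "a", "mat"}

def misspellings(paragraph):
    """Return the words in one paragraph that are not in the dictionary."""
    return [w for w in paragraph.lower().split() if w not in DICTIONARY]

paragraphs = [
    "the cat sat on the mat",
    "a catt sat on teh mat",
]

if __name__ == "__main__":
    # Each paragraph is checked independently, so the pool can spread
    # the work across however many cores (or machines) are available.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(misspellings, paragraphs))
    print(results)  # [[], ['catt', 'teh']]
```

Because the paragraphs share nothing, the same shape of code works whether the workers are local cores or machines behind an interconnect.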
Chris Oldwood
24/01/2011
Bio
Chris started out as a bedroom coder in the 80s, writing assembler on 8-bit micros. These days it’s C++ and C# on Windows in big plush corporate offices. He is also the commentator for the Godmanchester Gala Day Duck Race and can be contacted via gort@cix.co.uk.