Here's the multi-core man coding robots, 3-D worlds and Wall Street

"You have to do some really good work and become famous." That's what Stanford President John Hennessy said would be required if then associate professor Kunle Olukotun wanted to secure tenure. So, Olukotun set after that goal with some ground-breaking work in the field of multi-core processors. His research helped form the …

COMMENTS

  1. Dave Ashe
    Thumb Up

    Interesting

    Thanks, that was very interesting.

  2. Niall
    IT Angle

    immense challenges

    Er... not really. Multi-threaded coding has been around for many years and has been an integral part of server-side Java since well before there were multiple cores to run it on.
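
    For what it's worth, the java.util.concurrent machinery that has shipped with the JDK since Java 5 makes fanning work out across threads routine. A minimal sketch (my own toy example, nothing from the article):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ParallelSum {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(4);
                List<Future<Long>> partials = new ArrayList<Future<Long>>();

                // Fan out: four independent chunks of work.
                for (int chunk = 0; chunk < 4; chunk++) {
                    final long lo = chunk * 250000L;
                    final long hi = lo + 250000L;
                    partials.add(pool.submit(new Callable<Long>() {
                        public Long call() {
                            long sum = 0;
                            for (long i = lo; i < hi; i++) sum += i;
                            return sum;
                        }
                    }));
                }

                // Fan in: collect the partial results.
                long total = 0;
                for (Future<Long> f : partials) total += f.get();
                System.out.println("sum = " + total);
                pool.shutdown();
            }
        }

    Each chunk is independent, so there's nothing to coordinate; that's the easy case.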

  3. Ashlee Vance (Written by Reg staff)

    Re: immense challenges

    So, you've got the new 128-way desktop from Dell too then? Or did you go with Apple's iFrame?

  4. Pete Wilson

    Immense challenges?

    Eh?

    There's a big difference between

    - I don't know how to write parallel programs

    - my program is sequential and I don't know how to convert it to be parallel

    - my program is sequential and no-one will give me an auto-parallelising tool

    - writing parallel programs is hard in and of itself

    - there are no good languages for writing parallel programs

    - shared memory is an idiot's choice of parallel programming and leads to all sorts of difficulties, even with transactional memory (see the toy race below)
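
    To make that last point concrete, here's the classic toy race in Java (my own example, not from the article). Two threads bump an unsynchronized counter, and the total is almost never 2,000,000:

        public class Race {
            static long counter = 0; // shared, unsynchronized

            public static void main(String[] args) throws InterruptedException {
                Runnable bump = new Runnable() {
                    public void run() {
                        // counter++ is a read-modify-write: three steps, not one,
                        // so increments from the two threads can interleave and be lost.
                        for (int i = 0; i < 1000000; i++) counter++;
                    }
                };
                Thread a = new Thread(bump);
                Thread b = new Thread(bump);
                a.start(); b.start();
                a.join(); b.join();
                System.out.println(counter); // almost never 2000000
            }
        }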

    As an aside, anyone who's written Verilog (or VHDL) to implement a pipelined processor (or similar) has written a parallel program. Lots of folk seem to do this as part of daily life.

    And anyone remember the Inmos transputer? Eh, like any worthwhile British innovation, before its time and crippled by poor decisions all over...

    -- P

  5. ROlsen

    Sun Not First

    "Sun led with the Afara-based Niagara line of processors, and now every major chip company has both "regular" multi-core chips and research underway into far more radical designs"

    IBM's multi-core Power4 was out years ahead of any multi-core CPU from Sun/Intel/AMD.

  6. Ashlee Vance (Written by Reg staff)

    Re: Sun Not First

    Power4 was dual-core and arrived at about the same time Afara was working on eight-core chips.

    I certainly consider Sun the leader in the multi-core era, as it was the first mainstream vendor with a very, very aggressive design thanks to Afara.

    If you want to count dual-cores as multi-core, so be it.

  7. ROlsen

    Can Sun Keep Up?

    "I certainly consider Sun the leader in the multi-core era, as it was the first mainstream vendor with a very, very aggressive design thanks to Afara."

    I wouldn't disagree that Sun has been the most aggressive. At the same time, though, my thinking has really changed on all of this over the last few months as I've looked for an inexpensive way to get lots of processing power for a highly parallel application. I looked at multiple x86, Cell, Sun, FPGA, etc. and finally found the answer (for me): the GPU, in this case NVIDIA CUDA.

    Clearly the battle is on as CPU and GPU head in each other's direction. I seriously wonder whether Sun can compete, given its lack of volume compared to Intel and the GPU manufacturers.

  8. Destroy All Monsters Silver badge
    Boffin

    ...but I hear Erlang is parallelization-friendly

    I suppose everybody has heard about it by now.

  9. Anonymous Coward
    Anonymous Coward

    transcript?

    Sounds interesting, any chance of a transcript? Information's much easier to absorb in the form of written text than it is in audio.

  10. Anonymous Coward
    Anonymous Coward

    Textual transcription, please?

    There are also deaf software developers, who would like not to be left behind as technology advances. They, too, would like to be able to learn from the Kunle Olukotun interview. For that, however, they need a textual transcription of it.

    Please?

    Thanks!

  11. Louis Savain

    The Future of Parallel Programming Is Non-Algorithmic

    The rabid Turing Machine worshipers on Slashdot are censoring my comments as usual. LOL. So, I'll post my message here. It is a good thing The Reg believes in freedom of expression.

    It is time for professor Olukotun and the rest of the multicore architecture design community to realize that multithreading is not part of the future of parallel computing and that the industry must adopt a non-algorithmic model (see link below). I am not one to say I told you so, but one day soon (when the parallel programming crisis heats up to unbearable levels) you will get the message loud and clear.

    http://rebelscience.blogspot.com/2008/05/parallel-computing-why-future-is-non.html

  12. Destroy All Monsters Silver badge
    Thumb Up

    Absolutely Hot!

    ...watch out for Sarah Connor though.

    "Multi-threaded coding has been around and an integral part of server-side Java for many years."

    This part of Java is filed under "the automated horrible foot mangler". And DIY multithreading in a "managed server-side environment" is a no-no anyway. DO NOT WANT.

  13. BlueGreen

    @Louis Savain

    >censoring my comments

    Good on them. Say something sensible, though, and maybe they'll stop?

    As before, you don't understand what CSP is, nor what a programming model is and how it differs from an implementation, you sneer at academics, and - the pièce de résistance - you try to put responsibility for validating your proposed solution onto everyone else by demanding that they "... can always write me a check for a few million bucks and I'll deliver a COSA-compliant multicore CPU (that will blow everything out there out of the water), a COSA OS, a full COSA desktop computer and a set of drag-and-drop development tools for you to play with." From our previous chat on <http://www.theregister.co.uk/2008/06/06/gates_knuth_parallel/comments/>.

    As before, how about *you* produce something modest yet working so we can all have a gander and decide. An actual executing quicksort as outlined on your site, well, that would be... more than just a ton of empty talk.

    And what on earth do you mean by "non-algorithmic"? I think you're confusing algorithms with von Neumann architectures.

  14. Anonymous Coward
    Anonymous Coward

    Everything returning to "the server" will never happen again.

    The idea that "the Cloud" is a resurrection of the timesharing terminal server of yesteryear keeps cropping up, as it did at the end of this interview. Neither the interviewer nor the interviewee mentioned the obvious objection to the idea: communication, especially Internet communication, is not reliable enough nor broad band enough to make that idea even remotely feasible.

    Across most of the developed world, broadband is exceedingly unreliable, with service interruptions of multiple hours still a regular occurrence. I can't imagine having my software development environment 100% beholden to the phone company or the cable company for access. That duopoly made in hell already causes me enough trouble, without them being able to totally disable every function of my desktop PC.

    Add to that the fact that so-called "broadband" connections are an asymmetrical, throttled pittance of an excuse for a high-speed link everywhere except Sweden, and it becomes even less likely that we'll ever cede control of our own destinies back to some central computing provider.

    Communications would have to be radically different before the idea is even worth mentioning again, and I question whether we'd give up what we have today even if the situation did dramatically improve. In the end, latency gets you every time. No matter how far we reduce packet-switching times, the speed of light brooks no arguments. Internet packets take two orders of magnitude more time to make their round trip than interactions with the resources in your desktop. That's a heavy penalty, no matter how you slice it. At the most basic level, Internet packets take seven to eight orders of magnitude more time than your local processor: single-digit nanoseconds vs. 100+ milliseconds. We'll never give that up.
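
    Back-of-envelope on that last figure (my own rough numbers):

        local processor operation:  ~3 ns   = 3e-9 s
        Internet round trip:        ~100 ms = 1e-1 s
        ratio: 1e-1 / 3e-9 ≈ 3 x 10^7, i.e. between seven and eight orders of magnitude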

    Dr. Olukotun is working on a problem that will apply both in data centers and on our here-to-stay desktops. When he and the rest of the folks at Stanford put "Pervasive" in the name of the lab, they knew what they were doing. It's easy to imagine computing becoming even more diffuse and widespread than it is now. Not only will we keep our desktops, but all kinds of new processors will start cropping up. Autonomous vehicles are only the beginning. There are enough people on the planet demanding resources that the old dumb mechanical systems of yesteryear are going to go through a forced upgrade in order to squeeze out inefficiencies. Those systems aren't too bad already, so the only slack remaining will be in making them more adaptive to the humans who interact with them. Adaptive means electromechanical, with sensors and processors.

    Talk about pervasive.

    I look forward to Dr. Olukotun's software making my job easier.

  15. Steve Hochschild

    one approach is dataflow

    In an article in last month’s SD Times the challenge facing the technology industry was succinctly laid out:

    “I wake up almost every day shocked that the hardware industry has bet its future that we will finally solve one of the hardest problems computer science has ever faced, which is figuring out how to make it easy to write parallel programs that run correctly,” said David Patterson, professor of computer sciences at the University of California at Berkeley.

    We have seen that there are a number of approaches to trying to deal with this challenge. In our case, we have reached back to use an application architecture first implemented decades ago -- DataRush is a Java implementation of the dataflow approach.
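
    To give a flavour of the style (a toy sketch of my own, not DataRush's actual API): each operator fires as soon as its inputs arrive, and operators are wired together with queues rather than sharing state.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // Two dataflow "operators" connected by a bounded queue: a generator
        // node and a transform node, each running as data becomes available.
        public class TinyDataflow {
            static final Integer EOF = Integer.MIN_VALUE; // end-of-stream marker

            public static void main(String[] args) throws InterruptedException {
                final BlockingQueue<Integer> wire = new ArrayBlockingQueue<Integer>(16);

                Thread generate = new Thread(new Runnable() {
                    public void run() {
                        try {
                            for (int i = 1; i <= 10; i++) wire.put(i);
                            wire.put(EOF); // signal downstream that we're done
                        } catch (InterruptedException e) { }
                    }
                });

                Thread square = new Thread(new Runnable() {
                    public void run() {
                        try {
                            for (Integer v = wire.take(); !v.equals(EOF); v = wire.take())
                                System.out.println(v * v);
                        } catch (InterruptedException e) { }
                    }
                });

                generate.start(); square.start();
                generate.join(); square.join();
            }
        }

    Because each node sees only its queues, you can add more nodes (or more copies of a node) without touching shared state.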

    "The technical problems were all solved long ago with the invention of dataflow programming. What remains is to educate programmers and to bring dataflow ideas into mainstream languages."

    Peter Van Roy, co-author of "Concepts, Techniques, and Models of Computer Programming."

    steveh@pervasivedatarush.com

  16. BlueGreen

    @Steve Hochschild

    dataflow seems to me more of a programming model generally than a genuinely scalable implementation technique. I guess it takes many forms, including functional programming, Linda (javaspaces?), etc.

    None yet scales well at all levels, from instruction level to MPP level. I think they should be treated as implementations where they fit, but as programming models where they don't (and dealt with using the right tools, which AFAIK don't exist outside the lab).

    In other words dataflow's not a magic wand, sadly.

  17. BlueGreen

    @Steve Hochschild

    BTW one totally weird approach that is almost never mentioned is that of Path Pascal.

    It's a limited trick, but so odd and novel I'd like to note it here (caveat: I've never used it or anything like it; it's just so inventive it bears flagging up). Google it.

  18. JamesF

    @BlueGreen

    "dataflow seems to me more of a programming model generally than a genuinely scalable implementation technique. I guess it takes many forms, including functional programming, Linda (javaspaces?), etc."

    It is a genuinely scalable implementation. Check out http://java.sys-con.com/node/523054, a dataflow success story that was published in the JDJ.

This topic is closed for new posts.