ARM creators Sophie Wilson and Steve Furber

The Story So Far: At Acorn, Sophie Wilson and Steve Furber have designed the BBC Micro, basing the machine on the ageing MOS 6502 processor. Their next challenge: to choose the CPU for the popular micro's successor. Now read on... While Sinclair attempted to move upmarket with the launch of the QL in early 1984, Acorn was …

COMMENTS

This topic is closed for new posts.
  1. JDX Gold badge

    Buy ARM?

    I had ARM shares which plummeted in the .com crash. I wonder if they're a good buy, or if Intel is about to mount its first serious step into mobile devices.

    1. Anonymous Coward
      Anonymous Coward

      Re: Buy ARM?

      http://uk.finance.yahoo.com/echarts?s=ARM.L#symbol=arm.l;range=my;compare=;indicator=volume;charttype=area;crosshair=on;ohlcvalues=0;logscale=off;source=undefined;

      You bought at the peak last time and you're wondering if you should buy at the peak now?

      And with Bubble 2.0 getting into full swing too ...

      I'd say look at their 10-year valuation unless you're day trading.

      1. Glenns
        WTF?

        Re: Buy ARM?

        I'm glad I'm not the only one wondering when bubble 2.0 will burst... Instagram for $1 billion!! WTF!!

        They are expecting Facebook to be valued at around $100 billion... for a company with increasing costs and decreasing profits ($205m last quarter). I don't get it :S

    2. Anonymous Coward
      Anonymous Coward

      Re: Buy ARM?

      I bought ARM during the .com bubble, have held on to them, and I'm still waiting for them to get back to where they were then! Just about made it during the last couple of months... think I'll hold on to them a bit longer though.

    3. Christian Berger

      Hard to predict

      I'd say it depends on whether Intel or ARM manages to implement a stable "PC-like" mobile platform, leading the mobile market out of its "home computer" era and into its "PC era", when platforms suddenly _really_ span multiple vendors and you can separate the software from the hardware.

      Currently the mobile world is like home computers used to be. You couldn't be sure that software bugs would ever be fixed. Back then that only meant some minor inconvenience; today it can mean serious security holes. Whereas on a "PC" (or whatever stable hardware platform) you can simply update, upgrade or exchange your operating system without your hardware vendor's consent. It is, in fact, quite easy: you boot from a USB stick or CD-ROM and there you go, running another operating system without touching any stored data.

      This would be the next revolution.

  2. Chika
    Thumb Up

    for not mentioning the name of the person responsible for Acorn's "unravelling".

    Good article, though.

  3. Jefe Mixtli
    Stop

    NOOOOOOO!!!!

    If The Register has one habit that bugs me, it's that their prime articles, and the accompanying banner images, stick around on the right-side bar FOREVER!

    I swear we've been staring at those two guys holding that PARIS plane for MONTHS now.

    Please, don't tell me I'm still going to be looking at this article's banner pic when Christmas comes around!

    (Great articles though)

    1. Anonymous Coward
      Thumb Up

      Re: NOOOOOOO!!!!

      Agreed, they stick around forever. I could swear they do it on purpose - there's nothing more annoying than having a photo of some lego characters regularly on your work screen over a period of months, people get the wrong impression ;)

  4. Mr C
    Thumb Up

    Nice informative article.

    If this article had anything resembling a "like" button, I'd press it now.

    GJ

    1. VinceH

      You could always use the "Rate this article ..." gadget underneath the link for the comments.

      (Disclaimer: The only time I ever do that is by accident, when reading on my tablet.)

      1. Graham Dawson Silver badge

        I'm not alone!

        I do that all the time, it's so very annoying.

        1. ratfox
          Thumb Up

          Re: I'm not alone!

          Oh no, you are not alone. I suspect web designers will learn to place important links right next to ads in the near future.

  5. Eddie Edwards
    Thumb Up

    Superb

    Almost brought a tear to my eye. A beautiful article about a beautiful piece of technology. My jaw dropped when I read that the ARM worked without Vcc applied :)

    1. Will Godfrey Silver badge

      Re: Superb

      Very much enjoyed the article.

      I might be wrong, but I think the Vcc issue was originally flagged up because the test unit would occasionally crash for no apparent reason, so they started to monitor just about everything and discovered the apparent lack of current draw. They then reasoned backwards that it was getting power via its inputs, and that the crash happened when all inputs (data & address) were zero.

  6. ForthIsNotDead
    Pint

    Brilliant!

    I would just like to congratulate the author of this article. Beautifully written. A real credit to yourself, and The Register, sir. Thank you very much.

    It struck me that Hauser, despite not being the technical guy, is as bright as they come. He clearly recognised the talent he had with Wilson and Furber.

    A completely riveting read. Actually, the story of Acorn & ARM would make a very good book. Ditto Inmos, IMHO.

    Have a good day all.

  7. Glenns
    Thumb Up

    Fantastic article

    This is why I come to the Reg: so much nicer to read a well-informed, structured article as opposed to the usual "my dad's bigger than your dad" fanboy rantings on other sites.

    A*

  8. This post has been deleted by its author

    1. Christine Hedley Silver badge

      Quite a few years back now, though evidently not that far back... Don't know the exact date, sorry.

      1. AfternoonTea
        Paris Hilton

        He's been dressing as a woman since the 80's. Not sure when he had the chop though....

        1. Christine Hedley Silver badge

          "He's been dressing as a woman since the 80's. [...]"

          *She* has. Sorry to nitpick, it's a subject that's rather close to my heart.

  9. Eldnah

    s/pus/put/ on page 3

    Great article. Wow ... running on leakage.

  10. Peter Gathercole Silver badge
    Boffin

    IBM ROMP vs. ARM

    The IBM ROMP chip (aka the 801) was never intended to be a general purpose RISC processor. It was intended to power an office automation product (think of a hardware word-processor like WANG used to sell).

    As a result, although it could function as a general-purpose CPU, it was not really that well suited to it. It was never a success because, at the time, IBM could not see justification for entering the pre-Open Systems UNIX world. The RT 6150 and the 6151 were intended as niche systems mainly for education, although they did surface as channel-attached display front ends for CADAM and CATIA running on mainframes (and could actually run at least CATIA themselves). This changed completely with the RIOS RISC System/6000 architecture, where IBM was determined to have a creditable product, and invested heavily.

    In comparison, the ARM was designed from the ground up as a general-purpose CPU. Roger Wilson (as he was then) greatly admired the simplicity and orthogonality of the 6502 instruction set (it is rather elegant, IMHO), and designed the instruction set for the ARM in a similar manner. Because the instruction set was orthogonal (like the 6502, the PDP-11 and the NS320xx family), it made the instruction decoding almost trivial. It also made modelling the ARM on an Econet of BBC Micros (in BBC BASIC, no less) much easier, which allowed them to debug the instruction set before committing anything to silicon.

    They had to make some concessions on what they wanted. There was no multiply-add instruction, which appeared to be a hot item in RISC design at the time; to keep it simple and within the transistor budget, all they could do was a shift-add (the barrel shifter), which, although useful, was a barrier to ultimate performance but great for multi-byte graphics operations.

    It was also simple enough so that they could design the interface and the support chips (MEMC, VIDC and IOC) themselves, achieving early machines with low chip counts.

    This is all from memory of articles in Acorn User, PC World, Byte and other publications. Feel free to correct me if my recollections are wrong.

    1. breakfast Silver badge
      Joke

      Re: IBM ROMP vs. ARM

      If ARM and WANG had worked together they could have created the Jenerally Executable Reliable Kompiler, which would have been a major release.

      1. This post has been deleted by its author

      2. Mike Moyle
        Coat

        @ breakfast -- Re: IBM ROMP vs. ARM

        Ah...! You're thinking of the ShortARM™ project.

    2. Anonymous Coward
      Anonymous Coward

      Re: IBM ROMP vs. ARM

      "There was no multiply-add instruction."

      The ARM processor in the original Archimedes had multiply and multiply-add instructions (MUL and MLA), though I seem to remember them being very slow. Perhaps the designers were disappointed because they didn't have the transistor budget for a fast multiply.

      1. starsilk

        Re: IBM ROMP vs. ARM

        The ARM1 chip had no multiply (or multiply-add) instructions. It was used in the Tube coprocessor for the BBC Micro that the article talks about (the £4500 one..). I actually got to use one briefly, long after it was obsolete...

        Lack of multiply was discovered to be causing performance problems, so a slightly revised chip (ARM2) was used from then on (ie: all the Archimedes series) which had MUL/MLA implemented, although it was a bit of a hack - every instruction took a single clock cycle *except* MUL or MLA, which could take up to 16 clock cycles (still way faster than emulating multiply in software).

        1. Peter Gathercole Silver badge

          Re: IBM ROMP vs. ARM

          @starsilk. Thanks for the correction. I certainly knew about the multiply-add being missing, but I deliberately avoided talking about the multiply instruction being missing, because I just could not remember.

        2. Sandtreader
          Boffin

          Re: IBM ROMP vs. ARM

          MUL & MLA were indeed slow when both sides of the multiplication were variable, but lots of multiplies have a constant on one side, often sparse in bits (e.g. a power of two such as 8, 16 or 256, or a sum of two powers such as 10), and the great trick (of ARM assembler hackers like me and the - at the time - brilliant Norcroft compiler) was to unfold the multiply into shift-adds (one per set bit) using the barrel shifter, one cycle each.
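
          To make the trick concrete, here's a rough rendering of that unfolding in C (the constants and function names are just examples I've picked, not anything out of Norcroft):

          #include <stdint.h>
          #include <assert.h>

          /* Multiply by a sparse constant using only shifts and adds -- the same
             unfolding an ARM assembler hacker or the compiler would emit as one
             ADD Rd, Rn, Rm, LSL #k per set bit, one cycle each on ARM2. */
          static inline uint32_t mul10(uint32_t x)
          {
              return (x << 3) + (x << 1);      /* 10 = 2^3 + 2^1 */
          }

          static inline uint32_t mul320(uint32_t x)
          {
              return (x << 8) + (x << 6);      /* 320 = 256 + 64, e.g. a screen-row offset */
          }

          int main(void)
          {
              assert(mul10(7) == 70);
              assert(mul320(3) == 960);
              return 0;
          }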

          One of my most treasured possessions is an original ARM-1 dot-matrix instruction set description with CONFIDENTIAL scrawled over it in red ink...

      2. Wilco 1
        Happy

        That brings back memories...

        ARM1 didn't have multiply instructions. ARM2 did have MUL and MLA, and used the ALU and shifter to perform multiplies two bits per cycle. It used early termination, so multiplies by small values were much faster. When writing assembler for the ARM2 I always made sure that the smallest value was in the right place, as X * Y would use a different number of cycles than Y * X... Given the low transistor budget it was the right design, just a pain to optimize for. Today even the smallest ARM CPU has a single-cycle multiply.
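
        A rough C model of that early-terminating, two-bits-per-step multiply, purely to show why the small operand belongs on the multiplier side - the step count stands in for cycles, and the details are illustrative rather than the actual ARM2 datapath:

        #include <stdint.h>
        #include <stdio.h>

        /* Consume the multiplier two bits per step and stop as soon as no bits
           remain ("early termination"), so small multipliers finish in fewer steps. */
        static uint32_t mul2bit(uint32_t multiplicand, uint32_t multiplier, unsigned *steps)
        {
            uint32_t acc = 0;
            *steps = 0;
            while (multiplier != 0) {
                acc += (multiplier & 3) * multiplicand;
                multiplicand <<= 2;
                multiplier >>= 2;
                (*steps)++;
            }
            return acc;
        }

        int main(void)
        {
            unsigned a, b;
            printf("%u in %u steps\n", mul2bit(100000, 5, &a), a);   /* 5 as multiplier: 2 steps */
            printf("%u in %u steps\n", mul2bit(5, 100000, &b), b);   /* 100000 as multiplier: 9 steps */
            return 0;
        }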

  11. cdilla
    Thumb Up

    An excellent read

    Thanks for this well presented and interesting article.

    More of this kind of thing!

  12. Steven Gray
    Thumb Up

    Great article

    I had an Archimedes around 1987 - specifically an A3000 - and its power became apparent running some simple Mandelbrot fractal code in BBC BASIC. I had the original code from some magazine listing and had applied it to my A3000's predecessor, a Sinclair QL.

    The QL's SuperBASIC was as wonderful as it was slow – the Mandelbrot set took 24 hours to draw. Even compiled from Pascal code, it still took 8 hours.

    The Archimedes? 45 of your earth minutes. Astounding.

    1. defiler

      Re: Great article

      The A3000 wasn't out until about 1989/1990 - the 1987 models were the A310, A410 and A440 if I recall correctly. I had the A420/1 which came out in 1989, and the A3000 came out soon after. A great piece of miniaturisation, but hamstrung slightly by the lack of a hard drive. 2.5" drives appeared that could fit inside later.

      As for the Mandelbrot drawing, I converted a BASIC program to ARM assembly and then hand-optimised it. The inner loop was 13 instructions long, and the rest of it was just dumping a value to the framebuffer. It could do 320x256 fractals at 5fps (although, to be fair, it mirrored one half, so 2.5fps). Such was the power of hand-optimised ARM code. You could bash out a program like that in half an hour, and then spend a week teasing out every extra clock cycle. And figuring out that not only was inline conditional execution faster than branching, but LT was faster than GE (by one cycle).
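
      The guts of such a conversion boil down to a fixed-point loop along these lines (a sketch in C of the general technique, not my original 13-instruction ARM loop; the 16.16 format and iteration cap are just assumptions):

      #include <stdint.h>
      #include <stdio.h>

      #define FRAC 16                                            /* 16.16 fixed point: no FPU on an ARM2 */
      #define FIXMUL(a, b) ((int32_t)(((int64_t)(a) * (b)) >> FRAC))

      /* Iterate z = z^2 + c until |z|^2 > 4 or the cap is hit; the count picks the pixel colour. */
      static int mandel_iter(int32_t cr, int32_t ci, int max_iter)
      {
          int32_t zr = 0, zi = 0;
          int n;
          for (n = 0; n < max_iter; n++) {
              int32_t zr2 = FIXMUL(zr, zr);
              int32_t zi2 = FIXMUL(zi, zi);
              if (zr2 + zi2 > (4 << FRAC))
                  break;                                          /* escaped */
              zi = FIXMUL(zr << 1, zi) + ci;
              zr = zr2 - zi2 + cr;
          }
          return n;
      }

      int main(void)
      {
          /* one point inside the set and one outside, just as a smoke test */
          printf("%d %d\n", mandel_iter(0, 0, 64), mandel_iter(2 << FRAC, 0, 64));
          return 0;
      }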

      Still that was ARM2/2.5/3, so I guess things have changed since. Now please excuse me - I've been typing this whilst being assaulted by a 3-year-old...

  13. Andrew James

    Great article

    This is the sort of thing I like reading on here. Something that's had some true care taken to make it worthy of its audience.

    And you even managed not to refer to Apple as a Foxconn-rebrander when they were mentioned! Bravo.

    It's staggering to think that the chip that powers billions of phones, cameras, hard drives, tablets, etc started life as nothing more than an accident of design.

    And I agree, I don't think failure is a word you can use to describe Acorn. It didn't fail, it just evolved beyond itself.

    1. Dave Bell

      Re: Great article

      I agree, it wasn't Acorn and the ARM which failed.

      But there is something about the British industrial and financial environment which seems to let the winnings from these works of genius drift away out of reach.

      It's not just globalisation, and some factory that is so expensive to build that there can only be one on the entire planet. And we can't expect to spot the right investment choice every time. But what is it about this country which turns a successful entrepreneur into somebody fronting a TV show that tests how people can run a market stall in Essex?

  14. chipxtreme

    I remember my first ARM chip, the ARM250 fitted to the Acorn A3010, bought for £500 from Dixons, which ran at a massive 12MHz - still a lot faster than the A3000s at school. I then went on to a Risc PC 600, which had an ARM610 chip running at 30MHz; I later replaced the CPU board with the 40MHz ARM710 and eventually put in a 203MHz StrongARM CPU, which I later overclocked to 287MHz. I even put a deposit down on the cancelled Phoebe computer.

    When that fell through and Acorn collapsed, I eventually bought a Castle Iyonix PC, which had a 600MHz Intel XScale 80321.

    I still have the Risc PC in my mother's loft, but the only ARM chips I use these days are in my Samsung Galaxy Nexus.

    1. geekbrit

      I think you probably have quite a few ARM cores around - iPods use them (I'm not sure about today's iPods, but early ones used three), your hard disk drives have them, in fact most of the major components of a PC have ARM in them - network cards, video cards, SSD, even USB flash sticks, and then there's printers, routers, car dashboards, gps, brake systems - ARMs are everywhere!

  15. mark1978

    The sad thing is that Acorn should be occupying the same space that Apple is in now. Instead they let it fall apart, and so Apple is the biggest company in the world, not Acorn.

    1. Steven Jones

      Acorn's demise in perspective...

      The reason the Acorn project fell to pieces was partly because so many journalists, politicians and other influential parties were vociferous in opposing the use of "non-standard" (that is, non-Wintel) architecture machines, in education especially. That, and the domination of business by MS Office and the need to exchange documents, essentially led Acorn into an ever-declining market. There were some great applications written for the Acorn RISC machines (like Sibelius), but it was inevitable that it could not be sustained on that architecture. It's simply impossible to maintain a thriving development community of applications in such a narrow market based largely in one country.

      Don't forget there were many other non-Wintel casualties in the US, and a whole raft of alternative processors. Apple only just survived as a manufacturer of an alternative architecture because of its dominance in some important niche areas, such as the "creative" sectors, along with somewhat grudging support from MS via a port of Office (grudging, because it was something of a sop to US competition authorities). Acorn were never able to do what Apple did with non-computing products, like the iPod. With all its troubles, Apple was much better financed, with a much more supportive investment sector and a larger market.

      As it is, it was inevitable that Acorn would end up, as its name indicated, the seed for a number of small/medium enterprises specialising in niche areas. Competing with Wintel was always going to be near impossible. That ARM emerged from it is something of a miracle, but to keep things in perspective, the vast majority of the income from products using this architecture accrues outside the company. Essentially, ARM does not compete just on the excellence of its low-power processor designs and associated ecosystems, but because it is very, very cheap. ARM is not Intel, which can command income per processor perhaps 100x the royalties ARM achieves.

      1. Anonymous Coward
        Anonymous Coward

        Re: Acorn's demise in perspective...

        MS's continued support of Office for Mac may well be given grudgingly. It's worth pointing out, however, that the two central constituents of Office (Word and Excel) were both released for Mac in 1985 and weren't really what I'd consider ports. The first Word for Mac was actually the first graphical WYSIWYG version of that software, and the first Excel for Mac preceded the DOS/Windows version by two years (that not being released until 1987). That the Mac had Excel is often given as one of the contributing reasons (along with DTP applications) for its continued existence following a rather lukewarm couple of years after its launch.

      2. Giles Jones Gold badge

        Re: Acorn's demise in perspective...

        That's always been the problem. It doesn't matter how good the hardware is: if there's no software available that you wish to use, then it's no good.

        This is what affects Linux for instance. It may be great at many things, but it's not much good for running the popular tools people want to use. Office, Photoshop etc.

        Even the Amiga and ST which did have many cool tools (ST was popular for Cubase in music studios) gave way to the PeeCee. So what hope did the Arc have when even the big US alternatives dried up?

    2. Anonymous Coward
      Anonymous Coward

      I heard

      I had heard (at the time) that one of the big problems Acorn had was that Apple would sue them out of existence if they released RiscOS systems in the US. This basically killed their potential for worldwide sales.

      Does anyone have any links about this?

      1. ThomH

        Re: I heard (@AC)

        I'm pretty sure Acorn systems were sold in the US via Olivetti. I think the main reason that RISC OS never gained the staying power of Mac OS is that Apple did the graphical desktop four years before Acorn and so managed to grab niches in publishing and design that sustained them when Microsoft came along and did the GUI for everyone else. Acorn's educational niche wasn't sustainable because, as noted above, there's a lot of political meddling in education and it's easy to score points with 'business picked Microsoft, we should be training them on Windows'.

        I guess it's a shame but the triumph of ARM makes it difficult to be very upset.

        1. Anonymous Coward
          Anonymous Coward

          Re: I heard (@AC)

          As I recall, Acorn tried selling to education in a small geographical area in the States. Then Apple rushed in and donated large numbers of computers to those specific schools, which of course killed the Acorn initiative.

        2. Anonymous Coward
          Anonymous Coward

          Re: I heard (@AC)

          As I recall, Arthur was out and about in 1987.

    3. Chika
      Facepalm

      Not sure about who you mean here by "they". Acorn was mercilessly picked apart by beancounters for want of a decent BOFH and a lift shaft! All that on the eve of the launch of the fabled "Phoebe" or RISC PC 2 which, from accounts of the lone prototype known to have existed beyond Acorn's end, was a pretty stunning machine by the standard of computers back then.

      It's yet another example of how short-sighted money men are asset-stripping our industry, our inventiveness, heck even our culture! The only bright point is that, on occasion, we can still shock these idiots with what we can do - I suspect that the Raspberry Pi, for example, must be giving some of the big corporates something to think about.

    4. Giles Jones Gold badge

      We don't seem to dominate much, as we don't seem to produce products that have a good reputation for build and engineering.

      Acorn machines were the Linux of the day, less games and commercial software, more educational and development.

  16. Anonymous Coward
    Anonymous Coward

    More, please! I'm really enjoying all these retro-themed articles, especially the Acorn ones.

    Acorn may live on in various other forms, but the day they went belly up was a seminal day in my computing life. I'd started off with an Electron, graduated to a Beeb and then taken a three-year time out in the form of an Atari ST, as the Archimedes was way too expensive for an 11-year-old schoolboy. I remember seeing it - finally - at a computer shop in York, and marvelling at it in wonderment.

    In 1992, I finally got my RISC OS machine, in the form of an A3010, purchased from Dixons. Four years after that, I got a RISC PC 600. It was utterly brilliant. Friends of mine were predominantly PC users, and my RISC PC was light years ahead.

    Then Acorn died. And I was sad. The workstations division closed down for purely financial reasons. I moved to a PC after that, something I was sure I'd never do, and moved to OS X six years back.

    Nothing will ever recapture those golden Acorn years for me. Back then, computing was fun. Now, programming for a living, all the joy has gone.

    I still have two Electrons, my A3010 and my RISC PC. One day I'll get the RISC PC out again and relive those golden days.

    1. Mark Cathcart
      Pint

      History of VM..

      There are a couple of really good history documents on the creation and evolution of IBM's virtualization technology and the development of the IBM mainframe 360 and follow-on 370 architecture. The former was written by Melinda Varian, formerly of Princeton University, the latter by Jeff Gribbin, formerly of Rolls-Royce in the UK. Varian's contains some great pictures and generally less-understood aspects of the evolution of virtualization.

      http://web.me.com/melinda.varian/Site/Melinda_Varians_Home_Page.html

    2. Dan 55 Silver badge

      RISC OS is still a going concern. There will supposedly be a version of RISC OS ported to the Raspberry Pi. I might try it, just because.

      Back then I was a Spectrum fanboi and everything else was crap, but looking back the 'right way' of designing those 80s systems was arguably Acorn's way.

      Can we have an 'old codger' icon, please Reg sir?

  17. This post has been deleted by its author

  18. moonface

    Unsold Acorn Electrons

    I remember there was huge demand for personal computers at Christmas in 1983. It was the Christmas, I remember, when kids amongst the general populace were demanding personal computers as pressies to play games on. I was lucky to get one, as my parents queued for hours outside some independent computer shop in Edgware on Christmas Eve, and I think they only got about 200 in. What I found out when the new school term started was that other parents couldn't get Electrons, so they substituted C64s and Spectrums. The popularity of the Electron, compared to the C64 in particular, never recovered.

    This is confirmed by Wikipedia:

    The Electron was developed during 1983 as a cheap sibling for the BBC Micro with the intention of capturing the low-cost Christmas sales market for that year. Although Acorn were able to shrink substantially the same functionality as the BBC into just one chip, manufacturing problems meant that very few machines were available for the Christmas period — to the extent that some shops reported eight presales for every delivered machine.

    1. trevj

      Re: Unsold Acorn Electrons

      And didn't they also have to compete against CD players that Christmas?

      1. moonface
        Happy

        Re: Unsold Acorn Electrons

        compete against

        Cabbage Patch dolls = no competition for me, not sure about you? !-)

      2. Anonymous Coward
        Headmaster

        Re: Unsold Acorn Electrons

        I think that rumour was spread by Micromen.

        CD players were painfully expensive. The best figure I can find for 1983 is £700. In 2012 that's just shy of £2000. Our local radio station didn't even get a CD player until 1986 and even then they only had one. And that was the most profitable station outside of London in that era.

        So CD players weren't really a factor. I think it was just some dramatisation in Micromen. More realistically Acorn were taking a spanking from Sinclair/Commodore and having to compete against the likes of Dragon, Oric, and Atari. Everyone and their dog had a computer out and the Electron died for the same reasons the Dragon and Oric, etc died.

  19. Anonymous Coward
    Anonymous Coward

    Acorn for schools

    I was a school kid at the time, looking to work in computing. While the Acorn Archimedes machines had a flashy GUI and powerful GPU, I wanted to be using industry-standard software. And the Acorns certainly didn't offer that.

    Windows 3 was where it was at, even if it was absolutely f*cking terrible in comparison.

    1. Dan 55 Silver badge
      Windows

      Re: Acorn for schools

      I believe the more powerful Model Bs or Masters did PC emulation via the Tube. There was also PC emulation software for the Archimedes. Finally, the Risc PC had PC compatibility with a card. None of this would have convinced you if you were really set upon having a proper 80s-to-early-90s masochistic PC experience, though.

      1. defiler

        Re: Acorn for schools

        Don't forget that Aleph1 offered a podule with a Cyrix 486SLC to fit into the 300/400/500 series. Not as tidy or capable as the RiscPC option (which would take a Pentium Overdrive chip - remember them?), but bloody clever, and a big step up from the 80186 offered by the software emulator.

  20. nemo20000
    Facepalm

    Unravelled? Who was pulling the string?

    Acorn didn’t unravel, it was systematically and clinically dissected to liberate its incredibly valuable ARM shares whose dividends were subsidising Acorn’s latter-day navel gazing.

    Acorn shares were worth a fraction of the ARM shares it owned, and cunning bean-counters worked out a cunning wheeze to get much richer more quickly. In the process they destroyed one of the most innovative and influential computer companies in the world.

    Psion committed suicide, but Acorn was murdered.

  21. BlueGreen

    Alternative multicore model?

    This isn't quite the right place to ask, but there is a question I have about future ARM directions, so if anybody (especially if from ARM) is knowledgeable, I'd be very interested.

    But first off, an utter pox on people who start technical threads with questions about share prices.

    So: multicore ARMs seem to use the traditional shared-memory model, which I understand takes some very fast hardware and adds a lot of complexity (and is therefore a rich source of hardware bugs), and it doesn't scale too well, there being only so much memory bandwidth you can plug into a chip. Also, low-level synchronisation primitives are notoriously easy to mess up without either having a lot of experience or using formal tools for verification. Therefore, would a different model based on non-shared memory be viable, perhaps something to support CSP or modern variants?

    Given the relatively small size of an ARM core and its relatively small memory needs (which is I guess the main pressure towards multicore, with shared-memory following as a presumption, cos that's what all the others do) you could plonk many of these independent-memory cores on a chip hopefully very much more easily.

    I guess if you've already got shared-memory core designs then perhaps you would still want to use them but perhaps graft the extra CSP-ish model on the sides. And if one's problem suits it and I'm sure many do, it would be available, which is more than can be said for any other common architecture. Fairly straightforward to add, I expect (ducks hard. Very hard).
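
    To make the sort of thing I'm asking about concrete, here's a toy model in plain C and pthreads: each "core" is a thread with strictly private state, and the only way data crosses between them is an explicit CSP-ish channel (a real part would do the channel in hardware; this is only meant to illustrate the programming model):

    #include <pthread.h>
    #include <stdio.h>

    /* A one-slot rendezvous channel: the only communication path between "cores". */
    typedef struct {
        int value;
        int full;
        pthread_mutex_t m;
        pthread_cond_t cv;
    } channel;

    static void chan_send(channel *c, int v)
    {
        pthread_mutex_lock(&c->m);
        while (c->full) pthread_cond_wait(&c->cv, &c->m);
        c->value = v;
        c->full = 1;
        pthread_cond_broadcast(&c->cv);
        pthread_mutex_unlock(&c->m);
    }

    static int chan_recv(channel *c)
    {
        pthread_mutex_lock(&c->m);
        while (!c->full) pthread_cond_wait(&c->cv, &c->m);
        int v = c->value;
        c->full = 0;
        pthread_cond_broadcast(&c->cv);
        pthread_mutex_unlock(&c->m);
        return v;
    }

    static channel ch = { 0, 0, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };

    static void *core_b(void *arg)            /* "core B": private memory only */
    {
        (void)arg;
        int local_sum = 0;                    /* never visible to the other core */
        for (int i = 0; i < 10; i++)
            local_sum += chan_recv(&ch);      /* data arrives only over the channel */
        printf("core B summed %d\n", local_sum);
        return NULL;
    }

    int main(void)                            /* "core A" */
    {
        pthread_t b;
        pthread_create(&b, NULL, core_b, NULL);
        for (int i = 1; i <= 10; i++)
            chan_send(&ch, i);
        pthread_join(&b, NULL);
        return 0;
    }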

    Thoughts welcome.

    1. Anonymous Coward
      Anonymous Coward

      Re: Alternative multicore model?

      Haven't you just reinvented the Transputer, and maybe Occam with it, implemented in modern silicon with multiple cores per chip?

      Which wouldn't necessarily be a bad thing.

      The complexity of multiple interconnected subsystems on a modern car, for example, together with the minimal likelihood that any volume manufacturer is capable of having them properly analysed, designed, coded, tested and maintained in any realistic way, doesn't bear thinking about. So I don't.

      MISRA? Who dem? http://www.eetimes.com/discussion/beginner-s-corner/4023981/Introduction-to-MISRA-C (now somewhat dated)

      1. BlueGreen

        Re: Alternative multicore model?

        > Haven't you just reinvented the Transputer,

        No, the transputer was designed for CSP from quite a low level. I'm talking about using an existing arch and bolting stuff on

        > and maybe Occam with it,

        No, occam had an implementation of CSP. I believe CSP (or more modern versions, or actors, or whatever) is entirely usable without recreating occam as such, although it would have to have the relevant features.

        > implemented in modern silicon with multiple cores per chip?

        yes but possibly *without* the multicores using shared mem - which would be somewhat novel. But optionally *with*, though that would bring back the complexity I'm trying to avoid (but shared mem is fast, so swings & roundabouts)

        > The complexity of multiple interconnected subsystems on a modern car...

        I'm not sure if you're comparing an interconnect for cpus with that of a car & saying 'one's tough so the other must be', which I don't buy, or are leading on to your MISRA link, which (/pace/) I don't see the relevance of. A lot of work on safe C subsets has been done, never seems to go mainstream.

        You may be implying that C would be too tough & unsafe to use for a design as I've given, but I don't buy that either, nor the assumption that C would be the language of choice. If we're now getting to the point of using JavaScript + CSS in browsers to develop front ends and, unbelievably, back ends (<http://en.wikipedia.org/wiki/Nodejs> - I like JS but this is bloody nuts!), then evidently the time for C-for-everything is well and truly coming to an end (thankfully. Win32 API GUI dev in C ... <shudders from the memory>). I'd happily lose some performance for more expressive, scalable and safe languages for app (rather than system) development.

        To be clear, what I'm really after is reducing hardware complexity, and removing intrinsic limits (shared mem for everything, basically) to scalability maybe comes as a bonus. I don't like complexity; it poisons everything. I'm wondering if one can get very high performance without huge complexity.

        BTW I am not a hardware/low-level guy.

        1. BlueGreen

          Re: Alternative multicore model? (clarification)

          Said I "I'm wondering if one can get very high performance without huge complexity."

          The answer is of course yes, depending on the tradeoffs. It's about those tradeoffs, really.

        2. Anonymous Coward
          Anonymous Coward

          Re: Alternative multicore model?

          "C would be too tough & unsafe to use for a design as I've given"

          C is rarely a language for a design (apologies if I've mis-parsed). It may well be a language for an implementation. Ada *might* arguably be a language for a design, but it is often for various reasons a bad choice for an actual implementation. [continues ad infinitum].

          "I'm not sure if you're comparing an interconnect for cpus with that of a car & saying 'one's tough so the other must be', which I don't buy"

          What do you think the conceptual difference is? Channels and pools from CSP, that's basically all you need in both cases. Separate asynchronous activities with shared data which must respond in a co-ordinated manner to asynchronous external and internal events. The message rates may be different, but if there are things going on more complex than C+P in either case, then trouble likely awaits (as you have rightly noted, complexity is usually the enemy both of productivity and of quality).

          "I'm wondering if one can get very high performance without huge complexity."

          As have a few other people in years gone by, and as rightly pointed out in the previous reply, the answer is "it depends".

          1. BlueGreen

            Re: Alternative multicore model?

            > C is rarely a language for a design (apologies if I've mis-parsed).

            I think I was just unclear. I'm just saying we should be moving away from C if we can. Nothing to do with design of anything (except perhaps libraries). I like C but it's not right for many problems.

            > What do you think the conceptual difference is?

            I think I originally misunderstood what you were saying. I guess your point was that it's a problem someone else has solved, so why should you worry about it.

            To answer your question: huge, I'd say. A car's loom is designed for a specific purpose; a CPU interconnect is arguably as general as possible, as it has to pass messages from x to y where x and y are unspecified in advance. For a car, the requirements are known beforehand. Also, each new generation of cars isn't doubling the number of engines in a chassis, unlike CPUs' multicore growth. And a loom interconnects very different components, whereas cores on a chip are to a large degree as identical as possible.

            All that's assuming I understand you correctly now.

            > Channels and pools from CSP, that's basically all you need in both cases.

            Umm, pools? I know it was a long time ago that I did occam, but pools? Don't remember them; the wiki page doesn't mention them, nor does <http://www.wotug.org/occam/documentation/oc21refman.pdf>. The only ref I can find is for pi-occam, and that concerns memory pools. A pointer would be helpful.

            Anyway, I still don't see your point, sorry. CSP can be implemented fine over shared memory, I agree. Maybe shared memory will scale to an unlimited degree and you know it but I don't. Maybe the hardware requirements of MESI over hundreds or thousands of cores isn't a problem. Maybe the consequent complexity just isn't an issue. Maybe someone who does know something about cache coherency and has issues with the ARM's current behaviour may have a valid opinion <http://www.mail-archive.com/bitc-dev@coyotos.org/msg02516.html> although he's also a rude & arrogant sod. I dunno. I ask questions to learn.

            > As have a few other people in years gone by, and as rightly pointed out in the previous reply, the answer is "it depends".

            Quite, and I'm asking if the problem space can be better matched by a different or augmented design. Just asking.

            1. Anonymous Coward
              Anonymous Coward

              Re: Alternative multicore model?

              "For a car, the requirements are known beforehand."

              That may depend on whether you're building cars or the computer systems inside them. It's nice to be able to use the same controller hardware (engine, ABS, entertainment, instrument, etc) across a range of models without re-inventing the hardware each year. But the software will change, and the software is the important bit, the critical bit.

              "each new generation of cars isn't doubling the number of engines in a chassis, unlike cpu's multicore growth"

              How many interconnected computerised control systems were in a volume-market car or van of ten years ago? Five years ago? This year?

              "A car's loom ..."

              With the greatest respect, if you've not heard of CANbus, you probably should be careful before commenting on computer systems in cars. There was a specific reason I chose cars as my example, and you've highlighted it. I'd love to expand but I've got to fix what the IT department broke over the weekend.

              1. BlueGreen

                Re: Alternative multicore model?

                > With the greatest respect, if you've not heard of CANbus, you probably should be careful before commenting on computer systems in cars

                Walked right into your elephant trap, didn't I? Showed myself up, but if your point was to illustrate that I didn't know much about hardware/very low-level design, and should keep my l33t ARM-designing skills to myself, then it was a bit redundant - I was pretty explicit about my ignorance. But something #smells# wrong with current designs, and I trust my nose.

                I'll have a look at your CANbus link, thanks and if you have any relevant readings (other than H&Patterson), do lay them out.

            2. Anonymous Coward
              Anonymous Coward

              Re: Alternative multicore model?

              CANbus overview for nontechies - illustration of how many interconnected systems there are on a typical modern volume car, and the kind of things they do:

              http://c4owners.org/plugins/faq/faq.php?0.cat.10.57

  22. Anonymous Coward
    Anonymous Coward

    Element 14? Wtf?

    A nice article indeed. Much of which was good to read, but one bit in particular caught my attention.

    Who (other than me) is now interested in the lifecycle of the organisation called Element 14?

    Acorn->Element14->Broadcom->Farnell (massive oversimplification, but)?

    And who's now got half the RPi manufacture and distribution business?

    Who wants to paint a bit more detail of that bit of the picture?

    Networks. After Econet, the best one is the old boy one. And sometimes it works out nicely.

  23. Highlander

    ARM emulator - 1991

    Well, not that this adds much to the story, but way back in the mists of time I wrote an ARM emulator in C. It was binary-compatible with the ARM processor that was in the original Archimedes series of computers. The idea was actually to test the impact of cache memory in a high-performance architecture. To do that I needed complete traces of binaries running on a processor. Being a totally logical student, I decided to build my own ARM emulator and virtual cache memory system.

    I recorded the processor traces for memory access and then fired them at the cache memory simulation. I got what you might call the expected results too - cache memory turns out to be really effective (surprise, surprise). But the fun thing was building the emulation of a complete CPU in C. It ran at 3,000-4,000 instructions per second, which wasn't bad considering all the debug and trace code embedded within it. Of course it never saw the light of day outside my college days, but I still remember coding the thing. The design of the ARM was so clear that you could almost pseudo-code the instruction set.
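
    For a flavour of how small such a thing can start out, here's a bare-bones fetch/decode/execute loop in C - not my original emulator, just a sketch that handles a couple of data-processing opcodes with immediate operands and ignores flags, shifts, loads, stores and branches entirely:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t r[16];                    /* r15 used here as a word-indexed PC */
    static uint32_t mem[256];                 /* toy word-addressed memory */

    static void step(void)
    {
        uint32_t insn = mem[r[15]++];         /* fetch */
        if ((insn >> 28) != 0xE)              /* every ARM instruction is conditional; */
            return;                           /* only AL ("always") handled in this sketch */
        uint32_t opcode = (insn >> 21) & 0xF; /* decode the regular bit fields */
        uint32_t rn = (insn >> 16) & 0xF;
        uint32_t rd = (insn >> 12) & 0xF;
        uint32_t imm = insn & 0xFF;           /* pretend operand2 is a plain 8-bit immediate */
        switch (opcode) {                     /* execute */
        case 0x4: r[rd] = r[rn] + imm; break; /* ADD */
        case 0x2: r[rd] = r[rn] - imm; break; /* SUB */
        case 0xD: r[rd] = imm;         break; /* MOV */
        default:  break;                      /* everything else left out of the sketch */
        }
    }

    int main(void)
    {
        mem[0] = 0xE3A0002A;                  /* MOV r0, #42  (hand-assembled) */
        mem[1] = 0xE2801005;                  /* ADD r1, r0, #5 */
        step(); step();
        printf("r0=%u r1=%u\n", r[0], r[1]);  /* prints r0=42 r1=47 */
        return 0;
    }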

    The minds behind the original design are some of the best in the industry. Truly.

    1. BlueGreen

      Re: ARM emulator - 1991

      Take a gander at the Alpha instruction set if you want another kind of elegant.

This topic is closed for new posts.
