Arm Inside: Is Apple ready for the next big switch?

One day we'll look back and wonder why it took PCs so long to move from RISC chips that had to pretend to be CISC chips to RISC chips that didn't have to pretend to be anything.

From CISC to RISC

The Apple Mac made its debut in 1984 running Motorola's 68k chips, then the most efficient in the PC industry. Apple switched to …

  1. trevorde Silver badge

    When you only have 10% of the desktop/laptop market, and don't care about backward compatibility, you can do what you like

    1. Dan 55 Silver badge

      If they didn't care they wouldn't have made developers upload bitcode to their App Store.

      You might need to wait a bit before non-App Store apps and Steam games are available on the new machines, though. I guess universal binaries will return for third-party developers, but Rosetta probably won't.

      1. Anonymous Coward
        Anonymous Coward

        @Dan 55

        Bitcode is an intermediate format, but it is still CPU-architecture-dependent and not nearly sufficient to translate x86 to arm64. The reason Apple required it was to allow recompilation if optimizations broke something or improved enough to make it worth it.

        1. ThomH

          Re: @Dan 55

          I think bitcode is merely alignment plus calling convention plus data size dependent. You could use it to port to a different instruction set if all of those things were the same.

          That being said, if Apple didn't care about backwards compatibility then why did it expend so much effort on the 68k emulator and on Rosetta? The company even skipped the very first PowerPCs because their instruction cache wasn't quite large enough to fit the 68k emulator no matter what they did, and so they didn't produce acceptable performance with 68k applications.
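          A minimal Rust sketch (Rust, like Swift, compiles through LLVM) of what "data size dependent" means in practice: sizes and alignments are fixed by the target the front end compiled for, so they are already baked in by the time the intermediate output is emitted. On x86_64 and arm64 the values below happen to match, which is why retargeting between those two is at least conceivable; against a target with different sizes it isn't.

          ```rust
          use std::mem::{align_of, size_of};

          // A C-layout struct whose size and alignment depend on the compilation
          // target: `usize` and the raw pointer are 8 bytes on a 64-bit target and
          // 4 bytes on a 32-bit one. A compiler front end has committed to these
          // values by the time it emits its intermediate representation.
          #[repr(C)]
          struct Header {
              tag: u8,
              len: usize,
              payload: *const u8,
          }

          fn main() {
              println!(
                  "Header: size = {} bytes, alignment = {} bytes",
                  size_of::<Header>(),
                  align_of::<Header>()
              );
          }
          ```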

      2. P. Lee

        >You might need to wait a bit before Non-App Store apps and Steam games are available on the new machines though.

        Unless it's basically two systems in a box, with ARM picking up all the lower-power/always-on work and x86 running high-power applications, with a fast interlink between them. You could "sleep" the x86 side while keeping ARM running to save power.

    2. Sandtitz Silver badge
      Mushroom

      bootcamp?

      There are still those who want to run Windows on their Macs. Apple could do x86 emulation -> Intel would go thermonuclear on them.

      1. Mad Hacker

        Re: bootcamp?

        It's called VirtualBox. Been there, done that.

        1. macjules

          Re: bootcamp?

          VirtualBox is for the 'can't pay, won't pay' brigade. VMWare and Parallels are much, much faster.

          1. Thomas Wolf

            Re: bootcamp?

            Is there a link that compares performance for desktop VMs? When I worked at Lenovo (and used their WS-class laptop) I used VMWare. Now I'm using an MBP with VirtualBox - and don't see any performance differences in similar use cases. I realize my experience may not be representative - thus my question.

            VMWare is definitely a much better solution if you have to manage many VMs on servers.

            1. ThomH

              Re: bootcamp? @Thomas Wolf

              The main difference I've seen is that VirtualBox supports only either software rendering or passthrough of an antiquated version of OpenGL. It's pretty easy to demonstrate the difference: on my MacBook, if I enable accelerated rendering then no browser supports WebGL. If I disable it then they run WebGL in software.

              VMWare supports passthrough of relatively modern versions of OpenGL. So WebGL is accelerated. As is the desktop compositor I happen to use, which makes a massive difference for ordinary productivity if, like me, that means X11.

          2. Griffo

            Re: bootcamp?

            Or for those who want the hardware, but don't want to run OSX...

            I have booted into OSX maybe twice in the last 4 years on my iMac.

            1. Dave 126 Silver badge

              Re: bootcamp?

              And of course OSX was made from NeXTstep, which itself was designed to be ported across CPU architectures.

          3. joeldillon

            Re: bootcamp?

            None of these are going to be in any way fast if they are actually emulating the x86 instruction set on ARM, hth. An x86 VM on x86 is a very different proposition.

            It can be done (and was, when Apple went from 68k to PowerPC for example) but it's not as simple as 'lol just run Virtualbox'.

            1. Dave 126 Silver badge

              Re: bootcamp?

              Genuine questions for a more technical bear than me:

              Is it feasible to have OSX run on ARM but offload some tasks to an x86 processor as required? Or:

              Is it feasible for OSX to run on ARM, and then Xwindows (or whatever the equivalent is) into an x86 instance of OSX that is spun up when required?

              Either way, the ARM chip is primary, but the computer contains an x86 chip to be used occasionally.

              It seems to me that the applications that people use 80% of the time (Safari, email, LightRoom etc) are those that can be quickly compiled for ARM by Apple or others, leaving the x86 chip for legacy applications on occasion.

        2. T. F. M. Reader

          Re: bootcamp?

          It's called VirtualBox.

          Does it run on ARM? Even if it does, I don't see how one could run an x64 guest on an ARM host with anything approaching native performance. When your host and guest architectures are the same, most instructions (except privileged ones) are executed directly on the hardware. With different architectures you would have to translate. Slowly. At least until MSFT port Windows to ARM - El Reg reported in the past that they were on their way to that noble goal (though probably not because of Apple). Linux would be simpler, by the way.
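          To make "translate. Slowly." concrete, here is a toy sketch of what an emulator has to do for every guest instruction. Real translators such as Rosetta or QEMU JIT-compile whole blocks rather than interpreting one opcode at a time, so this only illustrates the overhead, not how they are actually implemented.

          ```rust
          // A toy guest "CPU" with two registers and three opcodes. Every guest
          // instruction costs a fetch, a decode (the match) and a host-level
          // operation -- overhead that doesn't exist when host and guest share
          // an architecture and instructions run directly on the hardware.
          #[derive(Clone, Copy)]
          enum Op {
              AddImm(u64), // r0 += imm
              MovToR1,     // r1 = r0
              Halt,
          }

          fn run(program: &[Op]) -> (u64, u64) {
              let (mut r0, mut r1) = (0u64, 0u64);
              let mut pc = 0;
              loop {
                  match program[pc] {          // fetch + decode
                      Op::AddImm(imm) => r0 = r0.wrapping_add(imm),
                      Op::MovToR1 => r1 = r0,
                      Op::Halt => return (r0, r1),
                  }
                  pc += 1;                     // advance the guest program counter
              }
          }

          fn main() {
              let program = [Op::AddImm(2), Op::AddImm(40), Op::MovToR1, Op::Halt];
              println!("{:?}", run(&program)); // prints "(42, 42)"
          }
          ```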

          However, given Apple's history, I would not put it past them to switch to ARM (partly) to prevent virtualization. Years ago, when Apple kit was still PowerPC and Intel and AMD did not have virtualization support in HW (this makes it pre-2006), I grabbed an Apple computer with an idea to install Linux and play with LPARs, etc. I discovered, to my surprise then (the surprise dissolved very quickly), that the otherwise perfectly normal PPC 970 (I think) had virtualization support disabled on Apple's demand. Once they switched to Intel whose arm [sic!] they could not twist too painfully to disable the VT-x/VT-d/friends they had just introduced, virtualization became possible. Maybe now it's time for ARM-twisting of a new kind?

    3. J. R. Hartley

      Acorn, we hardly knew ye :'(

  2. defiler

    Problems looking for a solution?

    I think you mean solutions looking for a problem, such as AI and Face ID...

    1. Dave 126 Silver badge

      Re: Problems looking for a solution?

      If you read AO's sentence in context, you'll see that is what he meant and what he wrote.

      In any case, creativity stems from waste and redundancy - it's more fun to have an excess of processor power than vice versa!

    2. annodomini2

      Re: Problems looking for a solution?

      I just read battery draining..

  3. IanTP
    Pint

    Will it run Crysis?

    Sorry, having a daft day here, have an almost midweek beer on me :)

    1. Korev Silver badge
      Joke

      Re: Will it run Crysis?

      Can't you see the ARM that bad jokes like this cause?

    2. defiler

      Re: Will it run Crysis?

      To be fair, Tom's Hardware revisited Crysis recently, and concluded that yes. Yes, it will run Crysis. With max settings, at 4k.

      If you're running a GTX1080Ti...

      1. Dave 126 Silver badge

        Re: Will it run Crysis?

        The Touchbar on the MacBook Pro has been made to run Doom*... it's not Crysis but ya gotta start somewhere!

        * What hasn't been made to run Doom?

  4. Korev Silver badge
    Go

    Speed

    Lightroom on my gen 1 iPad Pro runs very quickly and feels more responsive than the "classic" client running on my 32GB Core i5 PC.

    Assuming that other ISVs can write similarly fast code for ARM, this could be very good for the end users. If they're already targeting iOS devices then that might be a head start for them.

    1. Dave 126 Silver badge

      Re: Speed

      And one assumes that those vendors with capable iPad apps won't have much trouble rolling out an ARM Macbook application - it'd largely just be changes to the UI.

  5. chivo243 Silver badge
    Meh

    meh

    as long as it is FAST... saves on power... get two hamsters inside for all I care.

    1. Anonymous Coward
      Anonymous Coward

      Re: meh

      I thought the current trend is to have 4 big hamsters and 4 little hamsters.

    2. Anonymous Coward
      Joke

      Re: meh

      Or they slice up one hamster into two and call it hyperhamstering...

  6. werdsmith Silver badge

    "longer battery life and lower power consumption."

    Ooh you get both! Imagine that!

    1. Adam 1

      Apparently the other mob are going for better performance and less waiting around.

    2. david bates

      Or same battery life but a couple of mills thinner...shall we place bets?

    3. DeeCee

      Or a smaller battery, like on the iPhone 8 (battery life is still good, but I would prefer a slightly fatter phone with a bigger battery).

  7. Steve Davies 3 Silver badge
    Thumb Up

    Well Done

    Mr O for a nice informative article without the usual bad mouthing of Apple.

    If what you say is true then the sweetener for Intel must be the increasing use of Intel modems in iPhones.

    Sadly it makes building Hackintoshes a whole lot more difficult. Still, the one I built last month should last 3-4 years.

    1. Charlie Clark Silver badge

      Re: Well Done

      Much as I like ARM I think the article is hinting at proposed "secure boot" extensions for MacOS to stop users doing what they want. :-/

  8. R 11

    Reminder of Acorn Advert

    I'm reminded of Acorn's advert in The Times following Apple's move to RISC with the PowerPC in 1994:

    *************************************************************************

    [white text on black background, large text]

    As a founder member,

    Acorn is delighted

    to welcome Apple

    to the RISC Club.

    [2/3 of way down page, switch to black on white, smaller text]

    After 11 years of development and 7 years of production, we at Acorn are

    still marvelling at the sheer power, performance and potential of 32-bit

    RISC technology.

    Our ARM 32-bit RISC processors have delivered these capabilities to our

    many customers in education, the home and industry worldwide, in our

    products since 1987.

    So it comes as little surprise to hear that Apple's new desktop range

    also incorporates 32-bit RISC technology.

    [large italic text, stands out prominently]

    Oh well. Better late than never.

    ...

    https://groups.google.com/d/msg/comp.sys.powerpc/I2AlOpqdSik/KbTTGJbAAVoJ

    *************************************************************************

    Looks like Acorn gets the last laugh.

    1. Frank Zuiderduin

      Re: Reminder of Acorn Advert

      Pity Acorn ceased to exist in 1998.

      Their RISC OS operating system still lives on, but it's nowhere near as advanced (compared to other OSs) as it was in the 90s.

      1. Mark 110

        Re: Reminder of Acorn Advert

        In name, maybe. Their technology, and probably some of their share value - can't be arsed to research - lives on as Arm Holdings. That phone you have - it's an Acorn!!

        1. ThomH

          Re: Reminder of Acorn Advert

          But don't they live on only because ARM was spun off in order to allow adaptation for the world beyond Acorn... as a joint venture with Apple?

        2. Anonymous Coward
          Anonymous Coward

          Re: Reminder of Acorn Advert

          Acorn RISC Machine

  9. Mike Moyle

    Alternate subhead:

    Mademoiselle from Arm-Intel, Parlez-vous...?

    1. Anonymous Coward
      Anonymous Coward

      Re: Alternate subhead:

      Unfortunately if you recall she hadn't been [redacted] in 40 years. (The French "baiser" used as a verb does not mean to kiss.)

      1. Mike Moyle

        Re: Alternate subhead:

        Well, Apple isn't baisé yet, so...

  10. ecarlseen

    I suspect Apple has reached peak frustration with Intel's mobile graphics

    This has been a thorn in Apple's side for a long time - I personally thought the A6 was more of a shot across Intel's mobile CPU bow than it was at the smartphone industry. It was powerful enough to run a low- to mid-end MacBook, and suddenly Intel's drivers and compatibility got a *lot* better, but were still (and are still) a long way from great. Hence their recent partnership with AMD on a high-priced, oddball mobile CPU/GPU package (which other manufacturer has the price flexibility to use a high-cost part like that in meaningful volume?) and their sudden expansion of their internal GPU initiatives. Of course, there are other problems that center around Intel's inability to deliver chips in volume on new process technology over the last few cycles. 14nm was a mess, 10nm is still a mess, and 7nm is probably not going to be better. This hurts their mobile performance per watt quite badly, especially when competing with more efficient architectures like ARM.

    Apple very much wants a single-die (or at least a single-package) high-end CPU/GPU solution in the MacBook and low-end iMac product lines, similar to what they've achieved with their A-Series chips. Like the A6, the A11 is far more powerful than it needs to be (again, powerful enough to run a low- to mid-range MacBook) and coincidentally Intel is suddenly making very public moves in this space.

    I suspect that Apple will move their macOS lines to ARM eventually, but the priority is somewhat dictated by Intel's ability to deliver serviceable parts in the immediate- and near-term.

    1. Anonymous Coward
      Anonymous Coward

      Re: I suspect Apple has reached peak frustration with Intel's mobile graphics

      Well they are already solving this problem, for now at least, with the recently announced combination of Intel CPU and AMD graphics. There's no way Intel makes a deal like that without Apple twisting their arm heavily and threatening to go with an all-AMD x86 solution.

      1. ecarlseen

        Re: I suspect Apple has reached peak frustration with Intel's mobile graphics

        "twisting their arm" - pun intended?

        1. Dave 126 Silver badge

          Re: I suspect Apple has reached peak frustration with Intel's mobile graphics

          The hybrid Intel/AMD chips are possible because of Intel's EMIB packaging:

          Intel announced its EMIB technology over the last twelve months, with the core theme being the ability to put multiple and different silicon dies onto the same package at a much higher bandwidth than a standard multi-chip package but at a much lower cost than using a silicon interposer.

          There's no technical reason Intel can't combine an Intel CPU with an Apple GPU, if that were something Apple wanted.

  11. Anonymous Coward
    Windows

    I hope...

    That someday they'll release an OS X version for generic Intel hardware. I know it's not likely to happen, because every piece of Apple hardware is also registered with the "homeland", which allows you to gain access to your OS updates (and the OS itself, if I heard right) - something that would be a little more difficult to accomplish on generic PCs. Heck, Microsoft tried (you know: change too much hardware in your PC and you'll end up with an unregistered version) but that got so much backlash...

    So I don't think it'll happen all too soon, but it would be very interesting to see what might happen. Back in the day, OS/2 wasn't exactly cheap (also because of its very niche market share), but even so several people still bought into it because it was actually a very solid operating system (one which I truly miss from time to time).

    I'm convinced that even more people would buy into OS X if Apple were to take this route and place their flagship directly in opposition to Windows. I probably would!

    1. Anonymous Coward
      Anonymous Coward

      Re: I hope...

      Apple has a lot of money but they didn't get where they are by having to make stuff work on a highly fragmented platform. Microsoft have got something that works with an awful lot of stuff. I dread to think what's in there to get it working with all the different hardware. It's not a problem Apple have had to address.

      Google has demonstrated how hard it is to move from software to hardware nowadays. I suspect Apple wouldn't want to risk falling flat on their faces.

    2. Shane Sturrock

      Re: I hope...

      Why would Apple (a hardware company) let other hardware companies install the software they develop to sell their own hardware? Seriously, this comes up many times. Apple is not a software company. They sell integrated solutions with hardware and software designed to work together. Microsoft is trying to get into this game, but the two companies come from very different backgrounds, with Microsoft originally being all about software and Apple originally being all about hardware. Apple sells very little of their software; they just bundle it with their hardware, so they're simply not going to be motivated to sell macOS to run on competing hardware. They tried it back in the 90's and it nearly killed them, because there was the usual race to make cheaper and cheaper macOS compatibles. If you want a Mac, buy a Mac. Otherwise there's Windows, or if you prefer the OS to be unrestricted, Linux is very good these days.

      1. Anonymous Coward
        Anonymous Coward

        Re: I hope...

        >> "They tried it back in the 90's and it nearly killed them because there was the usual race to make cheaper and cheaper macOS compatibles."

        I'm glad someone reminded the resident Apple bashers of that fact !

        Back in those days, I was working for a large Apple reseller that was also an AASP (Apple Authorised Service Provider). The same reseller also sold and repaired many of the clones.

        My role gave me an "all areas" pass.

        First, looking at and playing with the Apple units side-by-side with the clone units on the shop floor. This was the 90's, remember, so industrial design had not yet made it into IT, so neither the Apple nor the clone units were particularly pretty to look at, but it was clear whose hardware was better built: 1-0 to Apple. The tech specs also frequently told a similar story. 2-0 to Apple.

        But the service department is where the story was really told. At the rear of the service department there were rows of shelves dedicated to the temporary storage of customer equipment in various stages of disrepair.

        Walking through the service shelving, the number of clone units vastly outnumbered the Apple units. This was not down to lack of Apple sales, the reseller was doing good business in Apple units and had a strong corporate/educational customer base, so they were doing the volumes.

        The number of clone units awaiting repair was simply down to what has already been said by the other poster. The introduction of "authorised clones" simply ended up following the same old sad IT story of a race to the bottom.

        Speaking to the service engineers, and looking at the machines opened up on their anti-static benches, the difference in quality between Apple and the clones was palpable. Whether we're talking the neatness of the chassis cabling or the quality of parts. When you saw them side-by-side there was little argument. 3-0 to Apple.

        The 90s clone era is thus not one I would like to see repeated.

        One of Apple's core strengths is the structural integration of hardware and software.

        This integration is *NOT*, as the Apple bashers would like you to believe, some sort of "closed garden just to be spiteful".

        The integration is there because the user experience matters to Apple. Unlike Microsoft and Linux, they don't want to bloat their software with a litany of drivers, kludges and work-arounds just so it will work on any old random hardware. Apple optimise the hardware to work with the software and vice versa.

        Apple continue to spend more money on R&D than the vast majority of manufacturers out there, and if you put your subjective Apple-bashing hat to one side and look at it on a purely objective basis, it does show in terms of the quality of the products that Apple puts out.

        I'm not saying Apple is perfect. No manufacturer is. Apple like any other has had their fair share of issues whether manufacturing defects or otherwise. But when considered objectively as a whole, the old SWOT test would easily show you that Apple's strengths far outweigh any perceived weaknesses.

        P.S. Before the Apple-fanperson accusations start flying... to this date, I use Apple, Microsoft, Linux and BSD in equal measure. In a business environment each has its own purpose and utility. Yes, "at home" I personally use Apple kit, but that's because I prefer to invest in robust, well-built and reliable equipment and software, knowing from practical experience that the "whole package" (hardware+software) will outlive a "cheaper" PC desktop or laptop. Headline price is not everything in this world.

        1. Dave 126 Silver badge

          Re: I hope...

          When Steve Jobs returned to Apple and killed off the clones, he wanted to make an exception for Sony VAIO computers, though in the end Sony were too far down the path of building x86 Windows machines. The first x86 builds of OSX were demonstrated to Apple management on VAIO laptops, and it only took a small team a few days - but then NeXTstep was always designed to run on a variety of CPUs.

          - https://www.theverge.com/2014/2/5/5380832/sony-vaio-apple-os-x-steve-jobs-meeting-report

    3. Malcolm Weir Silver badge

      Re: I hope...

      I very much doubt Apple has the slightest interest in selling OS X for anything other than their own hardware, and in fact I reckon one of the benefits of the dedicated ARM processor(s) is that they can lock the thing down tight.

      I'd also question quite how interested Apple really is in backward compatibility; sure, they _need_ it to lure developers, but they don't appear to be bothered about requiring end users to buy new "stuff", so I reckon they'd be very happy with planned obsolescence of user apps as long as developers will produce the new version.

      And on that point, note that Apple's clout today is vastly different from when they made the 68K->PPC and PPC->x86 transitions. Back then, if (say) Adobe had said "no", then hardware sales would have plummeted. These days, there are many Apple devices used as general-purpose systems, whose users care much less about flagship software as long as they have something. I mean, exhibit "A" would have to be "Mail"...

  12. Herby

    There is ALWAYS an option...

    One wonders what Intel might be today had IBM picked the 68k as the processor for the PeeCee. Then (1980 or so, as I recall) IBM had a chunk of ownership in Intel, so the choice was made for them. Fast forward to around 1990, when Apple, IBM, and Motorola chummied up and eventually decided to go the PPC route. Who knows what the world might be if they had made different decisions. Sadly the 68k processors didn't get the backing from Motorola (Freescale, or whoever it is now), and the shift was on again. It happens. For the most part, if all you need to do is cycle the source through a different compiler chain and you get something that "works", I suspect that nobody will really care. Rare is the application now that touches the instruction set directly; it is all some sort of compiled code.

    Personally, I thought the 68k instruction set was pretty good, and it could have improved given the chance (*SIGH*). Who knows, maybe there is a 68k emulator that runs under ARM (at reasonable speed).

    Life goes on.

    1. Anonymous Coward
      Anonymous Coward

      Re: There is ALWAYS an option...

      "One wonders what Intel might be had IBM picked the 68k as its processor for the PeeCee"

      There is a story, which may be apocryphal, that the minicomputer division of IBM demanded the PC be limited to an 8-bit bus because they were worried about the competition. Intel made the 8088; at that time Motorola had the 6809, which wasn't adequate, and the 68000, which had a 16-bit bus. So Intel got the job. And then Compaq produced PC clones which had a proper 8086 running at 8MHz instead of a crippled 8088 running at 4.77MHz.

  13. Androgynous Cow Herd

    "Problems looking for a solution"

    I think you meant "Solutions looking for a problem".... Touch bar etc are answers to problems I never have had.

    1. Dave 126 Silver badge

      Re: "Problems looking for a solution"

      It's not a 'must have' feature, but I can see it being useful for music and graphics applications. Still, the secure CPU side of it is interesting, especially in the light of Intel security flaws and hidden 'management engines'.

      It seems that Apple have been canny or lucky in not trusting Intel with users' biometric data:

      https://arstechnica.com/information-technology/2017/11/intel-warns-of-widespread-vulnerability-in-pc-server-device-firmware/

  14. Ian Joyner Bronze badge

    Complete rethink

    What we need is a complete rethink of computer architectures. We have two kinds of processing requirements - scientific and the rest. Scientific needs raw speed, but everything else needs security. In scientific number crunching very little is shared – the processor is dedicated to a single task. Low-level RISC seems a good thing with a flat memory model.

    However, for the rest, the system is shared between many applications and connected to the net. Security should be built in. If security is not built in at the lowest levels, it must be added as an afterthought. This means it is less secure, since you are always trying to catch up. This wastes more processor time and people time.

    Shared systems work on non-linear memory allocated in blocks. Memory blocks should not be shared among processes (perhaps an exception can be made for very well-controlled and tightly-coupled processes – but memory-block sharing results in tight coupling, which is mostly not desirable).

    Processes should only communicate via messaging. This means that applications can also be distributed, since processes are loosely coupled.
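    A minimal sketch of that message-passing style, using Rust threads and a channel rather than full OS processes (an assumption made purely to keep the example self-contained): the two sides share no memory and are coupled only by the messages they exchange.

    ```rust
    use std::sync::mpsc;
    use std::thread;

    // Producer and consumer share nothing: the only coupling between them is
    // the channel, so either side could in principle live in another process
    // or on another machine behind the same message interface.
    fn main() {
        let (tx, rx) = mpsc::channel();

        let producer = thread::spawn(move || {
            for n in 1..=5 {
                tx.send(n * n).expect("receiver hung up");
            }
            // tx is dropped here, closing the channel and ending the loop below
        });

        for msg in rx {
            println!("received {msg}");
        }
        producer.join().unwrap();
    }
    ```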

    Since memory blocks should be kept separate and memory viewed as blocks rather than as a sequential space, support for blocks and memory management should be built into the way a processor works – not added on as a memory-management unit (MMU). Bounds checking would become fundamental, and a whole class of security problems exploited by viruses and worms would be destroyed.

    Program correctness would also receive a great boost, since one of the most common unwitting errors that programmers make is out-of-bounds access.
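    A tiny Rust illustration of the class of error meant here - language-level rather than hardware-level bounds checking, but the effect on correctness is the same: the off-by-one is caught at the point of access instead of silently reading whatever sits past the end of the array.

    ```rust
    fn main() {
        let samples = [10u32, 20, 30, 40];
        let idx = 4; // off-by-one: valid indices are 0..=3

        // A checked index turns the mistake into an immediate, visible failure.
        // Uncommenting the next line panics with "index out of bounds" rather
        // than quietly returning adjacent memory, as unchecked code would.
        // println!("{}", samples[idx]);

        // The non-panicking API makes the failure a value the caller must handle.
        match samples.get(idx) {
            Some(v) => println!("sample = {v}"),
            None => eprintln!("index {idx} is out of bounds (len {})", samples.len()),
        }
    }
    ```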

    A special systems programming language for the lowest layers (memory allocation, etc.) should be developed, and used only for the lowest layers of the OS, nowhere else. Higher levels of the OS begin to look like applications programming, for which a more general-purpose language could be used. C should be relegated to the status of a legacy systems language, with languages that avoid the undesirable non-orthogonality of C (and especially C++) developed to improve the software development process.

    There is a lot of work to do to achieve secure and correct computing. It does need to start at the lowest levels with a complete rethink.

    1. Anonymous Coward
      IT Angle

      Re: Complete rethink

      Basically Rust vice C and, if it should develop nicely, Redox as OS. Minimal adjustments there as it's a bit harder to fuck things up. Possible, but harder.

      That's an iteration. What I see is that we need to go back and reexamine the structure and behaviors of the operating system from scratch. SCM, hybrid stacked silicon and other "gee, neat!" developments are calling into question the assumptions many of us work with in IT. {Shrug} That'll happen eventually when we get around to our next session of jumping up and down on our layers of abstraction to increase efficiencies in hardware, software and systems. I probably won't be around to see it.

      1. Ian Joyner Bronze badge

        Re: Complete rethink

        JoS. I mostly agree. We can get so much on a chip now that we really don't need RISC anymore. But we should also beware the excesses of CISC. I was going to say that Niklaus Wirth was doing RISC as Regular Instruction Set Computers a long time ago, but a little search found he is still working on such ideas:

        https://www.inf.ethz.ch/personal/wirth/FPGA-relatedWork/RISC.pdf

        Rust? Although I have had a quick look, I think we need to get further away from C and move towards zero-syntax or at least syntax-independent programming. C syntax now looks incredibly dated.

    2. Milton

      Re: Complete rethink

      I'm rarely averse to seeing the results of some clever folks doing a "complete rethink"—in fact, it's a healthy thing to do in almost any discipline every couple of decades or so, because it shakes things up, old assumptions get challenged, and cruft gets ditched.

      If there is a requirement for rethinking computer architecture—and I am doubtful right now only because it seems to me that the subject is getting a lot of attention anyway—then indeed it should focus on security, because that is one of the things we're doing incredibly badly. But it's not the *only* thing we're getting wrong: bloat and inefficiency are appalling too. With the help of Moore's Law, we the IT community have been complicit in creating systems and software which are unnecessarily, crazily complex and which use a hundred times more resources in CPU cycles and memory than is required for many tasks. Ok, we're not in the days of writing assembler printer drivers for 6502s, and no one doubts the usefulness of libraries and full-featured OSs and a host of daemons, but it's far too easy to find yourself creating a 400Mb package of code to do something which, if you'd had to do it 25 years ago, would have run to 200kb. We're writing monstrously obese code to run on porcine operating systems, and—to come full circle—that makes things so complex that we're also constantly building in security defects. Complexity is the enemy of reliability and security.

      Some famous engineer is reported to have told his disciples: "Simplify, and add lightness". Damn right. That's the kind of rethink I'd love to see.

      As for security vs science, I respectfully suggest that may be missing the point. The last three major projects I worked on (two on encryption, one molecular modelling) all eventually ran their computing work on a shedload of GPUs, and two of them required air-gapped systems in a basement. My (admittedly subjective, limited) perception is that if you're doing scienc-y stuff, it's very complex and needs a ton of security too.

      1. Dave 126 Silver badge

        Re: Complete rethink

        > Some famous engineer is reported to have told his disciples: "Simplify, and add lightness". Damn right. That's the kind of rethink I'd love to see.

        Unfortunately, he didn't add security - a few of his racing drivers died in crashes.

        1. Anonymous Coward
          Anonymous Coward

          Re: Complete rethink

          Well, security doesn't sell unless it's part of your job description. Look at all the Facebook/Google/Twitter traffic, and all the stories about people giving away their privacy (which is a security issue itself). Basically, the average Joe usually doesn't get it, and he takes everyone else with him.

          1. Ian Joyner Bronze badge

            Re: Complete rethink

            "Well, security doesn't sell unless it's part of your job description."

            Right. Security is a negative - it is trying to stop people doing things. An unfortunate but regrettably necessary aspect. The philosophy of 'customer knows what they want so do what they ask' has given way to 'customer does not know what they want, so do what they need'. Security is in this category.

            But we do know that security must be built in at the lowest levels, or we are always chasing our tail, trying to install some magic software to provide security, and that doesn't work.

            Security is also the balance between making computing easy for a legitimate user, but as hard as possible for a malicious attacker. As far as the legitimate user is concerned, security facilities built in at the lowest levels, such as bounds checking, actually make no difference and certainly do not adversely impact anything that is computable. In fact, in addition to security, they help developers develop correct programs.

            1. Charles 9

              Re: Complete rethink

              "Security is also the balance between making computing easy for a legitimate user, but as hard as possible for a malicious attacker. As far as the legitimate user is concerned security facilities built in at lowest levels, such as bounds checking, actually makes no difference and certainly does not adversely impact anything that is computable. In fact, in addition to security, it helps developers develop correct programs."

              And yet you don't see things like tagged memory in most processors? Why? Because of the other two legs of the triangle: cost and performance. You either take a noticeable performance hit or pay through the nose. And yes, people pay attention to those two. Media encoding jobs (such as home video editing) still take time even on relatively recent hardware (last I checked, you still can't do realtime 1080p HEVC even on an i7, let alone 4K down the road). And of course, there's still gaming, business calculations, and so on. At the same time, people don't want to spend a lot on their computers because, unlike things like cars, computers can't kill them. Wanting peace of mind takes a direct threat to make it desirable. Otherwise, it isn't worth it.

              As for balancing between ease of use and difficulty, remember there's always the dreaded overlap. The paths you MUST leave for the users to get through can just as easily become the way in for the enemy, and there's no real way to stop this because there's no real way to prevent a sufficiently-disguised imposter (and we already know adversaries are ready, willing, able, and even eager to steal identities for this purpose, no matter how insignificant the identity).

              1. Ian Joyner Bronze badge

                Re: Complete rethink

                "You either take a noticeable performance hit or pay through the nose."

                No, you are way overstating your case against a straw man, since you introduce tagged architectures, which I have not mentioned in this thread. Tagged architectures are one way of enhancing security. But the performance hit is probably negligible, because you can offset the need to concatenate tag+instruction with sophisticated look-ahead logic. Cost? Well, memory is really cheap these days, so for a four-bit overhead on a 64-bit word, not much.

                Compared to the cost of security breaches these days, the cost of hardware is negligible. Many of these arguments come from a time when hardware was expensive and the equation was to make hardware cheaper. But the assumptions have changed. Now security problems cost a bomb and hardware is cheap.
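                For readers who haven't met the idea, here is a minimal software model of the kind of tagged word being discussed - a few tag bits carried alongside the data, checked on every access. In a real tagged architecture the hardware performs the check transparently; this sketch only illustrates the principle and says nothing about how any particular vendor might implement it.

                ```rust
                // Software model of a tagged word: a small tag rides alongside the
                // 64-bit value, and every typed access checks it.
                #[allow(dead_code)] // only two of the tags are exercised in this sketch
                #[derive(Clone, Copy, PartialEq, Debug)]
                enum Tag {
                    Data,
                    Pointer,
                    Descriptor,
                    Code,
                }

                #[derive(Clone, Copy)]
                struct Word {
                    tag: Tag,
                    value: u64,
                }

                // Using a word as a pointer is allowed only if its tag says it is one;
                // anything else is reported as a violation instead of being dereferenced.
                fn load_pointer(w: Word) -> Result<u64, String> {
                    if w.tag == Tag::Pointer {
                        Ok(w.value)
                    } else {
                        Err(format!("tag violation: expected Pointer, found {:?}", w.tag))
                    }
                }

                fn main() {
                    let p = Word { tag: Tag::Pointer, value: 0x1000 };
                    let d = Word { tag: Tag::Data, value: 42 };
                    println!("{:?}", load_pointer(p)); // Ok(4096)
                    println!("{:?}", load_pointer(d)); // Err("tag violation: ...")
                }
                ```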

                "computers can't kill them"

                Oh, yes they can.

                1. Charles 9

                  Re: Complete rethink

                  Well, if things like tagged memory are so cheap to implement, why haven't they been implemented in an opt-in fashion already and sold as a business security point (unlike consumers, businesses will at least have an eye on security - to keep company secrets from being wired out the door)? Maybe the change would be too radical and could break legacy implementations that don't expect stuff like this. Another possibility may be to implement this at an OS level to make it less architecture-dependent. But either approach is probably going to break some things, which makes it risky.

                  "Oh, yes they can."

                  Name an instance where a computer directly killed a user (analogous to when a car crashes into a tree or the like).

                  1. Ian Joyner Bronze badge

                    Re: Complete rethink

                    "Well, if things like tagged memory are so cheap to implement, why haven't they been implemented in an opt-in fashion already"

                    So what is your point here? You are not making a technical point of any credibility. Besides, I told you it is not necessarily tagged memory (a topic you raised), but it is the kind of security tags give you. If there are other ways of doing this, fine.

                    Yes, change is radical and breaks things. In my experience I - and others - have taken buggy code (from well-known developers), run it in environments where its behaviour is checked, and found many latent bugs. That is a good kind of breaking of legacy applications; it tightens them up.

                    "Another possibility may be to implement this at an OS level to make it less architecture-dependen"

                    That will be much slower and not nearly as secure.

                    "Name an instance where a computer directly killed a user (analogous to when a car crashes into a tree or the like)."

                    I'm sure there have been many - like the recent Tesla car crash. Remember the Ariane V disaster - luckily no one was killed, but software interacts with the real world. Again you have no point because your original point that software doesn't kill people is patently false.

      2. Ian Joyner Bronze badge

        Re: Complete rethink

        Hello Milton. "As for security vs science, I respectfully suggest that may be missing the point."

        I'm suggesting science as a very broad spectrum, but maybe it needs to be more tightly defined as really time-critical computations. These can indeed benefit from very low-level coding, compilers directly generating microcode as in RISC. When computers were big and expensive, of course getting every ounce out of a CPU was of primary concern and a large proportion of computing was used for scientific purposes.

        But now that is a really small part of the market, and such systems should most likely not be connected to anything. The rest is where security is critical, much more so than performance. However, building security such as bounds checks into hardware can deliver both security and performance for most applications.

        Unfortunately the performance considerations of the 1950s have long dominated the thinking of many computing people and in the minds of many, performance still trumps security and correctness. That is what I am saying needs a complete rethink, or at least a change to this perspective.

  15. StuntMisanthrope

    Look a unicorn!

    With dual native boot, on pads and flippy things .. #flyingspacemonolith

  16. Torben Mogensen

    About time, I think

    When Apple started making their own ARM chips, I predicted that they would move to ARM on the Mac line also. It has taken longer than I expected, but Apple has good reasons for this:

    1. It would make Macs the ideal tool for developing iPhone software, as it can be made to run it without emulation.

    2. More parts of the OS can be shared between Mac and iPhone.

    3. It allows Apple to make a SoC to exactly their specifications instead of relying on what Intel produces.

    4. It removes dependency on Intel (or AMD).

    It is not impossible for Apple to make a 64-bit ARM processor that will outperform the fastest Intel processor. I'm sure Apple would love having the fastest laptops around, so people would migrate from Wintel to Apple for performance reasons. Apple need to do more work on the FP side to make this happen, but it is doable.

    1. Dave 126 Silver badge

      Re: About time, I think

      It's a question of how to roll it out to the public. You could imagine an ARM Macbook and x86 MacBook Pro range, for example... or maybe not. I really don't know. Apple like to keep their message as simple as possible.

  17. J. Cook Silver badge
    Joke

    I have nothing to add to this discussion except...

    ... the title of a song from the movie "The Lion King":

    It's the Ciiiiiirrrrccllle of life!!!

    *bows*

    Amusingly enough, the POWER range of CPUs is still in production; IIRC, we are looking at getting a new iSeries here at [REDACTED] to replace the POWER5-based ones that are nearing the end of their support lifespan with ones running on POWER8 processors (I think). We also had a few POWER6 boxes running around a couple of years ago; even though they took 20 minutes to get to the point where you could turn them on, they ran FAR faster than the machines they replaced, even with AIX running on them (and the various vendor-specific programs that were written in *shudder* BASIC!)

  18. IGnatius T Foobar
    Megaphone

    ARM *and* x86?

    All these posters suggesting that Apple build a machine with both ARM *and* x86 CPUs on board remind me of my Commodore 128, with both 6502 and Z-80 on board, and my Amiga, with the DOS-running Sidecar ... neither one gained much traction in the real world.

    1. Charles 9

      Re: ARM *and* x86?

      Well, the Z-80 on the C128 was hobbled by the bus design. And it couldn't be used in standard Commodore mode (it was simply used to handle behind-the-scenes stuff). Even in CP/M mode, it ran at 1/4 its rated speed in 40-column mode and 1/2 speed in 80-column mode (turning off the 40-column VIC display removes a bottleneck, which is why FAST mode requires using the CGA-compatible 80-column mode).
