When you only have 10% of the desktop/laptop market, and don't care about backward compatibility, you can do what you like
Arm Inside: Is Apple ready for the next big switch?
One day we'll look back and wonder why it took PCs so long to move from RISC chips that had to pretend to be CISC chips to RISC chips that didn't have to pretend to be anything.

From CISC to RISC

The Apple Mac made its debut in 1984 running Motorola's 68k chips, then the most efficient in the PC industry. Apple switched to …
COMMENTS
-
-
Tuesday 21st November 2017 16:07 GMT Dan 55
If they didn't care they wouldn't have made developers upload bitcode to their App Store.
You might need to wait a bit before non-App Store apps and Steam games are available on the new machines, though. I guess universal binaries will return for third-party developers, but Rosetta probably won't.
-
-
Tuesday 21st November 2017 19:25 GMT ThomH
Re: @Dan 55
I think bitcode is merely alignment, calling convention and data size dependent. You could use it to port to a different instruction set only if all of those things were the same.
That being said, if Apple didn't care about backwards compatibility then why did it expend so much effort on the 68k emulator and on Rosetta? The company even skipped the very first PowerPCs because the instruction cache wasn't quite large enough to fit the 68k emulator no matter what they did, so didn't produce acceptable performance with 68k applications.
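ThomH's point about data sizes and alignment can be illustrated from a high level. A minimal Python sketch (purely illustrative; nothing here is Apple's toolchain) shows the kind of layout decision a compiler freezes into its output, which is why bitcode is not automatically portable across targets with different rules:

```python
import ctypes

# Illustrative only: ctypes lays this struct out using the host ABI's
# alignment rules, the same kind of decision a compiler freezes into
# bitcode. On a typical 64-bit ABI, 'value' must sit on an 8-byte
# boundary, so 7 bytes of padding appear after 'flag'.
class Packet(ctypes.Structure):
    _fields_ = [("flag", ctypes.c_int8),
                ("value", ctypes.c_int64)]

print(Packet.value.offset)     # field offset chosen by alignment rules
print(ctypes.sizeof(Packet))   # includes the padding
```

An intermediate representation that records those offsets is tied to the ABI that produced them.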
-
-
Wednesday 22nd November 2017 11:34 GMT P. Lee
>You might need to wait a bit before Non-App Store apps and Steam games are available on the new machines though.
Unless it's basically two systems in a box, with ARM picking up all the lower-power/always-on work and x86 running high-power applications, with a fast interlink between them. You could "sleep" the x86 side while keeping ARM running to save power.
-
Tuesday 21st November 2017 21:55 GMT Thomas Wolf
Re: bootcamp?
Is there a link that compares performance for desktop VMs? When I worked at Lenovo (and used their workstation-class laptop) I used VMWare. Now I'm using an MBP with VirtualBox and don't see any performance differences in similar use cases. I realize my experience may not be representative - thus my question.
VMWare is definitely a much better solution if you have to manage many VMs on servers.
-
Wednesday 22nd November 2017 19:14 GMT ThomH
Re: bootcamp? @Thomas Wolf
The main difference I've seen is that VirtualBox supports either software rendering or passthrough of an antiquated version of OpenGL. It's pretty easy to demonstrate the difference: on my MacBook, if I enable accelerated rendering then no browser supports WebGL; if I disable it, then they run WebGL in software.
VMWare supports passthrough of relatively modern versions of OpenGL. So WebGL is accelerated. As is the desktop compositor I happen to use, which makes a massive difference for ordinary productivity if, like me, that means X11.
-
-
Wednesday 22nd November 2017 10:30 GMT joeldillon
Re: bootcamp?
None of these are going to be in any way fast if they are actually emulating the x86 instruction set on ARM, hth. An x86 VM on x86 is a very different proposition.
It can be done (and was, when Apple went from 68k to PowerPC for example) but it's not as simple as 'lol just run Virtualbox'.
-
Wednesday 22nd November 2017 10:58 GMT Dave 126
Re: bootcamp?
Genuine questions for a more technical bear than me:
Is it feasible to have OSX run on ARM but offload some tasks to an x86 processor as required? Or:
Is it feasible for OSX to run on ARM, and then Xwindows (or whatever the equivalent is) into an x86 instance of OSX that is spun up when required?
Either way, the ARM chip is primary, but the computer contains an x86 chip to be used occasionally.
It seems to me that the applications that people use 80% of the time (Safari, email, LightRoom etc) are those that can be quickly compiled for ARM by Apple or others, leaving the x86 chip for legacy applications on occasion.
-
Wednesday 22nd November 2017 10:10 GMT T. F. M. Reader
Re: bootcamp?
It's called Virtual Box.
Does it run on ARM? Even if it does, I don't see how one could run an x64 guest on an ARM host with anything approaching native performance. When your host and guest architectures are the same, most instructions (except privileged ones) are executed directly on the hardware. With different architectures you would have to translate. Slowly. At least until MSFT port Windows to ARM, and El Reg reported in the past that they were on their way to that noble goal (though probably not because of Apple). Linux would be simpler, by the way.
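The cost of translating is easy to see in even a toy sketch. Here is a minimal, purely illustrative instruction-set interpreter in Python: every guest instruction pays for a fetch, a decode and a dispatch on the host before any useful work happens, which is exactly the overhead same-architecture virtualization avoids.

```python
# Toy guest machine: each instruction is fetched, decoded and dispatched
# by host code, so one guest instruction costs many host operations.
def run(program):
    regs = {"r0": 0, "r1": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]          # fetch + decode
        if op == "mov":
            regs[args[0]] = args[1]
        elif op == "add":
            regs[args[0]] += regs[args[1]]
        elif op == "halt":
            break
        pc += 1                          # advance guest program counter
    return regs

result = run([("mov", "r0", 2),
              ("mov", "r1", 40),
              ("add", "r0", "r1"),
              ("halt",)])
print(result["r0"])   # 42
```

Real emulators amortize this with dynamic binary translation (as Rosetta did), but the per-instruction bookkeeping never disappears entirely.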
However, given Apple's history, I would not put it past them to switch to ARM (partly) to prevent virtualization. Years ago, when Apple kit was still PowerPC and Intel and AMD did not have virtualization support in HW (this makes it pre-2006), I grabbed an Apple computer with an idea to install Linux and play with LPARs, etc. I discovered, to my surprise then (the surprise dissolved very quickly), that the otherwise perfectly normal PPC 970 (I think) had virtualization support disabled on Apple's demand. Once they switched to Intel whose arm [sic!] they could not twist too painfully to disable the VT-x/VT-d/friends they had just introduced, virtualization became possible. Maybe now it's time for ARM-twisting of a new kind?
-
Tuesday 21st November 2017 16:15 GMT Korev
Speed
Lightroom on my gen 1 iPad Pro runs very quickly and feels more responsive than the "classic" client running on my 32GB Core i5 PC.
Assuming that other ISVs can write similarly fast code for ARM, this could be very good for end users. If they're already targeting iOS devices, that might be a head start for them.
-
Tuesday 21st November 2017 17:28 GMT Steve Davies 3
Well Done
Mr O for a nice informative article without the usual bad mouthing of Apple.
If what you say is true, then the sweetener for Intel must be the increasing use of Intel modems in iPhones.
Sadly it makes building Hackintoshes a whole lot more difficult. Still, the one I built last month should last 3-4 years.
-
Tuesday 21st November 2017 18:11 GMT R 11
Reminder of Acorn Advert
I'm reminded of Acorn's advert in The Times following Apple's PowerPC move to RISC in 1994:
*************************************************************************
[white text on black background, large text]
As a founder member,
Acorn is delighted
to welcome Apple
to the RISC Club.
[2/3 of way down page, switch to black on white, smaller text]
After 11 years of development and 7 years of production, we at Acorn are
still marvelling at the sheer power, performance and potential of 32-bit
RISC technology.
Our ARM 32-bit RISC processors have delivered these capabilities to our
many customers in education, the home and industry worldwide, in our
products since 1987.
So it comes as little surprise to hear that Apple's new desktop range
also incorporates 32-bit RISC technology.
[large italic text, stands out prominently]
Oh well. Better late than never.
...
https://groups.google.com/d/msg/comp.sys.powerpc/I2AlOpqdSik/KbTTGJbAAVoJ
*************************************************************************
Looks like Acorn gets the last laugh.
-
Tuesday 21st November 2017 19:02 GMT ecarlseen
I suspect Apple has reached peak frustration with Intel's mobile graphics
This has been a thorn in Apple's side for a long time - I personally thought the A6 was more of a shot across Intel's mobile CPU bow than it was at the smartphone industry. It was powerful enough to run a low- to mid-end MacBook, and suddenly Intel's drivers and compatibility got a *lot* better, but were still (and are still) a long way from great. Hence their recent partnership with AMD on a high-priced, oddball mobile CPU/GPU package (which other manufacturer has the price flexibility to use a high-cost part like that in meaningful volume?) and their sudden expansion of their internal GPU initiatives. Of course, there are other problems that center around Intel's inability to deliver chips in volume on new process technology over the last few cycles. 14nm was a mess, 10nm is still a mess, and 7nm is probably not going to be better. This hurts their mobile performance per watt quite badly, especially when competing with more efficient architectures like ARM.
Apple very much wants a single-die (or at least a single-package) high-end CPU/GPU solution in the MacBook and low-end iMac product lines, similar to what they've achieved with their A-Series chips. Like the A6, the A11 is far more powerful than it needs to be (again, powerful enough to run a low- to mid-range MacBook) and coincidentally Intel is suddenly making very public moves in this space.
I suspect that Apple will move their macOS lines to ARM eventually, but the priority is somewhat dictated by Intel's ability to deliver serviceable parts in the immediate- and near-term.
-
Tuesday 21st November 2017 19:04 GMT Anonymous Coward
Re: I suspect Apple has reached peak frustration with Intel's mobile graphics
Well they are already solving this problem, for now at least, with the recently announced combination of Intel CPU and AMD graphics. There's no way Intel makes a deal like that without Apple twisting their arm heavily and threatening to go with an all-AMD x86 solution.
-
-
Wednesday 22nd November 2017 09:49 GMT Dave 126
Re: I suspect Apple has reached peak frustration with Intel's mobile graphics
The hybrid Intel AMD chips are possible because of:
Intel announced its EMIB technology over the last twelve months, with the core theme being the ability to put multiple and different silicon dies onto the same package at a much higher bandwidth than a standard multi-chip package but at a much lower cost than using a silicon interposer.
No technical reason Intel can't combine an Intel CPU with an Apple GPU, if that was something Apple want.
-
Tuesday 21st November 2017 20:12 GMT Anonymous Coward
I hope...
That someday they'll release an OS X version for the full Intel platform. I know it's not likely to happen because every piece of Apple hardware is also registered with the "homeland" which allows you to gain access to your OS updates (and the OS itself if I heard right) which would be a little more difficult to accomplish on Intel. Heck, Microsoft tried (you know: change too much hardware in your PC and you'll end up with an unregistered version) but that got so much backlash...
So I don't think it'll happen all too soon, but it would be very interesting to see what might happen. Back in the day, OS/2 wasn't exactly cheap (partly because of its very niche market share), but even so several people still bought into it because it was actually a very solid operating system (one which I truly miss from time to time).
I'm convinced that even more people would buy into OS X if Apple were to take this route and place their flagship directly in opposition to Windows. I probably would!
-
Tuesday 21st November 2017 22:00 GMT Anonymous Coward
Re: I hope...
Apple has a lot of money but they didn't get where they are by having to make stuff work on a highly fragmented platform. Microsoft have got something that works with an awful lot of stuff. I dread to think what's in there to get it working with all the different hardware. It's not a problem Apple have had to address.
Google has demonstrated how hard it is to move from software to hardware nowadays. I suspect Apple wouldn't want to risk falling flat on their faces.
-
Tuesday 21st November 2017 22:11 GMT Shane Sturrock
Re: I hope...
Why would Apple (a hardware company) let other hardware companies install the software it develops to sell its own hardware? Seriously, this comes up time and again. Apple is not a software company. They sell integrated solutions with hardware and software designed to work together. Microsoft is trying to get into this game, but the two companies come from very different backgrounds, Microsoft originally being all about software and Apple originally being all about hardware. Apple sells very little of its software; they just bundle it with their hardware, so they're simply not going to be motivated to sell macOS to run on competing hardware. They tried it back in the 90s and it nearly killed them, because there was the usual race to make cheaper and cheaper macOS compatibles. If you want a Mac, buy a Mac. Otherwise there's Windows, or if you prefer your OS to be unrestricted, Linux is very good these days.
-
Wednesday 22nd November 2017 08:33 GMT Anonymous Coward
Re: I hope...
>> "They tried it back in the 90's and it nearly killed them because there was the usual race to make cheaper and cheaper macOS compatibles."
I'm glad someone reminded the resident Apple bashers of that fact !
Back in those days, I was working for a large Apple reseller that was also an AASP (Apple Authorised Service Provider). The same reseller also sold and repaired many of the clones.
My role gave me an "all areas" pass.
First, looking at and playing with the Apple units side-by-side with the clone units on the shop floor. This was the 90s, remember, so industrial design had not yet made it into IT, so neither the Apple nor the clone units were particularly pretty to look at, but it was clear whose hardware was better built: 1-0 to Apple. The tech specs also frequently told a similar story. 2-0 to Apple.
But the service department is where the story was really told. At the rear of the service department there were rows of shelves dedicated to the temporary storage of customer equipment in various stages of disrepair.
Walking through the service shelving, the number of clone units vastly outnumbered the Apple units. This was not down to lack of Apple sales, the reseller was doing good business in Apple units and had a strong corporate/educational customer base, so they were doing the volumes.
The number of clone units awaiting repair was simply down to what has already been said by the other poster. The introduction of "authorised clones" simply ended up following the same old sad IT story of a race to the bottom.
Speaking to the service engineers, and looking at the machines opened up on their anti-static benches, the difference in quality between Apple and the clones was palpable. Whether we're talking the neatness of the chassis cabling or the quality of parts. When you saw them side-by-side there was little argument. 3-0 to Apple.
The 90s clone era is thus not one I would like to see repeated.
One of Apple's core strengths is the structural integration of hardware and software.
This integration is *NOT* as the Apple bashers would like you to believe, some sort of "closed garden just to be spiteful".
The integration is there because the user experience matters to Apple. Unlike Microsoft and Linux, they don't want to bloat their software with a litany of drivers, kludges and work-arounds just so it will work on any old random hardware. Apple optimise the hardware to work with the software and vice versa.
Apple continue to spend more money on R&D than the vast majority of manufacturers out there, and if you put your subjective Apple-bashing hat to one side and look at it on a purely objective basis, it does show in terms of the quality of the products that Apple puts out.
I'm not saying Apple is perfect. No manufacturer is. Apple like any other has had their fair share of issues whether manufacturing defects or otherwise. But when considered objectively as a whole, the old SWOT test would easily show you that Apple's strengths far outweigh any perceived weaknesses.
P.S. Before the Apple-fanperson accusations start flying... to this date, I use Apple, Microsoft, Linux and BSD in equal measure. In a business environment each has its own purpose and utility. Yes, "at home" I personally use Apple kit, but that's because I prefer to invest in robust, well-built and reliable equipment and software; I know from practical experience that the "whole package" (hardware+software) will outlive a "cheaper" PC desktop or laptop. Headline price is not everything in this world.
-
Wednesday 22nd November 2017 10:42 GMT Dave 126
Re: I hope...
When Steve Jobs returned to Apple and killed off the clones, he wanted to make an exception for Sony VAIO computers, though in the end Sony were too far down the path of building x86 Windows machines. The first x86 builds of OSX were demonstrated to Apple management on VAIO laptops, and it only took a small team a few days - but then NeXTstep was always designed to run on a variety of CPUs.
- https://www.theverge.com/2014/2/5/5380832/sony-vaio-apple-os-x-steve-jobs-meeting-report
-
Tuesday 21st November 2017 22:29 GMT Malcolm Weir
Re: I hope...
I very much doubt Apple has the slightest interest in selling OS X for anything other than their own hardware, and in fact I reckon one of the benefits of the dedicated ARM processor(s) is that they can lock the thing down tight.
I'd also question quite how interested Apple really is in backward compatibility; sure, they _need_ it to lure developers, but they don't appear too bothered about requiring end users to buy new "stuff", so I reckon they'd be very happy with planned obsolescence of user apps as long as developers will produce the new version.
And on that point, note that Apple's amount of clout today is vastly different from when they made the 68K->PPC and PPC->i86 transitions. Back then, if (say) Adobe said "no", then hardware sales would have plummeted. These days, there are many Apple devices used as general purpose systems, whose users care much less about flagship software as long as they have something. I mean, exhibit "A" would have to be "Mail"...
-
-
Tuesday 21st November 2017 22:25 GMT Herby
There is ALWAYS an option...
One wonders what Intel might be had IBM picked the 68k as its processor for the PeeCee. Then (1980 or so as I recall) IBM had a chunk of ownership in Intel, so the choice was made for them. Fast forward to around 1990 when Apple, IBM, and Motorola chummied up and eventually decided to go the PPC route. Who knows what the world might be if they had different decisions. Sadly the 68k processors didn't get the backing from Motorola (Freescale, whoever it is now), and the shift was on again. It happens. For the most part if all you need to do is cycle the source through a different compiler chain, and you get something that "works", I suspect that nobody will really care. Rare is the application now that touches the instruction set directly, it is all some sort of compiled code.
Personally, I thought the 68k instruction set was pretty good, and it could have improved given the chance (*SIGH*). Who knows, maybe there is a 68k emulator that runs under ARM (at reasonable speed).
Life goes on.
-
Wednesday 22nd November 2017 09:33 GMT Anonymous Coward
Re: There is ALWAYS an option...
"One wonders what Intel might be had IBM picked the 68k as its processor for the PeeCee"
There is a story, which may be apocryphal, that the minicomputer division of IBM demanded the PC be limited to an 8-bit bus because they were worried about the competition. Intel made the 8088; at that time Motorola had the 6809, which wasn't adequate, and the 68000, which had a 16-bit bus. So Intel got the job. And then Compaq produced PC clones which had a proper 8086 running at 8MHz instead of a crippled 8088 running at 4.77.
-
Wednesday 22nd November 2017 11:08 GMT Dave 126
Re: "Problems looking for a solution"
It's not a 'must have' feature, but I can see it being useful for music and graphic applications. Still, the secure CPU side of it is interesting, especially in the light of Intel security flaws and hidden 'management engines'.
It seems that Apple have been canny or lucky in not trusting Intel with user's biometric data:
https://arstechnica.com/information-technology/2017/11/intel-warns-of-widespread-vulnerability-in-pc-server-device-firmware/
-
-
Tuesday 21st November 2017 23:40 GMT Ian Joyner
Complete rethink
What we need is a complete rethink of computer architectures. We have two kinds of processing requirements - scientific and the rest. Scientific needs raw speed, but everything else needs security. In scientific number crunching very little is shared – the processor is dedicated to a single task. Low-level RISC seems a good thing with a flat memory model.
However, for the rest, the system is shared between many applications and connected to the net. Security should be built in. If security is not built in at the lowest levels, it must be added as an afterthought. This means it is less secure, since you are always trying to catch up. This wastes more processor time and people time.
Shared systems work on non-linear memory allocated in blocks. Memory blocks should not be shared among processes (perhaps an exception can be made for very well-controlled and tightly-coupled processes – but memory-block sharing results in tight-coupling which is mostly not desirable).
Processes should only communicate via messaging. This means that applications can also be distributed, since processes are loosely coupled.
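The message-only model above can be sketched in a few lines. This is a purely illustrative example using Python threads to stand in for processes; the point is that sender and worker share no state, only messages:

```python
import queue
import threading

# Loose coupling via messaging: the worker never touches the sender's
# data structures, it only consumes and produces messages.
inbox = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        msg = inbox.get()
        if msg is None:            # conventional shutdown message
            break
        results.put(msg * 2)       # reply by message, not shared memory

t = threading.Thread(target=worker)
t.start()
for n in (1, 2, 3):
    inbox.put(n)
inbox.put(None)
t.join()

out = [results.get() for _ in range(3)]
print(out)   # [2, 4, 6]
```

Because the only coupling is the message format, the worker could just as easily live in another process or on another machine, which is the distribution point made above.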
Since memory blocks should be kept separate and memory viewed as blocks and not sequential, support for blocks and memory management should be built into the way a processor works – not added on as a memory-management unit (MMU). Bounds checking would become fundamental and a whole class of security problems in viruses and worms destroyed.
Program correctness would also receive a great boost, since one of the most common unwitting errors that programmers make is out-of-bounds access.
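As a minimal illustration of that point, here is what built-in bounds checking looks like at the language level (Python shown; a hardware implementation would perform the same check on every access):

```python
# Python does at runtime what the post argues hardware should do: every
# indexed access is bounds-checked, so an out-of-bounds write is trapped
# instead of silently corrupting neighbouring memory (as raw C allows).
buffer = [0] * 8

def store(index, value):
    buffer[index] = value      # checked on every access

store(3, 99)                   # in range: fine
try:
    store(8, 123)              # one past the end: rejected
except IndexError:
    print("out-of-bounds write trapped")
```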
A special systems programming language for lowest layers (memory allocation, etc) should be developed, and only used for the lowest layers of OS, nowhere else. Higher levels of the OS begin to look like applications programming for which a more general purpose language could be used. C should be relegated as a legacy systems language with languages that avoid the undesirable non-orthogonality of C (and especially C++) being developed to improve the software development process.
There is a lot of work to do to achieve secure and correct computing. It does need to start at the lowest levels with a complete rethink.
-
Wednesday 22nd November 2017 04:30 GMT Anonymous Coward
Re: Complete rethink
Basically Rust vice C and, if it should develop nicely, Redox as OS. Minimal adjustments there as it's a bit harder to fuck things up. Possible, but harder.
That's an iteration. What I see is that we need to go back and reexamine the structure and behaviors of the operating system from scratch. SCM, hybrid stacked silicon and other "gee, neat!" developments are calling into question the assumptions many of us work with in IT. {Shrug} That'll happen eventually when we get around to our next session of jumping up and down on our layers of abstraction to increase efficiencies in hardware, software and systems. I probably won't be around to see it.
-
Wednesday 22nd November 2017 04:55 GMT Ian Joyner
Re: Complete rethink
JoS. I mostly agree. We can get so much on a chip now that we really don't need RISC anymore, but we should also beware the excesses of CISC. I was going to say that Niklaus Wirth was doing RISC as Regular Instruction Set Computers a long time ago, but a little search found he is still working on such ideas:
https://www.inf.ethz.ch/personal/wirth/FPGA-relatedWork/RISC.pdf
Rust? Although I have had a quick look, I think we need to get further away from C and move towards zero-syntax or at least syntax-independent programming. C syntax now looks incredibly dated.
-
-
Wednesday 22nd November 2017 10:28 GMT Milton
Re: Complete rethink
I'm rarely averse to seeing the results of some clever folks doing a "complete rethink"—in fact, it's a healthy thing to do in almost any discipline every couple of decades or so, because it shakes things up, old assumptions get challenged, and cruft gets ditched.
If there is a requirement for rethinking computer architecture—and I am doubtful right now only because it seems to me that the subject is getting a lot of attention anyway—then indeed it should focus on security, because that is one of the things we're doing incredibly badly. But it's not the *only* thing we're getting wrong: bloat and inefficiency are appalling too. With the help of Moore's Law, we the IT community have been complicit in creating systems and software which are unnecessarily, crazily complex and which use a hundred times more resources in CPU cycles and memory than is required for many tasks. Ok, we're not in the days of writing assembler printer drivers for 6502s, and no one doubts the usefulness of libraries and full-featured OSs and a host of daemons, but it's far too easy to find yourself creating a 400Mb package of code to do something which, if you'd had to do it 25 years ago, would have run to 200kb. We're writing monstrously obese code to run on porcine operating systems, and—to come full circle—that makes things so complex that we're also constantly building in security defects. Complexity is the enemy of reliability and security.
Some famous engineer is reported to have told his disciples: "Simplify, and add lightness". Damn right. That's the kind of rethink I'd love to see.
As for security vs science, I respectfully suggest that may be missing the point. The last three major projects I worked on (two on encryption, one molecular modelling) all eventually ran their computing work on a shedload of GPUs, and two of them required air-gapped systems in a basement. My (admittedly subjective, limited) perception is that if you're doing science-y stuff, it's very complex and needs a ton of security too.
-
-
Thursday 23rd November 2017 02:38 GMT Anonymous Coward
Re: Complete rethink
Well, security doesn't sell unless it's part of your job description. Look at all the Facebook/Google/Twitter traffic, and all the stories about people giving away their privacy (which is a security issue itself). Basically, the average Joe usually doesn't get it, and he takes everyone else with him.
-
Thursday 23rd November 2017 02:55 GMT Ian Joyner
Re: Complete rethink
"Well, security doesn't sell unless it's part of your job description."
Right. Security is a negative - it is trying to stop people doing things. An unfortunate but regrettably necessary aspect. The philosophy of 'customer knows what they want so do what they ask' has given way to 'customer does not know what they want, so do what they need'. Security is in this category.
But we do know that security must be built in at the lowest levels, or we are always chasing our tail, trying to install some magic software to provide security, and that doesn't work.
Security is also a balance: making computing easy for a legitimate user but as hard as possible for a malicious attacker. As far as the legitimate user is concerned, security facilities built in at the lowest levels, such as bounds checking, make no difference and certainly do not adversely impact anything that is computable. In fact, in addition to security, they help developers develop correct programs.
-
Thursday 23rd November 2017 03:30 GMT Charles 9
Re: Complete rethink
"Security is also the balance between making computing easy for a legitimate user, but as hard as possible for a malicious attacker. As far as the legitimate user is concerned security facilities built in at lowest levels, such as bounds checking, actually makes no difference and certainly does not adversely impact anything that is computable. In fact, in addition to security, it helps developers develop correct programs."
And yet you don't see things like tagged memory in most processors? Why? Because of the other two legs of the triangle: cost and performance. You either take a noticeable performance hit or pay through the nose. And yes, people pay attention to those two. Media encoding jobs (such as home video editing) still take time even on relatively recent hardware (last I checked, you still can't do realtime 1080p HEVC even on an i7, let alone 4K down the road). And of course, there's still gaming, business calculations, and so on. At the same time, people don't want to spend a lot on their computers because, unlike things like cars, computers can't kill them. Wanting peace of mind takes a direct threat to make it desirable. Otherwise, it isn't worth it.
As for balancing between ease of use and difficulty, remember there's always the dreaded overlap. The paths you MUST leave for the users to get through can just as easily become the way in for the enemy, and there's no real way to stop this because there's no real way to prevent a sufficiently-disguised imposter (and we already know adversaries are ready, willing, able, and even eager to steal identities for this purpose, no matter how insignificant the identity).
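For anyone unfamiliar with the tagged-memory idea being debated here, a minimal sketch helps. This is a purely illustrative simulation (the tag names and word layout are invented for the example, not taken from any real architecture):

```python
# Tagged memory in miniature: each word carries a small type tag, and
# the simulated "hardware" refuses operations whose tags don't match.
TAG_INT, TAG_PTR = 0, 1

memory = {}                        # address -> (tag, value)

def store(addr, tag, value):
    memory[addr] = (tag, value)

def load_int(addr):
    tag, value = memory[addr]
    if tag != TAG_INT:             # the check a tagged machine does per access
        raise TypeError("tag mismatch: word is not an integer")
    return value

store(0x10, TAG_INT, 42)
store(0x18, TAG_PTR, 0x10)
print(load_int(0x10))              # 42
try:
    load_int(0x18)                 # pointer used as an integer: trapped
except TypeError as err:
    print("trapped:", err)
```

Doing this in software on every access is exactly the performance hit discussed above; a tagged architecture folds the check into the memory pipeline instead.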
-
Friday 24th November 2017 00:00 GMT Ian Joyner
Re: Complete rethink
"You either take a noticeable performance hit or pay through the nose."
No, you are way overstating your case against a straw man, since you introduce tagged architectures, which I have not mentioned in this thread. Tagged architectures are one way of enhancing security, but the performance hit is probably negligible because you can offset the need to concatenate tag+instruction with sophisticated look-ahead logic. Cost? Well, memory is really cheap these days, so for a four-bit overhead on a 64-bit word, not much.
Compared to the cost of security breaches these days, the cost of hardware is negligible. Many of these arguments come from a time when hardware was expensive and the equation was to make hardware cheaper. But the assumptions have changed. Now security problems cost a bomb and hardware is cheap.
"computers can't kill them"
Oh, yes they can.
-
Saturday 25th November 2017 16:08 GMT Charles 9
Re: Complete rethink
Well, if things like tagged memory are so cheap to implement, why haven't they been implemented in an opt-in fashion already and sold as a business security point (unlike consumers, businesses will at least have an eye on security, to keep company secrets from being wired out the door)? Maybe the change would be too radical and could break legacy implementations that don't expect stuff like this. Another possibility may be to implement this at an OS level to make it less architecture-dependent. But either approach is probably going to break some things, which makes it risky.
"Oh, yes they can."
Name an instance where a computer directly killed a user (analogous to when a car crashes into a tree or the like).
-
Sunday 26th November 2017 05:20 GMT Ian Joyner
Re: Complete rethink
"Well, if things like tagged memory are so cheap to implement, why haven't they been implemented in an opt-in fashion already"
So what is your point here? You are not making a technical point of any credibility. Besides, I told you it is not necessarily tagged memory (a topic you raised); it is the kind of security tags give you that matters. If there are other ways of doing this, fine.
Yes, change is radical and breaks things. In my experience I - and others - have taken buggy code (from well-known developers), run them in environments where their behaviour is checked and found many latent bugs. That is a good breaking of legacy applications, it tightens them up.
"Another possibility may be to implement this at an OS level to make it less architecture-dependen"
That will be much slower and not nearly as secure.
"Name an instance where a computer directly killed a user (analogous to when a car crashes into a tree or the like)."
I'm sure there have been many - like the recent Tesla car crash. Remember the Ariane V disaster - luckily no one was killed, but software interacts with the real world. Again you have no point, because your original claim that computers can't kill people is patently false.
-
Thursday 23rd November 2017 02:45 GMT Ian Joyner
Re: Complete rethink
Hello Milton. "As for security vs science, I respectfully suggest that may be missing the point."
I'm suggesting science as a very broad spectrum, but maybe it needs to be more tightly defined as really time-critical computations. These can indeed benefit from very low-level coding, compilers directly generating microcode as in RISC. When computers were big and expensive, of course getting every ounce out of a CPU was of primary concern and a large proportion of computing was used for scientific purposes.
But now that is a really small part of the market, and such systems should most likely not be connected to anything. So I'm saying the rest is where security is critical, much more so than performance. However, building security such as bounds checks into hardware can get both security and performance for most applications.
Unfortunately the performance considerations of the 1950s have long dominated the thinking of many computing people and in the minds of many, performance still trumps security and correctness. That is what I am saying needs a complete rethink, or at least a change to this perspective.
-
Wednesday 22nd November 2017 09:43 GMT Torben Mogensen
About time, I think
When Apple started making their own ARM chips, I predicted that they would move to ARM on the Mac line also. It has taken longer than I expected, but Apple has good reasons for this:
1. It would make Macs the ideal tool for developing iPhone software, as it can be made to run it without emulation.
2. More parts of the OS can be shared between Mac and iPhone.
3. It allows Apple to make a SoC to exactly their specifications instead of relying on what Intel produces.
4. It removes dependency on Intel (or AMD).
It is not impossible for Apple to make a 64-bit ARM processor that will outperform the fastest Intel processor. I'm sure Apple would love having the fastest laptops around, so people would migrate from Wintel to Apple for performance reasons. Apple need to do more work on the FP side to make this happen, but it is not impossible.
-
Wednesday 22nd November 2017 20:39 GMT J. Cook
I have nothing to add to this discussion except...
... the title of a song from the movie "The Lion King":
It's the Ciiiiiirrrrccllle of life!!!
*bows*
Amusingly enough, the POWER range of CPUs are still in production; IIRC, we are looking at getting a new iSeries here at [REDACTED] to replace the POWER5 based ones that are nearing the end of their support lifespan with ones running on POWER8 processors. (I think) We also had a few POWER6 boxes running around a couple years ago; even though they took 20 minutes to get to the point where you could turn them on, they ran FAR faster than the machines they replaced, even with AIX running on them (and the various vendor specific programs that were programmed in *shudder* BASIC!)
-
-
Wednesday 24th January 2018 20:38 GMT Charles 9
Re: ARM *and* x86?
Well, the Z-80 on the C128 was hobbled by the bus design. And it couldn't be used in standard Commodore mode (it was simply used to handle behind-the-scenes stuff). Even in CP/M mode, it ran at 1/4 its rated speed in 40-column mode and 1/2 speed in 80-column mode (turning off the 40-column VDC removes a bottleneck, which is why FAST mode requires using the CGA-compatible 80-column mode).
-