Stephen Hawking: The creation of true AI could be the 'greatest event in human history'

Pioneering physicist Stephen Hawking has said the creation of general artificial intelligence systems may be the "greatest event in human history" – but, then again, it could also destroy us. In an op-ed in UK newspaper The Independent, the physicist said IBM's Jeopardy!-busting Watson machine, Google Now, Siri, self-driving …

COMMENTS

This topic is closed for new posts.

  1. Pete Spicer
    Boffin

    I have no fear of true AI being created, mostly because if we can't consistently and accurately solve simpler algorithmic problems (cf. all the Big Security issues lately), what chance is there of us creating a program many levels more complex that doesn't have fatal bugs in it?

    1. asdf
      Boffin

      two words

      Quantum computers. They might give us true AI, or they might be a dead end like string theory, or always a decade away like economical fusion energy production. Going with the boffin icon to deflect from the fact that I'm making shit up.

      1. Destroy All Monsters Silver badge

        Re: two words

        No. Quantum computers are good for the following:

        1) Factoring large numbers

        2) Rapid searching in databases

        3) Simulation of quantum processes

        Otherwise they are mostly useless for any calculation of importance in the macro world.
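
        (To make point 2 concrete, here is a toy, purely classical simulation of Grover-style amplitude amplification over a 16-entry list - an illustrative sketch, not anything from the article - showing the roughly √N "queries" a quantum search would need.)

```python
# Toy classical simulation of Grover-style search (illustrative sketch only):
# amplitude amplification finds a marked item in ~(pi/4)*sqrt(N) "queries".
import math

N = 16
target = 11                                   # arbitrary marked entry
amps = [1 / math.sqrt(N)] * N                 # uniform "superposition"
iterations = int(math.pi / 4 * math.sqrt(N))  # ~3 for N = 16

for _ in range(iterations):
    amps[target] = -amps[target]              # oracle: flip the marked amplitude
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]       # diffusion: inversion about the mean

print(iterations, round(amps[target] ** 2, 3))  # 3 queries, ~0.96 probability
```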

        "True AI" is anyway fucking bullshit. It's like saying that you will finally attain "a true combustion engine".

        1. asdf

          Re: two words

          Human thought is both analog and digital if I remember right, and obviously electrochemical, so I am sure you are right. Still, factoring shit is something I guess, especially if you can find others dumb enough to pay you hundreds of dollars per bitcoin.

        2. Anonymous Coward
          Anonymous Coward

          Re: two words

          "True AI" is anyway fucking bullshit. It's like saying that you will finally attain "a true combustion engine".

          Yes. There's no reasonable definition of what it will mean in practice, but whatever they ultimately pin the tag on, it won't be Skynet, or Agent Smith. It might be a gigantic distributed system that can 'walk around' in a remote body, with a marvelous degree of sophistication in terms of simulated personality, sensors, and capabilities. It will still not be an entity, and will not be a direct threat - leave that to those humans who always seem to crop up with new, unethical 'applications' for tech.

          1. This post has been deleted by its author

        3. Suricou Raven

          Re: two words

          Number 2) might help. A lot of AI work is on machine learning - a field which requires the application of truly ridiculous amounts of processor power.

          We know it can work, because it's worked before - it's the algorithmic approach which led to us. Doing so required four billion years of runtime on a computer the size of a planet.
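
          (A minimal, hedged sketch of what "the algorithmic approach which led to us" looks like in miniature - a toy of my own, not anything from the post: random variation plus selection over a population.)

```python
# Minimal evolutionary search: selection + mutation over a population of
# bitstrings, converging on an arbitrary "fit" genome. Illustrative only.
import random

LENGTH, POP, MUTATION = 20, 50, 0.05
TARGET = [1] * LENGTH                                  # the "fit" genome

def fitness(genome):
    return sum(a == b for a, b in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == LENGTH:
        break
    parents = population[:10]                          # selection: keep the fittest
    population = [[bit if random.random() > MUTATION else 1 - bit
                   for bit in random.choice(parents)]  # copy a parent, with mutations
                  for _ in range(POP)]

print(generation, fitness(population[0]))              # converges in a few dozen generations
```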

          1. Michael Hawkes
            Terminator

            Re: two words

            "Here I am, brain the size of a planet... Call that job satisfaction, 'cause I don't."

            Where's the Marvin icon when you need it?

        4. sisk

          Re: two words

          No. Quantum computers are good for the following:

          1) Factoring large numbers

          2) Rapid searching in databases

          3) Simulation of quantum processes

          Otherwise they are mostly useless for any calculation of importance in the macro world.

          Funny. They said something very similar about electronic computers 70 years ago. Supposedly the things were only useful for very large calculations and there was no reason anyone would ever want one outside of a lab.

    2. asdf

      can't resist

      >creating a program many levels more complex that doesn't have fatal bugs in it?

      Perhaps someone or some company will do it right, but based on my company's history our AI product will be riding the short bus to school.

    3. Yet Another Anonymous coward Silver badge

      Oh good. A buggy - i.e. effectively insane - all-powerful AI entity.

    4. Mage Silver badge

      Agreed

      None of the present examples are AI. Hawking should stick to Physics & Mathematics etc.

      We can't even agree on a definition of Intelligence, which is partly tied up with creativity, so how can anyone write a program to simulate it? The history of AI in Computer Science is people figuring out how to do stuff previously thought to require intelligence and redefining what counts as AI, rather than coming up with a proper rigorous definition of Intelligence.

      1. Michael Wojcik Silver badge

        Re: Agreed

        We can't even agree on a definition of Intelligence, which is partly tied up with creativity, so how can anyone write a program to simulate it? The history of AI in Computer Science is people figuring out how to do stuff previously thought to require intelligence and redefining what counts as AI, rather than coming up with a proper rigorous definition of Intelligence.

        All true. However, there are some research groups still working on aspects of strong AI, or cognate fields such as unsupervised hierarchical learning [1] and heterogeneous approaches to complex problems like conversational entailment, which at least have far more sophisticated pictures of what "intelligence" in this situation might entail. They understand, for example, that anything that even vaguely resembles human cognition can't simply be a reactive system (that is, it has to continuously be updating its own predictive models of what it expects to happen, and then comparing those with actual inputs); that it has to operate on multiple levels with hidden variables (so it would have phenomenological limits just as humans do); and so on.
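
        (A deliberately tiny sketch of that "predictive, not merely reactive" point - an illustration of my own, not anyone's actual model: a system that holds an expectation, compares it with each actual input, and corrects its internal model from the prediction error.)

```python
# Toy predictive loop: keep an expectation, measure the surprise against each
# actual input, and nudge the internal model by that error. Illustrative only.
def predictive_loop(inputs, learning_rate=0.3):
    expectation = 0.0
    for observed in inputs:
        error = observed - expectation        # prediction error ("surprise")
        expectation += learning_rate * error  # update the internal model
        yield round(expectation, 3), round(error, 3)

for expected, surprise in predictive_loop([1.0, 1.0, 1.0, 5.0, 5.0, 5.0]):
    print(expected, surprise)                 # surprise spikes when the world changes
```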

        That doesn't mean we're close to solving these problems. A quick review of the latest research in, say, natural language processing shows just how difficult they are, and how far away we are. And the resources required are enormous. But we do have a better idea about what we don't know about human-style intelligence than we did a few decades ago, and we do have a better checklist of some of the things it appears to require.

        It's worth noting that Searle's widely-misunderstood [2] "Chinese Room" Gedankenexperiment, to take one well-known argument raised against strong AI, was intended simply to discredit one (then-popular) approach to strong AI - what Searle referred to as "symbolic manipulation". That's pretty much what Mage was referring to: a bunch of researchers in the '60s and '70s said, "hey, AI is just a matter of building these systems that use PCFGs and the like to push words around", and Searle said "I don't think so", and by now pretty much everyone agrees with Searle - even the people who think they don't agree with him.

        Strong AI wasn't just a matter of symbolic manipulation, and it won't be just a matter of piling up more and more unsupervised-learning levels, or anything else that can be described in a single sentence. Hell, the past decade of human-infancy psychological research has amply demonstrated that the interaction between innate knowledge and learning in the first few months of human life is way more complicated than researchers thought even twenty or thirty years ago. (We're just barely past the point of treating "innate knowledge" and "learning" as a dichotomy, as if it has to be just one or the other for any given subject. Nicholas Day has a great book and series of blog articles on this stuff for the lay reader, by the way.)

        Strong AI still looks possible [3], but far more difficult than some (non-expert) commentators like Hawking suggest. And that's "difficult" not in the "throw resources at it" or "we need a magical breakthrough" way, but in the "hard slog through lots of related but fundamentally different and very complicated problems" way.

        [1] Such as, but not limited to, the unfortunately-named "Deep Learning" that Google is currently infatuated with. "Deep Learning" is simply a hierarchy of neural networks (a toy sketch of the idea follows after these notes).

        [2] John Searle believed in the strong-AI program, in the sense that he thought the mind was an effect of a purely mechanical process, and so could in principle be duplicated by a non-human machine. He says that explicitly in some of the debates that followed the original Chinese Room publication.

        [3] No, I'm not at all convinced by Roger Penrose's argument. Penrose is a brilliant guy, but ultimately I don't think he's a very good phenomenologist, and I think he hugely mistakes what it means for a human being to "understand" incompleteness or the like. I'm also not so sure he successfully establishes any difference in formal power between a system capable of quantum superposition and a classical system that models a quantum system.
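
        (Re note [1]: a minimal, purely illustrative sketch of "a hierarchy of neural networks" - each layer's outputs feed the next layer; the random weights here are stand-ins for what a real system would learn.)

```python
# "Deep Learning" in caricature: stacked layers, each feeding the next.
# Random weights stand in for learned ones; this only shows the structure.
import math, random

random.seed(0)

def layer(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(inputs, weights):
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

hierarchy = [layer(4, 8), layer(8, 8), layer(8, 2)]   # three stacked layers
activations = [0.2, 0.7, 0.1, 0.9]
for w in hierarchy:
    activations = forward(activations, w)

print([round(a, 3) for a in activations])
```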

        1. sisk

          Re: Agreed

          I'm not an expert on the subject by any stretch of the imagination, but I would imagine that we're a good century away from what most laymen (and in this regard Hawking is a layman) think of as AI - that is, self-aware machines. Barring some magical breakthrough that we don't and won't any time soon understand*, it's just not going to happen within our lifetimes. We don't even understand self-awareness, let alone have the ability to program it.

          In technical terms I would argue that an AI need not necessarily follow the model for human intelligence. Our form of intelligence is a little insane really, if you think about it. A machine would, of necessity, have a much more organized form of intelligence if it were based on any current technology. If you'll grant me that point, I would argue that it follows that we don't necessarily have to fully understand our own intelligence to create a different sort of intelligence. Even so we're a long way off from that, unless you accept the theory that the internet itself has already achieved a rather alien form of self awareness. (I forget where I read that little gem, but I'm not sure about it. The internet's 'self awareness' would require that its 'intelligence', which again would be very alien to us and live in the backbone routers, understand the data that we're feeding it every day, such as this post, and I just don't buy that.)

          *Such breakthroughs do happen, but they're rare. The last one I'm aware of was the invention of the hot air balloon, which took a long time to understand after it was invented.

    5. tlhulick

      That is, of course, the problem: remember the havoc caused a short while back on Wall Street by algorithmic problems in "automatic trading" - problems which will never be fully known, because they can never be identified (but which were, fortunately, stopped by shutting the systems down momentarily, after which, upon "reignition", everything was back to normal). The more important question being ignored, and which I believe underlies Stephen Hawking's opinion, remains: what if that black-hole, downward spiral had continued unabated, increasing exponentially after the "reboot"?

    6. brainbone

      Predictions, tea pots, etc.:

      The first AI that we'll recognise as equal to our own will happen after we're able to simulate the growth of a human in a computer in near real time. This ethically dubious "virtual human" will be no less intelligent than ourselves, and what we learn from it will finally crack the nut, allowing us to expand on this type of intelligence without resorting to growing a human, or other animal, in a simulator.

      So, would the superstitious among us believe such a "virtual human" to have a soul?

    7. Robert Grant

      Yeah, and also we haven't done anything like create something that can think. Progress in AI, to us, means a better way of finding a good-enough value in a colossal search space. To non-CS people it seems to mean anything they can think of from a sci-fi novel.

      Honestly, if it weren't Hawking opining this fluff, it would never get a mention anywhere.

  2. Destroy All Monsters Silver badge
    Facepalm

    Why is Hawking bloviating on AI this and that?

    I can't remember him doing much research in that arena. He could talk about cooking or gardening next.

    1. asdf

      Re: Why is Hawking bloviating on AI this and that?

      Would you rather have Stephen Fry bloviating on the topic?

      1. Yet Another Anonymous coward Silver badge

        Re: Why is Hawking bloviating on AI this and that?

        Surely you cannot be criticizing St Stephen of Fry?

        The patron saint of Apple Mac fanbois

        1. asdf

          Re: Why is Hawking bloviating on AI this and that?

          Oh, so the British Walt Mossberg then, OK. Yeah, I finally looked up his name and cut my comment down to about 10% of the size it was before, lol.

          1. Dave 126 Silver badge

            Re: Why is Hawking bloviating on AI this and that?

            >Why is Hawking bloviating on AI this and that? I can't remember him doing much research in that arena.

            Well, Hawking's collaborator on black holes, Roger Penrose, is known for writing 'The Emperor's New Mind', in which he 'presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine-type of digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function.

            The majority of the book is spent reviewing, for the scientifically minded layreader, a plethora of interrelated subjects such as Newtonian physics, special and general relativity, the philosophy and limitations of mathematics, quantum physics, cosmology, and the nature of time. Penrose intermittently describes how each of these bears on his developing theme: that consciousness is not "algorithmic"'

            Those areas - relativity, quantum physics, cosmology and the nature of time - are very much up Hawking's street.

  3. Graham Marsden
    Terminator

    Warn: There is another system

    Colossus to Guardian: 1 + 1 = 2

  4. Anonymous Coward
    Anonymous Coward

    Maybe he should have finished reading 'The Two Faces of Tomorrow' by James P Hogan.

  5. Anonymous Coward
    Anonymous Coward

    Thinking about thinking

    The most worrying ethical issue I have involves the causing of pain to another sentient consciousness, and how easily this might happen as soon as an AI (that becomes self-aware) or mind uploading starts to kick off.

    The classical depictions of Hell being an eternity of suffering become feasible once the innate mortality of the bodily support systems for a consciousness is removed and replaced by a machine that can be kept running forever. And think of the power an oppressive government could wield, if its threat isn't just "If you do something we don't like, we'll kill you" but rather "If you do something we don't like, we'll capture you, and torture you. Forever. And we might make a hundred clones of you, and torture each of those too!"

    I'm not one to say technology shouldn't be pursued due to the ethical considerations of misuse, but AI and related fields seem to be ones to tread carefully down.

    But it also raises interesting scenarios, like forking a simulation of a mind and then allowing the clones to interact. Imagine meeting and talking to a version of yourself. And it raises the prospect of true immortality, your human body being just the first phase of your existence - "you" being defined as your thoughts, memories and personality, rather than the wet squishy biochemical stuff that keeps that consciousness thinking.

    Does thinking even need an organic brain, or a computer? Could we simulate a consciousness just by meticulously calculating and recording by hand the interactions of an incredibly vast neural network? Sure, it'd take a very long time, but imagine a religious order whose only purpose is to multiply and add up numbers, then record the results in a series of books: centuries passing as generations of them attend to their task, volumes upon volumes of these books carrying the state of the system as it evolves, until enough time has passed within the simulation for the mind to have spent a fleeting moment in thought.

    What was it that enabled that thought? The mere act of all those monks making the calculation? Of letting the state feed back on itself? Of writing the state down somewhere? What if the monks never wrote the intermediate states down, does the simulation still think?
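
    (For a sense of what the monks would actually be doing, here is a toy, hand-checkable sketch - purely an illustration under assumed made-up weights - of one "tick" of a tiny recurrent network: multiply, add, squash, write the new state into the next volume.)

```python
# One "tick" of a tiny recurrent net - nothing a patient scribe couldn't do by
# hand: weighted sums, a squashing function, and the new state written down.
import math

weights = [[ 0.5, -0.2,  0.1],
           [ 0.3,  0.8, -0.5],
           [-0.4,  0.2,  0.9]]
state = [0.1, 0.0, 0.2]                       # the current "volume"

def tick(state):
    return [1 / (1 + math.exp(-sum(w * s for w, s in zip(row, state))))
            for row in weights]

for generation in range(5):                   # five generations of monks
    state = tick(state)
    print(generation, [round(s, 3) for s in state])
```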

    1. nexsphil

      Re: Thinking about thinking

      It's actually frightening to imagine primitives like ourselves getting hold of the kind of power you describe. I'd like to think that if things became that extreme, we'd be swiftly put out of our misery by a benevolent advanced species. One can only hope.

    2. Anonymous Coward
      Anonymous Coward

      Re: Thinking about thinking

      It's thankfully not that "easy" to cause harm or suffering to a virtual "thing" (or AI should it ever exist).

      Why? Well, ask yourself. Do cars suffer? Does a drill or hammer? What about a clock or current computer OS?

      Those do not suffer, because they are not people. What about backing up your PC to a previous state and recovering it after a crash? Does that still meet the definition of it suffering?

      So before we even get to the point of considering if we can program and create something that IS "Alive" and may be close to being a "person" (and not just a bacterium of AI), we have to consider what it would mean for it to experience pain, even if it could.

      1. majorursa
        Terminator

        Re: Thinking about thinking

        Your argument should be turned around: if something is able to feel pain it is 'alive', whether intelligent or not.

        A good measure of 'true AI' could be how strongly an entity is aware of its own impending death.

        1. Anonymous Coward
          Anonymous Coward

          Re: Thinking about thinking

          Do you mean pain as in a response, a signal or as in "the feeling of suffering"? Computers can have a response and a signal, do they feel pain? They can "detect" damage, so can my car... does it feel pain?

          As said, for us to worry about something suffering, we first have to worry about it being alive. Currently we are clearly far away from making something fit the alive state (computers, cars, hammers). So we can also say we are safely far away from making those things suffer.

          Even before we worry about how to define a living and intelligent thing we made, we can concentrate on the real biological ones living around us right now. :)

          Once we get close enough to ask "is this alive or not... is it a person or not..." then we can start to ask the other hard questions. Personally I don't think we will reach it (diminishing returns and we already have the solution, it is biological in function and form).

          The simplest way I can put it is that a picture, book or "computer program" of Switzerland is not actually Switzerland. If I was to progress one step up, with a "perfect AI replication of Switzerland simulated in every detail", I'd end up with a simulation the exact same size and with the same mechanics as the thing I was trying to simulate. The "map the size of the country, 1:1 scale" problem. :P

    3. Sander van der Wal

      Re: Thinking about thinking

      Mmmm. Given that humans are the ones coming up with these kinds of nasty applications all the time, the first thing a rational being will do is make sure they cannot inflict that kind of torture on him.

    4. mrtom84
      Thumb Up

      Re: Thinking about thinking

      Reminds me of a quote out of Permutation City by Greg Egan about creating a consciousness using an abacus.

      Great book...

    5. d3vy

      Re: Thinking about thinking

      "magine meeting and talking to a version of yourself"

      I can honestly say that would be terrible, I've spent quite a bit of time with me and I'm a complete tit.

    6. Michael Wojcik Silver badge

      Re: Thinking about thinking

      Welcome to sophomore Introduction to Philosophy class, courtesy of the Reg forums.

      It is just barely possible that some people have already considered some of these ideas.

  6. Rol

    Still using humans?

    "The model AI 1000 can outperform humans by any measure and have no morals to cloud their judgement"

    "Mmm, you are right, I'll take two for now"

    "Wise choice sir, that'll be 20 million bit coins and 5000 a month for the service contract"

    "Err, on second thoughts I'll stick with David Cameron and his sidekick for now, they're far cheaper and just as easy to control"

    "As you wish Lord Sith, but please take my card, you never know"

    1. d3rrial

      Re: Still using humans?

      20 million bitcoins? Out of 21 million which could possibly exist? That'd be one hell of an AI if it costs over 90% of the total supply of a currency.

      I'd recommend Angela Merkel or the entire German Bundestag. They're all just puppets. You know that carrot on a stick trick? Just put a 5€ note instead of the carrot and they'll do whatever you want (but they salivate quite a bit, so be careful)

  7. Timothy Creswick

    FFS

    Seriously, how many different spellings do you need for one man's name in this article?

    1. Rol

      Re: FFS

      Apparently the colossal works churned out by this man were in fact a collaboration between Stephen Hawking, Stephen Hawkin and Stephen Hawkins, and they'll be telling us it's just a coincidence.

      I suspect the teacher had the class sit in alphabetical order and they copied off each other at the exam.

      1. JackClark

        Re: FFS

        Hello, unfortunately the article also waffles on about Jeff Hawkins, so Hawking/Hawking's/Hawkins all present but, as far as I can work out, correctly attributed.

  8. RobHib
    Stop

    I just wish....

    ...that 'ordinary' AI had reached a sufficient level of development that OCR would (a) actually recognize what's scanned and not produce gibberish, and (b) provide me with a grammar checker that's a tad more sophisticated than suggesting that 'which' needs a comma before it, or that if there's no comma then 'that' should be used.

    Seems to me we've a long way to go before we really need to worry, methinks.

    1. Anonymous Coward
      Anonymous Coward

      Re: I just wish....

      Not that I disagree, but I suspect an AI won't be achieved by logic, but by chaos theory / emergent behavior. We'll end up with something intelligent, but it will be error prone just like humans.

      If Moore's Law stalls out as looks to be the case, we may end up with an AI that's close to us in intelligence. Maybe a bit smarter, maybe a bit dumber, but prone to the same mistakes and perhaps even something akin to insanity.

      Where's that off switch, again?

      1. Mage Silver badge

        Re: I just wish....

        In reality Moore's law started to tail off rapidly about 2002.

        1. Anonymous Coward
          Anonymous Coward

          Re: I just wish....

          I think you have absolutely no idea what Moore's Law is if you think that. It has nothing to do with frequency, as you appear to be assuming.

          Moore's Law says that every 24 months (it was 18 for a while at the start) the number of transistors you can put on a chip doubles. That's still the case; we've kept doubling transistors, even since 2002. That's why we have stupid stuff like quad/octo-core phones - it is hard to come up with intelligent ways to use all those extra transistors, so "more cores, more cache" are the fallback positions when chip architects are not smart enough to put them to better use.
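
          (As a back-of-the-envelope check - my own rough numbers, starting from the roughly 2,300-transistor Intel 4004 of 1971 - doubling every 24 months does land in the billions by the 2010s.)

```python
# Rough check of the doubling claim: ~2,300 transistors (Intel 4004, 1971),
# doubled every 24 months, lands in the billions by the early 2010s.
count, year = 2300, 1971
while year <= 2012:
    count *= 2        # one doubling...
    year += 2         # ...every 24 months
print(year, f"{count:,}")   # 2013, ~4.8 billion - the right order of magnitude
```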

          Moore's Law may have trouble going much beyond 2020 unless we can make EUV or e-beam work properly. We've been trying since the 90s, and still haven't got it right, so the clock is ticking louder and louder...

          1. Don Jefe

            Re: I just wish....

            Moore's Law is often mis-categorized as a solely technological observation, but that's inaccurate. Incomplete at any rate. Moore's law is a marketing observation turned, shrewdly, into an investment vehicle.

            I mean, come on, analysts and junior investors get all rabid talking about results just a few months into the future. Here comes Gordon Moore, chiseling 24-month forecasts into stone. When an acknowledged leader in a field full of smart people says something like that, everybody listens. He's not known for excessive fluff, you know; 'It must be true, they must already have the technology', everybody said.

            The best part, it didn't matter if it was true, or even remotely feasible, when he said it. People threw enough money and brain power into it that they could have colonized another planet if they wanted to. Gordon Moore created the most valuable self-fulfilling prophecy since somebody said their God would be born to the line of David. I think it's all just fucking great.

            1. John Smith 19 Gold badge
              Unhappy

              Re: I just wish....

              "The best part, it didn't matter if it was true, or even remotely feasible, when he said it. People threw enough money and brain power into it that they could have colonized another planet if they wanted to. Gordon Moore created the most valuable self-fulfilling prophecy since somebody said their God would be born to the line of David. I think it's all just fucking great."

              True.

              But by my reckoning a current silicon transistor gate is about 140 atoms wide. If the technology continues to improve (and X-ray/extreme UV lithography is struggling) you will eventually have 1-atom-wide transistors. 1-electron transistors were done decades ago.

              Still it's been fun while it lasted.

              1. Anonymous Coward
                Anonymous Coward

                Re: I just wish....

                There's still a lot of room for improvement if you were really able to go from 140 atoms wide to 1 atom wide (which you can't, but we can dream). Scaling is in two dimensions, so that would be roughly 14 doublings, or 28 more years. Then you could get one more doubling by using a smaller atom - carbon has an atomic radius about 2/3 that of silicon.
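
                (The arithmetic behind that "roughly 14 doublings": shrinking a 140-atom gate to 1 atom in two dimensions is a ~19,600x area reduction, i.e. log2 of that many doublings - a quick check:)

```python
# Quick check of the "14 doublings / 28 years" estimate above.
import math

gate_now, gate_limit = 140, 1                 # atoms across, per the posts above
area_shrink = (gate_now / gate_limit) ** 2    # scaling is in two dimensions
doublings = math.log2(area_shrink)            # each doubling halves the area
print(round(doublings, 1), round(doublings * 2, 1))   # ~14.3 doublings, ~28.5 years
```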

                If you start scaling vertically, then you could keep going until almost the year 2100 - assuming you found a way to keep a quadrillion transistor chip cool! But don't laugh, I remember the arguments that a billion transistor chip would never be possible because it would exceed the melting point of silicon, but here we are...

                I don't agree that Moore's Law is just marketing. The investment wouldn't be made if it didn't pay off, it isn't like foundries are sucking the blood out of the rest of the economy and returning no value. Moore's observation may have been more of a roadmap, since there's no particular reason you couldn't have invested 20x as much and made a decade of progress in a few years. The stepwise approach made planning a lot easier, due to the lead time for designing the big systems typical of the late 60s/early 70s, and designing the more complex CPUs typical of the late 80s and beyond. His observation was intended for system architects, not Wall Street.

                1. Don Jefe

                  Re: I just wish....

                  Sorry to disappoint, but you've got it backwards. Nobody would stump up the money to build larger, more advanced foundries and equipment. Chip foundries are pretty much the most expensive thing in the world to build, take ages to pay for themselves and have extraordinarily high tooling and maintenance costs. Moore's Law reassured the banks the industry would grow, if they would just pony up the cash for new foundries.

                  Moore's Law (of Lending) kickstarted financial institutions into global discussions, conferences and lobbying groups that leaned on state-sponsored commercial banks for certain guarantees, and an entirely new type of lending specialty was born. The self-fulfilling bit starts there.

                  Once the first advanced foundry was built it became evident to everybody that it was going to take too long to recover their investments so they threw more money into things like R&D (banks will almost never lend for R&D, but they did for silicon), companies building machines to package chips at incredibly fast speeds (hi guys), mineral extraction all the way down to ongoing training for systems people and the development of more industry suitable university programs (Concordia University in Montreal benefitted greatly from that).

                  But the banks did all that because they were able to go to state commercial lenders with Moore's Law as their main selling point, and once the money was secured, it was gone. There was no 'not keep investing if the payoff wasn't there'; stunning amounts of money were spent. None (well, few) of the normal banking shenanigans were available as options, because as soon as milestone-related monies were available they were spent. The banks had to keep investing and accelerating the silicon industry because one of the guarantees they had provided to the state banks allowed rates on other loans to be increased to cover any losses in the silicon foundry deals. If the foundries failed, so did the banks.

                  It's all very interesting. The stunning amount of cabinet level political influence in the funding of Moore's Law is rarely discussed. That's understandable, politics and marketing aren't sexy, nor do those things play well to the romantic notion that technology was driving Wall St, but the facts are facts. The whole sordid story is publicly available, it wasn't romantic at all. It was business.

                  1. Anonymous Coward
                    Anonymous Coward

                    @Don Jefe

                    Fabs are super expensive today, but when Moore's Law was formulated and for several decades after, they were quite affordable. That's why so many companies had their own. As they kept becoming more expensive over time fewer and fewer companies could afford them - more and more became "fabless".

                    There's probably a corollary to Moore's Law that fabs get more expensive. Fortunately that scaling is much less than the scaling of transistors. Maybe cube root or so.

                    1. Don Jefe

                      Re: @Don Jefe

                      @DougS

                      That's a very valid point about the escalating costs. I'm sure you're correct and there's a fairly strong correlation between transistor density and 'next gen' greenfield fab buildout, especially at this point in the game. For all their other failings, big commercial banks don't stay too far out of their depth when very complex things are in play. Some of the best engineers I've ever hired came from banks and insurance companies. I'll ask around tomorrow. Thanks for the brain food.

                      1. Anonymous Coward
                        Anonymous Coward

                        Re: @Don Jefe

                        One of the interesting things to watch is the battle over 450mm wafer fabs. The large remaining players - TSMC, Samsung and (perhaps to a lesser extent) Intel - want to see 450mm fabs because getting 2.25x as many chips per wafer is attractive, to improve throughput and somewhat reduce cost.
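
                        (That 2.25x figure is just the wafer area ratio:)

```python
print((450 / 300) ** 2)   # 2.25 - a 450mm wafer has 2.25x the area of a 300mm one
```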

                        The problem is that the tool vendors don't want to build them, because they don't see a return when they really have a handful of customers. They broke even - at best - on 300mm tools, and they may go bankrupt building 450mm tools. They'd have as few as two, maybe four or five customers at the very most. The return for them just isn't there. Intel invested in ASML to try to force the issue but still 450mm keeps getting pushed back.

                        I think the reason Intel's investment didn't help is that even Intel realizes deep down that going to 450mm is not economic for them, at least not unless they become a true foundry and take on big customers like Apple, instead of the tiny ones they've had so far. They have four large fabs now; with 450mm they'll only need two. Not to mention they are supposedly only at 60% utilization on the ones they have!

                        The economies of scale get pretty thin when you only have two fabs, so Intel is probably better off sticking with 300mm fabs. But they daren't let Wall Street know this, or the money men will realize the jig is up as far as Intel's future growth prospects go, and that even Intel may be forced to go fabless sometime after 2020. Their stock price would be cut in half once Wall Street realizes the truth.

              2. Anonymous Coward
                Anonymous Coward

                Re: I just wish....

                I recently saw a presentation by IBM at CERN *. They are planning to stack hundreds of chips and supply power & cooling to them by using an electrolyte flowing through um-sized channels between the chip stacks.

                They reckon that by going 3D, they can make Moore's Law even more exponential.

                *) https://indico.cern.ch/event/245432/
