That sinking feeling: Itanic spat's back as HPE Oracle trial resumes

Oracle is back in court, this time fending off a $3bn case brought by Hewlett Packard Enterprise. A week after it lost to Google on Java, Oracle is now resuming its fight with HPE over damages relating to HPE's claims that Oracle back-tracked on a commitment to put its software on HP servers running Intel’s Itanic chips. HPE …

  1. Anonymous Coward

    Itanic, or IA-64, or Itanium, was Intel’s project that began in 1991 to build a powerful x86 alternative to the RISC brutes such as IBM’s Power and Sun’s SPARC.

    Kind of, I suppose. Actually it drew very heavily on HP's own VLIW work - hence the "partnership". An interesting architecture for the kinds of loads that were being predicted - and it might have gone further (cf. Wankel's rotary engine) in other circumstances.

    1. A Non e-mouse Silver badge

      This was also the time when Intel were saying that there wouldn't be a 64-bit version of the x86 processor architecture as the Itanic would be Intel's 64-bit roadmap.

      That lasted until Intel saw AMD stealing a lead with their Athlon 64 processors....

  2. Paul Crawford Silver badge
    Trollface

    I had forgotten that anyone still made Itanium based machines, and to think HP/Compaq dumped Alpha for this. Still I am sure Larry's pay-off will help HP's executive bonuses next year.

  3. Mikel

    The disconnect

    There is a rather fundamental disconnect between HP's Itanic marketing people and common sense. It is so extreme that it is difficult to have meaningful communications with them. It seems that if you are not presold on the value proposition of their USP over price, performance and compatibility with modern technologies, they can't understand you either. And they don't want to understand.

    It makes it hard to discuss issues such as distributed reliability with them.

    1. Alan Brown Silver badge

      Re: The disconnect

      "It is so extreme that is difficult to have meaningful communications with them"

      Especially when what they sold doesn't actually work (personal experience). This was the final straw which ensured that my employer never purchased HP systems again.

  4. Anonymous Coward

    Itanic is right!

    HPE should have taken the Tandem SQL and the Tandem NonStop architecture to IA-64 instead of depending on Linux and Oracle..... Lost opportunity....

    1. Destroy All Monsters Silver badge

      Re: Itanic is right!

      I am not sure that statement makes particular sense.

    2. Matt Bryant Silver badge
      WTF?

      Re: AC Re: Itanic is right!

      "HPE should have taken the Tandem SQL and the Tandem non-stop architecture to IA-64...." Er, they did, quite a while ago. IIRC, you can still buy Itanium-based Nonstop I servers and now also Xeon-based Nonstop X servers (http://www8.hp.com/us/en/products/servers/integrity/nonstop/nonstopi-rackservers.html#!&pd1=1).

      1. Anonymous Coward

        Re: AC Itanic is right!

        I can't.

        I can't possibly upvote a Matt Bryant comment.

        Therefore I will simply point out that in this instance Matt is factually correct, unlike the post to which he is replying.

        There is still more to some parts of the world than Windows and assorted UNIXalikes.

        1. Vic

          Re: AC Itanic is right!

          I can't possibly upvote a Matt Bryant comment.

          Sooner or later, it happens to us all...

          Vic.

  5. energystar
    Headmaster

    Could Intel cut...

    Could Intel cut the biggest instructions and just add an emulator module? This way the architecture could extend outside of the Server Wing.

    Come on. Give the Lawyers' tired pockets a rest... </JokeAlert>

  6. Destroy All Monsters Silver badge
    Paris Hilton

    Ho-hum!

    IA-64 is marketed today as Itanium and is targeted at mainframe-like performance and multi-threading

    Well, that's just the hype.

    Did anyone ever manage to produce a compiler that generated good VLIW code? I can imagine that it works wonders for specialized applications. For example, vector processing works wonders for linear algebra operations. However, does this approach work in general, or are the compiler overhead and/or the inability to actually use the VLIWs efficiently too costly?

    1. Matt Bryant Silver badge
      Facepalm

      Re: Destroy All Monsters Re: Ho-hum!

      "....Did anyone ever manage to produce a compiler that generated good VLIW code?...." Better than Sun's native SPARC. Fujitsu had a demo of "the World's fastest Solaris server" of Slowaris on Itanium. We used to have a laugh running Slowaris on top of Transitive's QuickTransit emulation software on an Integrity Superdome to show how it was faster than any Sun server, just to annoy the Sun salesgrunts (a trick the hp salesgrunts showed us with some glee - http://www.itjungle.com/breaking/bn062007-story01.html). Such a shame that IBM bought and killed Transitive because they were worried emulation would smack a big hole in their mainframe biz (http://www.theregister.co.uk/2008/11/25/ibm_transitive_options/)

      1. Destroy All Monsters Silver badge
        Paris Hilton

        Re: Destroy All Monsters Ho-hum!

        Thanks for the canned ragesponse, but did anyone mention Sun?

        > hp salesgrunts showed us with some glee

        Must be a first.

    2. Michael Wojcik Silver badge

      Re: Ho-hum!

      However, does this approach work in general or are the compiler overhead and/or the inability to actually use the VLIWs efficiently too costly?

      "good VLIW code" is obviously subjective, and I admit I've never looked into rigorous comparisons between code generation for VLIW and non-VLIW architectures. There was quite a bit of research into VLIW compilation, though, even before Itanium. Monica Lam wrote a well-known piece on software pipelining for VLIW back in 1988, for example (it was included in SIGPLAN's Best of PLDI 1979-1999). Subsequent work by e.g. Gao improved Lam's algorithms. A '96 paper showed software pipelining in a state-of-the-art commercial compiler produced near-optimal scheduling, but that was for the R8000, not Itanium.

      But in her retrospective for Best of PLDI Lam more or less agrees with your point regarding VLIW techniques and non-numeric code:

      The Itanium, however, does not have a dynamic scheduler which is found in all other modern processor architectures. Software pipelining is applicable only to codes with predictable behavior like numerical applications; as such, it only expands the number of instructions in innermost loops slightly. On the other hand, the behavior of non-numeric applications is much less predictable; without a dynamic scheduler, an aggressive static scheduler needs to generate codes for many alternate paths, which can lead to code bloat.

      That was in 2003, and the Itanium architecture has since evolved, of course.

      There seems to be much less research being conducted on VLIW in the past decade than the one before it, judging from the ACM Digital Library. And most of the recent stuff seems to be dealing with problems raised by VLIW (e.g. instruction merging when implementing SMT on VLIW cores) rather than on taking advantage of it.

      In the early part of the present century, though, there was quite a bit of VLIW compiler research, so current VLIW compilers may be pretty good. I've on occasion looked at the code generated by the HP-UX 11.31i C compiler [1], but I wasn't trying to gauge its quality.

      [1] Spent far too long debugging an intermittent issue that turned out to be caused by a trap representation in a register. Turns out Itanium supports a trap representation - NaT, "Not a Thing" - in its integer registers. There was a piece of code that was calling a function declared with void return type, but without a declaration in scope, so the caller implicitly treated it as having int return type. That meant the caller loaded the "return value" from a register when the call returned. Sometimes there was a valid value left in that register; once in a while it was NaT, which caused the kernel to raise SIGILL. Elusive. There was a compiler diagnostic for the lack of a declaration, but it was an old code base full of warnings, and a build system that discarded those warnings if the build succeeded. Sigh.

      The Itanium register trap representation is not a bad idea, but SIGILL is a lousy way to report it. It would have helped if, say, the signal(2) man page mentioned this quirk of the CPU.
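      For anyone who hasn't hit this class of bug: a single-file sketch (my reconstruction, not the actual code - the real thing needs the call and the definition in separate translation units, so the function-pointer cast here stands in for the missing declaration):

```c
#include <stdio.h>

/* The callee really returns void - it never sets a return value. */
static void do_work(void) {
    puts("working");
}

/* A pre-C99 caller with no declaration in scope assumes
   "int do_work();" and reads a "return value" the callee never
   wrote.  On most CPUs that reads leftover garbage; on Itanium the
   return register can hold NaT, and consuming it raises SIGILL. */
int call_with_wrong_type(void) {
    int (*bad)(void) = (int (*)(void))do_work;  /* simulate the mismatch */
    int r = bad();                              /* undefined behaviour */
    (void)r;
    return 42;   /* reached only if reading r didn't trap */
}
```

      On x86 this "works" every time, which is exactly why the bug sat unnoticed until the code met ia64 and NaT happened to be in the register - hence the intermittency.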

    3. Anonymous Coward

      Re: Ho-hum!

      There's at least one relatively non-techy whitepaper about how VLIW is inherently unable to exploit runtime parallelism: it can only do parallelism which is visible at compile time, whereas a proper RISC compiler and chip will exploit both compile-time and runtime parallelism (obviously that's my gross oversimplification):

      http://www.cs.trinity.edu/~mlewis/CSCI3294-F01/Papers/alpha_ia64.pdf

      [In case it's not obvious from the paper itself: it came from the Alpha people]

  7. hellwig

    Decade off?

    I think someone added 10 years to most of the dates in this article. Lawsuit filed in "2011"? Itanium support through "2025"? Published 01 June "2016"?

    People talk about how Intel should separate their design from their fab, but if they don't run the fab, who would continue to churn out Itaniums?

    I can understand that custom/dedicated hardware is needed for certain compute jobs, but I wouldn't think Intel was in the business of those sorts of small-scale projects. Much less in the business of supporting the architecture 20 years past its relevance. I mean, they're cancelling Atom-based mobile SoCs because of low demand; is the demand for Itanium still high enough?

    1. energystar
      FAIL

      Re: Decade off?

      "I mean, they're canceling Atom-based mobile SoCs because of low demand, is the demand for Itanium still high enough?"

      OK, got it. After a decade of stagnation, it's almost impossible to bring it back to competitiveness.

    2. joeldillon

      Re: Decade off?

      Intel is in a partnership with HP which contractually obliges them to keep putting out Itaniums, though they're putting very little effort into the architecture. They'd love to drop it.

      HP is presumably still selling enough of the kit into legacy environments that it's worth it to them.

  8. OzBob

    Itaniums were good for monster databases

    that had a high IO workload (NHS, mining companies, finance companies, defence industries), but with the "virtualise everything and split into components like SAP" methodology, they are fast becoming redundant. Not going back to that particular brand (and note how many HP-UX jobs on the UK sites require security vetting).

  9. Fenton

    Scalability

    This is where Itanium really is falling behind.

    Clock speeds have only just reached 2.53GHz, and the core count is only 8 max.

    Yes, it can scale up to 16 sockets, but that is still only 128 cores in a single image.

    An 8-socket Haswell box can scale up to 144 cores, and Broadwell will reach 176 cores.

    And then you look at the cost and wonder why every software vendor is targeting x86.
