Didn't sound too bad until: "There's also an AMD-designed Radeon-class GPU that's been tweaked by Microsoft to within an inch of its life."
Microsoft aren't exactly known for their prowess in hardware design; they should've left it to AMD.
Microsoft has revealed details of the chip powering its soon-to-be-released Xbox One – and it's one big ol' mofo. How big? Does a 363mm² footprint – using a 28-nanometer process, no less – filled with five billion transistors impress you? Perhaps Microsoft is planning to use this big boy for Halo: OverReach. By comparison, …
"This is of vital importance."
This is of Zero importance.
Microsoft are well known for very secure hardware compared to the competition. If they are relying on drivers not being available to prevent people downgrading to Linux, then they've already failed...
This post has been deleted by its author
>I hear the PS4 won't play PC games either. It's almost like they're completely different devices which happen to include "xbox" in the name.
You've got to give JDX credit for one thing: he is consistently loyal to his employer. A finer Microsoft shill can't be found (except for the numerous ACs that only show up for Microsoft articles).
Nah, there's the German dude and the other one who insists on comparing current market shares with the monopoly era. Though it could be that they're just trolling vs. being actual shills ;)
On the claim of the other AC saying that MS does the most secure hardware: the 360 was cracked years before the PS3 was. And even then, it was because of someone doing a boo-boo and sticking a constant in a place where an RNG was required. So it wasn't even the hardware they cracked on the PS3, it was the crypto key itself...
I'm pretty sure they have said the Xbox One OSes reserve 3GB of memory, not 4. I also think the PS4 reserves 3/3.5GB, so pretty much the same.
Personally I don't care about clock speed, transistor count etc and who has the biggest/fastest numbers. I only care about my games playing and looking great, if either of the systems can do that, then I'll be happy.
I doubt it's 50% more powerful, though I do think it has a performance edge. I doubt there'll be any visible difference in cross-platform games for a long time. Developers will go with the lowest common denominator. Exclusives will be what to look at, maybe?
As it stands though Xbox One has 3GB of RAM reserved for OS and the PS4 3.5GB
I don't think it's fair to say boring old DDR3 :P it still has much lower latency than GDDR5, so it still has its place. There's a reason PCs don't use GDDR5 for their CPUs.
When it comes to the CPU and GPU sharing the memory I have no idea which is better over all for gaming. I'd guess the GPU ought to have higher priority though.
I don't think we can really say what system is better till there's been some real world testing done.
I doubt I'll buy either for a while anyway, not at these prices, but I will get an Xbox One pad and use it on my PC.
I wonder if either chip will be harder to fabricate than the other.
Seems like some idiots are cherry-picking what to believe from Digital Foundry, so that it suits their preferred console.
You can't go off ranting about how Digital Foundry said this and said that when it suits you, and then say they are full of crap when it doesn't...
http://www.digitalspy.co.uk/gaming/news/a483743/ps4-has-50-percent-more-raw-power-in-graphics-than-xbox-one-says-report.html
That's nonsense. It's all about the bandwidth, baby... the PS4's memory has a bandwidth of 170 GB/s with GDDR5, while the X1's DDR3 will have only ~70 GB/s.
And the PS4 OS reserves 3GB in the debug configuration (which is why the debug units have more memory); in retail, the OS reserves 1GB of RAM, versus the 3GB used by the X1.
The PS4 also has 30% more GPU pipelines. All this together makes the Digital Foundry claims VERY plausible.
>That's nonsense. It's all about the bandwidth, baby... the PS4's memory has a bandwidth of 170 GB/s with GDDR5, while the X1's DDR3 will have only ~70 GB/s.
Nope, OP is correct: low latency and a 64-bit memory controller (GDDR5 uses 32-bit) mean the system/CPU will be markedly faster - and I mean markedly. Though it is also true that the PS4 will have plenty of spare graphics bandwidth.
It's an academic debate anyway; Sony went with an AMD APU which requires GDDR5.
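The headline numbers are just bus width times transfer rate, so they're easy to sanity-check. A quick sketch, assuming the commonly reported specs (256-bit DDR3-2133 for the Xbox One, 256-bit GDDR5 at 5.5 GT/s for the PS4) rather than anything officially confirmed:

```python
# Peak theoretical bandwidth = bus width in bytes x transfers per second.
# The bus widths and transfer rates below are the commonly reported specs,
# not official confirmation from either vendor.
def peak_bandwidth_gbs(bus_bits, gigatransfers_per_sec):
    return bus_bits / 8 * gigatransfers_per_sec  # GB/s

xb1 = peak_bandwidth_gbs(256, 2.133)  # DDR3-2133 -> ~68 GB/s
ps4 = peak_bandwidth_gbs(256, 5.5)    # GDDR5 at 5.5 GT/s -> 176 GB/s
```

Which lines up with the ~70 GB/s and ~170 GB/s figures quoted above - and ignores the Xbox One's 32MB of on-chip ESRAM, which adds its own (much smaller pool of) high-bandwidth memory on top.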
"Two sinking ships."
Seems unlikely to be a fail - the Xbox 360 just overtook the Wii in the UK to become top-selling console of all time, and it has been outselling the PS3 in the USA for years.
Plus this time the Xbox has better games and exclusives than the PS4 - and Kinect 2 - and an HDMI input and the ability to overlay your sat / cable box and act as a DVR. Microsoft are going into this round with several major advantages over Sony.
>the Xbox 360 just overtook the Wii in the UK to become top-selling console of all time
>The best-selling console of all time in the UK remains the PlayStation 2 with 10 million consoles. The PlayStation 2 is also still the best-selling console worldwide, with 155 million units sold – a figure slightly ahead of the Nintendo DS, which was on 153.87 million as of this March. - http://metro.co.uk/2013/06/27/xbox-360-beats-wii-as-the-uks-best-selling-console-3858990/
That sucking sound was your credibility. Sony sucks and all, and the PS3 after the PS2 was a major embarrassment, but so is forcing me to pay $100 more for that crappy Kinect I will never use. No thanks.
Um... the whole DRM thingy scared away a lot of people, and a good chunk of 'em might never return even if MS did a 180 on that. The price point is another downside, especially if you take into account that Kinect is mostly a gimmicky me-too Wii. They did get Dead Rising 3 to be an X-bone exclusive... too bad for DR3. It's probably the only one I was actually interested in playing.
The XBone might not crash and burn as spectacularly as we expected while they kept their DRM stance, but it will probably be a dud. And it should be, because MS's attitude during E3 was basically "screw the consumer, we're going to abuse you even if you don't like it".
I'm intrigued by the whole 'shared memory' thing because it's nothing new at all. I'm not talking about the setup that PCs have had in recent times where the video memory was carved out of the main system memory, but every time I've seen it mentioned, I've just remembered the Amiga.
For those not familiar with the Amiga's innards (and this is a simplification - the real picture is more complex, but I've forgotten most of the detail), there were essentially two kinds of memory hived out of the total system memory. The first was 'chip' memory, which could be read by all the main chips, and which is where graphics and sound had to be stored. The second was 'fast' memory, which only the CPU could access, meaning that you stuffed application code there where possible, because the CPU could access it faster than it could chip memory. It was also possible to switch some from one to the other (e.g. later Amigas had a ton of chip memory, but a lot of programs assumed that if they saw that much memory, some of it had to be fast memory, and promptly went splut).
So yeah, sharing memory between subsystems on a more unified level is not a new concept, especially when you're talking about memory that both the CPU and graphics setup can share between and essentially allow the graphics to grab from memory without the CPU being involved... it just reminds me of 1986 or thereabouts...
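The chip/fast split can be sketched as a two-pool allocator - a toy model of the idea, not of how the Amiga's actual memory allocator worked:

```python
# Toy model of the Amiga's memory split: 'chip' RAM is visible to the
# custom chips (so DMA data must live there), 'fast' RAM is CPU-only,
# so ordinary code and data prefer fast RAM and fall back to chip RAM.
# Pool sizes and the API are illustrative, not the real exec interface.
class AmigaRAM:
    def __init__(self, chip_kb=512, fast_kb=512):
        self.free = {"chip": chip_kb, "fast": fast_kb}

    def alloc(self, kb, need_chip=False):
        # Bitmaps and audio samples must be chip RAM; everything else
        # tries fast RAM first because the CPU isn't contended there.
        pools = ["chip"] if need_chip else ["fast", "chip"]
        for pool in pools:
            if self.free[pool] >= kb:
                self.free[pool] -= kb
                return pool
        return None  # out of memory

ram = AmigaRAM()
ram.alloc(100, need_chip=True)  # a bitmap -> lands in "chip"
ram.alloc(300)                  # program code -> "fast" by preference
```

The splut cases mentioned above are programs that hard-coded the second assumption - "lots of memory means some of it is fast" - on machines where it wasn't.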
That was common in the early computing era.
Heck, the ZX Spectrum does something similar. If you put your code in the upper 32K it will run at full speed, but in the lower 16K, where the video memory is, it will get interrupted by the ULA on a regular basis, which, among other things, will totally screw up critical timing loops.
As I found out when I tried to write a speccy speedloader back in 1988.
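A toy model of the effect - the real ULA contention follows a per-scanline pattern, so the flat penalty here is purely illustrative:

```python
# Simplified model of ZX Spectrum contended memory: fetches from the
# lower 16K of RAM (shared with the ULA's video fetch) cost extra
# cycles. The constant penalty is an illustration; real contention
# varies with the ULA's position in the frame.
CONTENDED_LO, CONTENDED_HI = 0x4000, 0x7FFF

def cycles_for_fetch(addr, base_cycles=4, ula_penalty=3):
    """T-states for one opcode fetch at a given address."""
    if CONTENDED_LO <= addr <= CONTENDED_HI:
        return base_cycles + ula_penalty  # ULA steals the bus
    return base_cycles

def run_loop(code_addr, iterations=1000):
    return sum(cycles_for_fetch(code_addr) for _ in range(iterations))

fast = run_loop(0x8000)  # upper 32K: full speed
slow = run_loop(0x5000)  # video RAM region: stalled by the ULA
```

A speedloader's timing loop calibrated for the `fast` case simply reads the tape edges at the wrong moments when the code sits in contended RAM - hence the 1988 pain above.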
All the cheap and cheerful 1980s computers that I've met socially (Tandy, Commodore, etc.) had up to 64k of address space with the video being mapped into part of it. One could PEEK and POKE right onto the display.
Later came the capability to swap banks of memory - to switch in RAM in place of 32k of ROM, or to switch in another 64k of RAM (128k total). Obviously the code had to copy itself over before making the switch.
"Shared memory"? It was the original default assumption.
Video access to large address ranges of main memory has been around since long before the Amiga. For instance, the Atari 800 and the Commodore 64 - both those had memory-mapped frame buffers which could be set to read from most parts of the RAM.
The Atari 800 custom audio/video chips were IIRC designed by Jay Miner, who went on to design the custom chips in the Amiga. The Amiga had much more CPU memory also addressable by graphics hardware, and added a nifty DMA coprocessor that could do bit-oriented graphics operations over data stored in the 'chip' memory, as well as moving data around to feed the PCM audio channels and floppy controller... but at the core, it was the same kind of architecture, just scaled up.
Things got much more interesting when CPUs got write-back caches; now explicit measures were required to ensure that data written by the CPU was actually in memory instead of just sitting in a dirty cache line at the time the GPU or other bus mastering peripheral went to fetch it. It's all the same cache coherency issues that multiprocessor system architects have been dealing with for years, and in a system like the XBOne, most of the peripherals are more or less peers with the various system CPUs in terms of how they access cached data; in fact, most peripherals look like specialised CPUs, hence the "heterogeneous" part of the HSA. You don't need to explicitly flush CPU caches, or set up areas of memory that aren't write-back cached, in order for the GPU to successfully read data that the CPU just wrote, or vice versa. That's the nifty part.
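The write-back hazard can be shown in a few lines. A toy illustration - the `WriteBackCache` class is a stand-in model, not any real API:

```python
# Toy write-back cache vs. a DMA reader: CPU writes land in a dirty
# cache line, so a non-coherent "GPU" reading memory directly sees
# stale data until an explicit flush. A coherent bus (the HSA case)
# removes the need for the flush entirely.
class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory
        self.dirty = {}            # addr -> value not yet in memory

    def cpu_write(self, addr, value):
        self.dirty[addr] = value   # lands in the cache, not memory

    def flush(self):
        self.memory.update(self.dirty)
        self.dirty.clear()

memory = {0x100: 0}
cache = WriteBackCache(memory)
cache.cpu_write(0x100, 42)

stale = memory[0x100]  # DMA read before the flush: still the old 0
cache.flush()
fresh = memory[0x100]  # after the explicit flush: 42
```

In the coherent case the "DMA" read would snoop the dirty line instead of going to memory - that snooping is exactly the multiprocessor machinery being reused for the GPU and other peripherals.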
I'm guessing that the XBOne, like the Xbox 360, will have its frame buffers and Z-buffers integrated on the enormous CPU/GPU chip. That will reduce the bandwidth requirements on main memory by a great deal, as GPU rendering and video output will be served by the on-chip RAM. There are other ways to get some of the same effects - the PowerVR mobile device GPUs render the whole scene one small region ('tile') at a time, only keeping a couple of tiles plus the same size of Z-buffer in on-chip RAM, then squirt the finished tile out to main memory in a very efficient way - but it does create other limitations in how the graphics drivers process a 3D scene; any extra CPU work to feed the GPU takes away from power savings given by the simpler, smaller GPU. Tradeoffs abound.
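The tiling idea can be sketched briefly - a simplified model that ignores the Z-buffer and the PowerVR-style binning pass, keeping only the "small on-chip working set, one burst out per tile" shape:

```python
# Sketch of tile-based rendering: render one small TILE x TILE region
# at a time in a tiny on-chip buffer, then write the finished tile out
# to the main-memory framebuffer in a single pass. Depth testing and
# primitive binning are omitted for brevity.
TILE = 32

def render_tiled(width, height, draw_pixel):
    framebuffer = [[0] * width for _ in range(height)]
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            # on-chip working set: just this one tile
            tile = [[draw_pixel(tx + x, ty + y)
                     for x in range(min(TILE, width - tx))]
                    for y in range(min(TILE, height - ty))]
            for y, row in enumerate(tile):  # one burst out to main memory
                framebuffer[ty + y][tx:tx + len(row)] = row
    return framebuffer

fb = render_tiled(64, 64, lambda x, y: x ^ y)
```

All the overdraw and blending traffic stays in the tiny tile buffer; main memory only ever sees each finished pixel once - that's where the bandwidth saving comes from, at the cost of the driver having to sort geometry into tiles up front.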
AMD claimed it would, but irrespective of hardware you've got the OS issue. Apple Macs run on an x86 CPU, yet it's only been in the past couple of years we've seen any volume of games released for them, and even that can be attributed to Valve wanting to move away from Windows as opposed to wanting to move to OS X.
"They may have similar hardware, but they are running completely different O/S"
Sony run a Linux-like OS. Microsoft run a modified Windows 8 kernel and hypervisor.
If we go by benchmarks of Windows 8 versus the latest Ubuntu, Microsoft will have a performance advantage in the OS side both for large file transfers and for graphics.
>Sony run a Linux-like OS. Microsoft run a modified Windows 8 kernel and hypervisor.
>If we go by benchmarks of Windows 8 versus the latest Ubuntu, Microsoft will have a performance advantage in the OS side both for large file transfers and for graphics.
More demented propaganda from RICHTO! Must we?
It sounds exactly the same. My contacts at flex doumen tell me that end-of-line yields for the XBone have only just crept into double digits.
In other words don't expect one this year, and if you do, expect it to be DOA or soon afterwards.
Sounds all too familiar...
http://venturebeat.com/2008/09/05/xbox-360-defects-an-inside-history-of-microsofts-video-game-console-woes/2/