AMD Responds to Recent Ashes of the Singularity Benchmark Controversy
I’m sure most of you have already seen AMD’s Computex presentation by now, especially the part where two RX 480 graphics cards were compared to a single NVIDIA GTX 1080 using side-by-side Ashes of the Singularity benchmark runs. The benchmarks showed AMD’s dual-card setup coming out ahead of NVIDIA’s GTX 1080, which is definitely interesting given that two of these AMD cards will reportedly cost less than one GTX 1080. However, the community noticed that the two side-by-side benchmarks were not exactly identical in graphics quality, which led many to speculate that the RX 480 run was actually using lower settings than the GTX 1080 run.
Not too long ago, AMD’s Technical Marketing Lead, Robert Hallock, addressed this controversy by claiming that NVIDIA’s GPUs do not render Ashes of the Singularity properly, which might explain the visual differences spotted during the event.
“Ashes uses procedural generation based on a randomized seed at launch. The benchmark does look slightly different every time it is run. But that, many have noted, does not fully explain the quality difference people noticed.
At present the GTX 1080 is incorrectly executing the terrain shaders responsible for populating the environment with the appropriate amount of snow. The GTX 1080 is doing less work to render AOTS than it otherwise would if the shader were being run properly. Snow is somewhat flat and boring in color compared to shiny rocks, which gives the illusion that less is being rendered, but this is an incorrect interpretation of how the terrain shaders are functioning in this title.
The content being rendered by the RX 480 – the one with greater snow coverage in the side-by-side (the left in these images) – is the correct execution of the terrain shaders.
So, even with fudgy image quality on the GTX 1080 that could improve their performance a few percent, dual RX 480 still came out ahead.”
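To make the quoted explanation easier to follow, here is a minimal, purely illustrative C++ sketch of the layered-terrain idea Hallock describes; every name in it is hypothetical and it is not actual Ashes of the Singularity shader code. The point is that snow is blended on top of the base rock material, so a correct execution does strictly more shading work even though the flat white output looks simpler:

```cpp
#include <algorithm>
#include <cstdio>

// Purely illustrative stand-ins for the terrain shaders being discussed;
// every name here is hypothetical, not taken from the actual game.
struct Color { float r, g, b; };

// Base material pass: the rock is always shaded, snow or no snow.
Color shadeRock(float u, float v) {
    float shade = 0.4f + 0.2f * u * v;  // stand-in lighting term
    return {shade, shade * 0.9f, shade * 0.8f};
}

// Procedural coverage term deciding how much snow sits on this texel.
float snowCoverage(float u, float v) {
    return std::clamp(0.5f + 0.5f * (v - u), 0.0f, 1.0f);
}

// Correct execution: the rock pass PLUS a snow blend on top, i.e. strictly
// more work, even though the flat white result looks like "less" on screen.
Color shadeTerrain(float u, float v) {
    Color base = shadeRock(u, v);
    float snow = snowCoverage(u, v);  // skipping this = less work, less snow
    Color white{0.95f, 0.95f, 0.97f};
    return {base.r + (white.r - base.r) * snow,
            base.g + (white.g - base.g) * snow,
            base.b + (white.b - base.b) * snow};
}

int main() {
    Color c = shadeTerrain(0.3f, 0.8f);
    std::printf("shaded texel: %.2f %.2f %.2f\n", c.r, c.g, c.b);
    return 0;
}
```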
Apparently, AMD ran the test ten times before presenting it to the audience in order to make sure the figures would check out. As for the system used for the benchmarks, Robert also went ahead and revealed its exact specifications, as well as the precise results of the testing sessions.
System specs:
- CPU: Intel Core i7-5930K
- RAM: 32GB DDR4-2400
- Motherboard: ASRock X99M Killer
- GPU config 1: 2x Radeon RX 480 @ PCIe 3.0 x16 for each GPU
- GPU config 2: Founders Edition GTX 1080
- OS: Windows 10 64-bit
- AMD driver: 16.30-160525n-230356E
- NVIDIA driver: 368.19
In-game settings for both configs: Crazy preset | 1080p | 8x MSAA | V-Sync off
Ashes Game Version: v1.12.19928
Benchmark results:
2x Radeon RX 480 – 62.5 fps | Single Batch GPU Util: 51% | Med Batch GPU Util: 71.9% | Heavy Batch GPU Util: 92.3%
GTX 1080 – 58.7 fps | Single Batch GPU Util: 98.7% | Med Batch GPU Util: 97.9% | Heavy Batch GPU Util: 98.7%
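For context, it may help to see how close these numbers actually are. A trivial check, using only the figures AMD published above, puts the dual RX 480 setup roughly 6.5% ahead despite its far lower reported per-batch GPU utilization:

```cpp
#include <cstdio>

int main() {
    // Figures taken directly from AMD's published results above.
    const double dualRx480Fps = 62.5;
    const double gtx1080Fps   = 58.7;

    // Relative lead of the dual-card config over the single GTX 1080,
    // despite the dual setup's much lower reported per-batch utilization.
    double leadPct = (dualRx480Fps / gtx1080Fps - 1.0) * 100.0;
    std::printf("dual RX 480 lead over GTX 1080: %.1f%%\n", leadPct);  // ~6.5%
    return 0;
}
```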
I thought this was going to address the fact that AMD’s cards ran at 51%, yet beat the 1080.
Why not just use one card, AMD, to show your $200 card was directly comparable to NVIDIA’s $850 card? Why don’t you explain the GPU utilization differences? Why only use a game that has heavily favoured AMD from day one?
All this announcement from AMD has done for me is let me know that they’re grasping at straws.
$850? Where did you get that? Isn’t it around the $600-700 mark?
151%. DX12 is the future of gaming. Would you prefer they did a DX9 benchmark?
There are plenty of DX12 games other than Ashes.
The spiel definitely says 51%.
How would one utilise 151% of their GPU?
You’re not the sharpest tool in the shed, eh?
Ha ha. I’d hoped you’d fail. 2 GPUs = 200%. There was a response on Reddit when someone asked what the 51% meant: it’s the second GPU’s utilisation, with the first running at 100%, which is where the 151% comes from. Guess you’d best go get on the whetstone; I’m clearly sharper than you.
Burn.
If you knew the first thing about DX12, you would know that if you have 2 identical cards, they get utilised equally.
It’s also an uncapped-FPS benchmark; GPU utilisation shouldn’t fall below 95%.
With DX12 it’s down to the game developers to utilise multi-GPU. I’m assuming this part: they haven’t done it properly yet. The 51% was on the less intense benchmark run; it did rise on the higher-intensity ones. I’ll give you this though: I’ve never been sold on multi-GPU, and AMD (NVIDIA too, for that matter) will have to do something truly special to make me consider it a good idea… been there, done that, didn’t enjoy the experience.
Take care, Tim. Hope you enjoyed the mini flame war. I did.
Yo, you’re still wrong.
DX12 takes care of everything on the multi-GPU side; it has little to nothing to do with the game developers.
As I said before, if you knew the first thing about DX12, you wouldn’t be spouting trash.
And me correcting your mistakes isn’t a flame war.
You’re very sure of yourself, but you keep getting it wrong. DX12 gives developers access to the underlying hardware; what they do with it is down to them (a sketch of what that looks like in code follows below). It’s similar to threading on the CPU: just because there are 32 threads available doesn’t mean they’ll use them.
This is a flame war because you started with insults. I’m leaving this alone; it’s wasting my time now. I’d suggest you go do a shitload of research, you clearly need it.
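For what it’s worth, the API side of this is easy to check: under DX12’s explicit multi-adapter model, the application itself enumerates the GPUs and creates a separate device (plus command queues, heaps, and synchronization) for each one; the runtime does not distribute rendering work automatically. A minimal C++ sketch of just the enumeration step, assuming the standard Windows DXGI/D3D12 headers and linking against d3d12.lib and dxgi.lib:

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Minimal sketch of DX12 explicit multi-adapter setup: the *application*
// enumerates GPUs and creates one device per adapter. Nothing here splits
// rendering work automatically; the engine must distribute frames or passes
// across the devices itself, which is why it falls on the game developers.
std::vector<ComPtr<ID3D12Device>> createDevicesForAllAdapters() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip the WARP software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);  // one independent device per GPU
    }
    return devices;  // queues, heaps, sync: all still up to the app
}
```

Under the implicit model (what CrossFire/SLI did in DX11), the driver could split work behind the application’s back; the explicit DX12 path shown above puts that responsibility on the engine, which is exactly the point being argued here.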
I’m still missing the whole point of comparing a driver-optimized run against a non-driver-optimized run (see the fact that the NVIDIA card did indeed not render correctly; that’s clearly a driver issue).
When I run Dark Souls 3 with an old driver it runs at 60 fps; with the DS3-optimized driver I easily reach 115 fps. So yes, there is a difference there. My guess is it would be the same for this benchmark; why else pick something that doesn’t work as intended for your opponent?
They both run beta drivers… what else can AMD use as a baseline?
I agree with the driver choice, but the game choice is a bit shady. It looks like (and their comment says about the same) NVIDIA hasn’t yet optimized for this benchmark. I would prefer to see a comparison both vendors have optimized for.
Yeah, definitely. If you watch the AMD release, they mention that they are on beta drivers and that the developers haven’t done DX12 optimisation for dual Polaris. My advice would 100% be to wait for both to be released and available, then read, research, and buy what you think is the better option. At the moment the 480 is more exciting for me… affordability is high on my list of desirables.
OK, so obviously I am a blind idiot, as I preferred the way the right-hand images looked… I thought it would take far more detail to render rock faces than piles of snow, but I’ve only been gaming for four decades, so what the hell do I know?
Yeah, I don’t buy the explanation; that’s super greasy. You’re telling me that rendering flat snow takes more power than angled 3D objects? HAHAHAHAHA.
Maybe if you understood how it worked: the snow gets rendered on top of the rocks. That’s extra work on top, not a replacement.
Yeah, sure. I don’t believe AMD’s explanation; you are just reading back what they said, and I can do that too. Use your head: the game doesn’t render all the rock angles and textures under the snow once the snow is rendered on top of it.
But it does.
Um, so why is AMD using an Intel CPU in their benchmark rig?
To benchmark the cards only. They needed the same setup the 1080 had; if they used other hardware, the benchmark would mean nothing.
Yeah, basic benchmarking, but that doesn’t explain why AMD, a company that makes CPUs, used Intel. It’s another oddity in this controversy.
It really isn’t. If you want to compare just the GPUs, you have to keep the rest of the PC identical. If they used a different CPU, it would be a combined CPU and GPU benchmark.