MSI GTX 1060 Gaming X Graphics Card Review
Introduction
NVIDIA’s Pascal architecture marks a major shift in performance per watt, built on the highly efficient 16nm FinFET manufacturing process. So far, the company has unleashed products designed for higher resolutions and enthusiasts who are prepared to pay extra for a more fluid gaming experience. For example, AIB versions of the GTX 1080 in the UK can cost over £650 and have received hefty price hikes. Whether this will subside once supply increases or Sterling becomes more stable is unclear, but the current market price is unlikely to appeal to a large audience. While the GTX 1070 offers performance beyond the GTX Titan X for significantly less, it’s still too expensive for many people on a tight budget. According to AMD’s internal research, 84% of PC gamers select a graphics card within the $100-$300 price bracket.
Speaking of AMD, they recently unveiled a mainstream GPU, the RX 480, which delivers a “premium VR experience”. Prior to its release, the rumour mill was in full force speculating about performance, and some AMD fans hoped it would defeat the R9 390X while retailing for a mere $200. Clearly, this never came to fruition because Polaris was always intended to target affordability and provide a good entry point for PC gaming. Those with realistic expectations admire the RX 480’s excellent price to performance ratio and believe it’s a superb product. Unfortunately, the launch was marred by the GPU’s power draw exceeding PCI-E slot specifications. This led to users reporting hardware damage and raised concerns about the impact of using the RX 480, especially when paired with a cheaper motherboard. Thankfully, AMD resolved this issue quickly via a driver update and explained the situation well.
Evidently, NVIDIA needed to react to the RX 480’s amazingly cheap price point and showcase their mainstream Pascal range. Originally, it seems that the GTX 1060 was designed to launch in two variants: a 6GB model initially, followed by a 3GB budget edition at a later date. Apparently, this idea has now been shelved and NVIDIA plans to re-brand the 3GB card as the GTX 1050. In a similar vein to the GTX 1070 and GTX 1080, the GTX 1060 will arrive with the option to purchase a Founders Edition card. However, this time, it can only be procured directly from NVIDIA unless their strategy changes. From a more technical standpoint, the GTX 1060 replicates the performance of the GTX 980 at a more reasonable price.
Not only that, the GTX 1060 has a low 120-watt TDP and 6GB of GDDR5 memory running at an effective speed of 8012MHz. On another note, the new GPU features 10 SMs, 1280 CUDA Cores, a 1506MHz base clock, a 1708MHz boost clock, 80 Texture Units and 192GB/s of memory bandwidth. The GP106 core has a total of 48 ROPs and 4.4 billion transistors. This is considerably more than the previous generation GTX 960, which has 2.94 billion transistors. As you might expect, the GTX 1060 supports Pascal technologies such as Simultaneous Multi-Projection, Ansel and much more. Since the GTX 1080 and GTX 1070’s release, the reaction from developers to NVIDIA’s Simultaneous Multi-Projection has been incredibly positive, and 30 games which take advantage of this technology are in development, including Pool Nation VR, Everest VR, Obduction, Adr1ft, and Raw Data. Even the non-VR title Unreal Tournament is adopting SMP to enhance image quality and performance. Since the Pascal architecture is pretty well-known by now, I’m going to move on to the sample sent for review.
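For readers curious how the quoted bandwidth figure follows from the memory specification, here is a minimal sketch using the standard GDDR5 formula and the numbers above; the figures are from this review, not an independent measurement:

```python
# Sketch: deriving the GTX 1060's memory bandwidth from its quoted specs.
# bandwidth = effective transfer rate (MT/s) * bus width in bytes
effective_clock_mhz = 8012    # effective GDDR5 speed quoted above
bus_width_bits = 192          # GP106 memory bus width

bandwidth_gbs = effective_clock_mhz * (bus_width_bits / 8) / 1000
print(round(bandwidth_gbs, 1))  # -> 192.3, matching the quoted 192GB/s
```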
Instead of reviewing the Founders Edition on launch, we’ve decided to take a look at the MSI GTX 1060 Gaming X, which has a recommended retail price of £299.99. Please note, this may change after the NDA lifts as pricing is still being finalised. MSI has increased the base clock from 1506MHz to 1594MHz, and the boost clock runs at an impressive 1809MHz. Also, the memory has been set to 8108MHz, and I’m expecting some good overclocking headroom given the remarkable Twin Frozr VI cooling solution. It’s important to note that the quoted specification above applies when the OC profile has been enabled using MSI’s Gaming App. There’s been some controversy surrounding this practice, and I’d like to disclose that MSI shipped the card in the retail Gaming mode. However, I wanted to ascertain the maximum performance and decided to enable the OC profile. Putting that aside, I’m expecting the MSI GTX 1060 Gaming X to output impressive numbers and challenge the GTX 980 in a variety of demanding games.
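To put MSI’s factory overclock in perspective, a quick sketch of the percentage uplift over NVIDIA’s reference clocks, using the figures quoted in this review:

```python
# Sketch: factory OC uplift of the MSI Gaming X (OC profile) vs reference.
ref_base, ref_boost = 1506, 1708   # reference GTX 1060 clocks (MHz)
oc_base, oc_boost = 1594, 1809     # MSI Gaming X OC profile clocks (MHz)

base_uplift = (oc_base - ref_base) / ref_base * 100
boost_uplift = (oc_boost - ref_boost) / ref_boost * 100
print(f"base +{base_uplift:.1f}%, boost +{boost_uplift:.1f}%")
# -> base +5.8%, boost +5.9%
```

A roughly 6% out-of-the-box uplift is in line with other heavily overclocked AIB cards of this generation.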
Specifications
Packing and Accessories
The MSI GTX 1060 Gaming X comes in a stylish red box which contains a striking snapshot of the graphics card. This theme is complemented by the NVIDIA branding and looks like the perfect combination. The packaging also displays key details about the card’s video memory and support for the latest DirectX 12 API.
On the rear, there’s information about the product’s RGB illumination, Twin Frozr VI cooler, and MSI Gaming App. As per usual with MSI’s packaging, the presentation is excellent and contributes to a stellar unboxing experience.
The graphics card is bundled with some attractive stickers, a quick user’s guide, driver/software disk and product registration leaflet.
Why not Vulkan for Doom?
This is something I’m looking into for future reviews. It wasn’t used because I’ve been doing some comparisons with the older drivers and wanted to remove variables.
Thanks.
Because they don’t want to show that the RX 480 beats the 1060 in Doom.
Not at all. If this were the case, why would I be so positive about the RX 480 in our review? It’s a simple reason, which I’ve given below.
For the same reason they didn’t use the latest DX12 patch for Tomb Raider with Async Compute enabled, or the “Crazy” preset in Ashes of the Singularity. It would be damned hard to justify their “Editor’s Award” if they showed this card performing slower in 3 out of 5 tests than the £70 cheaper RX 480 with the reference design.
Making assumptions with no evidence about how the testing is done? I reduced the preset from Crazy to Extreme so the lower-end GPUs would be able to achieve good figures at higher resolutions. If you look at some of the older reviews, I use the Crazy preset. I’m a little perplexed by your comments since I praised the RX 480 and didn’t just say go out and buy the GTX 1060. The RX 480 is still the price to performance winner and I’d recommend it. In fact, I’ve got a model from Sapphire coming and I’m expecting it to do really well. Honestly, it’s frustrating when comments like this are made; there’s no ulterior motive, and I try to keep things fair.
It’s not really assumptions; most of the reviews didn’t even use Doom, let alone Vulkan.
And even fewer actually tested DX12 games… TPU was the one that didn’t have a single non-GameWorks game.
I see where you’re coming from, and it’s often the case that games which do really well on AMD products are omitted. I try to find a balance between DirectX 11 and DirectX 12 titles so that it’s fair. Also, we don’t always get codes from companies, and I would have liked to add them to the testing process. If Doom is used again, it will be with the Vulkan API.
How the testing was done:
1 – Not using Vulkan in Doom, which has been distributed through an official patch for two weeks and gives an unprecedented performance boost to AMD cards (not so much for NVIDIA ones)
2 – Not using the latest DX12 patch for Tomb Raider bringing Async Compute, which was also officially distributed over two weeks ago and gives a sizeable performance boost to AMD cards (not so much for NVIDIA ones)
3 – Not using the Crazy preset in AotS even though you used it in other reviews, and then giving the reason that “lower-end GPUs would be able to achieve good figures” even though the lowest-end model you’re putting in the comparison is a GTX 970 which
Why are you perplexed by my comment? Why would anyone?
The Crazy preset thing can be more or less subjective, but the other two are bound to raise suspicion, regardless of whatever your “true intentions” are.
Thanks for the response. I’ve decided to re-test all future Tomb Raider results using DX12 and will also change the Ashes preset to Crazy. Unfortunately, I no longer have access to Doom due to a weird press code situation, so I’ll probably have to remove that test until another copy is provided. I didn’t use DX12 in Tomb Raider because it was horribly broken for some time, but judging by the comments, this has been resolved and it’s a great addition. Therefore, I’ll use it in the future.
I state in the review that the DX12 patch isn’t used because it has inconsistent results. However, I’ll research the latest patch to see if the situation has changed and if so, DirectX 12 will be adopted for all future reviews.
I like AMD and would like to see ’em come out on top, but I’m also fair and would buy a green card if it had an obvious advantage. Though some could see your benchmark set as NVIDIA-biased, I don’t understand how they can label you as unfair when your statements point otherwise, as if a review consisted only of charts. Yours is one of the fairest reviews around, given the benchmark set. Having said that, it’s better if you include Vulkan and async benchmarks in the future.
John, you can’t make everybody happy. If you said it was the best chocolate ice cream in the world and it’s free somebody would complain because you didn’t test vanilla. LOL. I thought it was a good review.
The AMD fanboys in this comment thread are the reason people hate reading comment sections. The editor even calmly and coolly replies to everyone trying to call him out for being some “Paid NVIDIA Shill” when he’s just doing his best to be as unbiased as possible given the available resources.
I just bought this card. I can’t wait to play some Witcher 3 and DOOM on it.
Still no parallel async… WTF is NVIDIA doing?