PCI-e x8 vs. PCI-e x16
For our test, we’re looking at PCI-e Gen3 x8 vs. PCI-e Gen3 x16 performance. That means a 50% reduction in available bandwidth moving from x16 to x8, or a 100% increase moving from x8 to x16. But there’s a lot more to it than interface bandwidth: the device itself must exceed the saturation point of x8 (7880MB/s, before overhead is removed) in order to show any meaningful advantage in x16 (15760MB/s, before overhead is removed).
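Those saturation points fall out of the Gen3 signaling rate directly. A minimal sketch of the arithmetic, assuming 8 GT/s per lane and Gen3’s 128b/130b encoding (the article’s figures round each lane up to 985MB/s, so they land a few MB/s higher):

```python
# Theoretical one-direction PCI-e Gen3 bandwidth per link width,
# before protocol overhead (TLP headers, flow control) is removed.
GEN3_GT_PER_S = 8.0               # Gen3 signaling rate: 8 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130   # Gen3 uses 128b/130b encoding

def gen3_bandwidth_mb_s(lanes: int) -> float:
    """Raw bandwidth in MB/s for a Gen3 link that negotiated `lanes` lanes."""
    bits_per_s = GEN3_GT_PER_S * 1e9 * ENCODING_EFFICIENCY * lanes
    return bits_per_s / 8 / 1e6   # bits -> bytes -> MB

x8 = gen3_bandwidth_mb_s(8)      # ~7877 MB/s
x16 = gen3_bandwidth_mb_s(16)    # ~15754 MB/s
print(round(x8), round(x16), round((x16 - x8) / x8 * 100))  # 7877 15754 100
```

Because bandwidth is linear in lane count, the x8-to-x16 increase is exactly 100% regardless of which per-lane figure you round to.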
Use Cases, Future Tests, Test Setup
The use cases here are fairly narrow. Maybe you’ve got a thermal concern, a card that butts up against the CPU cooler, or some sort of liquid-routing challenge. HSIO lanes are assigned to ancillary devices – like PCI-e SSDs – and won’t eat into the CPU lanes available to the GPU. We’re also not testing multiple GPUs, which is where we’d like to go next once we’ve got two of the same GTX 1080 in the lab. Ideally, we’d test x16/x16, x16/x8, and x8/x8 – but that’s not possible right now. We’re also hoping to test dual-GPU, single-card configurations in an x8 versus an x16 slot, as those may put more load on the interface.
For the time being, this test strictly looks at a single-GPU, single-card GTX 1080 Gaming X as it passes between x8 and x16 slots. If, for whatever reason, you’re debating the performance reduction from moving to an x8 PCI-e slot with a single card, that’s what this test looks into.
We used our normal test bench (detailed below) for this research. The EVGA X99 Classified motherboard is picky with its PCI-e slot utilization, and uses UEFI to clearly inform whether the connected device is receiving 1, 4, 8, or 16 lanes. We switched between the first x16 slot and the first x8 slot for these numbers, then validated in BIOS and software.
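The same link-width validation can be done from software on any system. A sketch, assuming a Linux box where `lspci -vv` prints a `LnkSta:` line per device (the sample string below is hypothetical illustration data, and the exact LnkSta formatting varies between pciutils versions):

```python
import re

# Hypothetical LnkSta line of the kind `lspci -vv` prints for a GPU;
# on a live system you would feed in the real lspci output instead.
sample = "LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive-"

def parse_link_status(lnksta_line: str) -> tuple[str, int]:
    """Extract negotiated speed and lane width from an lspci LnkSta line."""
    m = re.search(r"Speed\s+([\d.]+GT/s),\s+Width\s+x(\d+)", lnksta_line)
    if not m:
        raise ValueError("no LnkSta speed/width found")
    return m.group(1), int(m.group(2))

speed, width = parse_link_status(sample)
print(speed, width)  # 8GT/s 8
```

On an NVIDIA card, `nvidia-smi --query-gpu=pcie.link.width.current --format=csv` reports the same negotiated width without parsing lspci.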
PCI-e generations can also be forced in the EVGA UEFI, but we did not explore the impact of PCI-e 2.x on the GTX 1080 at this time, as that seemed an even less likely use case.
Game Test Methodology
We tested using our GPU test bench, detailed in the table below. Our thanks to supporting hardware vendors for supplying some of the test components.
NVIDIA’s 368.39 drivers were used for game (FPS) testing. Game settings were manually controlled for the DUT. All games were run at the presets defined in their respective charts. We disable brand-supported technologies in games, like The Witcher 3’s HairWorks and HBAO. All other game settings are defined in the respective game benchmarks, which we publish separately from GPU reviews. Our test courses, in the event manual testing is executed, are also uploaded within that content. This allows others to replicate our results by studying our bench courses.
Windows 10-64 build 10586 was used for testing.
Each game was tested for 30 seconds in an identical scenario, then repeated multiple times for parity.
Average FPS, 1% low, and 0.1% low frame rates are measured. We do not report maximum or minimum FPS results, as we consider those numbers pure outliers. Instead, we take an average of the lowest 1% of results (1% low) to show real-world, noticeable dips; we then take an average of the lowest 0.1% of results to show severe spikes.
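The metric above can be sketched in a few lines: sort the samples, keep the slowest 1% (or 0.1%), and average that slice rather than reporting a single outlier minimum. The FPS list here is made-up illustration data, not real benchmark output:

```python
# Average of the lowest `fraction` of FPS samples (e.g. 0.01 for "1% low").
def percentile_low(fps_samples: list[float], fraction: float) -> float:
    ordered = sorted(fps_samples)
    count = max(1, int(len(ordered) * fraction))  # at least one sample
    lowest = ordered[:count]
    return sum(lowest) / len(lowest)

# 1000 samples: mostly 120 FPS, nine dips to 60, one severe dip to 30.
samples = [120.0] * 990 + [60.0] * 9 + [30.0]
print(percentile_low(samples, 0.01))   # 57.0 -- mean of the 10 slowest frames
print(percentile_low(samples, 0.001))  # 30.0 -- the single slowest frame
```

Averaging a slice smooths out one-frame anomalies while still surfacing sustained dips, which is why it tracks perceived stutter better than a raw minimum.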