All technical data and benchmarks of the GIGABYTE GeForce GTX 1080 Ti Turbo 11G are listed on this page. The graphics card is based on the GP102-350-K1-A1 (Pascal) graphics chip, which is manufactured on a 16 nm process. The GIGABYTE GeForce GTX 1080 Ti Turbo 11G can drive up to 4 displays with a maximum resolution of 7680x4320.
GPU
The GIGABYTE GeForce GTX 1080 Ti Turbo 11G has 28 execution units with a total of 3584 shaders. This gives the graphics card a theoretical FP32 compute performance of 11.61 TFLOPS.
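The 11.61 TFLOPS figure can be reproduced from the shader count and the boost clock listed below. A minimal sketch (the factor 2 assumes one fused multiply-add, i.e. two floating-point operations, per shader per cycle):

```python
# Theoretical FP32 throughput = shaders x 2 ops (FMA) x clock.
shaders = 3584            # CUDA cores of the GP102-350 chip
boost_clock_ghz = 1.620   # boost clock from the spec table below

tflops = shaders * 2 * boost_clock_ghz / 1000  # GFLOPS -> TFLOPS
print(f"{tflops:.2f} TFLOPS")
```

This matches the 11.61 TFLOPS stated above; real-world throughput depends on the clock the card actually sustains under load.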
The GIGABYTE GeForce GTX 1080 Ti Turbo 11G is equipped with 11 GB of GDDR5X graphics memory. The memory clock is 1.376 GHz, which corresponds to an effective data rate of 11 Gbps.
Memory Size:
11 GB
Memory Type:
GDDR5X
Memory Clock:
1.376 GHz
Memory Speed:
11.0 Gbps
Memory bandwidth:
484 GB/s
Memory Interface:
352 bit
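The memory figures in this table are internally consistent and can be checked with two simple formulas. A small sketch (the factor 8 in the first line reflects the quad-data-rate signaling of GDDR5X, which transfers 8 bits per pin per memory-clock cycle; the division by 8 in the second converts bits to bytes):

```python
# Effective data rate per pin from the memory clock (GDDR5X: QDR, 8 transfers/cycle).
memory_clock_ghz = 1.376
speed_gbps = memory_clock_ghz * 8        # ~11 Gbps effective

# Total bandwidth = bus width x data rate, converted from bits to bytes.
bus_width_bits = 352
bandwidth_gbs = bus_width_bits * 11.0 / 8  # 484 GB/s
print(f"{speed_gbps:.1f} Gbps, {bandwidth_gbs:.0f} GB/s")
```

Both results match the 11 Gbps memory speed and 484 GB/s bandwidth listed above.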
Clock Speeds
The base frequency of this graphics card is 1.480 GHz, and the manufacturer specifies the maximum boost clock as 1.620 GHz. If supported (see below), performance can be increased further by overclocking.
Base Clock:
1.480 GHz
Boost Clock:
1.620 GHz
Avg (Game) Clock:
1.582 GHz
Overclocking:
Yes
Thermal Design
The GIGABYTE GeForce GTX 1080 Ti Turbo 11G is powered by 1 x 6-Pin, 1 x 8-Pin connectors. The TDP (Thermal Design Power) of the graphics card is 280 W.
TDP:
280 W
TDP (up):
--
Tjunction max:
91 °C
PCIe-Power:
1 x 6-Pin, 1 x 8-Pin
Cooler & Fans
The graphics processor and graphics memory of the GIGABYTE GeForce GTX 1080 Ti Turbo 11G are cooled by air cooling.
Fan-Type:
Radial
Fan 1:
1 x 70 mm
Fan 2:
--
Cooler-Type:
Air cooling
Noise (Idle):
--
Noise (Load):
--
Connectivity
Up to 4 monitors can be operated with the GIGABYTE GeForce GTX 1080 Ti Turbo 11G.
Max. Displays:
4
HDCP-Version:
2.2
HDMI Ports:
1x HDMI v2.0b
DP Ports:
3x DP v1.4
DVI Ports:
--
VGA Ports:
--
USB-C Ports:
--
Featureset
The GIGABYTE GeForce GTX 1080 Ti Turbo 11G can drive a display with a resolution of up to 7680x4320 pixels. It also supports DirectX feature level 12_1.
Max. resolution:
7680x4320
DirectX:
12_1
Raytracing:
No
DLSS / FSR:
Yes
LED:
No LED lighting
Supported Video Codecs
This section lists the video codecs that the GIGABYTE GeForce GTX 1080 Ti Turbo 11G can decode or encode in hardware. Hardware acceleration minimizes processor load, which in turn lowers power consumption.
h264:
Decode / Encode
h265 / HEVC:
Decode / Encode
VP8:
Decode
VP9:
Decode
AV1:
No
Dimensions
The GIGABYTE GeForce GTX 1080 Ti Turbo 11G occupies 2 PCIe slots in a case. It is 270 mm long, 117 mm high and 37 mm wide.
Length:
270 mm
Height:
117 mm
Width:
37 mm
Width (Slots):
2 PCIe-Slots
Weight:
--
Additional data
The GIGABYTE GeForce GTX 1080 Ti Turbo 11G was released in Q1/2017 at a price of $699 (reference design). The graphics card, manufactured in 16 nm, is connected to the system via a PCIe 3.0 x16 interface.
3DMark is a benchmark program that determines the performance of certain components of a computer and then reports the performance as a numerical value.
Geekbench 6 is a cross-platform benchmark for main processors that also runs 3 different graphics benchmarks and reports each result as a numerical value.
The theoretical single-precision (32-bit) compute performance of the graphics card, given in TFLOPS, indicates how many trillion FP32 floating-point operations the GPU can perform per second.
To determine the performance of a graphics card, so-called "benchmarks" are run: the benchmark software performs specific calculations to measure the card's performance. We use both theoretical or synthetic benchmarks (e.g. 3DMark) and real game benchmarks. To ensure that results are truly comparable, we pay attention to the correct execution of the benchmarks as well as the condition of the graphics card and the system.
We use the following benchmarks to measure the performance of a graphics card: