All technical data and benchmarks of the ZOTAC GeForce GTX 980 Ti ArcticStorm are listed on this page. The graphics card is based on the GM200-310-A1 (Maxwell 2.0) GPU, which is manufactured on a 28 nm process. The ZOTAC GeForce GTX 980 Ti ArcticStorm can drive up to 4 displays at a maximum resolution of 4096x2160.
GPU
The ZOTAC GeForce GTX 980 Ti ArcticStorm has 22 streaming multiprocessors with a total of 2816 shaders. This gives the graphics card a theoretical FP32 compute performance of 6.42 TFLOPS.
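The 6.42 TFLOPS figure follows from the usual rule of thumb for theoretical FP32 throughput: each shader can retire one fused multiply-add (two floating-point operations) per clock, multiplied by the boost clock. A minimal sketch, using the shader count and boost clock listed on this page:

```python
# Theoretical FP32 throughput: each shader retires one fused
# multiply-add (FMA) per clock, i.e. 2 FP32 operations.
def fp32_tflops(shaders: int, boost_clock_ghz: float) -> float:
    # shaders * 2 ops/clock * clock (GHz) gives GFLOPS; /1000 -> TFLOPS
    return shaders * 2 * boost_clock_ghz / 1000.0

# GTX 980 Ti ArcticStorm: 2816 shaders at a 1.140 GHz boost clock
print(round(fp32_tflops(2816, 1.140), 2))  # 6.42
```

Note that this is a peak figure; sustained throughput in real workloads depends on the actual clock the card holds under load.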
The ZOTAC GeForce GTX 980 Ti ArcticStorm is equipped with 6 GB of GDDR5 graphics memory. The memory clock of the graphics card is 1.753 GHz, which corresponds to an effective data rate of 7.01 Gbps.
Memory Size:
6 GB
Memory Type:
GDDR5
Memory Clock:
1.753 GHz
Memory Speed:
7.0 Gbps
Memory bandwidth:
337 GB/s
Memory Interface:
384 bit
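The listed bandwidth of 337 GB/s can be derived from the values above: GDDR5 transfers data at four times the memory clock (quad data rate), and that per-pin rate is multiplied by the 384-bit bus width. A short sketch of the arithmetic:

```python
# Memory bandwidth = effective per-pin data rate x bus width.
# GDDR5 is quad-pumped: effective rate = 4 x memory clock.
def bandwidth_gb_s(mem_clock_ghz: float, bus_width_bits: int) -> float:
    effective_gbps = mem_clock_ghz * 4          # per-pin data rate in Gbps
    return effective_gbps * bus_width_bits / 8  # bits -> bytes

# 1.753 GHz memory clock on a 384-bit bus
print(round(bandwidth_gb_s(1.753, 384)))  # 337
```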
Clock Speeds
The base frequency of this graphics card is 1.025 GHz, and the manufacturer specifies a maximum boost clock of 1.140 GHz. If supported (see below), performance can be increased further by overclocking.
Base Clock:
1.025 GHz (+2 %)
Boost Clock:
1.140 GHz (+6 %)
Avg (Game) Clock:
--
Overclocking:
Yes
Thermal Design
The ZOTAC GeForce GTX 980 Ti ArcticStorm is powered via one 6-pin and one 8-pin PCIe connector. The TDP (Thermal Design Power) of the graphics card is 250 W.
TDP:
250 W
TDP (up):
--
Tjunction max:
91 °C
PCIe-Power:
1 x 6-Pin, 1 x 8-Pin
Cooler & Fans
The graphics processor and graphics memory of the ZOTAC GeForce GTX 980 Ti ArcticStorm are cooled by a hybrid cooler.
Fan-Type:
Axial
Fan 1:
3 x 90 mm
Fan 2:
--
Cooler-Type:
Hybrid cooling
Noise (Idle):
0 dB / Silent
Noise (Load):
--
Connectivity
Up to 4 monitors can be operated with the ZOTAC GeForce GTX 980 Ti ArcticStorm.
Max. Displays:
4
HDCP-Version:
2.2
HDMI Ports:
1x HDMI v2.0
DP Ports:
3x DP v1.2
DVI Ports:
1
VGA Ports:
--
USB-C Ports:
--
Featureset
With the ZOTAC GeForce GTX 980 Ti ArcticStorm it is possible to drive a display with a resolution of up to 4096x2160 pixels. Furthermore, DirectX feature level 12_1 is supported.
Max. resolution:
4096x2160
DirectX:
12_1
Raytracing:
No
DLSS / FSR:
No
LED:
No LED lighting
Supported Video Codecs
This section lists the video codecs that the ZOTAC GeForce GTX 980 Ti ArcticStorm can decode or encode in hardware. Hardware acceleration minimizes processor load, which in turn lowers power consumption.
h264:
Decode / Encode
h265 / HEVC:
No
VP8:
Decode
VP9:
No
AV1:
No
Dimensions
The ZOTAC GeForce GTX 980 Ti ArcticStorm occupies 3 PCIe slots in a case. It is 315 mm long and 141 mm high; its width is not specified.
Length:
315 mm
Height:
141 mm
Width:
--
Width (Slots):
3 PCIe-Slots
Weight:
--
Additional data
The ZOTAC GeForce GTX 980 Ti ArcticStorm was released in Q2/2015 at a price of $849. The graphics card, manufactured on a 28 nm process, connects to the system via PCIe 3.0 x16.
ZOTAC GeForce GTX 980 Ti ArcticStorm 1.025 GHz, 6 GB (250 W TDP)
3DMark is a benchmark program that measures the performance of certain computer components and reports the result as a numerical score.
Geekbench 6 is a cross-platform benchmark for main processors that also runs three different graphics benchmarks and reports the results as numerical scores.
The theoretical single-precision (32-bit) compute performance of the graphics card, given in TFLOPS, indicates how many trillion FP32 floating-point operations the GPU can perform per second.
To determine the performance of a graphics card, so-called "benchmarks" are run. The benchmark software performs special calculations to measure the card's performance. We use theoretical or synthetic benchmarks (e.g. 3DMark) as well as real game benchmarks. To ensure that the results are truly comparable, we pay attention to the correct execution of the benchmarks as well as the condition of the graphics card and the test system.
We use the following benchmarks to measure the performance of a graphics card: