Nvidia Tesla is the former name for a line of products developed by Nvidia targeted at stream processing and general-purpose computing on graphics processing units (GPGPU), named after pioneering electrical engineer Nikola Tesla. The product line began with GPUs based on the G80 architecture and was updated alongside the release of new chips. Tesla cards are programmable using the CUDA or OpenCL APIs.
The Nvidia Tesla product line competed with AMD's Radeon Instinct and Intel's Xeon Phi lines of accelerators for deep learning and high-performance computing.
Nvidia retired the Tesla brand in May 2020, reportedly because of potential confusion with the brand of cars.[1] Its new GPUs are branded Nvidia Data Center GPUs,[2] such as the Ampere-based A100 GPU.[3]
Tesla cards have four times the double-precision performance of a Fermi-based Nvidia GeForce card of similar single-precision performance.[citation needed] Unlike Nvidia's consumer GeForce cards and professional Nvidia Quadro cards, Tesla cards were originally unable to output images to a display. However, the last Tesla C-class products included one Dual-Link DVI port.[5]
Applications
Tesla products are primarily used in simulations and in large-scale calculations (especially floating-point calculations), and for high-end image generation for professional and scientific fields.[6]
In 2013, the defense industry accounted for less than one-sixth of Tesla sales, but Sumit Gupta predicted increasing sales to the geospatial intelligence market.[7]
^ Core architecture version according to the CUDA programming guide.
^ GPU Boost is a default feature that increases the core clock rate while remaining under the card's predetermined power budget. Multiple boost clocks are available, but this table lists the highest clock supported by each card.[8]
^ a b c Specifications not specified by Nvidia are assumed to be based on the GeForce 8800 GTX.
^ a b c d Specifications not specified by Nvidia are assumed to be based on the GeForce GTX 280.
^ a b Specifications not specified by Nvidia are assumed to be based on the Quadro FX 5800.
^ a b c d e f With ECC on, a portion of the dedicated memory is used for ECC bits, so the available user memory is reduced by 12.5%. (e.g. 4 GB total memory yields 3.5 GB of user available memory.)
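The ECC overhead described in the note above can be checked with a short arithmetic sketch. The 12.5% figure and the helper name are taken from the note for illustration; the exact overhead varies by card generation.

```python
def usable_memory_gb(total_gb, ecc_enabled=True, ecc_overhead=0.125):
    """Return user-available memory in GB for a given ECC state.

    With ECC on, ecc_overhead (12.5% per the note above) of the
    dedicated memory is reserved for ECC bits.
    """
    return total_gb * (1 - ecc_overhead) if ecc_enabled else total_gb

print(usable_memory_gb(4))                     # ECC on: 3.5
print(usable_memory_gb(4, ecc_enabled=False))  # ECC off: 4
```

This reproduces the note's worked example: a 4 GB card with ECC enabled leaves 3.5 GB for user data.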
^ "Tesla K10 GPU accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
^ "Tesla K20 GPU active accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
^ "Tesla K20 GPU accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
^ "Tesla K20X GPU accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
^ "Tesla K40 GPU accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
^ "Tesla K80 GPU accelerator" (PDF). Images.nvidia.com. Retrieved 11 December 2015.
^ "Nvidia Announces Tesla M40 & M4 Server Cards - Data Center Machine Learning". Anandtech.com. Retrieved 11 December 2015.
^ a b "Accelerating Hyperscale Datacenter Applications with Tesla GPUs | Parallel Forall". Devblogs.nvidia.com. 10 November 2015. Retrieved 11 December 2015.
^ "Tesla M6" (PDF). Images.nvidia.com. Retrieved 28 May 2016.
^ "Tesla M10" (PDF). Images.nvidia.com. Retrieved 29 October 2016.
^ "Tesla M40" (PDF). Images.nvidia.com. Retrieved 11 December 2015.
^ "Tesla M60" (PDF). Images.nvidia.com. Retrieved 27 May 2016.
^ a b Smith, Ryan (13 September 2016). "Nvidia Announces Tesla P40 & Tesla P4 - Network Inference, Big & Small". Anandtech. Retrieved 13 September 2016.
^ "Tesla P6" (PDF). www.nvidia.com. Retrieved 7 March 2019.
^ "Tesla P6 Specs". www.techpowerup.com. Retrieved 7 March 2019.
^ Smith, Ryan (5 April 2016). "Nvidia Announces Tesla P100 Accelerator - Pascal GP100 for HPC". Anandtech.com. Retrieved 5 April 2016.
^ a b Smith, Ryan (20 June 2016). "NVidia Announces PCI Express Tesla P100". Anandtech.com. Retrieved 21 June 2016.
^ a b Smith, Ryan (10 May 2017). "The Nvidia GPU Technology Conference 2017 Keynote Live Blog". Anandtech. Retrieved 10 May 2017.
^ a b Smith, Ryan (10 May 2017). "NVIDIA Volta Unveiled: GV100 GPU and Tesla V100 Accelerator Announced". Anandtech. Retrieved 10 May 2017.
^ a b Oh, Nate (20 June 2017). "NVIDIA Formally Announces V100: Available later this Year". Anandtech.com. Retrieved 20 June 2017.
^ "NVIDIA TESLA T4 TENSOR CORE GPU". NVIDIA. Retrieved 17 October 2018.
^ "NVIDIA Tesla T4 Tensor Core Product Brief" (PDF). www.nvidia.com. Retrieved 10 July 2019.
^ "NVIDIA TESLA A2 TENSOR CORE GPU".
^ "NVIDIA TESLA A10 TENSOR CORE GPU".
^ "NVIDIA TESLA A16 TENSOR CORE GPU".
^ "NVIDIA TESLA A30 TENSOR CORE GPU".
^ "NVIDIA TESLA A40 TENSOR CORE GPU".
^ "NVIDIA TESLA A100 TENSOR CORE GPU". NVIDIA. Retrieved 14 January 2021.
^ "NVIDIA Tesla A100 Tensor Core Product Brief" (PDF). www.nvidia.com. Retrieved 22 September 2020.
^ Smith, Ryan (14 May 2020). "NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator". AnandTech.
^ "NVIDIA H100 Tensor Core GPU". NVIDIA. Retrieved 15 April 2024.
^ Mujtaba, Hassan (22 March 2022). "NVIDIA Hopper GH100 GPU Unveiled: The World's First & Fastest 4nm Data Center Chip, Up To 4000 TFLOPs Compute, HBM3 3 TB/s Memory". Wccftech. Retrieved 15 April 2024.
^ "NVIDIA H100 PCIe 80 GB Specs". TechPowerUp. 21 March 2023. Retrieved 15 April 2024.
^ "NVIDIA L40 GPU for Data Center". NVIDIA. 18 May 2023. Retrieved 15 April 2024.
^ "NVIDIA L40 Specs". TechPowerUp. 13 October 2022. Retrieved 15 April 2024.
^ "NVIDIA L4 Tensor Core GPU". NVIDIA. Retrieved 15 April 2024.
^ "NVIDIA ADA GPU Architecture" (PDF). nvidia.com. Retrieved 15 April 2024.
^ "NVIDIA and Google Cloud Deliver Powerful New Generative AI Platform, Built on the New L4 GPU and Vertex AI". NVIDIA Corporation. 21 March 2023. Retrieved 15 April 2024.
^ "NVIDIA L4 Specs". TechPowerUp. 21 March 2023. Retrieved 15 April 2024.
External links
Media related to Nvidia Tesla series at Wikimedia Commons.