Nvidia explodes onto the AI scene with the GeForce RTX 30 series and the new Ampere architecture
Once data scientists realized that graphics cards were more efficient at handling AI workloads, there was no looking back for Nvidia. With a phenomenally large number of cores compared to a CPU, a graphics card can handle the compute-heavy processing that AI systems require. Nvidia has made a big leap in AI computing by launching the RTX 30 series, which is aimed at data centers and neural networks.
AI terms explained
Artificial Intelligence – When machines are designed to think and behave like humans, that is Artificial Intelligence, or AI, at work. One of the main human traits being applied to machines is the ability to learn and solve problems the way humans do. AI enables a machine to take decisions. Even a simple action like choosing a colour of crayon to draw with requires tons of data processing at the back end by a machine. It might appear as child’s play to us humans, but that is because we are backed by millions of years of evolution.
AI is no longer just about face recognition, which is already a common feature on our phones. It now powers self-driving cars, which take all factors into consideration to make collision-free driving decisions.
Intelligence displayed by humans is also shaped by emotion; when a collision cannot be avoided, a human driver chooses the option that will result in the least damage.
Artificial Neural Network – A neural network is inspired by the human brain and is built from artificial neurons. Each neuron is part of a network and takes multiple inputs to produce a single output. A neuron creates this output by weighing its inputs, much like a human brain. For example, the inputs can be a set of images and the output can be a task like recognizing an object in another image.
There can be multiple layers in the network, with data passing through each layer in turn and each layer refining the result, improving the accuracy in the process.
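As a rough illustration, here is a minimal sketch of a single artificial neuron and a small multi-layer network written in Python with PyTorch; the layer sizes and input values are made up purely for the example and are not tied to any particular application.

```python
import torch
import torch.nn as nn

# A single artificial neuron: it weighs several inputs and produces one output.
neuron = nn.Linear(in_features=4, out_features=1)    # 4 inputs -> 1 output
inputs = torch.tensor([[0.2, 0.7, 0.1, 0.9]])         # one sample with 4 input values
output = torch.sigmoid(neuron(inputs))                # weighted sum passed through an activation
print(output)

# Multiple layers: data passes through each layer in turn,
# with every layer refining the previous layer's result.
network = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),    # first layer
    nn.Linear(16, 8), nn.ReLU(),    # second layer
    nn.Linear(8, 1), nn.Sigmoid()   # final layer gives a single output
)
print(network(inputs))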
Deep Learning – Similar to human learning, a machine processes data with the aim of finding patterns. It is the ability of a machine to learn from large amounts of data. Each time the machine takes a decision, it also tweaks its process a little to improve the results. This is the learning part of the machine – taking better decisions and improving the decision-making process along the way.
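That "tweak the process a little" step is what a training loop does. The sketch below is a hedged, minimal example in Python with PyTorch, using randomly generated dummy data and arbitrary layer sizes just to show the shape of the loop.

```python
import torch
import torch.nn as nn

# Dummy data purely for illustration: 100 samples with 4 features each,
# and a 0/1 label for every sample.
x = torch.rand(100, 4)
y = (x.sum(dim=1) > 2).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(50):
    prediction = model(x)            # the machine takes a "decision"
    loss = loss_fn(prediction, y)    # measure how wrong the decision was
    optimizer.zero_grad()
    loss.backward()                  # work out how to tweak the process
    optimizer.step()                 # tweak it a little to improve next time
```

Every pass through the loop nudges the network's weights so that the next set of decisions is a little more accurate, which is exactly the learning behaviour described above.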
AI ultimately aims to replace workers in factories and drivers in cars. The data involved in AI is colossal, and so is the computing requirement. Some examples where AI is already in use are virtual assistants like Alexa and Siri and language translation.
Graphics cards, or GPUs, already do this kind of work in gaming. Scenes are created for gamers and real-life physics is recreated on the go; a racing car performs realistic movements based on inputs from the driver. Graphics Processing Units outperform Central Processing Units (CPUs) because of parallel processing. A CPU does one task at a time, while a graphics card performs many tasks at the same time. A multi-core CPU can also work on several tasks at once, but a CPU has 8 or 16 cores while a graphics card has hundreds of cores, all doing their own tasks simultaneously. A GPU is a mini computer in its own right, built for a specific job: it renders a 3D scene using techniques like rasterisation and ray tracing so that the result looks the way it would to human eyes. Each core of the GPU performs its task independently. Just imagine what hundreds of cores are capable of.
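To get a feel for the difference, here is a small sketch in Python with PyTorch that times the same large matrix multiplication on the CPU and then on the GPU. It assumes PyTorch is installed and an Nvidia card with CUDA support is present; the matrix size is arbitrary.

```python
import time
import torch

a = torch.rand(4000, 4000)
b = torch.rand(4000, 4000)

# On the CPU: the same multiplication, but with far fewer cores to spread it over.
start = time.time()
c_cpu = a @ b
print(f"CPU: {time.time() - start:.3f} s")

# On the GPU: thousands of CUDA cores work on the multiplication in parallel.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    start = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()          # wait for the GPU to actually finish
    print(f"GPU: {time.time() - start:.3f} s")
```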
The Asus Nvidia GT 710 graphics card, which costs less than Rs 5,000, has 192 CUDA cores.
CUDA – Compute Unified Device Architecture is a design by Nvidia which optimises parallel computing for better performance. It gives graphics cards the ability to perform general-purpose processing as well. The CUDA platform supports programming in languages like C and C++, which has helped programmers develop applications that put Nvidia graphics cards to work in AI and deep learning. In 2018, Nvidia introduced the RTX 20 series with tensor cores dedicated to deep learning; these cards were based on the Turing architecture. In 2020 came the Ampere architecture, which roughly doubled the performance of the tensor cores.
The complex operations required by deep learning are, for all practical purposes, now done on a graphics card.
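As an illustration of how this is used in practice, the sketch below (assuming PyTorch and a CUDA-capable RTX card; the layer sizes are arbitrary) runs a small network on the GPU in mixed precision, which is the kind of half-precision work the tensor cores accelerate.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))
data = torch.rand(64, 1024)

if torch.cuda.is_available():
    model, data = model.cuda(), data.cuda()
    # Mixed precision: eligible operations run in half precision,
    # which is what the tensor cores on RTX cards accelerate.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        output = model(data)
    print(output.dtype)   # torch.float16
else:
    output = model(data)  # falls back to ordinary full-precision maths on the CPU
    print(output.dtype)   # torch.float32
```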
Choosing a graphics card for deep learning
The following parameters are important while choosing a graphics card; a short sketch after the list shows how to read some of them off an installed card.
Cores – The larger the number of cores, the better the performance. Cores allow multiple tasks to be performed in parallel. Nvidia’s CUDA cores are more potent than AMD’s because of better software support.
Memory – GDDR6 is the practical standard today. GDDR6X is slightly faster but more expensive to make.
Clock Speed – Measured in GHz, it is the rate at which the chip does its work.
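If an Nvidia card is already installed, the following sketch (again assuming PyTorch) reads some of these numbers off the card. Note that PyTorch reports streaming multiprocessors rather than CUDA cores; on Ampere each streaming multiprocessor contains 128 CUDA cores, and clock speeds can be checked with Nvidia’s own nvidia-smi tool.

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("Card:", props.name)
    # Memory is reported in bytes; convert to GB for readability.
    print("Memory:", round(props.total_memory / 1024**3, 1), "GB")
    # PyTorch exposes streaming multiprocessors (SMs); each Ampere SM
    # contains 128 CUDA cores, so total CUDA cores = SMs x 128.
    print("Streaming multiprocessors:", props.multi_processor_count)
else:
    print("No CUDA-capable Nvidia card detected")
```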
RTX 30 series graphics cards are very hard to find in the market, and you will have to pay a premium over the list price.
RTX 3060 – The best beginner graphics card out there, with 3584 CUDA cores and 12 GB of memory. Because of the new Ampere architecture, this card scores better than the older RTX 2080. In the present market you might get this one for between Rs 60,000 and Rs 80,000.
RTX 3070 – With an increase in cores to 5888, this one drops the memory to 8 GB. So if you need more cores at a value price, this is the one. Availability will again be an issue.
RTX 3080 – With 8704 CUDA cores and 10 GB of memory, this card is the perfect option for those who do not want to buy the more expensive RTX 3090. The listed price is Rs 70,000 for the Founders Edition, but the market price starts upwards of Rs 1,20,000.
RTX 3090 – Finally, the flagship graphics card of the RTX 30 series lineup. Obviously it is going to be expensive at Rs 1,33,000. This card too is not easily available and commands a market price of over Rs 2,00,000! If money is not an issue, this is the card you should aim for.
PC Specifications for AI learning
A good quality 550 Watt power supply unit from a brand like Cooler Master or Corsair should cost around Rs 5,000; for an RTX 3080 or 3090, plan on 750 Watt or more. Do not compromise on the PSU. Go for the latest generation Intel Core i7 processor. Remember the CPU will mostly handle OS-related tasks, so there is no need to go for high-end parts like the i9 or the even more expensive Xeon. Finally, choose a good quality cabinet and enough RAM to handle the heavy workload of the graphics card.