Traditionally, computing power has been associated with the number of CPUs and the cores per processor. During the 90s, when WinTel started to invade the enterprise data center, application performance and database throughput were directly proportional to the number of CPUs and the available RAM. While these factors remain critical to the performance of enterprise applications, a different processor has started to gain attention: the Graphics Processing Unit, or GPU.
For many of us, GPUs are reminiscent of the video cards that were designed for graphics-intensive games. They were purely optional and rarely influenced the buying decision of an average user investing in a PC or server. Only the gaming junkies playing popular PC games like Quake and Half-Life appreciated the power of GPUs. But in the era of Machine Learning and Artificial Intelligence, GPUs have found a new role that makes them as relevant as CPUs.
But why is the GPU getting so much attention now? The answer lies in the rise of deep learning, an advanced machine learning technique that is heavily used in AI and Cognitive Computing. Deep learning powers many scenarios including autonomous cars, cancer diagnosis, computer vision, speech recognition, and many other intelligent use cases.
Like most ML algorithms, deep learning relies on sophisticated mathematical and statistical computations. Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN) are some of the modern implementations of deep learning. These neural nets loosely emulate the human brain, drawing on ideas from neuroscience. Each type of neural net is aligned with a complex use case such as classification, clustering, or prediction. For example, image recognition and face recognition use CNNs, while Natural Language Processing (NLP) relies on RNNs. ANNs, the simplest of these networks, are often used for predictions involving numerical data.
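To make the ANN idea concrete, here is a minimal sketch of a feedforward pass through a tiny two-layer network in plain Python. The weights and biases are illustrative assumptions, not learned values; a real network would learn them through training, and it is exactly these repeated weighted sums across many neurons that GPUs accelerate so well.

```python
import math

def sigmoid(x):
    # Classic squashing activation used in simple feedforward ANNs
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases):
    # One fully connected layer: each neuron outputs
    # sigmoid(weighted sum of inputs + bias)
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    # Hypothetical fixed weights for illustration only;
    # training would adjust these to fit the data.
    hidden = dense(x, [[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1])
    output = dense(hidden, [[1.2, -0.7]], [0.05])
    return output[0]

# A numeric prediction between 0 and 1 for a two-feature input
print(forward([1.0, 2.0]))
```

Each `dense` call is essentially a small matrix-vector multiplication; stacking thousands of such neurons per layer is what turns deep learning into the massively parallel workload that GPUs are built for.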
Read the entire article at Forbes