AI hardware in data centers

As the world grapples with a tsunami of data, data centers are evolving rapidly. The proliferation of intelligent connected devices and a massive increase in data consumption place enormous pressure on underlying data center infrastructure. Data centers have become so complex that human operators alone can no longer manage this rising complexity without sacrificing performance and productivity. Disruptive technologies such as artificial intelligence (AI) hardware promise to dramatically improve data center operations.

The data center hardware industry is experiencing massive disruption: NVIDIA's bid to acquire Arm, new processors and servers from resurgent players, and a herd of start-ups selling advanced AI computing chips all offer more options for customers who need extra power to tame AI workloads. The growth of advanced computing has the potential to reshape the data center industry, which must adapt to new form factors and higher rack densities. Yet for all the excitement in the chip and server business, most racks and rows in data halls are still populated by Intel chips, especially in the enterprise sector.

With many start-ups introducing more efficient new chips, analyst Karl Freund of Moor Insights predicts a "Cambrian Explosion" of silicon designed for AI data crunching. He cautions that these chips may take longer to mature than everyone hopes, and that even faster chips will not come close to keeping up with the pace of model development.

The development of AI algorithms is accelerating, with new models combining billions of data points to make recommendations and decisions. As more data is embedded into new models, more computing horsepower is required, driving an AI hardware arms race. The rivalry is powered by the industry's biggest names, including Amazon, Facebook, Google, and Microsoft, as they race to bring intelligence to a wide variety of services and software.

Google’s Tensor Processing Unit (TPU), for example, is a specialized chip architecture that has significantly improved Google’s processing capacity. The TPU is a custom Application-Specific Integrated Circuit (ASIC) tailored by Google for TensorFlow, its open-source machine learning (ML) software library. An ASIC is a chip customized to perform one particular task. Embracing this domain-specific strategy allowed Google to drop general-purpose features, saving processor space and power, and, most notably, to deploy massive processing power in a smaller footprint.
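To make the domain-specific idea concrete: the first-generation TPU accelerated low-precision (8-bit integer) matrix multiplication, the core operation in neural network inference. The sketch below is purely illustrative, not Google's implementation: it quantizes float matrices to int8 with a simple symmetric scale (an assumed scheme for this example), multiplies them in integer arithmetic as a TPU-style matrix unit would, and rescales the result back to floats.

```python
import numpy as np

def quantize(x, scale):
    """Map floats to int8 using a simple symmetric scale (illustrative)."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def quantized_matmul(a, b, scale_a, scale_b):
    """Multiply quantized operands in int32, then rescale to float.

    Accumulating in a wider integer type mirrors how integer matrix
    units avoid overflow while keeping the multipliers narrow.
    """
    qa = quantize(a, scale_a).astype(np.int32)
    qb = quantize(b, scale_b).astype(np.int32)
    return qa @ qb * (scale_a * scale_b)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal((8, 4)).astype(np.float32)

approx = quantized_matmul(a, b, scale_a=0.02, scale_b=0.02)
exact = a @ b
print(np.max(np.abs(approx - exact)))  # small quantization error
```

Dropping full floating-point multipliers in favor of narrow integer ones is exactly the kind of general-purpose feature a domain-specific chip can sacrifice to pack far more arithmetic units into the same silicon area.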