AI was adopted with the goal of cutting work time, producing more, and delivering results with minimal errors. It has reached nearly every aspect of daily life: translation devices, self-driving cars navigating traffic, systems that handle customers' personal issues, and more. Companies that use AI to draw statistical conclusions from data, such as predicting sales, have come a long way, reportedly reducing errors by as much as three quarters.
The latest AI systems, however, carry such steep computational demands that some companies have dropped the idea of using them at all. One new AI algorithm required so much computation that the company found it difficult to accommodate its needs. “They were like, ‘well, it is not worth it to us to roll it out in a big way,’ unless cloud computing costs come down or the algorithms become more efficient,” says Neil Thompson, a research scientist at MIT, who is assembling a case study on the project.
The rapid growth in the computation behind modern algorithms has become a major problem, according to a research paper by Thompson and his colleagues. The paper argues that, sooner or later, this pace of increasing computation will probably have to slow down. That slowdown could dent progress in computer vision, translation, and language understanding.
Deep neural networks are computationally expensive, and their soaring demands place a significant burden on AI practitioners. Thompson believes that without clever new algorithms, the slowing pace of deep learning could stall advances in multiple fields, complicating the question of when computers should replace human tasks. Whether automated devices and technologies actually displace human workers will depend largely on how the cost of running them compares with the well-understood expense of having humans perform the same tasks.
In their study, Thompson and his co-authors examined more than 1,000 AI research papers outlining new algorithms. The historical trend suggests that further advances along the same path are all but impossible.
For example, suppose a translation system has a 50 percent error rate with the present algorithm. Bringing that down to 10 percent would take either a much better algorithm or extraordinary computing power; relying on computation alone would require billions of times more of it. And improvements in hardware are unlikely to offset such runaway demand for computation.
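To make that scaling arithmetic concrete, here is a minimal sketch assuming error falls as a power law of compute. Both the power-law form and the exponent `alpha` are illustrative assumptions for this example, not figures taken from Thompson's paper; with a small exponent, even a modest cut in error rate demands an enormous multiple of compute.

```python
# Hypothetical model: error ∝ compute ** (-alpha).
# Under that assumption, reducing error from e1 to e2 requires
# multiplying compute by (e1 / e2) ** (1 / alpha).

def compute_multiplier(current_error: float, target_error: float, alpha: float) -> float:
    """Factor by which compute must grow to hit target_error,
    assuming a power-law relationship between error and compute."""
    return (current_error / target_error) ** (1.0 / alpha)

# The article's example: 50% error down to 10% error.
# alpha=0.05 is an illustrative (assumed) exponent.
factor = compute_multiplier(0.50, 0.10, alpha=0.05)
print(f"{factor:.2e}")  # on the order of 1e14 times more compute
```

Even changing the assumed exponent by a little swings the answer by orders of magnitude, which is why hardware gains alone are unlikely to close the gap.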
Ultimately, Thompson and his colleagues believe that more efficient deep learning won't just reduce the need for raw computing power; it will also unlock a new wave of applications built on it.