In 1956, when Allen Newell and Herbert A. Simon created the Logic Theorist, one of the first Artificial Intelligence programs, all it did was prove 38 out of 52 mathematical theorems from Principia Mathematica. Not to diminish that achievement, but the progress we have made in the decades since is astonishing. AI has grown far beyond proving maths theorems to now predicting what we may watch next on YouTube, and OpenAI's latest project, GPT-3, is proof of how far the advancements have reached.
GPT-3 is the third generation of the 'Generative Pre-trained Transformer' from the Artificial Intelligence research outfit OpenAI: a general-purpose language model that uses the powers of AI and machine learning to carry out tasks like text translation and text prediction.
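To see what 'text prediction' means at its simplest, here is a minimal, purely illustrative sketch: a bigram model that predicts the next word from the one before it. GPT-3 is a transformer with billions of parameters and works nothing like this internally; the toy counter below only illustrates the next-word-prediction task itself.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str):
    """Return the most frequently observed follower of `word`, or None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the model predicts the next word and the next word follows"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "next" follows "the" more often than "model"
```

A real language model replaces these raw counts with learned parameters and conditions on far more than one preceding word, but the task, guessing what comes next, is the same.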
Its predecessor, GPT-2, had already courted controversy when it was revealed that it could create extremely coherent and realistic 'fake news' from something as rudimentary as a single opening sentence. The potential damage this could cause led OpenAI to withhold its full public release. Now that the world knows what GPT-3 can achieve, the situation is just as risky, yet just as remarkable.
Reports have started to come in citing the power GPT-3 has in its arsenal. Given just the first half of a health memo, it successfully generated paragraphs of relatively cohesive follow-on text; the final product even added a section on risk and long-term strategy. Another demonstration of its prowess: given a plain-language description of the layout you need, it automatically generates the corresponding JSX code.
GPT-3, with its mind-bending 175 billion learning parameters, has already proven far more powerful than GPT-2. It is also an order of magnitude larger than the second-most powerful language model, Microsoft Corp.'s Turing-NLG algorithm, which has just 17 billion parameters. What will come of such processing power is yet to be understood. But one thing is for sure: this just changed the AI sector. For good or for worse, only time will tell.