OpenAI unveils GPT-4

15 March 2023


It's not just about words any more 

OpenAI has unveiled GPT-4, the next-generation model that provides the technical foundation for ChatGPT and Microsoft's Bing AI chatbots. It is a significant upgrade that paves the way for major advances in AI capabilities and features, and a further step towards humanity's oblivion.

OpenAI announced the GPT-4 upgrade on its blog, and you can already try it out in ChatGPT (it is limited to 100 messages every four hours). Microsoft confirmed that the latest version of its Bing chat tool uses GPT-4.

GPT-4 offers several major upgrades, starting with the biggest: it is not just about words.

According to OpenAI, GPT-4 can accept images as input and generate labels, classifications, and analyses. This means ChatGPT and Bing can "see" the world around them, or at least interpret visual input.

It means GPT-4 can work like the Be My Eyes app, a tool for people with visual impairments that uses a smartphone's camera and describes what it sees.
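At launch, image input was only available to select partners such as Be My Eyes, but for the curious, a request to a vision-capable GPT-4 model through OpenAI's Python client looks roughly like the sketch below. The model name and image URL are placeholders for illustration; check OpenAI's documentation for what your account can actually access.

```python
# Minimal sketch: sending an image plus a text prompt to a
# vision-capable GPT-4 model via OpenAI's Python client.
# "gpt-4-vision-preview" and the image URL are assumptions here,
# not confirmed launch-day details.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this photo."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,
)

# The model's description of the image comes back as ordinary text
print(response.choices[0].message.content)
```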

In a GPT-4 video aimed at developers, Greg Brockman, president and co-founder of OpenAI, showed how GPT-4 interprets a sketch, turns it into a website, and then provides the code for that website.

GPT-4 can process more than 25,000 words of text, enabling use cases such as long-form content creation, extended conversations, and document search and analysis.
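For a rough sense of what fits, that 25,000-word figure corresponds to roughly 32,000 tokens, the context window of the larger GPT-4 variant OpenAI described. You can check a document against that budget with OpenAI's tiktoken tokenizer; the file name below is just a placeholder:

```python
# Rough sketch: counting tokens to see whether a long document fits
# GPT-4's larger context window. The 32,768-token limit is the larger
# GPT-4 variant; adjust for whichever model you have access to.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

with open("long_document.txt", encoding="utf-8") as f:
    text = f.read()

tokens = enc.encode(text)
limit = 32768  # context window of the larger GPT-4 variant
print(f"{len(tokens)} tokens; fits in context: {len(tokens) <= limit}")
```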

OpenAI said: "It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user's writing style."

Large language models don't inherently have intelligence, but GPT-4 should understand relationships and context even better than its predecessor. For example, ChatGPT scored in the 10th percentile on a simulated Uniform Bar Exam, while GPT-4 scored in the 90th percentile. In the Biology Olympiad, vision-enabled GPT-4 scored in the 99th percentile, while ChatGPT scored in the 31st percentile.

 
