On March 14, OpenAI officially unveiled its new artificial intelligence model, GPT-4. The announcement was preceded by lively discussion of the model's expected capabilities; some argued that the technology would divide human development into "before" and "after".
What does the fourth generation of the language model bring? Like its predecessor, GPT-4 powers ChatGPT and the new version of the Bing search engine. The main difference is that GPT-4 is multimodal: it can accept images or photographs as input. The chatbot now works not only with text but can also analyze pictures. OpenAI noted that the model's ability to process text and images together allows it to interpret more complex input. The company declined to disclose how much data the model was trained on or how many parameters it has.
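To illustrate what multimodal input looks like in practice, here is a minimal sketch using the current OpenAI Python SDK. The model name, image URL, and prompt are illustrative assumptions; the article does not say how image input is exposed, and availability of image input through the API depends on the model and account.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Hypothetical request mixing text and an image in a single user message.
# The model name and image URL are placeholders, not taken from the article.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this photograph."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

The request sends one user message whose content mixes a text part and an image part; the model's reply comes back as ordinary text.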
OpenAI is currently in active discussions with a number of companies about integrating GPT-4 into their products; in particular, partnerships with Khan Academy and Stripe are under discussion. Duolingo announced GPT-4 integration into its service on March 14. Access to the new model is already open: users can reach it through ChatGPT Plus, OpenAI's paid subscription priced at $20 per month, and through the chatbot built into the Microsoft Bing search engine. In addition, the new generation of the language model is available to developers through an API.
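For developers, a plain text request through the API looks much the same. The sketch below assumes the same SDK as above, and the model identifier "gpt-4" stands in for whichever GPT-4-class model an account has access to.

```python
from openai import OpenAI

client = OpenAI()  # API key taken from the OPENAI_API_KEY environment variable

# Minimal text-only chat completion against a GPT-4-class model.
response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier; actual access varies by account
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what makes GPT-4 multimodal."},
    ],
)
print(response.choices[0].message.content)
```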