OpenAI released the long-awaited GPT-4 on March 14, 2023.
We introduced the GPT-3.5 model when it launched last year; it enabled autonomous dialogue between humans and machines. About four months after that release, OpenAI has now shipped GPT-4. So what are the highlights of GPT-4?
OpenAI president and co-founder Greg Brockman joined a developer livestream to demonstrate GPT-4 and some of its capabilities. GPT-4 has broader general knowledge and stronger problem-solving abilities, and it can solve difficult problems more effectively.
"We spent six months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations."
OpenAI evaluated GPT-3.5 and GPT-4 on a range of benchmarks, including simulated exams originally designed for humans, such as the bar exam and the SAT. On these tests, GPT-4 performed markedly better than GPT-3.5.
GPT-4 is more creative and collaborative and can generate, edit, and iterate on text with users. For example, it can be instructed to describe the story of Sam Altman in a sentence where every word starts with the letter "S", without repeating any word.
In the past, we communicated with GPT-3.5 only through text. In the latest release, GPT-4 can also accept images as input and explain, classify, and analyze them. Given a picture, GPT-4 can recognize it and understand its meaning. Compared with the previous generation, GPT-4 gains visual recognition and even a degree of visual reasoning. This significantly expands GPT-4's application space; for example, it can serve as the eyes of visually impaired users.
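To make the image-input idea concrete, here is a minimal sketch of how such a request could be structured, following the content-array message format OpenAI documents for its Chat Completions API. The model identifier and the example URL are assumptions for illustration; the code only builds the request payload and does not call the API.

```python
# Sketch of a GPT-4 image-input request payload.
# "gpt-4-vision-preview" is an assumed model identifier; the
# content-array shape mirrors OpenAI's documented message format.
import json

def build_vision_request(question: str, image_url: str) -> dict:
    """Build a Chat Completions payload mixing text and an image."""
    return {
        "model": "gpt-4-vision-preview",  # assumption, not confirmed by the article
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 300,
    }

payload = build_vision_request(
    "What is unusual about this picture?",
    "https://example.com/photo.jpg",  # placeholder URL
)
print(json.dumps(payload, indent=2))
```

The key point is that a single user message can carry both text and image parts, so a question and the picture it refers to travel together in one turn.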
GPT-4 can handle more than 25,000 words of text, allowing for long-form content creation, extended conversations, and document search and analysis. For dense, complex papers and other documents, you only need to paste the text into GPT-4, and it can summarize the content for you quickly.
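For documents longer than that capacity, a common pattern is to split the text into word-budget windows and summarize each in turn. The sketch below uses the article's 25,000-word figure as the budget; the overlap value is an illustrative assumption, not something the article specifies.

```python
# Minimal sketch: split a long document into overlapping word-windows
# that each fit GPT-4's reported ~25,000-word capacity, so every chunk
# can be sent for summarization. Overlap size is an assumption.
def chunk_words(text: str, budget: int = 25_000, overlap: int = 200):
    """Yield overlapping windows of at most `budget` words."""
    words = text.split()
    step = budget - overlap
    for start in range(0, len(words), step):
        yield " ".join(words[start:start + budget])
        if start + budget >= len(words):
            break

doc = "word " * 60_000            # stand-in for a 60,000-word document
chunks = list(chunk_words(doc))
print(len(chunks))                # → 3 chunks cover the whole document
```

The small overlap between consecutive chunks keeps sentences that straddle a boundary from being summarized without their context.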
Customizable system role
This release lets you customize GPT-4's persona. A new "System" input box appears on the left of the page, where you can enter an identity for the AI. After tuning the system role, you ask questions as the user, and GPT-4 will independently break a question down step by step and answer more comprehensively and in more detail. When acting as a programmer, for example, it can write and debug code.
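In API terms, the system box corresponds to a message with the "system" role placed before the user's question. The sketch below builds such a payload under that assumption; the persona text, question, and temperature value are illustrative, and the request is constructed but never actually sent.

```python
# Sketch of customizing GPT-4's persona via the "system" role.
# Only the request payload is built here; actually POSTing it to the
# Chat Completions endpoint is omitted.
import json

def build_chat_request(system_persona: str, user_question: str) -> dict:
    """Pair a system message (the AI's identity) with a user question."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_persona},
            {"role": "user", "content": user_question},
        ],
        "temperature": 0.2,  # lower temperature for focused, step-by-step answers
    }

payload = build_chat_request(
    "You are a senior Python programmer who writes and debugs code.",
    "Why does `list.sort()` return None?",
)
print(json.dumps(payload, indent=2))
```

Changing only the system message swaps the model's persona while the rest of the request stays identical, which is what makes the role "customizable".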
This update incorporates more human feedback, including feedback submitted by ChatGPT users, to improve GPT-4's behavior. In addition, as real-world usage grows, ChatGPT accumulates more behavioral data, and the model will be continuously updated and improved.
More complex and powerful language models
Following the research path of GPT, GPT-2, and GPT-3, GPT-4 uses more data and computation for deep learning at scale. In OpenAI's internal adversarial factuality evaluations, GPT-4 scores 40% higher than GPT-3.5.
Care is still needed when using language-model output. Although GPT-4 is more capable, it remains unreliable in ways similar to previous models: it can "hallucinate" facts and make reasoning errors. GPT-4 also lacks knowledge of events after its training cutoff (September 2021).
It can make simple reasoning mistakes, especially on tasks spanning multiple domains, and it can be too credulous, accepting obviously false statements from users. Like humans, it can also fail at hard problems, so continuous improvement is still needed.
OpenAI highlighted six products that already use GPT-4, showing its broad application space.
Duolingo
The language-learning app Duolingo has integrated GPT-4 into its product to power role-play conversations, allowing the AI to talk with users and explain their grammatical errors.
Be My Eyes
For a long time, visually impaired users could obtain information only through text recognition. Through image recognition and analysis, GPT-4 helps Be My Eyes users better understand the world.
Stripe
To better understand its users' businesses, Stripe tries to learn exactly how each enterprise uses the platform and customizes support accordingly. Using GPT-4 to scan business websites and produce summaries significantly reduces manual search time, and the results outperform human-written summaries. Stripe also leverages GPT-4 to simplify the user experience and combat fraud.
Morgan Stanley
Morgan Stanley maintains a content library of roughly 100,000 pages covering investment strategies, market research, and analyst insights. Because the content is distributed across the internal website as PDFs, staff previously had to search and browse large amounts of information by hand to find specific answers. With GPT-4's help, the firm expects to greatly improve the efficiency of its information management and retrieval.
Khan Academy
As a non-profit organization, Khan Academy's mission is to provide a free, world-class education to anyone, anywhere. It offers thousands of math, science, and humanities courses for students of all ages.
Precisely because students of different ages have different teaching needs and learning abilities, Khan Academy announced Khanmigo, an artificial-intelligence assistant built with GPT-4 that will serve as a virtual tutor and classroom assistant, helping students with problems at different levels.
Government of Iceland
At the initiative of the President of Iceland, the country is cooperating with OpenAI to use GPT-4 to help preserve the Icelandic language, bringing more protection and attention to this low-resource language. Most of OpenAI's training data is in English, so the partnership also helps advance the preservation of low-resource languages more broadly by extending what trained language models can do.
No introduction can substitute for hands-on experience, so we recommend trying GPT-4 yourself. If access restrictions stand in your way, we can help you find a way in.