GPT-4: The next evolution of AI is here

Even as we wrote our last blog about ChatGPT, the software had already taken a backseat to its successor, GPT-4. Released on the 14th of March, this new titan of AI is set to dwarf the success of the previous bot, with the capacity to handle around 25,000 words of text against ChatGPT's roughly 3,000. More than that, it's trained on a larger library of information and can even work with images and graphics through the same kind of interface as the original.

Right now, we’re on the waiting list to start using it and trialling it with our clients, but here’s what we know so far.

GPT-4 is multimodal.

As we said above, GPT-4 can work with images as well as text, and eventually this will likely extend to video too (we imagine).

However, this is no new feat for an AI generator. You’ve probably already seen a ton of images on social media that purport to be real but are, in fact, created by an AI. Like its chatty counterpart, the visual AI is trained on many different photos and artistic styles from all across the net, allowing it to mix and match anything it puts its mind to and create a unique image.

DALL-E is the pre-existing image creator released by the OpenAI team, the geniuses behind GPT. Its accuracy is sometimes questionable, but when it works, it works. Here are some scenarios we chucked at it:

As you can see, some of these examples aren’t quite there. While graphic images tend to come out better as interpretations of real life, photos enter the uncanny valley of looking almost right, usually with a twisted limb spiralling off into the fourth dimension. The selected imagery on the site all looks very attractive, but it takes the right word combinations and a bit of luck to generate a ‘correct’ image.
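If you want to throw your own scenarios at it, DALL-E can also be driven programmatically. Below is a minimal sketch, assuming the official openai Python package (version 1 or later) and an API key set in your environment; the model name, prompt and image size are our own illustrative choices rather than anything from OpenAI’s guidance:

# Minimal sketch: generating an image with DALL-E through OpenAI's images API.
# Assumes the official `openai` Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.images.generate(
    model="dall-e-2",            # illustrative model name
    prompt="A photorealistic otter cycling through a busy market",
    n=1,                         # one image
    size="1024x1024",
)

print(response.data[0].url)      # temporary URL of the generated image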

AI video creators have a similar problem. Most, such as elai and Steve, use text to find video content online, assembling a presentation of sourced footage and images based on a generated script. Others will generate the images themselves and try to make sense of human movement, with variable success (again, best viewed in non-realistic mediums and art styles).

GPT-4, while not offering video, promises to be more advanced than previous image generators. The software understands images as input and can reason about them in sophisticated ways. For example, if the software is shown a typical scene between Wile E. Coyote and the Road Runner and asked ‘what happens next?’, it can reason about the likely physics of the scene, or the responses of the characters, using previous interactions and examples to arrive at a conclusion.
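To give a flavour of what that looks like in practice, here’s a hedged sketch of putting a ‘what happens next?’ question to an image-capable GPT-4 model through OpenAI’s chat API. The model name and image URL are placeholders of ours, and this assumes your account has access to a GPT-4 variant that accepts images:

# Minimal sketch: asking an image-capable GPT-4 model to reason about a picture.
# Assumes the official `openai` Python package (v1+), OPENAI_API_KEY in the environment,
# and access to a GPT-4 model that accepts image input (model name below is illustrative).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any image-capable GPT-4 variant
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "This is a still from a cartoon chase scene. What is likely to happen next?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/cartoon-chase.png"}},  # placeholder URL
            ],
        }
    ],
)

print(response.choices[0].message.content)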

Better at words.

The main difference comes, as the OpenAI team stated, from GPT-4’s ability to handle nuanced instructions more creatively and reliably than before. It can answer questions put to it with a greater degree of accuracy, having passed a simulated bar exam with a score around the top 10% of test takers (whereas ChatGPT scored around the bottom 10%). In short, it has a greater ability to generate original responses using the data it's trained on, with less dissonance between question and answer.

It can also write in all major coding languages, making it a powerful tool for website and app creation. From even the most basic sketches, you can plan out a website’s design and functionality, and the larger text capacity means it can return far more of the resulting code in one go.
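As a rough illustration, here’s a hedged sketch of asking GPT-4 for website code through the same chat API. The brief and model name are our own placeholders, and the output would still need a human developer’s review before going anywhere near production:

# Minimal sketch: prompting GPT-4 to draft website code from a plain-English brief.
# Assumes the official `openai` Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

brief = (
    "Create a single HTML page for a small bakery: a header with the shop name, "
    "a three-item menu section, and a simple contact form. Keep the CSS inline."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a careful front-end developer."},
        {"role": "user", "content": brief},
    ],
)

print(response.choices[0].message.content)  # generated HTML/CSS, to be reviewed by hand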

Limitations.

Unfortunately (or fortunately, depending on how you look at it), that particular spark of genius that allows humans to speculate on current events isn’t there with GPT-4. Like a medieval historian, the AI is only really able to speculate on past events, having been trained on information from before September 2021. This has led to some fairly short-sighted responses regarding current or future events, complete with emotive reactions and factual errors. While it’s better at blocking disallowed content (OpenAI reports it is 82% less likely than ChatGPT to respond to requests for disallowed content, and 40% more likely to produce factual responses), it can still err on the side of bad taste by producing negative or misconstrued responses.

Which is to be expected. Like any promising young writer, GPT-4 is prone to mistakes or ‘hallucinations’, as they’re called. There are still vast limitations to the software, which become more apparent the more one uses it.

The human side of GPT-4.

GPT-4 is making its way into third-party products, such as Microsoft’s AI-powered Bing and other Microsoft products. This does mean that OpenAI has the potential to become more commercially driven, with the freely available platform already moving towards a paid model at the time of writing.

So, there’s still a need for humans after all?

No AI can match a human intellect, at least not with current technology. AIs are trained on vast amounts of human (and, realistically now, previous AI) data to create something that approximates a solution we like. As yet, an AI can’t be struck by the beauty of a sunset or moved by the poetry of Keats and generate its own unprompted response.

What is rapidly changing is the way we respond to and use AI interfaces. We can create more than ever, but humans must still be behind the architecture and design.

Consulting with IT strategists.

If you’re looking to utilise tools such as GPT in your own marketing and business approach, we can help. We have an understanding, backed by the research we do into new and emerging digital fields, of what works at a user level and what triggers that buying impulse.

So, even if you prefer to write the old-fashioned way, with a set of fleshy human hands, we’re able to help you turn that idea into compelling written copy and bring users to your page with proven, targeted strategies that hook in potential customers.

An AI can tell you how it should be done. We can tell you what’s coming next. Get in contact if you want to know more.