
What Can't GPT-4 Turbo Do Well?

Let's talk about GPT-4 Turbo and what it's not so great at. While it's super smart, there are some things it can find tricky. We'll break down where it might struggle a bit, giving you a clear picture of what to expect from this awesome but not flawless technology.



Join us as we explore the limits of GPT-4 Turbo!


About GPT-4 Turbo

GPT-4 Turbo is an upgraded version of OpenAI’s GPT-4 model. It’s a large multimodal model that can accept text or image inputs and output text.


Here are some key features:

  1. Larger Context Window: GPT-4 Turbo can handle up to 128,000 tokens, equivalent to around 300 pages of text. This allows GPT-4 Turbo to understand more information and make connections across broader documents, leading to more coherent conversations and thoughtful analysis.

  2. Updated Knowledge: GPT-4 Turbo's knowledge cutoff is April 2023, compared to September 2021 for the original GPT-4. This lets it discuss more recent events, research, and facts.

  3. Cost Efficiency: GPT-4 Turbo is cheaper for developers to use. Input tokens cost $0.01 per 1,000, a third of GPT-4's $0.03, and output tokens cost $0.03 per 1,000, half of GPT-4's $0.06 (see the cost sketch after this list).

  4. Image Inputs and Text-to-Speech: GPT-4 Turbo can accept images as inputs, and it launched alongside new text-to-speech capabilities in OpenAI's API.

  5. Faster Response Times: GPT-4 Turbo generates text almost five times faster than GPT-4, at roughly 48 tokens per second versus 10.

  6. Higher Accuracy: In some reported comparisons, GPT-4 Turbo reaches 87% accuracy versus 52% for GPT-4, though figures like these depend heavily on the benchmark used.

  7. More Verbose Output: On average, GPT-4 Turbo's responses are about twice as verbose as GPT-4's.
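
If you want to sanity-check those token and price figures against your own prompts, here's the promised cost sketch. It's a minimal estimate assuming the `tiktoken` package and the `cl100k_base` encoding used by GPT-4-family models; the prices are the launch rates quoted above and may change.

```python
# Minimal sketch: count tokens in a prompt and estimate input cost.
# Assumes `tiktoken` (pip install tiktoken); prices are launch rates.
import tiktoken

TURBO_PRICE_PER_1K = 0.01  # USD per 1,000 input tokens, GPT-4 Turbo
GPT4_PRICE_PER_1K = 0.03   # USD per 1,000 input tokens, GPT-4

def estimate_input_cost(prompt: str) -> None:
    encoding = tiktoken.get_encoding("cl100k_base")  # GPT-4-family encoding
    n_tokens = len(encoding.encode(prompt))
    print(f"{n_tokens} tokens")
    print(f"GPT-4 Turbo input: ${n_tokens / 1000 * TURBO_PRICE_PER_1K:.4f}")
    print(f"GPT-4 input:       ${n_tokens / 1000 * GPT4_PRICE_PER_1K:.4f}")

estimate_input_cost("Summarize this 300-page report in five bullet points.")
```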



GPT-4 Turbo Limitations

We've covered GPT-4 Turbo and its cool features. But let's be honest: it's not perfect. Now, let's look at some things it can't do so well:


Enhancement restriction: This means that if you want to apply image enhancements (like improving color, sharpness, or resolution), you can only do it to one image at a time within a single chat session. For example, if you have 5 images you want to enhance, you would need to start 5 separate chat sessions.
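
If you're scripting around this, the workaround is simply one fresh chat call per image. Here's a minimal sketch assuming the official `openai` Python client; the image URLs, model name, and prompt are placeholders, and the enhancement itself is whatever your deployment applies. The point is only that each image gets its own session.

```python
# Minimal sketch: one separate chat call (fresh session) per image,
# since enhancements apply to only one image per session.
# Assumes the official `openai` client; URLs and model name are placeholders.
from openai import OpenAI

client = OpenAI()
image_urls = [f"https://example.com/photo_{i}.jpg" for i in range(5)]  # hypothetical

for url in image_urls:
    # A fresh `messages` list per call acts as a fresh session for each image.
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in detail."},
                {"type": "image_url", "image_url": {"url": url}},
            ],
        }],
    )
    print(response.choices[0].message.content)
```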


Size limit: The maximum size for an image that GPT-4 Turbo can process is 20MB. If an image is larger than this, it won’t be processed. For example, a high-resolution photograph taken with a professional camera might exceed this size limit.
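
Before uploading, it's cheap to check the file size yourself. A minimal sketch in plain Python (the 20 MB figure comes straight from the limitation above):

```python
# Minimal sketch: skip any image over the 20 MB processing limit.
import os

MAX_BYTES = 20 * 1024 * 1024  # 20 MB

def within_size_limit(path: str) -> bool:
    size = os.path.getsize(path)
    if size > MAX_BYTES:
        print(f"Skipping {path}: {size / 1_000_000:.1f} MB exceeds the 20 MB limit")
        return False
    return True
```

If an image fails the check, downscaling or re-encoding it (for example with Pillow) usually brings it under the limit.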


Object grounding duplicates: When the model recognizes multiple instances of the same object in an image, it groups them under one label and bounding box, rather than creating separate labels for each instance. For example, if there are three apples in an image, the model might label them all as “apple” within a single bounding box, rather than creating separate boxes for each apple.


Low-resolution accuracy: When the model analyzes images at a low resolution to save processing time and tokens, it might not accurately recognize objects or text within the image. For example, small text or details might be missed.
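
One lever you do have in the OpenAI chat API is the `detail` setting on image inputs: "high" requests a higher-resolution pass at the cost of extra tokens. A minimal sketch, assuming the official `openai` client (the URL and model name are placeholders):

```python
# Minimal sketch: request high-detail image analysis so small text is
# less likely to be missed. "high" costs more tokens than the default "auto".
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does the fine print at the bottom say?"},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/receipt.jpg",
                              "detail": "high"},
            },
        ],
    }],
)
print(response.choices[0].message.content)
```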


Image chat restriction: You can only upload a maximum of 10 images per chat call, whether you’re using the Playground or the API. This means if you have more than 10 images, you’ll need to make multiple chat calls.
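
If you have more than 10 images, batching is straightforward. A minimal sketch of the chunking logic (the send step is left as a comment, following the same call shape as the earlier sketches):

```python
# Minimal sketch: split an image list into batches of at most 10,
# one chat call per batch.
def batches(items, size=10):
    for i in range(0, len(items), size):
        yield items[i:i + size]

image_urls = [f"https://example.com/img_{i}.jpg" for i in range(23)]  # hypothetical

for batch in batches(image_urls):
    content = [{"type": "text", "text": "Compare these images."}]
    content += [{"type": "image_url", "image_url": {"url": u}} for u in batch]
    # Send `content` as one user message in its own chat call.
    print(f"Would send one call with {len(batch)} images")
```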


Low-resolution processing: Just like with images, the model analyzes video frames at a low resolution. This might result in small objects or text within the video not being accurately captured.


File format and size: The model only supports MP4 and MOV video formats. While the Playground has a maximum video length of 3 minutes, the API doesn’t have this limitation.
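
A quick extension check saves a failed upload. Minimal sketch:

```python
# Minimal sketch: reject unsupported video formats up front.
from pathlib import Path

SUPPORTED_FORMATS = {".mp4", ".mov"}

def is_supported_video(path: str) -> bool:
    return Path(path).suffix.lower() in SUPPORTED_FORMATS

print(is_supported_video("demo.mov"))  # True
print(is_supported_video("demo.avi"))  # False
```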


Prompt limitations: Video prompts can only contain one video and no images. If you want to analyze another video or switch to images, you’ll need to clear the session in the Playground.


Limited frame selection: The service automatically selects 20 frames from the entire video to analyze. These frames can be spread evenly across the video or focused on a specific prompt. However, this might result in key moments or details being missed.
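
If those 20 automatic frames risk skipping a key moment, one workaround is to sample frames yourself and submit the important ones as images (subject to the 10-image cap above). A minimal sketch assuming the `opencv-python` package; the frame count and file name are placeholders:

```python
# Minimal sketch: sample N evenly spaced frames with OpenCV so you control
# which moments get analyzed. Assumes `opencv-python` is installed.
import cv2

def sample_frames(video_path: str, n: int = 10):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    if total > 0:
        for i in range(n):
            idx = int(i * (total - 1) / max(n - 1, 1))  # evenly spaced indices
            cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
            ok, frame = cap.read()
            if ok:
                frames.append(frame)
    cap.release()
    return frames

frames = sample_frames("demo.mp4", n=10)
print(f"Extracted {len(frames)} frames")
```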


Language support: Currently, the model primarily works with English when it comes to object grounding with video transcripts. Additionally, transcripts may not reliably capture lyrics in songs. For example, if a song is playing in the background of a video, the model might not accurately transcribe the lyrics.


Context window: Although GPT-4 Turbo's context window is large (128,000 tokens), it is still finite, so the model can only consider a limited amount of information when generating a response. For example, in a very long conversation, details from earlier turns can fall out of the window, leading to inconsistencies in its responses.
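
A common way to cope is to count tokens yourself and trim the oldest turns before each call. A minimal sketch assuming `tiktoken`; per-message overhead is ignored, so treat the count as a rough estimate:

```python
# Minimal sketch: trim the oldest conversation turns once the rough token
# count approaches the context window. Assumes `tiktoken`; counts are
# approximate (per-message overhead is ignored).
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")
MAX_CONTEXT_TOKENS = 128_000

def count_tokens(messages) -> int:
    # Assumes each message's "content" is a plain string.
    return sum(len(ENC.encode(m["content"])) for m in messages)

def trim_history(messages, budget=MAX_CONTEXT_TOKENS - 4_000):
    # Keep the system message (index 0); drop the oldest turns after it.
    trimmed = list(messages)
    while count_tokens(trimmed) > budget and len(trimmed) > 2:
        del trimmed[1]
    return trimmed
```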


Task specialization: While GPT-4 Turbo is a general-purpose model, it might not perform as well on specialized tasks as models specifically designed for those tasks. For example, a model trained specifically to translate between languages might outperform GPT-4 Turbo on translation tasks.


Creativity and originality: AI models like GPT-4 Turbo generate content based on patterns they’ve learned from their training data. They don’t create truly original content. For example, if you ask GPT-4 Turbo to write a poem, it will generate a poem based on patterns and structures it has seen in its training data, rather than creating a completely new and original poem.


Conclusion

To sum up, GPT-4 Turbo is super cool and can do a lot of amazing things. But it's not perfect. Knowing where it struggles helps us use it better and value what it's good at. As the tech improves, working through these challenges makes AI even more capable. So, embracing both the good and not-so-good parts of GPT-4 Turbo is how we make the most of its power in the growing world of artificial intelligence.
