Another day, another new text-to-image AI model hits the market – welcome Ideogram 2.0.
Having taken a closer look at the new FLUX AI image generator a couple of days ago, today we are checking out the latest from Ideogram.
Ideogram claims that its v2 comes with industry-leading capabilities in generating realistic images – which is a bold claim and caught our interest from the detection side of things. How realistic are these images, and can we still detect them?
Follow along as we put the leading AI image detectors to the test.
Let’s get started.
Ideogram is a Canada-based company founded in 2022 by Mohammad Norouzi, William Chan, Chitwan Saharia, and Jonathan Ho.
The company states its mission as helping everyone become more creative through more accessible generative AI.
To make this happen, the company entered the text-to-image AI market with Ideogram 0.2 in late 2023.
It quickly followed up with the advanced Ideogram v1 (1.0) in early 2024, which stood out from competitors by emphasizing design qualities over photorealism.
Less than a year and a funding round later, Ideogram released Ideogram 2.0 on August 21, 2024. The new model ups the ante on realistic image generation compared to v1 and ranks among the most capable text-to-image models available.
In more detail, Ideogram 2.0 introduces several new features that enhance user control and creativity:
The model also features improved “Describe” and “Magic Prompt” functions, which allow for more creative iterations and reimaginings of visual concepts.
Next to the improved technological capabilities, the new release includes several new components:
Ideogram claims its v2 model outperforms other leading text-to-image systems, including Midjourney and DALL-E 3, in two key quality metrics:
Since we determined last week that FLUX.1 – the hottest model around – is currently ahead of Midjourney and DALL-E 3 in generative capabilities, let’s compare Ideogram 2.0 against it:
| Feature | Ideogram 2.0 | FLUX.1 |
|---|---|---|
| Key Strength | Excels in user-friendliness | Excels in precision and complex prompts |
| Design Approach | Quick, stylish designs | Detailed outputs |
| Additional Feature | Flexible privacy options | Open-source nature |
So far, our experience aligns with these claims:
If you need a closer look into how Ideogram 2.0 holds up to competition in more technical and practical terms, check out this awesome resource by Artificial Analysis.
It’s time to put Ideogram 2.0 to the test. Can the top AI image detectors spot images made by this new model?
For this purpose, we’ve chosen four tools that have performed well in our testing:
For the detection test, we created three images of varying difficulty for these AI detectors to analyze:
1. Hardest to detect: A generic sunny, remote beach:
2. Moderately difficult to detect: Two salsa dancers:
3. Easiest to detect: A crowd celebrating and holding supportive signs:
We used the Magic Prompt on auto mode and realistic styles for all the images to simulate how a typical user would use this tool.
So how did it go?
**AI Image Detector Results**

| Detector | Image 1 (Beach) | Image 2 (Dancers) | Image 3 (Crowd) |
|---|---|---|---|
| AI or Not | 100% AI | 100% AI | 100% AI |
| Is it AI? | 10% AI | 81% AI | 81% AI |
| Hive Moderation | 21.9% AI | 65.2% AI | 52.7% AI |
| Winston AI | 91% AI | 1% AI | 1% AI |
The results weren’t as positive as last week’s FLUX.1 results, indicating that images created with Ideogram 2.0 are currently harder for AI detectors to spot.
As you can see, AI or Not absolutely crushed the competition – well done. To their credit, they’ve been very active on X (formerly Twitter), demonstrating their accuracy to a wider audience. They must know that they have something good going on.
Is it AI? also performed well in our assessment, confidently flagging both images containing people – though it missed the generic beach scene.
The results from the other tools were a mixed bag.
Hive Moderation was not very confidence-inspiring, but it got the overall picture somewhat right. It struggled with the most generic image of them all – a sandy beach with no real telltale signs.
Winston AI was the only tool besides AI or Not that correctly identified the hardest image. For the other two images, however, some further training is clearly needed. That said, their tool is still in beta, so there’s no need to worry yet.
We also tried out the well-known Fake Image Detector. It returns text verdicts rather than percentage scores, but in our tests it detected Ideogram images well.
So what are our recommendations until all AI detectors catch up?
Use AI or Not, or employ multiple tools simultaneously to be absolutely sure.
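If you want to combine several detectors programmatically, a simple majority vote over their scores is one option. The sketch below uses the “% AI” scores from our results table; the 50% threshold and the voting rule are our own assumptions for illustration – none of these tools share an official common API.

```python
# Sketch: combining verdicts from several AI image detectors via majority vote.
# Scores are the "% AI" values from our results table; the 50% threshold
# and the strict-majority rule are assumptions, not any detector's official logic.

def majority_vote(scores: dict[str, float], threshold: float = 50.0) -> bool:
    """Flag an image as AI-generated if a strict majority of detectors
    score it at or above the threshold."""
    votes = [score >= threshold for score in scores.values()]
    return sum(votes) > len(votes) / 2

# "% AI" scores for Image 2 (Dancers) from our test
dancers = {
    "AI or Not": 100.0,
    "Is it AI?": 81.0,
    "Hive Moderation": 65.2,
    "Winston AI": 1.0,
}

print(majority_vote(dancers))  # three of four detectors agree -> prints True
```

Note that under this strict-majority rule, the beach image (100%, 10%, 21.9%, 91%) would split two-to-two and not be flagged – one reason you might weight the more reliable detectors higher in practice.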
It’s important to note that this test should not be considered conclusive. It simply reflects our experience using these tools on the web as regular users would.
Ideogram 2.0 represents a significant leap forward in text-to-image AI technology, offering impressive capabilities in generating realistic images and rendering text accurately.
While it presents challenges for most current AI detectors, with only one tool identifying its outputs 100% correctly, this highlights the rapid evolution of generative AI.
For now, users seeking to identify AI-generated images should consider using multiple detectors or relying on the most up-to-date options like AI or Not.
However, since Ideogram 2.0 is so new to the market, we are confident that AI image detectors will catch up in the coming months.
So yes – Ideogram 2.0 images can be detected, but more caution is needed when choosing the right tool.
Happy detecting!