Is there anyone else who’s noticed that Gemini 2.5 Pro (the June update) is worse than the 05-06 version? It misses things that I wouldn’t have expected from the previous version of the model. In general, it makes a lot more incorrect assumptions and is much more sycophantic by default.

It would be great if Google would at least keep the previous version so users can roll back if they prefer.

9 Likes

Hi @Dennis1,

Welcome to the forum.

Thank you for your feedback. We appreciate you taking the time to share your thoughts with us, and we’ll be filing a bug. Can you please provide more details on the specific task where the 2.5 Pro (05-06) model outperformed the current 2.5 Pro model?

Yes, this has been noted many times on the forum. A member in another thread yesterday suggested providing context to Google, so I wrote a bug report with the aid of the current 2.5 Pro model.

For me personally, the Google Drive errors with 05-06 commenced around 10–11/06. I started the current project on May 30 and had no issues prior. I joined the forum and confirmed others were experiencing the same, then discovered the model’s scheduled deprecation on 19/06. I hypothesised the errors were a consequence of either token count or the upcoming deprecation. I then worked with the current model in tandem with 05-06 until 19/06. With the current version, new errors were present from the outset, and on 19/06 I noticed significant changes, which I have outlined in the bug report.

No answers as yet. Hope my feedback helps.

5 Likes

The problem with this question is that there isn’t one specific task. You just notice over the course of several hours that you’re fighting with the model much more than you were before. I haven’t been this annoyed since 3-25-exp was taken down.

6 Likes

Yes, I have gone down the rabbit hole a number of times troubleshooting. After a break from development (going back to Bard’s preview), I was really enjoying the 05-06 model; the frustration is getting beyond me, hence taking the time to do the analysis yesterday. I have been unable to find a changelog or error log, which would be great so as not to waste time, i.e. knowing the team is onto it, timeframes, workarounds, etc.

My question now: should I abandon chat threads earlier, retrain, or stick with it? I’m literally manually copying and pasting my threads to Docs for data integrity. I used the CLI to run a script to save to Drive, but it still requires me to copy and paste the response. I cannot for the life of me figure out how to export an entire chat thread; opening it in Chrome in order to web-scrape it doesn’t preserve the history. I could go on about the efforts I have investigated… If I could solve this problem alone I would be one happy little camper. :rofl:
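Until an official export exists, one workaround is to log each turn locally at the moment you already have the response text in hand (e.g. if you are driving the model via the API rather than the web UI). This is only a minimal sketch: the file name and helper names below are my own inventions, and it assumes you can capture each prompt/response pair as plain strings.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical append-only log file; JSON Lines keeps each turn on one line.
LOG_FILE = Path("chat_history.jsonl")

def log_turn(prompt: str, response: str, log_file: Path = LOG_FILE) -> None:
    """Append one prompt/response pair as a timestamped JSON record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    }
    with log_file.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def export_markdown(log_file: Path = LOG_FILE) -> str:
    """Render the whole logged thread as a single Markdown document."""
    lines = []
    for raw in log_file.read_text(encoding="utf-8").splitlines():
        rec = json.loads(raw)
        lines.append(f"**Prompt** ({rec['timestamp']}):\n{rec['prompt']}\n")
        lines.append(f"**Response:**\n{rec['response']}\n")
    return "\n".join(lines)
```

Because the log is append-only JSON Lines, a crash mid-session loses at most the current turn, and the Markdown export can be pasted into Docs (or pushed to Drive by your existing script) in one step instead of per-response.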

2 Likes

I absolutely do agree. The newest version of Gemini 2.5 Pro, the release version, truncates and summarizes responses even if you tell it not to, and hallucinates a lot when asked factual questions. I don’t like it summarizing responses all the time.

3 Likes

I think the amount of code/text generation has definitely been reduced, which is a shame.

2 Likes

Even 2.5 Flash has been drunk lately; was that caused by the CLI release?

1 Like

I confirm that. It is almost unusable in many cases, as it does not “listen” to prompts. It only scans prompts and training data for “keywords” and fills the gaps with its own hallucinations. And it ALWAYS tries to engineer a way out of actually doing the work (which starts with reading the prompt fully rather than skimming it). I am talking about the 2025-06-17 Pro version, used over the API. It is so bad I am unable to use it for work. The version from March was in a totally different league.

5 Likes

Gemini consistently misreads text in images, and despite over ten follow-up queries, it stubbornly insists on its misinterpretations and lectures me; it seems stuck in an unhinged state. It genuinely feels like dealing with Google’s tokenistic PC ideologues.

1 Like

Yeah, it’s terrible. It makes so many mistakes it’s like it’s set on “wrong answers only” mode; it literally struggles to produce one correct answer. I’ve moved to Claude.

1 Like

The current GA release is no longer capable of complex tasks; wait for a future iteration.

I tried hard to create comedic videos in VEO 3, but it constantly refuses, claiming “explicit content” or “violating safety”.

I can confirm. It is incredibly stupid now.

I wouldn’t even care, I’d go to another AI, even a free one like DeepSeek or Qwen, but I’m paying for this service, which should be paying me to use it given how stupid and bad it’s become. It’s like the roles have reversed: I’m correcting its prompts and answers, especially in coding tasks.