I've definitely experienced this.
I've used ChatGPT to write cover letters based on my resume, among other tasks.
I used to give it data and tell ChatGPT to "do X with this data". It worked great.
In a separate chat, I told it to "do Y with this data", and it also knocked it out of the park.
Weeks later, excited about the tech, I repeated the process. I told it to "do X with this data", and it did fine.
In a completely separate chat, I told it to "do Y with this data"... and instead it gave me X. I told it to "do Z with this data", and once again it would really rather just do X with it.
For a while now, I've had to feed it more context and more tailored prompts than I previously needed to.
The federation part was appreciated on my end, at least. This instance is a big part of the Lemmy experience, and it would have been odd for it to suddenly disappear.