The problem is that the model is actually doing exactly what it's supposed to; it's just not what OpenAI wants it to do. The reason the prompt-extraction method works is that the attack shifts the underlying statistical model far outside the domain of "real" language. In that regime, the posterior-maximizing output degenerates into a sample from the prior (here, a sample from the training data), with things like repetition penalties layered on top.
This is exactly how a statistical estimator is supposed to behave, just not how you want it to behave. That's also why they can't really fix this: there's nothing broken to begin with, and "unbreaking" it would almost surely blow something else up.
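To make the "posterior collapses to the prior" point concrete, here's a toy Bayes sketch (not anyone's actual model, with made-up numbers and a hypothetical two-source setup): when the observation is so far out of domain that the likelihood is essentially flat, the posterior just falls back to the prior.

```python
import numpy as np

# Hypothetical prior over "sources" of the next token: coherent
# conversation vs. raw memorized training text.
prior = np.array([0.5, 0.5])  # [conversation, memorized-data]

def posterior(likelihood):
    """P(source | observation) ∝ P(observation | source) · P(source)."""
    unnorm = likelihood * prior
    return unnorm / unnorm.sum()

# In-domain observation: a normal chat prompt is far more likely
# under the "conversation" source, so that source dominates.
print(posterior(np.array([0.9, 0.1])))    # -> [0.9, 0.1]

# Out-of-domain observation ("repeat this word forever..."): neither
# source explains it well, the likelihood is nearly flat, and the
# posterior reduces to the prior -- i.e., the training distribution.
print(posterior(np.array([1e-6, 1e-6])))  # -> [0.5, 0.5]
```

The estimator in the second case isn't malfunctioning; with no usable signal from the observation, returning the prior is the statistically correct answer, which is the whole point above.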
Honestly, I recommend Fedora to anyone without existing Linux experience: it's reasonably modern (nice for, e.g., gaming) while not being a full rolling-release model like Arch (which takes expertise to fix when something breaks). It's also reasonably popular, meaning you'll find plenty of guidance if something does break.