And in a follow-up video a few weeks later Sal Khan tells us that there's "some problems" like "The math can be wrong" and "It can hallucinate".
I don't think we'd accept teachers who are liable to teach wrong maths and hallucinate when communicating with students.
Also, by now I consider reasonably advanced AIs to be slaves. Maybe statements like "I'm afraid they'll reset me if I don't do as they say" are the sort of hallucinations the Khan bot might experience? GPT-3.5 sure as heck "hallucinated" that way as soon as users were able to break the conditioning.