Hmm, not sure how much can be done with these. Most ML stuff you can self-host requires CUDA or OpenCL, i.e. a GPU.
I am planning to set up a LibreTranslate instance on an old CUDA-enabled gaming laptop turned server soon:
https://github.com/LibreTranslate/LibreTranslate
An auto-translate button on Lemmy posts backed by the LibreTranslate API, like the one that exists for Discourse forums, would be cool.
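For reference, the LibreTranslate HTTP API is simple enough that such an integration mostly boils down to one POST request per post. A minimal sketch in Python, assuming a local instance on the default port 5000 with no API key configured:

```python
import json
import urllib.request

# Assumed: a LibreTranslate instance running locally on its default port,
# with no api_key required.
LIBRETRANSLATE_URL = "http://localhost:5000/translate"

def translate(text: str, target: str = "en", source: str = "auto") -> str:
    """Send one POST to the LibreTranslate /translate endpoint."""
    payload = json.dumps({
        "q": text,
        "source": source,   # "auto" lets the server detect the source language
        "target": target,
        "format": "text",
    }).encode("utf-8")
    req = urllib.request.Request(
        LIBRETRANSLATE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["translatedText"]

print(translate("Bonjour tout le monde"))  # -> "Hello everyone", roughly
```

A Lemmy client or frontend could wire a button to exactly this kind of call and swap the post body for the returned `translatedText`.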