I used to use HTTrack as well, but it's very outdated. Please let me know if anyone has a similar alternative.
Another option is to use the SingleFile browser extension. You can install it from your browser's extension marketplace.
SingleFile can download an entire web page into a single HTML file while preserving the layout and images. It basically inlines everything into one file, which is very handy. The drawback is that you'll have to do it manually on every page you want to save, but it works really well. Even the Google Maps embed in the page you linked got preserved (it's no longer interactive, though, because JavaScript isn't preserved).
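If you end up needing to save a lot of pages, there's also a command-line companion (single-file-cli on npm, if I remember right) that can be scripted. A rough sketch of how I'd use it, with placeholder URLs and filenames, assuming the basic `single-file <url> <output>` usage and a Chromium-based browser installed:

```bash
# rough sketch, assuming the single-file-cli npm package and its basic
# "single-file <url> <output>" usage (it drives a Chromium-based browser)
npm install -g single-file-cli

# save one page as a self-contained HTML file
single-file "https://example.com/some-page" some-page.html

# batch-save a list of URLs (urls.txt is a hypothetical one-URL-per-line file)
i=0
while read -r url; do
  i=$((i+1))
  single-file "$url" "page-$i.html"
done < urls.txt
```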
Oh, that is nice. Thanks for the link. That works great: basically the same effort as copy/pasting, but with far better results.
I've used ArchiveBox as a desktop app (on Linux Mint), which saves a snapshot of the URLs you feed it. It worked for the sites I needed when I was looking for an offline solution.
I had never heard of that app! I tried it with no luck, but I'm going to keep that in my pocket for later use. Thanks.
Seconding this. I've run the ArchiveBox Docker flavor on my media server for years. Everything I bookmark is automatically saved offline.
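For anyone wanting to try the same kind of setup, the rough shape of it is something like this (the paths, port, and flags here are from memory, so double-check against the ArchiveBox docs):

```bash
# rough sketch of the ArchiveBox Docker workflow, not my exact setup
mkdir ~/archivebox && cd ~/archivebox

# one-time init of the data directory
docker run -v "$PWD:/data" -it archivebox/archivebox init --setup

# add a URL to the archive
docker run -v "$PWD:/data" -it archivebox/archivebox add 'https://example.com'

# web UI for browsing snapshots
docker run -v "$PWD:/data" -p 8000:8000 archivebox/archivebox server 0.0.0.0:8000
```

In my case the `add` step is triggered automatically from my bookmarking service, but adding URLs by hand like this works the same way.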
You could try wget-2-zim and then open the .zim file with Kiwix, though you said you've already tried variations of wget -r. If you can get hold of all the files needed to render the page properly, stored in a directory, you can try using zimwriterfs to convert it to a .zim; that's all wget-2-zim does after it fetches the files.
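Doing that pipeline by hand looks roughly like this; the zimwriterfs metadata flags are from memory and the title/description/favicon values are just placeholders, so check the zim-tools docs for the exact required set:

```bash
# mirror the site into a local directory (flags you've probably already tried)
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent \
     "https://example.com/" -P site/

# pack the mirrored directory into a .zim file
# (the metadata values below are placeholders; favicon.ico must exist in the directory)
zimwriterfs --welcome=index.html \
            --favicon=favicon.ico \
            --language=eng \
            --title="Example Site" \
            --description="Offline copy of example.com" \
            --creator="example.com" \
            --publisher="me" \
            site/example.com/ example.zim

# then open example.zim in Kiwix
```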