this post was submitted on 09 Aug 2022
15 points (100.0% liked)

Lemmy Support

157 readers

Support / questions about Lemmy.

Matrix Space: #lemmy-space

founded 5 years ago

Over just 14 days our physical disk usage has increased from 52% to 59%. That's approximately 1.75 GB of disk space being gobbled up for unknown reasons.

At that rate, we'd be out of physical server space in 2-3 months. Of course, one solution would be to double our server disk size, which would double our monthly operating cost.

The 'pictrs' folder named '001' is 132 MB and the one named '002' is 2.2 GB. At first glance, this doesn't look like an image problem.

So, we are stumped and don't know what to do.

24 comments
[–] seahorse@midwest.social 4 points 2 years ago (4 children)

I had this EXACT problem. Truncate your docker logs. I did so when my storage filled up and it freed like 15 GB.
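
To confirm it's the docker logs eating the space, something like this should list the per-container log sizes (assuming the default json-file log driver and standard paths):

# list docker's per-container json log files, largest last
sudo du -sh /var/lib/docker/containers/*/*-json.log | sort -h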

[–] suspended@lemmy.ml 1 points 2 years ago

I've already done this by running the following command:

truncate -s 0 /var/lib/docker/containers/*/*-json.log

[–] suspended@lemmy.ml 2 points 2 years ago (2 children)

Found the largest file on our server and have no clue what it is and why it is so fucking huge!

[–] dessalines@lemmy.ml 6 points 2 years ago (1 children)

Looks like docker log files. We merged a fix for this into lemmy-ansible a while ago: https://github.com/LemmyNet/lemmy-ansible/pull/49

[–] suspended@lemmy.ml 1 points 2 years ago (2 children)

Will this fix be in the next release?

[–] smorks@lemmy.ca 1 points 2 years ago (1 children)

it's just a change to the docker-compose.yml file. so depending on how your instance is set up (using ansible or docker-compose), you could just make the change yourself.

[–] suspended@lemmy.ml 1 points 2 years ago (1 children)

I tried changing the docker-compose.yml file and it didn't work. It just threw some vague error.

[–] dessalines@lemmy.ml 3 points 2 years ago (1 children)

What are those errors specifically, and could you post your docker-compose.yml?

[–] suspended@lemmy.ml 1 points 2 years ago (2 children)

Here's the part that I've added to the docker-compose.yml:

Here's the error:

[–] smorks@lemmy.ca 3 points 2 years ago (1 children)

from the error message, it looks like you maybe don't have the correct amount of whitespace. here's a snippet of mine:

services:
  lemmy:
    image: dessalines/lemmy:0.16.6
    ports:
      - "127.0.0.1:8536:8536"
      - "127.0.0.1:6669:6669"
    restart: always
    environment:
      - RUST_LOG="warn,lemmy_server=info,lemmy_api=info,lemmy_api_common=info,lemmy_api_crud=info,lemmy_apub=info,lemmy_db_schema=info,lemmy_db_views=info,lemmy_db_views_actor=info,lemmy_db_views_moderator=info,lemmy_routes=info,lemmy_utils=info,lemmy_websocket=info"
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on:
      - postgres
      - pictrs
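    # cap docker's json log files for this container: rotate at 20 MB, keep at most 5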
    logging:
      options:
        max-size: "20m"
        max-file: "5"

i also added the 4 logging lines for each service listed in my docker-compose.yml file. hope this helps!
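
note that docker only applies logging options when a container is created, so after editing the file you'll probably need to recreate the containers for it to take effect, something like:

# recreate containers so the new logging options are picked up
docker-compose up -d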

[–] suspended@lemmy.ml 2 points 2 years ago

Yes. This did help and thank you.

[–] dessalines@lemmy.ml 2 points 2 years ago* (last edited 2 years ago) (1 children)

I have no idea where you are getting that from, but it doesn't match lemmy-ansible, or the PR I linked.

Also, please do not screenshot text; just copy-paste the entire file so I can see what's wrong with it.

[–] suspended@lemmy.ml 4 points 2 years ago

I looked more closely at the lemmy-ansible code and now our docker-compose.yml is functioning properly. Thanks for the heads up!

[–] dessalines@lemmy.ml 1 points 2 years ago

If you're using lemmy-ansible, follow the instructions there for upgrading.

[–] smorks@lemmy.ca 4 points 2 years ago (1 children)

i believe that's just a regular docker log file. i don't think docker shrinks its log files by default, so it's probably everything since you started your instance.

i'm just guessing though.

[–] d1tt0@lemmy.ml 3 points 2 years ago (1 children)

I also believe that by default docker does not shrink log files.

In the past I've used --log-opt max-size=10m or something similar to have docker keep the logs at 10 MB

[–] suspended@lemmy.ml 1 points 2 years ago

Would you mind sharing your code and where, precisely, to place it?
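
For reference, a common place for a global default is /etc/docker/daemon.json; a minimal sketch (example values only, restart the docker daemon afterwards) would look like:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

The per-service logging: options in the docker-compose.yml snippet above achieve the same thing for just the lemmy containers.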

[–] agarorn@feddit.de 1 points 2 years ago (2 children)

So 7% is 1.75 GB? In total you have only 25 GB? I hope you don't mind the naive question, but shouldn't that still be super cheap?

[–] Gaywallet 8 points 2 years ago* (last edited 2 years ago) (1 children)

Imagine the lemmy federation were 10x the size it currently is. Would 17.5 GB per 2 weeks be okay? What about 100x? Why is the answer to throw money and space at the problem?

If I look at the list of communities and set it to all, I don't see anywhere near the amount of content that should generate 1,750 MB in just 14 days.

If this is truly all content, and we're struggling with this today, what does the fediverse look like in a year? Five years? What if it approaches 1/100th the size of Reddit? When does a significant cost to federate become a downside of joining the fediverse and the default becomes a whitelist instead of a blacklist?

If this is not content, holy shit that's a lot of text generation. Did you know that 1,750 MB is equivalent to roughly 700,000 pages of text? Why is anywhere near a million pages of text being created in such a short timeframe, given how small the lemmy fediverse is?
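
Back-of-envelope behind that figure, assuming roughly 2,500 characters of plain text per page:

# 1,750 MB of plain text at ~2,500 characters per page
echo $(( 1750 * 1000 * 1000 / 2500 ))   # prints 700000 (pages)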

[–] Catradora_Stalinism@lemmy.ml 1 points 2 years ago (1 children)

ummm

might be the MLs

we talk

a lot

[–] Gaywallet 4 points 2 years ago (1 children)

I would be surprised if the full content of all posts by every ML on Lemmy exceeded 1m pages. Text does not take up a lot of space.

[–] Catradora_Stalinism@lemmy.ml 3 points 2 years ago

tis a joke, sorry

[–] suspended@lemmy.ml 5 points 2 years ago

It's $6 per month for the 25 GB, and it would increase to $12 per month if we expanded to 50 GB. We just don't understand why our disk usage would increase so dramatically over only 14 days.