Over just 14 days our physical disk usage has increased from 52% to 59%. That’s approximately 1.75 GB of disk space being gobbled up for unknown reasons.

At that rate, we’d be out of physical server space in 2-3 months. Of course, one solution would be to double our server disk size, which would double our monthly operating cost.

The ‘pictrs’ folder named ‘001’ is 132 MB and the one named ‘002’ is 2.2 GB. At first glance, this doesn’t look like an image problem.

So, we are stumped and don’t know what to do.
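
In case it helps, here is roughly what we have been running to try to narrow down where the space is going. The paths are guesses based on a standard lemmy-ansible install and the default Docker data root, so adjust them to your own setup:

    # what Docker itself is holding on to (images, containers, volumes, build cache)
    docker system df

    # size of each bind-mounted data folder (path assumes the lemmy-ansible layout)
    sudo du -sh /srv/lemmy/*/volumes/*

    # size of the per-container json logs, which grow without limit unless a
    # rotation cap is set (assumes the default json-file logging driver)
    sudo du -sh /var/lib/docker/containers/*/*-json.log | sort -h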

    • from the error message, it looks like you may not have the correct amount of whitespace. here’s a snippet of mine:

      services:
        lemmy:
          image: dessalines/lemmy:0.16.6
          ports:
            - "127.0.0.1:8536:8536"
            - "127.0.0.1:6669:6669"
          restart: always
          environment:
            - RUST_LOG="warn,lemmy_server=info,lemmy_api=info,lemmy_api_common=info,lemmy_api_crud=info,lemmy_apub=info,lemmy_db_schema=info,lemmy_db_views=info,lemmy_db_views_actor=info,lemmy_db_views_moderator=info,lemmy_routes=info,lemmy_utils=info,lemmy_websocket=info"
          volumes:
            - ./lemmy.hjson:/config/config.hjson
          depends_on:
            - postgres
            - pictrs
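          # cap each container's json logs at 5 files x 20 MB so they can't fill the disk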
          logging:
            options:
              max-size: "20m"
              max-file: "5"
      

      i also added the 4 logging lines for each service listed in my docker-compose.yml file. hope this helps!
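
      one more thing worth doing after you edit the file: recreate the containers so the new logging options actually apply, then double-check them. quick sketch (the container name here is just an example, use whatever docker ps shows on your box):

      # recreate containers with the updated compose file
      docker-compose up -d

      # confirm the log rotation options took effect on the lemmy container
      docker inspect --format '{{ .HostConfig.LogConfig }}' lemmy_lemmy_1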

    • I have no idea where you are getting that from, but it doesn’t match lemmy-ansible or the PR I linked.

      Also, please do not screenshot text; just copy-paste the entire file so I can see what’s wrong with it.