Migrating static data to Digital Ocean's "block storage"?

My instance’s disk filled up this weekend. Time to get some more space for all those attachments.

I’m running it in a Docker container on a Digital Ocean droplet. Has anyone had any experience migrating their attachments to DO’s Block Storage?

For that matter, are there any guides out there on migrating one’s attachments to external storage, period? I can’t find anything about that in either of the documentation repos.

I don’t have Mastodon experience, but once you have block storage set up on DO, I believe the task would be equivalent to just moving the data to another directory. For example, you might mount the block storage as /other and then move the data there.

I’m curious: do you have a resource for how to set up Mastodon in Docker? I’m working on setting up Mastodon myself and would much rather do it in Docker. Is there a guide?

Also, Docker has volumes, which can be mapped to the underlying file system. Are you using Docker volumes?

One way to do it would be to create the DO block storage, map a Docker volume to the block storage mount point, copy the data in, and then change your container config to use the new volume.
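For the record, with a plain bind mount that could look something like this. The volume path, image tag, and container paths here are assumptions for illustration (DO mounts volumes under /mnt/<volume_name> by default, and the official image keeps attachments in /mastodon/public/system), not the original poster’s actual setup:

```shell
# Hypothetical sketch: bind-mount the block storage mount point over
# the attachment directory inside the container, so new attachments
# land on the volume instead of the droplet's disk.
docker run -d \
  --name mastodon-web \
  -v /mnt/volume_nyc1_01/system:/mastodon/public/system \
  tootsuite/mastodon \
  bundle exec rails s -p 3000
```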

Yeah, that’s what it looks like right now. I gotta sit down and really reeeaaaddd the block storage docs. And squint hard at some of the config files too probably.

I don’t think I’m using any Docker volumes; I’ve just followed the guide in the now-deprecated GitHub docs repo: documentation/Docker-Guide.md at master · tootsuite/documentation · GitHub

(Also here are some notes on the whys and wherefores of moving one’s data to Amazon’s S3 that I got pointed to on Masto.)

I got it done! It turned out to be pretty simple: make the new block storage volume, get it mounted, copy public/system to the new drive, rename public/system to something else (in case things went wrong), and make a link to the new drive’s copy. Made a test post with an image, then took the instance down and brought it back up, to make sure everything was still hooked up. It was, so I rm’d the old public/system.
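For anyone following along, those steps boil down to something like the function below. The paths passed in are assumptions for the sake of the sketch (on DO, volumes mount under /mnt/<volume_name> by default):

```shell
# A sketch of the steps above, as a shell function. The example paths
# in the comments are assumptions, not the poster's actual layout.
move_system_to_volume() {
  app="$1"      # Mastodon root, e.g. /home/mastodon/live
  volume="$2"   # block storage mount point, e.g. /mnt/volume_nyc1_01

  cp -a "$app/public/system" "$volume/system"       # copy attachments over
  mv "$app/public/system" "$app/public/system.old"  # keep the original, just in case
  ln -s "$volume/system" "$app/public/system"       # link the new copy in
}

# After a successful test post: rm -rf "$app/public/system.old"
```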

It might blow up later but it’s working so far. So I came here to answer my own question and not be DenverCoder9. :partying_face:

(Also, while doing this I learnt how to get a progress gauge on a huge copy job like this: rsync -a --info=progress2 --no-i-r SOURCE DEST instead of cp -ar SOURCE DEST. It will probably hang for a while at first while it grovels through many gigabytes of files, but then it will start giving you feedback.)

I am using AWS S3 at the moment in order to avoid space issues. My plan is to move to Wasabi (S3-compatible) longer term, perhaps sooner if there are any Black Friday deals.