Over time, Lemmy instances are going to keep acquiring more and more data. Even in the best case, where they are not caching content and are only storing the data posted to communities local to the server, there will still be virtually limitless growth in server storage requirements. Eventually, it may reach a point where it is no longer economically feasible to host all of the infrastructure needed to keep expanding the server’s storage. What happens at that point? Will servers begin to periodically purge old content? I have concerns that there will be a permanent horizon (and as Lemmy becomes more popular, the rate of growth in storage requirements will also increase, bringing that horizon closer) beyond which old - and still very useful - data will cease to exist. Is there any plan to archive this old data?
Pictrs 0.4 recently added support for object storage. This is fantastic, because object storage is dirt cheap compared to traditional block storage (like a VM filesystem). This helps a lot for image storage, which is a large part of the problem, but it’s not the whole problem.
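For anyone curious what that looks like in practice, here is a rough sketch of the object-storage section of a pict-rs config. The key names here are from memory and are assumptions on my part, so treat this as an illustration of the shape and check the pict-rs docs for your version:

```toml
# Sketch only: key names are assumed, verify against the pict-rs 0.4 documentation.
[store]
type = "object_storage"
endpoint = "https://s3.example.com"  # any S3-compatible endpoint
bucket_name = "pictrs-media"
region = "us-east-1"
access_key = "YOUR_ACCESS_KEY"
secret_key = "YOUR_SECRET_KEY"
use_path_style = true                # often required for non-AWS providers
```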
I know Lemmy uses Postgres for everything else, but they should really invest time into moving towards something more sustainable for long term/permanent hosting. Paid Postgres services are obscenely upcharged and prohibitively expensive, so that’s not an option.
I’m armchair architecting here, so I’m not sure what that would look like for Lemmy (Cloudflare KV? Redis?).
Still, even my own private instance has been growing at a rate of about 700MB per day, and I don’t even subscribe to that many things. I can’t imagine what the major instances are dealing with. This isn’t sustainable unless we want to start purging old data, which will kill Lemmy long term.
EDIT: Turns out ~90% of my Lemmy data is just for debugging and not needed:
https://github.com/LemmyNet/lemmy/issues/3103#issuecomment-1631643416
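If anyone else wants to see where their database growth is actually going, a standard Postgres query like this lists the largest tables (nothing Lemmy-specific about it):

```sql
-- List tables by total size (data plus indexes), largest first.
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```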
I’m not really sure that a K/V service is a more scalable option than Postgres for storing text posts and the like. If you’re not performing complex queries or requiring microsecond latencies then Postgres doesn’t require that much compute or memory.
Object storage for pictrs is definitely a fantastic addition, though.
Is the 700MB the Postgres data, or everything including the images?
I’m under the impression that text should be very cheap to store inside postgres.
Keep in mind that you are also storing metadata for the post (e.g. creation time), relations (e.g. which user posted it), and indexes.
Might not be much now but these things really add up over the years.
Yes, but those are in general a few bytes at most. The average comment will be less than 1KB, and the metadata that goes with it will be barely more.
On the other hand, most images will be around 1MB, or roughly 1,000 times larger. Sure, it depends on the type of instance, but text should be a long way from filling a hard drive. From what I’ve seen on GitHub, the database size is actually mostly debugging information, so that might explain the weirdness.
On average, 500MB is Postgres, 200MB is Pictrs thumbnails. Postgres is growing faster than Pictrs is.
My local instance that I run for myself is about a week old. It has 2.5G in pictrs and 609M in Postgres. One of those things that’ll vary for every setup.
It would be even better if it could also leverage IPFS. That way we could have unique identifiers per media object, and hence deduplication across a P2P network, which in my opinion fits the federated model better. I have been thinking about building such an alternative media backend for a while.
AWS Postgres instances aren’t that expensive, and they handle upgrades and backups for you.
That said, I’m interested in distributed storage, and maybe this fall/winter when I get some time off I’ll try making a Lemmy fork that’s based on a distributed hash table. There are going to be a ton of issues (e.g. data will be immutable), but I have a few ideas on how to mitigate most of the issues I know about.
The largest table holds data that is only needed by Lemmy briefly. There is a scheduled job to clear it… Every 6 months. There are active discussions on how best to handle this.
On my instance I’ve set a cronjob to delete everything but the most recent 100k rows of that table every hour.
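For reference, the hourly job boils down to one statement along these lines. This is just a sketch: I’m assuming the table in question is the activity table with an autoincrementing id column, so adjust the names for your schema:

```sql
-- Keep only the newest 100k rows of the (assumed) activity table, delete the rest.
DELETE FROM activity
WHERE id NOT IN (
    SELECT id
    FROM activity
    ORDER BY id DESC
    LIMIT 100000
);
```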
I saw that issue, and then I saw people having problems after clearing it, so I’m just going to wait until they figure that out in a stable version. Looking forward to it though!
@[email protected] @[email protected] Can either of you link to that discussion please?
It looks like the issue I was referring to has since been edited, as it’s not actually relevant to clearing this database bloat:
I mean, that post is exactly about clearing database bloat. The bulk of the bloat is in the activity table (going by the comments in the thread).
Yeah, I meant the comments that have been hidden previously made it seem like clearing it would be an issue, so I hesitated to clear it. Now that those comments have been edited, it looks like it won’t break anything after all.
Gotcha. Thanks for pointing me in that direction. Seems like that table’s only useful for debugging federation issues. Not sure why they’d want to keep it for 6 months.
Isn’t it mostly pictures and movies taking up space? Posts and comments that are just text don’t take up much.
I would be fine with text being kept forever while pictures and movies get deleted after some time.
Just think of all those old, helpful forum posts from years past with tinypic and Photobucket links that are dead. I agree memes can probably die out over time, but anything informative would be bad to lose, imo.
For large instances, pictures are probably the bigger consumer of space, but for small instances the database size is the bigger issue because of federation. Also, mass storage for media is cheap; fast storage for databases is not. With my host I can get 1TB of object storage for $5 a month, while attached NVMe storage is $1 per month per 10GB.
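To put numbers on that: at those rates, 1TB of attached NVMe works out to about $100 a month ($1 per 10GB times 100), versus $5 a month for 1TB of object storage, roughly a 20x difference.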
For my small instance the database is almost 4x as large as pictrs, and growing fast.
There is a good writeup on how to do the migration here. I went through it myself since I host my tiny Lemmy instance on an AWS EC2 instance. It went pretty smoothly, but obviously larger instances will have to take longer downtime to perform the migration.
Hey, that’s a Vultr guide! I use Vultr, thanks!
By the way, how are your costs on EC2? My understanding is that hosting on EC2 would be cost prohibitive from data transfer costs alone, not to mention their monthly rates for instances are pretty much never below the cost of a comparable VPS.
Now if only someone could do this for the Postgres data. I wonder if S3FS would be able to handle the load of a running database; that would be a nice way to save costs.
Currently I’m just running a single user instance on a t2.micro. I’ve definitely locked it up at least twice after subscribing to a big batch of external communities, so it’s definitely undersized if I were to open it up to more users. I only have one other small service running on that instance, though, so Lemmy is using the bulk of that capacity, at least when it’s got work to do.
Costs are about $11.25 a month for the instance and about $2.50 for block storage (which is oversized now that pict-rs is on S3). I’m guessing that pict-rs s3 costs will be just a few pennies a day unless I start posting a lot on my own instance, probably less than a dollar a month.
Data transfer costs for me are zero though. I’m not using a load balancer or moving things between regions so I don’t expect that to change.
As for the data transfer costs, any network data originating from AWS that hits an external network (an end user or another region) typically will incur a charge. To quote their blog post:
A general rule of thumb is that all traffic originating from the internet into AWS enters for free, but traffic exiting AWS is chargeable outside of the free tier—typically in the $0.08–$0.12 range per GB, though some response traffic egress can be free. The free tier provides 100GB of free data transfer out per month as of December 1, 2021.
So you won’t be charged for incoming federated content, but serving content to the end user will count as traffic exiting AWS. I am not sure of your exact setup (AWS pricing is complex) but typically this is charged. This is probably negligible for a single-user instance, but I would be careful serving images from your instance to popular instances as this could incur unexpected costs.
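As a rough worked example using the numbers quoted above: the first 100GB out per month is free, so an instance pushing, say, 300GB of egress a month would pay for the remaining 200GB, which at roughly $0.09/GB comes to about $18; a single-user instance staying under the 100GB allowance pays nothing for transfer.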
Just FYI, you could save about $5 a month and get 2x the performance if you moved that to a VPS not on AWS. $11 a month for t2.micro, especially if it’s locking up, is basically you being scammed if I’m being honest 😅
AWS isn’t really designed for long running tasks like this unless you get a long term commitment discount. It’s intended for enterprises and priced that way. For a hobbyist like you, I’d definitely recommend Vultr or something.
Also, be careful about those bandwidth costs. It’s almost never free to serve data out like that. You may not be using a load balancer, but double-check those bandwidth costs; I remember paying for bandwidth I didn’t expect.
Definitely consider moving to a $5 or $10 instance on Vultr, they have block storage too. You could either save money, or spend the same for 3-4x the performance.
Yeah it’s likely that I’ll move this eventually. This instance was only set up so I had a test environment to learn AWS.
I’m in a similar boat: I’m gaining about 300 MB/day on my small instance, which doesn’t yet have any local communities.
One way to approach the geometric storage growth would be to not cache everything everywhere all at once. With 1000+ instances, storing an object on a few of them would be okay if others can pull it in on demand, using typical caching strategies like access frequency, aging, etc.
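Purely as an illustration of the aging part of that idea, an instance could run something like this periodically. The cached_media table and its columns here are hypothetical, just to show the shape of the policy:

```sql
-- Hypothetical schema: cached_media(id, origin_instance, local_path, last_accessed).
-- Drop locally cached copies of remote media nobody has requested in 90 days;
-- the canonical copy still lives on the origin instance and can be re-fetched on demand.
DELETE FROM cached_media
WHERE origin_instance <> 'this-instance.example'
  AND last_accessed < now() - INTERVAL '90 days';
```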
This is a great idea. Instances will eventually need to agree on common storage areas, even if they don’t all allow the same content on their instance. Those savings would be huge in the long run.
Premature optimization is not good. Content here is not very storage intensive, so I would not make an issue of it yet. Postgres can handle billions of rows when indexed right.
Sounds like the federated instances should consider opening up for donations and paid features like comment awards and animated shit like Discord or Reddit.
I’m all for donations, but comment awards just sucked tbh. Opening Reddit felt like opening some microtransaction-ridden mobile game.
I liked Reddit Gold when that’s all there was. A super-upvote conveying “I’d pay money for this posting”.
The weird tiers they added later weren’t an improvement on that, but fine, I guess. And the stupid icon thingies they have now? They’re horrible. So, I’d be in favour of something like the old Reddit Gold, but I wonder how it could even work in the fediverse.
Not a fan of any monetary way to give extra visibility or attention to some comment. I think there’s a kind of slippery slope involved in such dynamics. Just my opinion though, nothing wrong with yours.
I think that’s an absolutely valid concern.
In the fediverse, we have some tools against abuse of this though because every Gold application would be public just like upvotes are.
I worry that these sorts of things would end up turning the site into a popularity contest (or, well, more of a popularity contest than these sorts of sites already are; that being said, I’m quite proud of Lemmy currently, as it appears to be resisting that). Also, I’m not entirely sure how things like paid comment awards would work with everything being federated.
I don’t think awards would make it any more of a popularity contest than upvotes already do. And it would be nice to have an incentivised way to support server costs. How it would work with federation is a good question, though.
Not every feature needs to be federated.
We don’t have karma, so up/downvotes shouldn’t really matter to individuals. But I agree it would be great to have incentives to help out the owners.
What about custom stuff? Like custom emojis, themes, fonts, etc? Those aren’t really necessary but still gives you something in return. Or anything similar, really. I’m worried about awards having an effect like the karma system.
Just a note, lemmy does have a karma system. The default UI doesn’t show this but I believe apps like wefwef/memmy expose this data.
Yeah, Connect shows it too. But in my experience, at least with connect, it’s not reliable. The value changes (sometimes drastically). I noticed this early when my score went from ~80 to ~40 and checked if I had any comments/posts that were suddenly heavily downvoted (there were none).
Edit: Just checked with wefwef - points aren’t even the same between apps.
Edit 2: here’s a short discussion that touches on what I’ve mentioned.
Yeah, no. No to paid features. That’s how we ended up with reddit.
No, we ended up with Reddit because of top-down leadership stupidity. I have no problem with Reddit’s awards and whatnot; I have a problem with them locking out third-party apps and limiting the mobile web experience.
If you don’t like an instance’s profit model, just move to another one or host your own. There are plenty to choose from, so competition will keep away the worst of it.
My preference is for monetary subscribers to get access to new features early and to get more of limited features (e.g. if everyone gets X awards to distribute per day, subs get 2X or something), but for features to eventually make their way to everyone else. This allows A/B testing in an interesting way while funding the project. Since the project is FOSS, individual instances can decide to roll them out to everyone or block usage of these optional features from other instances.
If instance admins will be paying hundreds per month for hosting, they should have some way outside of donations to recoup that cost (plus their time spent). I doubt anyone will get rich from it, but hopefully it’ll be enough to help admins offset the costs.
I’d be ok with something like a “donor” flair.
I think we could do some kind of profile badge if you donated. It wouldn’t influence the way people vote; you’d only see it if you clicked on their profile.
Maybe the lemmy.world admins can answer this? @michelleG
The long term solution is something like IPFS object storage that’s read only for everyone but the author instance. One copy of the data but all instances can read it and it’s stored forever in a redundant medium with bitrot protection.
No thanks, don’t need crypto bullshit
Because IPFS isn’t free and gets paid in Filecoin, which is exactly what it sounds like. Just crypto bullshit.
@mojo Just keep telling people you don’t know what IPFS is without coming outright and saying it. Lol.
“IpFs GeTs PaId In FiLe CoIn”
IPFS is a protocol, you nitwit. That’s like saying “ActivityPub gets paid in Filecoin.” Makes no fucking sense. Build a Fediverse layer on IPFS, no crypto needed. FFS, get educated before you start trying to talk to adults.
Jesus… just stop.
Delete it all! The ol’ Internet Archive’ll take care of it. slaps hood
Personally I think we should add a differentiation between the storage policies of content which is owned by your own instance and content that federates from other instances.
The former should be kept for a long time (forever?), while the latter can be cleared more regularly.
deleted by creator
deleted by creator
Honestly, selective purging might work. There are definitely some communities we should keep, but if you purge something like !memes, I don’t think anyone would care.
deleted by creator
if you also target low score posts first
That’s not a good metric either. There are many low scored posts I found on Reddit that were immensely helpful for some niche issue I was having.
While I’d still prefer to keep it, I agree that content meant for entertainment, such as memes, isn’t as valuable for long-term archival. You can always get entertainment in so many places and forms, but you can’t revive lost knowledge.
The most value memes could have would be for historical analysis.
deleted by creator
i.e. oldest posts && least liked first
This would pretty much automatically throw out all troubleshooting posts. These sorts of posts, very often, don’t receive many likes, as that is not their purpose. On top of that, there has been many a time that I have been saved by finding some ancient forum post that solved my problem.
deleted by creator
deleted by creator
The real takeaway here is that we are all bad at storing the kind of knowledge you’ll find in a troubleshooting post.
Perhaps there can be a Lemmy instance that scrapes and mirrors troubleshooting posts across other instances.
I’m sure I won’t miss this post in one month or even one day.
Well, maybe you won’t, but others might. That was the great thing about Reddit. Found a sub that might be interesting? Browse its top content of the last ten years and you’ll see. Have a specific programming question? Google it with site:reddit.com and you’ll likely find a good discussion on it. Reading an old interview somewhere and wondering if someone ever fact-checked that one statement? The guys on Reddit likely have.
Even before the blackout, it happened way too often that you stumbled upon an interesting Reddit thread only to see that one of the central comments had been deliberately deleted by its author, making the whole thing less helpful. It would be a shame if the system itself deleted even more content on top of that.
deleted by creator
Why are people downvoting this? It’s just practical. If you don’t reduce the amount of storage you use, the cost of storing it will rise constantly and steadily; will donations or personal budgets keep up? Maybe on lemmy.world or other mega instances, but that just won’t be the case for everybody. I think purging old content is going to be a reality eventually, even if it takes a really long time before it catches up to the larger instances. And it’s going to be OK as long as, as this person suggested, the rules for purging old stuff are tenable for everybody.
For example, do lemmy.world, lemm.ee, and sh.itjust.works really NEED to keep each other’s entire federated post history in perpetuity? As these instances grow larger, wouldn’t it make sense to start purging very old duplicate content between them? Stuff that hasn’t been accessed on the instance in, say, over a year? Mind you, I believe that before we get to that point, there will be other systems in place. For example, the Reddit archive sites were never run by Reddit, and they often contained ads or other monetization strategies. Donations can keep the most recent or relevant content up on the instances, but somebody somewhere is going to have to pay for this content to stay out there. For all we know, it’s going to be fucking Google and their seemingly unlimited cache. For all we know, some person at Google is spending their 20% personal project time subscribing a bot to everything on the fediverse and collecting data for some kind of new search engine right this very second on Google’s hardware.
Anyway, just some food for thought.
People always assume they deserve to have stuff hosted on the internet forever for free. I’ll donate a good chunk for my instance for sure, but I’m not going to be shelling out hundreds of dollars a month for it either.
If people care so much about it staying up forever they can start donating.