After a machine learning librarian released and then deleted a dataset of one million Bluesky posts, several other, even bigger datasets have appeared in its place—including one of almost 300 million non-anonymized posts.
I'm not necessarily expecting any legislation; it might just be the simple inequality of you running one instance while they run a bunch of datacenters.
What's 1 TB/s more or less to them? A rounding error.
Big names scrape the whole web all the time. Best case, they'll have a scraper optimized for federated networks; worst case, they'll scrape it like any other website and not even notice the difference.
I don't think they're optimising much at all. It's likely just a modified web crawler, minus the throttling that normal search-engine crawlers use, following links recursively. Then probably some basic parsing, or even parsing with an AI, to prepare the data for training another AI model.
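For what it's worth, that kind of crawler barely takes any code. Here's a minimal sketch of the naive approach described above, pure standard library, no robots.txt check, no per-host throttling; the seed URL and depth limit are placeholders, and a real pipeline would also strip pages down to text for the corpus:

```python
# Minimal sketch of a naive recursive crawler: fetch a page, follow every
# link, repeat. Deliberately omits robots.txt handling and rate limiting,
# which is exactly what makes this kind of scraper hard on small instances.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(url, seen, depth):
    """Fetch a page and recurse into every link found, up to `depth` hops."""
    if depth == 0 or url in seen:
        return
    seen.add(url)
    try:
        with urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
    except (OSError, ValueError):
        return  # unreachable host, HTTP error, or unsupported URL scheme
    # "Parsing" here is just link extraction; a data-collection pipeline
    # would also save the page text at this point.
    parser = LinkExtractor()
    parser.feed(html)
    for href in parser.links:
        nxt = urljoin(url, href)
        if urlparse(nxt).scheme in ("http", "https"):
            crawl(nxt, seen, depth - 1)


if __name__ == "__main__":
    crawl("https://example.com/", seen=set(), depth=3)  # placeholder seed
```

Point a loop like that at a federated network, where every post links to profiles, replies, and other instances, and it will happily hammer whatever it finds.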