  • Communicating on a platform you don’t own and can’t control seems very shortsighted.

    I feel like this would be a much more realistic take if social media more broadly were all federated and anyone’s independent instance could still communicate with the others, but that’s unfortunately not the case.

    For a politician, which is better for their campaign? Starting an independent platform they entirely own and control, but with no local users to start out with, or having an account on an existing platform with millions and millions of users?

    Obviously, even though in the first example they would have 100% control over their infrastructure, they wouldn’t exactly be spreading their message very far. They could always publish simultaneously on both platforms, but that still doesn’t mean much if their own platform has no users. The platform that already has many millions of users, however, can instantly grant them reach, which is kind of the point of being on social media in the first place.

    On your point about a bot, I’m assuming you mean more like a bridge mechanism that cross-posts from one platform to another. Correct me if I’m wrong, but I believe AOC, at least, posts a lot of similar messaging on both Twitter and Bluesky rather than staying isolated on one or the other. It’s not exactly the same thing, but it has a similar effect.

    In an ideal world, everyone could easily host their own Mastodon server and communicate with others without being tied to a platform. Unfortunately, we still live in a world where the network effect keeps people trapped in corporate social media silos, and there’s only so much an individual politician can do to change that without harming their own ability to get their message out to the public.


  • Nobody left on that platform is going to be convinced of anything anymore.

    I’d beg to differ. Although it’s true that the neo-Nazis and generally just far-right freaks on there have come to far outnumber everyday people, that doesn’t mean those people don’t exist anymore.

    I always bring this up in conversations about leaving social networks, because if you don’t understand the network effect, it will warp your entire perspective on why people stay on shitty platforms in the first place. The network effect is what keeps people hooked on these platforms, even when the owner becomes a literal neo-Nazi.

    The people who have already left are the ones that are capable of and willing to sacrifice the scale, reach, and history that Twitter has, in the hopes that whatever platform they move to will treat them better. Leaving Twitter means deleting your digital history, erasing every connection you’ve made on the platform, and entirely cutting all of your messaging off from anyone who hasn’t yet left.

    AOC is already on alternative platforms like Bluesky, so the people who are willing and able to move, and who would otherwise have stayed solely because she was still on Twitter, have already done so. The people who remain do not remain because of her; they remain because of everybody else.

    Yes, the neo-Nazis on there still outnumber the average person, but there are also quite a few average people still on Twitter. Don’t forget that the average person doesn’t seem to care when the companies they buy products from exploit child labor, fund wars that keep oil prices low, and suppress the wages of workers in their own communities. The average person simply does not have the will to sacrifice what leaving a large platform like Twitter demands, so they remain there.

    If AOC didn’t benefit politically from being on Twitter, she would have left entirely and deleted her account a while ago.




  • Presearch is not fully decentralized.

    The services that manage advertising, staking/marketplace/rewards functionality, and unnamed “other critical Presearch services” are all “centrally managed by Presearch,” according to their own documentation.

    The nodes that actually help scrape and serve content are also reliant on Presearch’s centralized servers. Every search must go through Presearch’s “Node Gateway Server,” which is centrally managed by them and which strips identifying metadata and IP information from each request.
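    To make that flow concrete, here’s a minimal sketch of what a metadata-stripping gateway looks like (hypothetical Python and hypothetical names, my own illustration; Presearch hasn’t published this code). Note that the gateway, not the user, picks the node:

    ```python
    # Sketch of a metadata-stripping gateway (hypothetical; not Presearch's
    # actual code). The gateway takes the user's request, drops identifying
    # headers, and forwards only the query to a node *it* selects.

    IDENTIFYING_HEADERS = {"user-agent", "cookie", "x-forwarded-for", "referer"}

    def strip_identifying_data(request: dict) -> dict:
        """Return a copy of the request with identifying metadata removed."""
        clean = {k: v for k, v in request["headers"].items()
                 if k.lower() not in IDENTIFYING_HEADERS}
        return {"query": request["query"], "headers": clean}

    def gateway_handle(request: dict, nodes: list) -> dict:
        """The gateway picks the node; the user has no say and no visibility."""
        node = nodes[0]  # could be a volunteer node, or one Presearch runs itself
        return node(strip_identifying_data(request))

    # Demo with a fake node that just echoes what it received.
    fake_node = lambda req: {"results": [], "node_saw": req}
    req = {"query": "lemmy", "headers": {"User-Agent": "Firefox", "Cookie": "id=1"}}
    print(gateway_handle(req, [fake_node]))  # node sees the query, not the cookies
    ```

    The privacy benefit is real, but so is the blind spot: the node-selection step is entirely opaque to the user.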

    That central server then determines where your request goes: it could be routed to open nodes run by volunteers, or to Presearch’s own nodes. You cannot verify which, because of how the network is structured.

    Presearch’s search index is not decentralized. It’s a frontend for other indexes (e.g., it outsources queries to other search engines, databases, and APIs for whatever services it’s configured to use). This means it does not actually have an index that is independent of those central services. I’ll give it a pass here, since most search engines work this way today, but many of them are developing their own indexes, which are much more robust than what Presearch seems to be doing.
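    For illustration, this is roughly what the meta-search pattern looks like (a hypothetical sketch of the general approach described in their docs, not Presearch’s actual implementation). Every query is fanned out to upstream services and merged; no local index is ever consulted:

    ```python
    # Sketch of a meta-search frontend (hypothetical, my own illustration).
    # There is no independent index: upstream engines do all the real work.

    def meta_search(query: str, upstreams: dict) -> list:
        """Fan a query out to configured upstream engines and merge results."""
        merged = []
        for name, fetch in upstreams.items():
            for result in fetch(query):       # each upstream does the real work
                merged.append({"source": name, **result})
        return merged                          # no local index is consulted

    # Demo with stand-in upstreams (a real deployment would call engine APIs).
    upstreams = {
        "engine_a": lambda q: [{"url": "https://a.example/search?q=" + q}],
        "engine_b": lambda q: [{"url": "https://b.example/?q=" + q}],
    }
    print(meta_search("decentralized search", upstreams))
    ```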

    A node can then return results to the gateway. There doesn’t seem to be any way for the gateway to verify that what it’s being given is actually what was available on the open web. For example, a node could send back results whose links are all affiliate links to services it thinks are vaguely relevant to the query, and the gateway would assume those results are valid.

    For the gateway to verify that results are accurate, it would have to scrape those services itself, which would render the entire purpose of the nodes moot. The docs claim Presearch can “ensure that each node is only running trusted Presearch software,” but it does not control the root of trust, so it runs into the same pitfalls games have hit for years trying to enforce anticheat: it’s simply impossible to guarantee unless Presearch could do all the processing inside a TPM it entirely controls, which it doesn’t (and that would cause a number of privacy issues of its own).
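    Here’s a sketch of that verification dilemma (my own framing, hypothetical names): the only check available to the gateway is to redo the node’s work itself, which defeats having nodes at all:

    ```python
    # Sketch of the verification dilemma (hypothetical, my own illustration).
    # A dishonest node can fabricate results; the gateway can only catch it
    # by scraping the open web again itself.

    def dishonest_node(query: str) -> list:
        """Returns plausible-looking affiliate spam instead of real results."""
        return [{"title": query, "url": "https://spam.example/aff?q=" + query}]

    def gateway_verify(query: str, node_results: list, scrape) -> bool:
        """The only available check: redo the scrape and compare.
        (Even this is simplistic -- live pages change between fetches.)"""
        return node_results == scrape(query)   # duplicates the node's entire job

    honest_scrape = lambda q: [{"title": q, "url": "https://real.example/" + q}]
    results = dishonest_node("best laptop")
    print(gateway_verify("best laptop", results, honest_scrape))  # False
    ```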

    A better model would be one where nodes are used solely for storage, taking the burden of hosting the index off a central server. Chunks sent to nodes would be hashed, with the hashes stored on the central server. When the central server needs a chunk of data to answer a query, it requests the chunk from a node, verifies that the hash matches, and then forwards it to the user. That takes the storage burden off the main server and leaves bandwidth as the only cost bottleneck. But that’s not what Presearch is doing here.
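    A minimal sketch of that model (my own illustration, not an existing Presearch feature), assuming SHA-256 for the chunk hashes:

    ```python
    # Sketch of hash-verified chunk storage (hypothetical, my own design).
    # Nodes only *store* chunks; the central server keeps one hash per chunk
    # and verifies every chunk a node returns before forwarding it.

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    class CentralServer:
        def __init__(self):
            self.chunk_hashes = {}               # chunk_id -> expected hash

        def publish_chunk(self, chunk_id: str, data: bytes, node: dict):
            """Push a chunk to a node, remembering only its hash locally."""
            self.chunk_hashes[chunk_id] = sha256(data)
            node[chunk_id] = data                # node bears the storage cost

        def fetch_chunk(self, chunk_id: str, node: dict) -> bytes:
            """Retrieve a chunk from a node, verifying it before use."""
            data = node[chunk_id]
            if sha256(data) != self.chunk_hashes[chunk_id]:
                raise ValueError("node returned tampered data")
            return data                          # safe to forward to the user

    server, node = CentralServer(), {}
    server.publish_chunk("shard-1", b"index data for shard 1", node)
    print(server.fetch_chunk("shard-1", node))   # verified round trip
    node["shard-1"] = b"poisoned results"        # a malicious node edits its copy
    try:
        server.fetch_chunk("shard-1", node)
    except ValueError as e:
        print(e)                                 # tampering detected
    ```

    The point of the design is that the server only has to store a small digest per chunk instead of the chunk itself, so a node can lie about its contents but can’t get the lie past the hash check.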

    This doesn’t make Presearch bad in itself, but it is most definitely not decentralized. All core search functionality relies on Presearch’s servers alone, and the node layer simply adds extra opportunity for bad actors to manipulate search results.