

Sure, but that has little to do with disinformation. Misleading/wrong posts don’t usually spoof the origin - they post the wrong information in their own name. They might lie about the origin of their “information”, sure - but that’s not spoofing.
I don’t understand how this will help deep fake and fake news.
Like, if this post was signed, you would know for sure it was indeed posted by @lily33@lemm.ee, and not by a malicious lemm.ee admin or hacker*. But the signature can’t really guarantee the truthfulness of the content. I could make a signed post claiming that the Earth is flat - or a deep-fake video of NASA’s administrator admitting as much.
Maybe I’m missing your point?
(*) unless the hacker hacked me directly
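To make the point concrete, here’s a toy sketch in Python (using HMAC as a stand-in for a real asymmetric signature scheme like Ed25519; the key is made up). A valid signature proves who signed the content and that it wasn’t altered in transit - it says nothing about whether the content is true:

```python
import hmac
import hashlib

# Hypothetical signing key (real federated signing would use an
# asymmetric keypair, so only the author can sign).
KEY = b"lily33-private-key"

def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(message), signature)

post = b"The Earth is flat."  # false claim, but signed by the real author
sig = sign(post)

print(verify(post, sig))         # True: authentic origin, untampered
print(verify(b"tampered", sig))  # False: modification is detected
```

The signature check happily passes on the flat-Earth post - authenticity and truthfulness are independent properties.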
That is why I just use `int main() { ... }` without arguments instead.
The point of it being open is that people can remove any censorship built into it.
The particular AI model this article is talking about is actually openly published for anyone to freely use or modify (fine-tune). There is a barrier in that it requires several hundred gigs of RAM to run, but it is public.
It’s almost certainly the case, but nobody has managed to prove it yet.
Simply being infinite and non-repeating doesn’t guarantee that all finite sequences will appear. For example, you could have an infinite non-repeating number that doesn’t have any 9s in it. But, as far as numbers go, exceptions like that are very rare, and in almost all (infinite, non-repeating) numbers you’ll have all finite sequences appearing.
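A standard explicit example of such an exception (a well-known one, not from the article) is 0.101001000100001…, where each run of zeros is one longer than the last - so the expansion never repeats, yet it clearly contains no 9s. A quick sanity check in Python:

```python
# Build the decimal expansion 0.101001000100001... :
# a "1" followed by n zeros, for n = 1, 2, 3, ...
# The gaps between 1s keep growing, so the expansion never repeats.
digits = "".join("1" + "0" * n for n in range(1, 30))
number = "0." + digits

print(number[:20])       # 0.101001000100001000
print("9" in number)     # False: not a single 9 anywhere
```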
Now, if only the article explained how that killing was related to TikTok. The only relevant thing I saw was,
had its roots in a confrontation on social media.
It says “social media”, not “TikTok”, though.
Well, he didn’t even buy the original (I guess it had spoiled by then), but a DIY replica and a certificate.
Wary reader, learn from my cautionary tale
I’m not sure what to learn exactly. I don’t get what went wrong or why, just that the files got deleted somehow…
I guess technically that makes them “not in Ukraine”, but it is the same war in the end. At least for me that’s the important part, not where exactly on the front line they are.
Well, NK and Russia have a defense treaty which obliges NK to send military assistance to Kursk. So if they aren’t, they’re breaking their obligations.
Yes, almost like they have intentionally waited until Trump’s election.
Type in "Is Kamala Harris a good Democratic candidate
…and any good search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “good”.
[…] you might ask if she’s a “bad” Democratic candidate instead
In that case, of course the search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “bad”.
So the whole premise that, “Fundamentally, that’s an identical question” is just bullshit when it comes to searching. Obviously, when you put in the keyword “good”, you’ll find articles containing “good”, and if you put in the keyword “bad”, you’ll find articles containing “bad” instead.
Google will find things that match the keywords that you put in. So does DuckDuckGo, Qwant, Yahoo, whatever. That is what a good search engine is supposed to do.
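That mechanism is trivial to sketch. Here’s a minimal keyword-overlap matcher in Python (a hypothetical toy corpus, nothing like an actual ranking algorithm, but enough to show why the “good” and “bad” queries naturally surface different results):

```python
# Toy corpus: two hypothetical articles.
docs = [
    "Why Kamala Harris is a good Democratic candidate",
    "Why Kamala Harris is a bad Democratic candidate",
]

def score(query: str, doc: str) -> int:
    # Count how many query keywords appear in the document.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def best_match(query: str) -> str:
    return max(docs, key=lambda d: score(query, d))

print(best_match("Is Kamala Harris a good Democratic candidate"))
print(best_match("Is Kamala Harris a bad Democratic candidate"))
```

Swapping “good” for “bad” flips which document wins, purely because the keyword changed - no editorial judgment involved.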
I can assure you, when search engines stop doing that, and instead try to give “balanced” results, according to whatever opaque criteria for “balanced” their company comes up with, that will be the real problem.
I don’t like Google, and only use it when other search engines fail. But this article is BS.
On TikTok or Instagram Reels, you don’t follow people you like. You just watch stuff happening.
That’s actually the whole point of TikTok - what made it different when it started. An app for short videos where you follow people you like is more of a Snapchat competitor, not a TikTok one.
If we wait for AI to be advanced enough to solve the problem and don’t do anything in the meantime, when the time finally comes, the AI will (then, rightfully) determine that there’s only one way to solve it…
My bet is, it’ll be Saturday that goes, finally achieving a 6-day work week.
Technically, “enforced pay it forward” is called credit. Your debt would then be “the amount you still have to pay forward”.
Of course, this defeats both the spirit and the purpose of a pay it forward scheme.
It’s not an article about LLMs not using dialects. In fact, they have learned said dialects and will use them if asked.
What they did was ask the LLM to suggest adjectives associated with sentences - and it would associate more aggressive or negative adjectives with the African-American dialect.
Seems like it’s not a bias of the AI models themselves, but rather a reflection of the source material.
All (racial) bias in AI models is actually a reflection of the training data, not of the modelling.
No, that’s because social media is mostly used for informal communication, not scientific discourse.
I guarantee you that I would not use Lemmy any differently if posts were authenticated with private keys than I do now, when posts are authenticated by the user’s instance. And I’m sure most people are the same.
Edit: Also, people can already authenticate the source, by posting a direct link there. Signing wouldn’t really add that much to that.