The original Steam controller worked without Steam running, even including some of the extra features like mouse and scrolling functions for the trackpads if you wanted it to. So here’s hoping.
I’m German, and I’ve never heard that before. I’d be seriously weirded out by someone saying that or teaching it to their kids.
I’m German, and I would not want that. German grammar works in a way that makes programming noticeably more awkward. For example, “.forEach” would technically need three different spellings depending on the grammatical gender of the type of element in the collection it’s called on. You could of course just go with neuter and say it refers to the “items” in the collection, but that’s only one of many small pieces of awkwardness that stack on top of each other when you try to translate languages and APIs. I really appreciate how much more straightforward this is in English.
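To make that concrete, here’s a purely hypothetical sketch. None of these identifiers exist in any real library; they only illustrate the three gender-dependent spellings of “für jeden/jede/jedes”:

```typescript
// Hypothetical German-gendered iteration API (made up for illustration).
// The function name has to agree with the grammatical gender of the element type.

interface Benutzer { name: string }        // "der Benutzer" – masculine
interface Datei { pfad: string }           // "die Datei"    – feminine
interface Fahrzeug { kennzeichen: string } // "das Fahrzeug" – neuter

const fürJeden = (xs: Benutzer[], f: (x: Benutzer) => void) => xs.forEach(f); // masculine
const fürJede  = (xs: Datei[],    f: (x: Datei)    => void) => xs.forEach(f); // feminine
const fürJedes = (xs: Fahrzeug[], f: (x: Fahrzeug) => void) => xs.forEach(f); // neuter

fürJede([{ pfad: "/tmp/log.txt" }], d => console.log(d.pfad));
```

One generic `.forEach` in English replaces all three of those, which is exactly the point.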
Seconding this. Legitimately better than Google Photos in a lot of ways, even if you don’t care about the data-ownership aspect. If you’ve ever been annoyed at how Google Photos handles face detection and grouping, you’ll love Immich.
It is an algorithm that searches a dataset, and when it can’t find something, it’ll provide convincing-looking gibberish instead.
This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing-looking gibberish is what it always does; that’s its only mode of operation. The key is that the gibberish coming out of today’s models is so convincing that it actually becomes broadly useful.
That also means that no, not everything an LLM produces has to have been in its training dataset; they can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of forming actual internal models of real-world concepts, which suggests a deeper kind of understanding than the “stochastic parrot” moniker would have you believe.
LLMs do not make decisions.
What do you mean by “decisions”? LLMs constantly make decisions about which token comes next; that’s all they really do. And in doing so, on a higher, emergent level, they can make any kind of decision you ask them to. The only question is how good those decisions are going to be, which in turn depends entirely on the training data, how good the model is, and how good your prompt is.
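In the mechanical sense, the “decision” is just sampling from a probability distribution over tokens. A toy sketch, with all numbers made up:

```typescript
// Toy version of the only decision an LLM mechanically makes:
// pick the next token from a probability distribution.

function sampleNextToken(probs: [string, number][]): string {
  let r = Math.random();
  for (const [token, p] of probs) {
    r -= p;
    if (r <= 0) return token;
  }
  return probs[probs.length - 1][0]; // guard against floating-point rounding
}

// Hypothetical distribution a model might assign after "The sky is":
const next: [string, number][] = [
  ["blue", 0.72],
  ["clear", 0.15],
  ["grey", 0.08],
  ["falling", 0.05],
];

console.log(sampleNextToken(next)); // usually "blue", occasionally something else
```

Everything higher-level, like “deciding” to answer yes or no, emerges from chaining millions of these token picks.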
You’re not wrong, but the way you put it makes it sound a little too intentional, I think. It’s not like the camera sees infrared light and makes a deliberate choice to display it as purple. The camera sensor has red, green and blue pixels, and those pixels happen to be receptive to a wider range of the light spectrum than their counterparts in the human eye, including some infrared. Infrared light apparently triggers the pixels in roughly the same way that purple light does, and the sensor can’t distinguish between infrared light and light that actually appears purple to humans, so that’s why it shows up like that. It’s just an accidental byproduct of how camera sensors work, plus the budgetary decision not to include an infrared filter in the lens to prevent it.
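As a toy illustration (the response numbers are made up): near-infrared tends to excite the red and blue photosites more than the green ones, which is exactly the recipe for a magenta/purple pixel:

```typescript
// Hypothetical channel sensitivities of an unfiltered sensor to ~850 nm IR light.
const irChannelResponse = { r: 0.9, g: 0.2, b: 0.7 };
const toByte = (x: number) => Math.round(x * 255);

console.log(
  `rgb(${toByte(irChannelResponse.r)}, ${toByte(irChannelResponse.g)}, ${toByte(irChannelResponse.b)})`
); // rgb(230, 51, 179) – a purplish pink, even though the light itself is invisible to us
```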
Not that I disagree with you generally, but in the recent case, manual door release wouldn’t have helped, as it’s basically impossible to push open a car door against the water pressure outside a submerged car.
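For a rough sense of the forces involved, a back-of-the-envelope sketch with round, assumed numbers:

```typescript
// Hydrostatic pressure difference is ρ·g·h; force on the door is pressure × area.

const rho = 1000;      // kg/m³, fresh water
const g = 9.81;        // m/s²
const depth = 0.5;     // m – average water height against the door (assumption)
const doorArea = 1.0;  // m² – rough car-door size (assumption)

const pressure = rho * g * depth;  // ≈ 4905 Pa
const force = pressure * doorArea; // ≈ 4905 N

console.log(`~${Math.round(force)} N holding the door shut`); // roughly the weight of 500 kg
```

Even with water only halfway up the door, you’d be pushing against about half a tonne.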
“Caret” is also correct, and more specific, since “Cursor” can also mean the mouse cursor.
Because the balls did often need cleaning, whereas I’ve never heard of the mouse wheel needing it.
You’re an ex-British colony?