

Hm, that is unexpected. Obviously that doesn’t include the full manufacturing carbon cost of a rocket, but it’s probably close enough anyway.
I exist or something probably
not even a little, but matrioshka brains are cool
I highly doubt 10 years is even remotely close to breaking even on a rocket launch.
many techbros think this unironically and do preferentially hire Asian male employees. Google famously got in trouble for this.
If everyone gave up on a place whenever its future looked bleak, there wouldn’t be a place left in the world worth living in.
what a strangely passive-aggressive and rude response. if you want a comment written in your voice with your chosen thoughts, you are free to write one yourself.
This (deploying malware and backdoors outside of wartime, often widely) is criticized very often, and rightfully so, by both cybersecurity people and people of various political leanings, especially leftists.
Your analogy is good. These things are often intended to kill, and are often countervalue (read: they target civilians). It is in fact bad no matter which state does it. It should, however, come as no surprise that all states variously want to do it, though the USA, for example, has historically gone back and forth on how selective it is, for many of the reasons you state. Other reasons include not wanting to reveal exact capabilities by releasing malware ahead of time, where it can be spotted and studied.
if you put the people making translation possible out of work, you will run out of sources for useful translations.
LLMs are not magic. They run on human effort in the form of their training data. High-quality data is thus sourced from (in this case) human translators. Some of it can be gathered without them, from nonprofessional texts, but that is not enough.
they can’t, actually, but the output is convincing enough that you’ll think it’s the same, and in the process it makes it financially impossible for improvements to be made by actual translators.
Yes, we agree on the first part.
I will again direct you here re: the second.
> Where is the world model you maintain? Can you point to it? You can’t - because the human mind is very much a black box just the same way as LLM’s are.
It’s not really factually correct if you want to get pedantic; brains and LLMs are called black boxes for different reasons. But this is ultimately irrelevant. Your motive may be here or there; the rhetorical effect is the same. You are arguing very specifically that we can’t know LLMs don’t have features similar to the human brain (a world model) because “both are black boxes”, which is wrong for a few reasons, but it is also plainly an equivalence. It’s rude to pretend everyone in the conversation is as illiterate as we’d need to be to not understand this point.
> Where is the world model you maintain? Can you point to it? You can’t - because the human mind is very much a black box just the same way as LLM’s are.
something being a black box is not even slightly a feature of relation; it’s a statement about how detailed our model of a thing is. the only reason you’d make this comparison is if you want the human brain to seem equivalent to an LLM.
for example, you didn’t make the claim: “The inner workings of Europa are very much a black box, just the same way as LLM’s are”
Not understanding the brain (note: the “world model” idea is something of a fabrication by the AI people; brains are distributed functional structures with many parts and roles) does not an equivalence with “AI” make. Brains and LLMs do not function in the same way; that is a lie peddled by hype dealers.
depends entirely on the kind of drive.
This is generally in line with ICE; drivetrain efficiencies these days are in the high 90% range (this applies to EVs too), so from the engine out you are losing basically everything to drag.
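a rough back-of-envelope with the standard drag equation shows the scale; the drag coefficient, frontal area, and speed below are assumed typical-sedan values, not figures from this thread:

```python
# hypothetical typical-sedan numbers; none of these come from the thread
rho = 1.225   # air density at sea level, kg/m^3
cd = 0.30     # drag coefficient, assumed
area = 2.2    # frontal area, m^2, assumed
v = 30.0      # speed, m/s (~108 km/h)

drag_force = 0.5 * rho * cd * area * v**2  # standard drag equation, newtons
drag_power = drag_force * v                # watts spent just pushing air

print(f"drag force ~{drag_force:.0f} N, drag power ~{drag_power/1e3:.1f} kW")
# -> drag force ~364 N, drag power ~10.9 kW: at steady highway speed,
#    most of the power leaving the drivetrain goes into moving air aside.
```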
critical thinking does not simply mean “think hard”; it means researching this person and this account for maybe two, even three, seconds before assuming everything they say is the truth.
perhaps instead use critical thinking to determine genuineness. the alternative is not xitter’s version, and twitter’s old version was criticized too.
an attempt was made
Here’s a recent Reuters report: https://www.reuters.com/technology/artificial-intelligence/ghibli-effect-chatgpt-usage-hits-record-after-rollout-viral-feature-2025-04-01/
160 million active users is quite literally worse than many mobile games developed for tens, maybe hundreds of thousands, of USD. 160 million active users against $40 billion in funding (they have needed more than this, but I can’t be assed to go tally their funding) means they’ve spent $250 per user, and their costs only grow as people use it. That is not including the massive server-time subsidies Azure has provided them. This is not a profitable company and never will be.
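checking that arithmetic with the comment’s own two figures (the $40B and the 160M are the numbers cited above, not audited ones):

```python
# sanity check on the $250-per-user figure; inputs are the figures cited above
funding_usd = 40e9    # $40 billion in funding, as cited
active_users = 160e6  # 160 million active users, per the Reuters report

print(f"${funding_usd / active_users:.0f} of funding spent per active user")
# -> $250 of funding spent per active user
```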
“Block Blast” on the Google Play store has 40 million daily active users, 160 million monthly, and the studio has around 30 people. Its revenue from ads alone is in the tens of millions per month, if this case study is accurate. OAI claims monthly revenue in the hundreds of millions… with operating costs at greater hundreds of millions. OAI’s profit is negative, with no signs of improving without entirely changing their business plan.
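to make the contrast concrete, here is that comparison with the comment’s rough figures pinned to concrete values; the exact dollar amounts are illustrative placeholders, not audited numbers:

```python
# illustrative placeholders: "tens of millions" and "hundreds of millions"
# from the comment above, pinned to concrete values for the sketch
companies = {
    "Block Blast studio": {"monthly_revenue": 30e6,  "monthly_costs": 5e6},
    "OpenAI":             {"monthly_revenue": 300e6, "monthly_costs": 700e6},
}

for name, c in companies.items():
    margin = c["monthly_revenue"] - c["monthly_costs"]
    print(f"{name}: monthly margin {margin / 1e6:+.0f}M USD")
# -> Block Blast studio: monthly margin +25M USD
# -> OpenAI: monthly margin -400M USD
```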
problem: actual mental help has low availability
solution: ai can stand in where needed
outcome: ai mental-health tools systemically expand while actual therapists remain inaccessible, as insurance refuses to cover them. mental health outcomes systemically worsen across the board.