

“West Martians”.
No, not any more than someone telling you the plot of a book would count as reading it—that’s generally the extent of the original work’s content that survives the process of adaptation. (Possible exceptions are faithful adaptations of stage plays like Shakespeare or Euripides—in that case watching a subtitled production might be considered the equivalent of reading the script.)
“While [Trump-supporting] CEO Andy Yen’s recent public statements have raised my hackles more than a little, Proton remains structurally committed to privacy, encryption, and user control, ensuring its ecosystem stays independent of political shifts.”
That’s a pretty weak definition of “Trump-proof”.
I don’t understand—you think you’re one of the last people left who started using the internet in the 90s?
It sounds like she’s constructed two competing versions of you in her mind—an idealized version that always understands and sympathizes with her, and a second version constructed from all the times you’ve failed to live up to those expectations.
If you can’t be her idealized version of yourself, you can demonstrate that you’re not the second version, either. Focus on proactively doing things for her when she’s not expecting you to—everything you do that doesn’t match what her mental model of you predicts you’ll do will weaken that model in her head.
Jumping off the ISS wouldn’t cause you to de-orbit; it would just put you in a slightly more elliptical orbit that keeps passing through the point where you jumped, so you’d keep crossing the ISS’s orbit.
And if you did get into an orbit that dipped into the atmosphere, no parachute would save you: parachutes are for slowing to a safe landing speed from terminal velocity, not from orbital velocity. You’d be falling through air too thin to fill a chute while still moving fast enough to burn up.
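Here’s a back-of-the-envelope sketch of the first point using the vis-viva equation. The 400 km altitude and the 3 m/s push-off speed are assumed round figures, not measurements:

```python
# Back-of-the-envelope check using the vis-viva equation.
# Assumed numbers: ISS at ~400 km altitude, a ~3 m/s retrograde jump
# (roughly the hardest an astronaut could push off).
import math

MU = 3.986004418e14           # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0         # mean Earth radius, m
r_iss = R_EARTH + 400_000.0   # orbital radius at ISS altitude, m

v_circ = math.sqrt(MU / r_iss)   # circular orbital speed (~7.67 km/s)
v_jump = v_circ - 3.0            # speed after a 3 m/s retrograde jump

# New semi-major axis from vis-viva: v^2 = MU * (2/r - 1/a)
a_new = 1.0 / (2.0 / r_iss - v_jump**2 / MU)

# Slowing down at a point makes that point the apogee of the new
# ellipse, so the perigee is 2a - r.
perigee = 2.0 * a_new - r_iss
print(f"circular speed: {v_circ:.0f} m/s")
print(f"perigee drop:   {(r_iss - perigee) / 1000:.1f} km")
# ~10 km lower perigee: a slightly different ellipse, nowhere near re-entry.
```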
If anyone’s interested in adding similar functionality to their own MediaWiki installation, you can use the ModernTimeline and Semantic MediaWiki extensions without needing an AI to parse the pages for dates.
Nobody notices things that conform to their expectations—but when anything violates their expectations, they assume it’s a deliberate message. (Even if it’s fiction violating their genre expectations in the direction of reality.)
And if they can’t figure out what the message is supposed to be, they let other people tell them. And if people tell them different things, they go with the one that makes them feel the strongest reaction.
Most of it is not actually in verse.
Misleading/wrong posts don’t usually spoof the origin; they post the wrong information under their own name.
You could argue that that’s because there’s no widely-accepted method for verifying sources—if there were, information relayed without a verifiable source might come to be treated more skeptically.
I could see it working if (say) someone tries to modify or fabricate video from a known news source, where you could check the key against other content from the same source.
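For the curious, here’s a minimal sketch of that kind of check in Python, using the pyca/cryptography library’s Ed25519 primitives. The news-source setup and workflow are illustrative assumptions on my part; real provenance schemes like C2PA are considerably more involved:

```python
# Sketch of source verification via detached signatures.
# Hypothetical workflow: a news source signs each video it publishes,
# and viewers check clips against the source's known public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The news source generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...contents of the published video file..."
signature = private_key.sign(video_bytes)  # distributed alongside the video

def is_authentic(data: bytes, sig: bytes) -> bool:
    """Check whether data really came from the holder of the private key."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(video_bytes, signature))                # True
print(is_authentic(video_bytes + b"tampered", signature))  # False
```

Any modified or fabricated clip fails the check, while the same public key keeps validating the source’s other content.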
Looks similar to Footlight. Maybe it’s by the same designer (Ong Chong Wah)?
I tested mine with an infrared thermometer: Starting cold, I turned one burner to medium and another to high, and measured them as they heated up. They heated at the same rate until the medium burner reached its target temperature.
“that putting the thermostat up higher will heat the house up quicker”
Same with electric range/ovens.
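A toy simulation of a bang-bang (on/off) controller makes the point: the element runs at full power until the setpoint is reached, so a higher setting only changes when heating stops, not how fast it goes. All the numbers below are made up for illustration:

```python
# Toy bang-bang heater model: full power until the setpoint, then hold.
HEAT_RATE = 2.0    # degrees per minute while the element is on (assumed)
START_TEMP = 20.0  # starting temperature (assumed)

def temperature_at(minute: float, setpoint: float) -> float:
    """Temperature after `minute` minutes of heating toward `setpoint`."""
    return min(START_TEMP + HEAT_RATE * minute, setpoint)

for t in range(0, 16, 5):
    medium = temperature_at(t, setpoint=40.0)
    high = temperature_at(t, setpoint=60.0)
    print(f"t={t:2d} min  medium: {medium:5.1f}  high: {high:5.1f}")
# Both curves are identical until the medium burner hits 40 and holds there.
```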
Clip art/stock art.
That is, “art” that’s intended to be meaningless until someone else uses it in a context that supplies a meaning.
“The monkey about whose ability to see my ears I’m wondering”.
Part of the issue is that the thing you’re wondering about needs to be a noun, but the verb “can” doesn’t have an infinitive or gerund form (that is, there’s no purely grammatical way to convert it to a noun, like *“to can” or *“canning”). We generally substitute some form of “to be able to”, but it’s not something our brain does automatically.
Also, there’s an implied pragmatic context that some of the other comments seem to be overlooking:
The speaker is apparently replying to a question asking them to indicate one monkey out of several possibilities;
The other party is already aware of the speaker’s doubts about a particular monkey’s ear-seeing ability;
The reason this doubt is being mentioned now is to identify the monkey, not to express the doubt.
I don’t think it’s useful for a lot of what it’s being promoted for—its pushers are exploiting the common conception of software as a process whose behavior is rigidly constrained and can be trusted to operate within those constraints, but this isn’t generally true for machine learning.
I think it sheds some new light on human brain functioning, but only reproduces a specific aspect of the brain—namely, the salience network (i.e., the part of our brain that builds a predictive model of our environment and alerts us when the unexpected happens). This can be useful for picking up on subtle correlations our conscious brains would miss—but those who think it can be incrementally enhanced into reproducing the entire brain (or even the part of the brain we would properly call consciousness) are mistaken.
Building on the above, I think generative models imitate the part of our subconscious that tries to “fill in the blanks” when we see or hear something ambiguous, not the part that deliberately creates meaningful things from scratch. So I don’t think it’s a real threat to the creative professions. I think they should be prevented from generating works that would be considered infringing if they were produced by humans, but not from training on copyrighted works that a human would be permitted to see or hear and be affected by.
I think the parties claiming that AI needs to be prevented from falling into “the wrong hands” are themselves the most likely parties to abuse it. I think it’s safest when it’s open, accessible, and unconcentrated.
Anyone using DeepSeek as a service the same way proprietary LLMs like ChatGPT are used is missing the point. The game-changer isn’t that a Chinese company like DeepSeek can compete with OpenAI and its ilk; it’s that, thanks to DeepSeek, any organization with a few million dollars to spend on training and hosting its own model can now compete with OpenAI.
If they tell law enforcement they can’t produce an unencrypted copy and it’s later proven that they could, the potential penalty would likely be more severe than anything they could have gained by using the data themselves. And any employee (or third party they tried to sell the data to) could rat them out—so they’d have to keep the information within a circle too small to make use of it at scale. And even if it never leaked, hackers would eventually find and exploit the backdoor, exposing its existence. And in either case they’d also have to face lawsuits from shareholders (rightly) complaining that they were never warned of the legal risk.