The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.
Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
People susceptible to marketing gimmicks are more likely to want marketing gimmicks.
I’m tech savvy and I use AI daily.
Probably not the AI you think of, as it’s not LLMs or image generation.
But I have a self-hosted security system using Frigate, which uses AI models for image recognition.
I am a system admin and one of our appliances is an HPE Alletra. The AI in it is awesome and it never tries to interact with me. This is what I want. Just do your fucking job, AI, I don’t want you to pretend to be a person.
How exactly is this a surprise to anyone when the same already applied to crypto and NFTs? AI and blockchain technologies are useful to experts in tiny niches so far, but that’s not the usual tech-savvy user. For the end user it’s just a toy with few use cases.
“Surprisingly”? This should be a surprise to no one who is paying any kind of attention to any online communities where techy people post.
Hey, buy my new CoinCoin! No, don’t research what it is, just buy it!
In AI’s current state, it helps noobs get to an average level, but it doesn’t help average users become pros.
The real question, in my opinion, is how a pro truly benefits from it other than as a different type of search engine.
“Ignorance is bliss”
- Cypher
It really must be…
I think this is true for a lot of things. iPhones, Nike, Spam
… Trump.
The more I’ve learned about technology, the more hardline I’ve become against having it in my life.
The world is not a blank slate to paint on. Every new thing you add to your life takes away something that used to be there in previous generations, and the consequences can be far-reaching and unpredictable. Society as it was was not built overnight through deliberate intention; it was hard won over millennia of blood, sweat and tears. Changing everything now on the whims of fully grown toddlers who are so wealthy that they’ve never even been aware of the existence of the real world is the peak of insanity.
The more I’ve learned about technology, the more hardline I’ve become against having it in my life.
Eventually you’ll decide pottery, clothing, and agriculture need to go
Neither keeping all the old solutions because they are old nor adopting all the new solutions because they are new is a sensible position.
Some old solutions worked in the past and don’t work anymore because the actual world around us changed (the parts outside our control: some resources that were plentiful in the past are now scarcer, human populations are larger, the world is more interconnected, …).
Some old solutions only appeared to work in the past because we didn’t yet have the knowledge of their flaws; now that we do, we need new ones.
Some new solutions are genuine improvements, while others are merely sold by marketing and hype.
Some new solutions have studies, data, or even logic and math backing them up, while others are adopted on a whim or even contrary to evidence or logic.
We cannot escape the fact that the world is complex and requires evaluation on a case-by-case basis; simplistic positions like “keep everything old” or “replace everything old” do not work.
What form of AI are we talking about? Because most of the ones exposed to the public are glorified toys with shady business models, while tools like AlphaFold are genuinely useful.
Especially on Lemmy. Every misspelling is “AI” to some of these anti-AI whackos. It’s like they’ve never seen shit webpages before. They don’t know that AI spans thousands of different task types, and that generalized AI is nowhere near being accomplished.
Those who really understand what “AI” consists of understand that it has weaknesses and strengths, and that those strengths can be used for both good things and bad things.
I’m just annoyed that the term AI has been co-opted now to refer to pretty much any form of machine learning. Stuff gets called AI today that wouldn’t have been considered AI even 10 years ago. I think that’s part of what’s driving people’s ridiculous expectations, because they hear AI and they expect actual AI, not a glorified smart fill.
Or AGI, meaning an LLM that produces x amount of profit, according to OpenAI and Microsoft 🤣
Artificial intelligence = machine learning = statistics = just math
Someone should do a Scooby-Doo meme with the taking-the-mask-off frame repeated multiple times in a row.
“Just” math?! Math is everything
Math doesn’t exist; it’s imaginary. It’s an impossible ideal that just so happens to be useful at predicting our universe.
What you’re saying expressly isn’t true. Academically, deep learning is considered a subset of machine learning, which is considered a subset of artificial intelligence.
- Deep learning is machine learning that makes use of deep neural networks.
- Machine learning is artificial intelligence that can perform tasks without explicit instructions by learning from a dataset and generalizing to other data.
- Artificial intelligence is simply trying to make a computer display some sort of intelligence that’s seen as human-like. For example, a perceptron is artificial intelligence because how could a computer possibly see like a human? Chess bots are artificial intelligence because it was thought that chess represented some sort of higher intelligence unique to humans. NPC actions in video games can be artificial intelligence because you’re simulating what another human might do.
Would you like the textbooks from 10 years ago on this exact subject that I’m referencing? The term AI hasn’t been co-opted; you might’ve simply been thinking of general artificial intelligence, because “pretty much any form of machine learning” has been called AI since the dawn of machine learning – because it is.
While your distinctions are correct in the academic way of referring to things, you are not considering the marketing way of referring to things. Behold, the AI-powered rice cooker, powered by a magnet and heat, like every other rice cooker ever (because it works really well).
https://www.youtube.com/watch?v=F_HOrMmWoMA
Marketing has decided that anything that does anything is “AI” now. Which is why people are insanely disenchanted with it.
That toaster is what AI is. If it’s machine learning, it’s AI. If I make a toilet that uses a shitty-ass single-layer perceptron to decide when to flush, that’s an AI-powered toilet even if it’s a worthless piece of crap. You can be disenchanted with it as a gimmick all you want (I am too), but it falls under AI the same way it has since the 1950s. The marketing way of referring to things you just showed me entirely comports with the academic one, provided what the label says is true.
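To make that concrete, here’s a minimal sketch of that hypothetical flush-deciding perceptron (all sensor names, readings and thresholds below are invented for illustration; assume both inputs are normalized to 0–1). It’s about as dumb as learning gets, yet by the textbook definition it’s machine learning, and therefore AI.

```python
# Hypothetical single-layer perceptron "flush decider".
# Every sensor name and data point here is made up for illustration; the point
# is only that this trivially simple learned classifier counts as machine learning.

def train_perceptron(data, epochs=100, lr=0.1):
    """Classic perceptron learning rule on (features, label) pairs."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = label - pred            # -1, 0 or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Invented training data: (occupancy sensor, pressure sensor) -> flush (1) or not (0)
training_data = [
    ((0.10, 0.20), 0), ((0.90, 0.80), 1), ((0.20, 0.10), 0),
    ((0.80, 0.90), 1), ((0.70, 0.60), 1), ((0.15, 0.30), 0),
]

weights, bias = train_perceptron(training_data)

def should_flush(occupancy, pressure):
    score = weights[0] * occupancy + weights[1] * pressure + bias
    return score > 0

print(should_flush(0.85, 0.90))  # expected: True for this toy data
print(should_flush(0.10, 0.15))  # expected: False for this toy data
```

It learns a single linear boundary from a handful of examples and nothing more, which is exactly why calling it “AI” on a box feels like a gimmick even though the label is technically accurate.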
You are technically correct, and yet you are missing the original point: people expect the super-intelligent AGI of science fiction when they hear the term, no matter how much all those lesser forms are also AI by the definition of the scientific field.
Sounds like something an AI would say.
/S