I know it's a bit of a hot topic, but I've always seen people (online anyway) be either a hard yes or an absolute no on using AI. Many types of "AI" were already part of technology before this hype; I'm talking about LLMs specifically (ChatGPT, Claude, Gemini, etc.). When this bubble bursts, the technology is absolutely not going anywhere. I'm wondering if there's a case where you've personally used it and found it beneficial (not something you've read or seen somewhere). The ethics of essentially stealing vast amounts of data for training without compensation, or the enshittification of products with "AI", is a whole other topic, but there's no way the technology itself isn't beneficial somehow. Like everything else divisive, the truth is definitely somewhere in the middle. I've been using Lumo from Proton for the last three weeks and it's not bad. I've personally found it useful for troubleshooting issues, for search, and for help with applying for jobs:
- It's very good at looking past the SEO slop plaguing the internet and just gets me the information I need. I've tried alternative search engines (Mojeek, Startpage, SearXNG, DDG, Qwant, etc.), but most of them unfortunately aren't very good or are just another way to use Google or Bing.
- I was having a wifi problem on a PC I was setting up and couldn't figure out why. I told it exactly what was happening, along with the exact specs, and it gave me some possible causes and steps to diagnose the machine. It was very, very useful.
- I've been applying for so many jobs, and it's exhausting to read hundreds of descriptions only to hit one tiny thing in the middle that disqualifies me. So I pass it my resume with links and tell it to compare what my resume says against what the job is looking for to see if I'm a fit. When I find a good one, I ask for rewriting tips to better focus on what will stand out to a recruiter (or, let's be real, an application filtering system).
I guess what I'm trying to say is it can't all be bad.
My take is that it's just a tool, and as with most tools you can use it in a sensible, positive way, although many people choose not to. As an example, I work in a creative field and I see a lot of people relying on it to do their creative work for them, which I don't really agree with. What I use it for is more like an assistant that handles all the admin crap that usually gets in the way of doing creative stuff. Sometimes you have to write form letters, grant applications, etc. - basically formal stuff that wouldn't require any creative thinking if you did it yourself anyway, but still eats up time and brain power. I just hand that stuff to the AI, make sure it sounds vaguely presentable, and send it off. I could also see a use case in areas where I'm weaker, like marketing my stuff, maybe for coming up with an outline strategy of some sort, although I haven't really tried that yet.
Essentially, AI will do your creative stuff for you if you let it, or you can just use it to handle the day-to-day piddly crap to free yourself up to do the creative stuff yourself. It’s up to you really.
LLMs trained exclusively on documentation and run locally seem like they'd be nice - basically the next step in search algorithms. Note that I am not talking about having an AI summary at the top of every web search page; that's harmful.
That’s essentially what I do, I don’t have any accounts with ChatGPT or anything, I just run it locally off my laptop. IDK if you get better results with ChatGPT (I’d assume probably) but my local one seems fine for everything I need it for. It’s a little slow too, but who cares?
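For anyone curious what running one locally can look like, here's a minimal sketch using the ollama Python client (this assumes the Ollama daemon is running; the model name is just an example of whatever you've pulled):

```python
# Ask a locally running model a question via the Ollama Python client.
import ollama

response = ollama.chat(
    model="llama3.2",  # example: any model you've pulled locally works
    messages=[{"role": "user", "content": "Why would rsync skip a file that changed?"}],
)
print(response["message"]["content"])
```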
I think there are many thousands of folks in fields beyond IT who use it all the time. It's by no means perfect, but for many of us managing teams, doing boring AF admin, working with procurement, writing user documentation, or trying to navigate basic system configs, it's immensely useful.
Bash scripting, light Python programs, fixing software and hardware issues, learning Japanese, learning other things, writing applications - all the boring stuff, and stuff that needs extreme repetition or data mining. The ideas come from me; the work is done by the machine.
I used it to extract thousands of rows from tables in PDFs and generate enumerations for them in various programming languages. I had to do some pre-processing with a Python script and review all the output to make sure it didn't screw up, but it saved me a lot of time.
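A rough sketch of what that kind of pipeline can look like, assuming pdfplumber for the extraction (the file name, table layout, and enum name here are all made up for illustration):

```python
# Pull table rows out of a PDF and emit a Python Enum from them.
# Hypothetical layout: first column is the entry name, second is a numeric code.
import pdfplumber

rows = []
with pdfplumber.open("codes.pdf") as pdf:  # example path
    for page in pdf.pages:
        for table in page.extract_tables():
            rows.extend(table)

print("from enum import Enum\n\nclass StatusCode(Enum):")
for row in rows:
    # Skip headers, blank cells, and anything that isn't name + numeric code.
    if len(row) >= 2 and row[0] and row[1] and row[1].strip().isdigit():
        name = row[0].strip().upper().replace(" ", "_")
        print(f"    {name} = {row[1].strip()}")
```

You'd still want to eyeball the generated output, as noted above.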
Regarding the job applications: most companies and sites are using shitty AI to rummage through the piles of resumes they receive.
The whole job application process is frankly one of the worst real-world uses of most technologies, not just AI.
For image gen I don't have a good use case, but it's complex enough that I can lose an hour or two just fumbling around in a network of nodes.
What I found fascinating was how strangely good the results were when I created an image, then fed the result back as input, and repeated that process.
The only useful thing I've used image gen for was creating references for an artist to make a PFP for me that looks rad as hell.

As for LLMs, also not really. I think about 90% of the time LLMs give either a useless or an outright wrong answer. I can't seem to find the thing LLMs are supposed to be good for. One thing every LLM I've tried has failed at consistently is finding a movie from a vague description I gave.
In short, teaching myself simple stuff.
Inspiration for writing emails, letters, text messages. I always check what the thing wrote though.
I’d love to have an AI assistant that does shit like call service providers and wait in queue and take care of business for me
For engineering… Get me a script that calculates the length of a window based on a similar-sized one. Or calculate the tip velocity of a turbine blade given the speed of the gas going into it and the diameter of the turbine. Basically things that would take us a month to design just to answer other questions. Cuz nobody pays you to make quick calculation tools.
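For the turbine example, a minimal sketch of the kind of throwaway tool I mean (note: this assumes you know the shaft speed; relating tip speed to the inlet gas velocity properly needs the velocity triangle, so treat it as illustration only):

```python
# Quick engineering calc: turbine blade tip velocity from shaft speed and rotor diameter.
# v_tip = omega * r, with omega in rad/s and r = D / 2.
import math

def tip_velocity(rpm: float, diameter_m: float) -> float:
    """Blade tip speed in m/s for a given shaft speed (rpm) and rotor diameter (m)."""
    omega = rpm * 2 * math.pi / 60  # convert rpm to rad/s
    return omega * diameter_m / 2

# Sanity check against a hand calculation:
# 3000 rpm on a 1 m rotor -> omega ~= 314.16 rad/s -> tip speed ~= 157.08 m/s
print(f"{tip_velocity(3000, 1.0):.2f} m/s")
```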
This doesn't sound like the hardest thing to write a program for, especially if you have the gibbity help you write it and you quadruple-check its output.
And then it makes mistakes and you won’t be able to tell
No, you don't just get a script and run it blindly! You use your knowledge to figure out whether it works first, by reading the code and running known data through it as a test.
You can't even rely on AI to get the formula for the area of a circle right. You have to rely on your own knowledge and on books to confirm that the code is doing what you need it to do.
What AI does is shorten the code creation time to a few seconds versus days of coding… because engineers are the best back-seat coders I know. Once there's working code they can move mountains, but confronted with a blank page they freeze.
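As a trivial example of the known-data check I mean (hypothetical function, just to show the idea):

```python
# Sanity-check an AI-generated helper against values you can verify by hand.
import math

def circle_area(radius: float) -> float:  # stand-in for AI-generated code
    return math.pi * radius ** 2

# Known data: a unit circle has area pi; radius 2 gives 4*pi.
assert math.isclose(circle_area(1.0), math.pi)
assert math.isclose(circle_area(2.0), 4 * math.pi)
print("output matches the hand calculations")
```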
Or, you know, you could hire a programmer.
Ha! With no budget?
I've used it to help me set up a home server. I can paste text from log files or ask about something not working, and it tells me what the problem is. It gets things wrong a lot, but this is the perfect low-risk use for AI: pointing me in the right direction when I have no idea why things aren't working. When it's completely wrong, it doesn't really matter.
The real test for AI is: "does it matter when it is completely wrong?" If the answer is yes, then that's not a suitable use for AI.
This. I'm a software engineer, and I also sometimes use it by giving it a problem and asking for ideas on how to solve it (usually with the addition of "don't write any code"; I do that myself, thanks). It gives me a few pointers that I can then follow up on. Sometimes it's garbage, sometimes it's quite good.
99% garbage.
If you have ever touched C++ you will know that it has godawful error messages, and not even ChatGPT knows what the fuck is happening.
That's why I'm not asking it to give me actual code I should use, but keeping it high level. If it says there are patterns x, y, and z that could be usable, I can look those up myself and write the code myself. Using it to actually write the code is mostly garbage, yes. And in any case, you still need to have an idea of what you're doing yourself.
No, I’m not asking it to write code, I’m asking it to interpret the error and point to the actual problem in the code. It just can’t…
I see it as a toy. No different from the Slinky or Silly Putty I had as a kid. Just something to play with.
I've tried learning digital drawing before, but my programmer brain finds prompt engineering much more intuitive, so I've been doing that a lot lately.
Also, it's surprisingly good at upscaling in "image-to-image, 0.1 strength" mode. I thought I'd need a dedicated upscaler for that, but the result looks noticeably better than normal bicubic upscaling.
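If anyone wants to try that, here's roughly what it looks like with the diffusers library (model name, prompt, and paths are just examples; the trick is a conventional resize up first, then img2img at low strength to regenerate fine detail):

```python
# Upscale via img2img at low strength: resize conventionally first,
# then let the model re-add fine detail without changing the image much.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model
    torch_dtype=torch.float16,
).to("cuda")

img = Image.open("input.png").convert("RGB")  # example path
img = img.resize((img.width * 2, img.height * 2),
                 resample=Image.Resampling.LANCZOS)  # keep dimensions multiples of 8

result = pipe(
    prompt="same image, sharp, detailed",  # example prompt
    image=img,
    strength=0.1,  # low strength keeps the output close to the input
).images[0]
result.save("upscaled.png")
```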
Creating low-effort images for ideas that don’t warrant effort, like silly jokes.