AI so far has not produced a single useful answer for me. That’s why I ignore it, except to verify it was wrong in the first place.
What people mistakenly call “AI” are just LLMs. Think about what this means: they are parrots with a dictionary, nothing more. A stolen dictionary, too.
I use it once in a blue moon and every time think it wasn’t really worth my time.
For creative pursuits, it defeats the purpose of why I’m doing the activity.
For factual pursuits at work, I need to back my answer 100%. Like put-my-signature-on-it, testify-on-a-witness-stand confident. When I make assumptions in my logic, I need to be able to identify and clearly articulate them. And LLMs, by their nature, are only ever “highly probable.”
I do computer programming, mostly on stuff the AI coding tools totally fail at; when I saw how hard these same tools are being pushed by large companies, I felt contempt.
That, and I studied neural networks in grad school, quite a few years ago. The current AI stuff does not awe me.
I wish I had tools good enough that I didn’t feel the need to use AI, but sadly the documentation for AWS and some Terraform issues is traaaaaaaaaash, so I mainly use it to generate some use cases and examples that the docs failed to provide. Some of those examples turn out to be wrong on closer inspection of the docs (mismatched return types and so on), but I do the critical thinking of checking the docs before and after, and then code my own stuff.
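To make that concrete, here is a minimal sketch of the kind of snippet I mean (the bucket name, output label, and region are made-up placeholders; the resource type and the .arn attribute are real AWS provider syntax, but anything an LLM suggests still gets checked against the registry docs):

# Hypothetical LLM-style answer to "how do I get a bucket's ARN in Terraform?"
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"   # region is an assumption, not part of the question
}

resource "aws_s3_bucket" "logs" {
  bucket = "example-log-bucket"   # placeholder name
}

output "logs_bucket_arn" {
  # `.arn` is a documented attribute; LLMs sometimes invent ones that aren't,
  # which is exactly the kind of thing I verify before using the snippet.
  value = aws_s3_bucket.logs.arn
}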
I also studied deep convolutional neural networks in school, so I know what it is: a tool. Kinda impressive that it is able to do all that solely by optimising for the next word in a sentence, though.
I have no need to use LLMs or other generative AI, and I have no desire to use them just because.
For creative outlets, it isn’t satisfying to me to use a tool that will instantly complete the task. If I’m prepping for a TTRPG session I would rather come up with the content myself, or use a random generator to give several ideas to build off of. I don’t have artistic skills, but I have more fulfilment from the basic drawings I can do, and for anything more complex I’ll just find something that comes closest to what I imagined as a visual aid.
For non-creative work, I can’t trust the results of an LLM to be factual. If I need to check the sources and confirm the output anyway, I might as well skip that step and just read the original sources myself. Or, use Wikipedia and other wiki sites as a quick reference for basic information and links to more detailed sources.
If I were working in a field that had to sift through large data sets or complex equations, I probably would look at machine learning models. But I don’t, so I have no need for it.
Genuine question, for people that regularly use AI (LLMs), why? What do you get out of it that makes you return to it again? Is it just convenience?
Because they are fascist trash products built by fascist trash companies.
I have less than no use for the chatbot stuff, and I know artists who can make what I want without it looking lazy af
First, there’s no such thing as actual Artificial Intelligence. In its current usage, AI is simply a Large Language Model that takes the enormous amount of data it’s been fed and tries to generate a response that seems like it may be an answer to your question. It has no understanding of the question or the answer; it’s just an estimation of what might be an answer. The fact that there is no guarantee whatsoever that the answer you get is accurate is simply a modern example of the old adage, “Garbage in; garbage out.”
Secondly, there isn’t a single LLM made by a company that I would trust to guess my weight let alone the answer to a question I thought was important.
are you aware of CoT or RLMs?
your knowledge cutoff appears to be 2023 ;)
Isn’t RMLS just one company of many related to AI cybersecurity? Or does the acronym soup of the tech industry have some other meaning for it?
And CoT, as in the “prompt engineering” technique? How does that at all counter their position?
Please converse instead of vomiting letters and negging.
https://en.wikipedia.org/wiki/Reasoning_model
tbh i don’t really care if he’s stuck in the chatgpt 3.5 turbo era of ai, i’ve moved on from this place mostly
I think you double posted OP.
I’m probably one of the few that feels like it’s useful for some things. It’s good for regurgitating stuff that is already out there.
I’m currently on a long trip, a lot of it is spur of the moment. It’s really nice to say “I’m on my way to <location>, what is it known for? What areas should I avoid? Where is a good place to stay that is in my budget?”
Yes, I could do all the research on my own, but it’s pretty handy having a travel agent at my fingertips. I’m not saying I would follow it to the end of the earth, but I find it pretty helpful.
Funny, calling people lemmings for nothing more than being reluctant to embrace what’s being touted as some scientific miracle to the greedy and gullible, who would seemingly all follow it off a cliff - much like a lemming.
I think they are saying lemmings because they are addressing users of Lemmy, but it’s still ironic.
sorry newbie mistake. carry on