- cross-posted to:
- technology@beehaw.org
The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.
Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
The real question, in my opinion, is how a pro truly benefits from it, other than as a different kind of search engine.
Yeah, if you are a pro in something, most of the time it only tells you what you already know (I sometimes use it as a sort of sanity check, by writing prompts where I think I already know what the output will be).
I only found it useful for trivial chores such as converting between data structures, maybe creating a test for a function, parsing, and some regex. Anything deeper than that was full of errors, or what it offered was suboptimal at best. It also fails a lot of the time at fetching the relevant docs/sources for the discussion. I gave up after the many times it basically told me "go search for yourself".
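To give a made-up example of the kind of chore I mean (illustrative only, not real work code): converting a list of records to CSV and pulling fields out with a regex.

```python
import csv
import io
import re

# Convert a list of dicts to CSV text (the sort of
# data-structure conversion an LLM handles fine).
records = [
    {"name": "alice", "score": 91},
    {"name": "bob", "score": 78},
]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "score"])
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())

# A typical throwaway regex: pull "key=value" pairs out of a log line.
line = "user=alice action=login status=ok"
pairs = dict(re.findall(r"(\w+)=(\w+)", line))
print(pairs)  # {'user': 'alice', 'action': 'login', 'status': 'ok'}
```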
I often use it as my Python slave because I am lazy.
Like, I write in bad, fast human language what my script needs to do, and then iterate from there, feeding errors/bug reports back (and fixing the stuff I'm not too lazy to do myself).
The scripts I needed were of a complexity like API calls, serial communication, or converting PO to CSV and back (pls don't ask 😅 it is for work and I can not tell more). An example of the rough shape is below.
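The PO↔CSV direction comes out roughly like this (a simplified sketch using the polib library, assuming "PO" means gettext .po translation files; not my actual work script):

```python
import csv
import polib  # pip install polib

def po_to_csv(po_path: str, csv_path: str) -> None:
    """Dump msgid/msgstr pairs from a gettext .po file into a CSV."""
    po = polib.pofile(po_path)
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["msgid", "msgstr"])
        for entry in po:
            writer.writerow([entry.msgid, entry.msgstr])

def csv_to_po(csv_path: str, po_path: str) -> None:
    """Rebuild a .po file from the same two-column CSV."""
    po = polib.POFile()
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for msgid, msgstr in reader:
            po.append(polib.POEntry(msgid=msgid, msgstr=msgstr))
    po.save(po_path)
```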
But I guess that's because my skill is not too high; I'm sure if I were more skilled, I might be faster just writing the code directly 💁🏻
For code that needs to be built (like C), though, I mostly use it to explain what existing code does, if I am not 100% sure after a short read. I have tried some generated code there as well, but then I get nothing but build errors 😆 At least it can, most of the time, tell me what a build error is trying to say.
Ah, and currently I use my free ChatGPT to teach me how to make music using only open source tools 😄
I very much agree with your conclusions and general approach.
LLMs are great at certain programming-related tasks and do them very well. I, too, often find myself needing scripts where, as long as they do what they're supposed to, I really don't care how.
Another thing I've noticed (probably related to the amount of training data) is that it handles simple Python tasks better than simple Rust tasks.
But you mentioned one of my main issues with it. I've been programming for 15 years or so, and I'm still learning. All the available LLMs made crucial errors on both fundamental and complex topics, getting the answer very wrong while sounding very convincing. Couple that with the lack of proper linking to the sources of a response, and you might see why having it explain code could lead you to learn things wrongly. Although you could say the same about random internet tutorials. I always try to remind myself that it's a tool that produces output that always needs to be verified.
I often start a new chat with a prompt that includes assumptions based on the output of a previous chat. Most of the time it then does a good job of fact-checking itself, and will, for example, say many things that don't match what it told me in previous chats. Then you know it doesn't have enough training data in that area and failed to pull the relevant info from its web search.
More than once, the above happened to me with Copilot (from enterprise MS365), and then my limited free ChatGPT prompts saved me 😂