As a software developer I fully agree. People bash on it constantly here, but the fact is that it's required for our jobs now. I just made it through a job hunt, and at every tech screen they not only insisted that I use AI, they also factored in how much I was using.
The fact is that, like it or not, it does speed us up, and it's a tool in our tool belt. You don't have to trust it 100% or blindly accept what it produces, but you do need to be able to use it. Refusing to use it is like refusing to use the WinForms designer 20 years ago, or refusing to use an IDE at work. You're going to be at a massive disadvantage against the other job seekers who are more than happy to use AI.
I review take-home assignments, and most of what we receive is AI-generated. It's easy to tell when a submission isn't, though, because we get thoughtful comments about why one choice was made over another, plus higher-level observations that only come from product context and experience. I don't think a single fully AI-created submission has made it past the code review stage.
See, it's hard as a candidate, because for the first time ever I lost points at one place for not using AI at all, and they almost didn't say yes to me. Their feedback was, quite literally, that my solution worked well but I could have gotten it done faster with AI.
It seems pointless to test you on anything that could be done by AI; otherwise why even hire someone, just have fewer devs using more AI, right? I want to test people on whether they have the experience to notice things and make decisions. It doesn't matter to me if they generate the busywork, because that isn't what I'm grading them on.
Hey, you're preaching to the choir there, but other companies were saying "if they didn't use AI for this, they won't here either." For your interviewees' sake, make sure expectations around AI use are spelled out explicitly for every step of the interview. I had places where they were upset that I didn't use AI at one step but did at another. It's batshit out there.
This is not a fact at all.
Fine, it speeds me up.
The people in the study thought so, too.
That's dumbshits using it to do their job for them and trusting the output blindly. If you're using LLMs to get over the occasional hump, they're awesome time savers.
I’m guessing you don’t write code?