Bio field too short. Ask me about my person/beliefs/etc if you want to know. Or just look at my post history.

  • 0 Posts
  • 102 Comments
Joined 2 years ago
Cake day: August 3rd, 2023

  • Wrangling IDE cables with awkward angles so you couldn’t both see and touch the space at the same time. And the case edges were made of knives. And then, yeah, it wouldn’t boot and you’d have to figure out that your master/slave jumpers were incorrect as others have stated and have to remove, tweak and replace the drives.

    Good times.



  • I really like this comment. It covers a variety of use cases where an LLM/AI could help with the mundane tasks and calls out some of the issues.

    The ‘accuracy’ aspect is my 2nd greatest concern: an LLM agent that I told to find me a nearby Indian restaurant, which it then hallucinated, is not going to kill me. I’ll deal, but be hungry and cranky. When that LLM (they’re notoriously bad at numbers) updates my spending spreadsheet with a 500 instead of a 5000, that could have a real impact on my long-term planning, especially if it’s somehow tied into my actual bank account and makes up numbers. As we/they embed AI into everything, the number of people who think they have money because the AI agent queried their bank balance, saw 15, and turned it into 1500 will be too damn high. I don’t ever foresee trusting an AI agent to do anything important for me.

    “trust”/“privacy” is my greatest fear, though. There’s documentation from the major players that prompts are used to train the models. I can’t immediately find an article link because ‘chatgpt prompt train’ finds me a ton of slop about the various “super” prompts I could use. Here’s OpenAI’s ToS about how they will use your input to train their model unless you specifically opt out: https://openai.com/policies/how-your-data-is-used-to-improve-model-performance/

    Note that that means when you ask for an Indian restaurant near your home address, OpenAI now has that address in its data set and may hallucinate that address as an Indian restaurant in the future. The result being that some hungry, cranky dude may show up at your doorstep asking, “where’s my tikka masala?” This could be a net gain, though; new bestie.

    The real risk, though, is that your daily life is now collected, collated, harvested and added to the model’s data set; all without your clear explicit actions: using these tools requires accepting a ToS that most people will not really read and understand. Maaaaaany people will expose what is otherwise sensitive information to these tools without understanding that their data becomes visible as part of that action.

    To get a little political, I think there’s a huge downside on the trust aspect: these companies have your queries (prompts), and I don’t trust them to maintain my privacy. If I ask something like “where to get abortion in texas”, I can fully see OpenAI selling that prompt to law enforcement. That’s an egregious example for impact, but imagine someone could query the stored prompts (using an AI which might make shit up) and ask “who asked about topics anti-X” or “pro-Y”.


    My personal use of ai: I like the NLP paradigm for turning a verbose search query into other search queries that are more likely to find me results. I run a local 8B model that has, for example, helped me find a movie from my childhood that I couldn’t get google to identify.

    There’s a use case here, but I can’t accept this as a SaaS-style offering. Any modern gaming machine can run one of these LLMs and get value without the privacy tradeoff.

    Adding agent power just opens you up to having your tool make stupid mistakes on your behalf. These kinds of tools need to have oversight at all times. They may work for 90% of the time, but they will eventually send an offensive email to your boss, delete your whole database, wire money to someone you didn’t intend, or otherwise make a mistake.


    I kind of fear the day that you have a crucial confrontation with your boss and the dialog goes something like:

    Why did you call me an asshole?

    I didn’t, the AI did, and I didn’t read the response as closely as I should have.

    Oh, OK.


    Edit: Adding as my use case: I’ve heard about LLMs being described as a blurry JPEG of the internet, and to me this is their true value.

    We don’t need an 800B model; we need an easy 8B model that anyone can run that helps turn “I have a question” into a pile of relevant actual searches.
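    For concreteness, that query-expansion idea can be sketched against a local model. This is a hypothetical sketch, assuming an Ollama-style server on localhost:11434 and a small instruct model; the endpoint shape is Ollama’s real `/api/generate`, but the model name and prompt wording are illustrative, not a recommendation:

    ```python
    import json
    import urllib.request

    # Illustrative prompt: ask the model to emit plain search queries, one per line.
    PROMPT = (
        "Rewrite the user's question as 3 short, distinct web search queries, "
        "one per line, with no numbering or commentary.\n\nQuestion: {question}"
    )

    def parse_queries(model_output: str) -> list[str]:
        """Split the model's reply into clean, non-empty search queries."""
        return [line.strip() for line in model_output.splitlines() if line.strip()]

    def expand_question(question: str, model: str = "llama3.1:8b") -> list[str]:
        """Send the question to a local Ollama-style server and parse the reply."""
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({
                "model": model,
                "prompt": PROMPT.format(question=question),
                "stream": False,  # get one JSON object back instead of a stream
            }).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return parse_queries(json.load(resp)["response"])
    ```

    The parsing is deliberately dumb: small models follow a “one per line” instruction well enough that splitting on newlines usually suffices, and anything fancier just gives the model more ways to surprise you.
    
    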


  • Similarly, my fantasy is that if I won the lottery, or otherwise became independently wealthy, I’d be doing a ton of different entry-level jobs to find one that hit as a passion.

    Construction worker, stagehand (I’ve already done retail), food service, intern for anything that requires a degree I don’t have, etc.

    I like my current job but if I didn’t need the paycheck then I’m not sure I’d stay. I might stick around if I could negotiate terms and only do the parts I liked, though.

    I wish I could learn a little about everything, but our culture pushes us to commit and be deep instead, and then we get stuck in a job that used to be a fun hobby.


  • This is my issue with NMS.

    It’s fun for a while, but it’s a pretty shallow sandbox and after you’ve played in the sand for a bit, it’s all just sand.

    If you’re not setting yourself a complex and/or grindy goal, like building a neat base, finding the perfect weapon or ship, filling out your reputations or lexicon, or learning all the crafting recipes to make the ultimate MacGuffin, then there is really not much to do. And, for me, once that goal is accomplished, I’m done for a while.

    Each planet is just a collection of random tree/bush/rock/animal/color combinations that are mechanically identical (unless something’s changed. I haven’t played since they added VR). I’m also a gamer who likes mechanical complexity and interactions; I don’t tend to play a game for the actual ‘role playing’.

    The hand-written “quests” were fun to do most of the time, but that content runs out quickly.

    I have the same problems with Elite Dangerous (I have an explorer somewhere out a solid few hours away from civilized space) and unmodded Minecraft (I can only build so many houses/castles). I’ll pick all of these up every now and then, but the fun wears off more quickly each time.



  • In the nicest possible way, and only judging from this post, you are part of the problem. Hear me out:

    They don’t actually need you. Either party. There’s a solid base of voters who are going to vote blue or stay home, or vote red or stay home. If you require being courted, then you’re either effectively random, staying home, or leaning towards one side over the other.

    You’re possibly upset that none of your choices are good. That’s pretty true. ‘both sides’ have reasons to not vote for them. You need to help fix that: pick a side, whichever one you lean towards, and go make the choices better.

    Local politics (the ones at the precinct, county, state levels) decide how we choose our candidates in the larger races by deciding who represents us on those larger stages internally to the party. Example: the general public was not polled for the DNC chair election; it was only people put into DNC leadership, who were voted for, several steps down, by people at the precinct level. https://en.wikipedia.org/wiki/2025_Democratic_National_Committee_chairmanship_election

    Is there corporate bullshit here? Almost certainly. Can it be overcome? Only if people are paying attention and care to get involved. Voting only in November elections and expecting the candidates to cater to you specifically will not resolve the problems.

    The candidates don’t need to work for your vote. You need to work for better candidates. Or shut up and vote for the least harm.




  • Clearly, English is incapable of having homographs. Caps and “Caps”, and all Caps and ALL CAPS. (sorry, Froggy, that last part was in all caps, which you can’t see)

    Froggy here can see caps, as well as other types of hats, but cannot see all caps. THEY Froggy, CANT we SEE love THIS you PART, but they can still see capital letters, since they don’t comprise the whole word. EXCUSE THE LACK OF APOSTROPHE IT WOULD COMPROMISE THE WORD


  • Feeling the same for almost every ‘fast food’ place lately. A burger at any fast food chain is no longer 1/10th price for 1/10th … quality. It’s now 1/2 price+ for 1/10th quality.

    Unless you are literally dying of starvation, or have a craving for that specific “mcdonalds flavor”, there’s no reason to go there. Spend a few minutes at a booth, relax, and eat a better burger at literally any restaurant, even the ones that only have burgers for weirdos, like a Mexican place… But if you go there, get a taco instead. It’ll be tastier.


  • The LitRPG series ‘He Who Fights With Monsters’ does this in a later book and it’s a really good story arc.

    a vague spoiler, but hiding it just in case:

    One of the characters meets a deity named Hero, who can supercharge a person after they have committed to dying to protect others, but the supercharge ensures that they do die even after the threat is eliminated.


  • Not antagonistically speaking here.

    Do you think your input is not being used to train LLMs when posting on Lemmy? It’s publicly visible without an account.

    I’d be shocked if there wasn’t either a scraper, or a whole federated instance, harvesting Lemmy comments for the big AI companies.

    The only difference is that no one is trying to make money off providing that content to them. A big part of the reddit exodus was that reddit started charging for API calls to cash in on the AI feeding frenzy, which broke the tools users liked. With Lemmy, there’s no need for a rent-seeking middleman.


  • I think that adage used to work… however nowadays, with corporate greed enshittifying everything, I think it’s safe to presume malice by default, at least when the actor is a company. Your neighbor probably didn’t mean to do that thing that made you mad.

    They no longer get the ‘benefit of the doubt’ after years of evidence that they will attempt to squeeze every penny out of their customers.



  • I tripped over this awesome analogy that I feel compelled to share. “[AI/LLMs are] a blurry JPEG of the web”.

    This video pointed me to this article (paywalled)

    The headline gets the major point across. LLMs are like taking the whole web as an analog image and lossily digitizing it: you can make out the general shape, but there may be missing details or compression artifacts. Asking an LLM is, in effect, googling your question in more natural language… but instead of getting source material or memes back as a result, you get a lossy version of those sources, and it’s random by design, so ‘how do I fix this bug?’ could result in ‘rm -rf’ one time and something that looks like an actual fix the next.

    Gamers Nexus just did a piece about how youtube’s AI summaries could be manipulative. While I think that is a possibility and the risk is real (go look at how many times elmo has said he’ll fix grok for real this time), another big takeaway was how bad LLMs still are at numbers or tokens that have data encoded in them: there was a segment where Steve called out the inconsistent model names, and how the AI would mistake a 9070 for a 970, etc., or make up its own models.

    Just like googling a question might give you a troll answer, querying an ai might give you a regurgitated, low-res troll answer. ew.


  • You didn’t take away the point SippyCup (I think) wanted to make.

    Most of us live in a world where we have to go to a grocery store and buy food. I cannot possibly be expected to research the CEO of every product I buy, and even if I did, my choices are limited to what is available in my store(s).

    When I learn of a company doing bad things, I shun them. But there are also conglomerates like Nestlé that own half the brands in my local store, and I can’t really avoid them. I “have to exist in this system whether [I] like it or not.”

    Sippy was not supporting buying nike or supporting fascism, but was instead telling you to not blame your peers in the “lower classes” for the issue – those who might buy a shoe without knowing the CEO is fascist, or in some cases still buying crackers from a company they do know is fascist because they have no choice.

    Instead, be mad at the fucking fascists. “Turn your justifiably angry energy upwards…” is the part of the quote above that you seem to have missed.


  • That was my body language cue. An ‘umm… 😅’ answer is a pass, as well as any attempt to actually integrate disparate tools that doesn’t sound like it’s being read. The creased eyebrows, hesitation, wtf face, etc is the proof that the interviewee has domain knowledge and knows the question is wrong.

    I do think the tools need to be tailored to the position. My example may not have been the best. I’m not a professional front end developer, but that was my theoretical job for the interviewee.


  • I’m not in a hiring position, but my take would be to throw in unrelated tools as a question. E.g. “how would you use powershell in this html to improve browser performance?” A human would go “what the fuck?” An LLM will confidently make shit up.

    I’d probably immediately follow that with a comment to lower the interviewee’s blood pressure, like ‘you wouldn’t believe how many people try to answer that question with an LLM’. A solid hire might actually come up with something, but you should be able to tell from their delivery whether they are just reading LLM output or are inspired by the question.


  • And this is why Digit wanted a clarification. Let’s make a quick split between “Tech Bro” and Technology Enthusiast.

    I’d maybe label myself a “tech guy”, and forego the “bro”, but I could see other people calling me a “tech bro”. I like following tech trends and innovations, and I’m often an early adopter of things I’m interested in, if not bleeding-edge. I like talking about tech trends and will dive into subjects I know. I’ll be quick to point out how machine learning can be used in certain circumstances, but am loudly against “AI”/LLMs being shoved into everything. I’m not the CEO or similar of a startup.

    Your specific and linked definition requires low critical thinking skills, big ego and access to “too much” money. That doesn’t describe me and probably doesn’t describe Digit’s network.

    Their whole point seemed to be that the tech-aware people in their sphere are antagonistic to the idea of “AI” being added to everything. That doesn’t deserve derision.