• rustydrd@sh.itjust.works · ↑37 ↓8 · 3 days ago

    Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we got now is just dumb.

    • MrMcGasion@lemmy.world · ↑30 ↓3 · edited · 3 days ago

      I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential, because they keep throwing more and more hardware and money at LLMs and generative AI models; they don’t understand the technology and see it as a way to get rich and powerful quickly.

      • NewDayRocks@lemmy.dbzer0.com · ↑7 ↓3 · 2 days ago

        AI is good and cheap now because businesses are funding it at a loss, so I’m not sure what you mean here.

        The problem is that it’s cheap, so anyone can make whatever they want, and most people make low-quality slop, which is why it’s not “good” in your eyes.

        Making a cheap or efficient AI doesn’t help the end user in any way.

        • SolarBoy@slrpnk.net · ↑7 · 2 days ago

          It appears good and cheap, but it’s actually burning money, energy, and water like crazy. I think somebody mentioned that generating a 10-second video uses about as much energy as riding a bike for 100 km.

          It’s not sustainable. I think what the person above you is referring to is whether we ever manage to make LLMs and the like that can run locally on a phone or laptop with good results. That would let people experiment and try things out themselves, instead of depending on a monthly subscription to some service that can change at any time.
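
          To make the “run it locally” idea concrete, here is a minimal sketch using the llama-cpp-python library; the model filename is a placeholder for whatever quantized GGUF model someone has downloaded, and the settings are just illustrative defaults.

          ```python
          # Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
          # Everything runs on the local machine: no API key, no subscription, no per-token bill.
          from llama_cpp import Llama

          llm = Llama(
              model_path="models/some-small-model-q4_k_m.gguf",  # placeholder: any local GGUF file
              n_ctx=2048,                                        # small context window to fit laptop RAM
          )

          out = llm(
              "Explain in one sentence why local inference avoids vendor lock-in:",
              max_tokens=128,
              temperature=0.7,
          )
          print(out["choices"][0]["text"])
          ```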

          • krunklom@lemmy.zip · ↑2 · 2 days ago

            I mean, I have a 15 amp fuse in my apartment and a 10-second video takes like 10 minutes to make. I don’t know exactly how much energy a 4090 draws, but anyone who has an issue with me using mine to generate a 10-second video had better not play PC games.
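
            For a rough sense of scale, a back-of-envelope sketch (assuming the card sits near its 450 W rated board power for the whole render, which is generous):

            ```python
            # Back-of-envelope energy estimate for one 10-second clip.
            # Assumes an RTX 4090 drawing roughly its 450 W rated board power
            # for the full 10-minute render described above.
            gpu_watts = 450
            render_minutes = 10

            kwh = gpu_watts * (render_minutes / 60) / 1000
            print(f"~{kwh:.3f} kWh per clip")  # ~0.075 kWh, about the same as ~10 minutes of gaming on the same card
            ```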

          • NewDayRocks@lemmy.dbzer0.com · ↑1 · 2 days ago

            You and OP are misunderstanding what is meant by good and cheap.

            It’s not cheap from a resource perspective, like you say. However, that is irrelevant to the end user. It’s “cheap” already because it is either free or costs the user considerably less than the resources consumed. OpenAI or Meta or Twitter are paying the cost. You do not need to pay for a monthly subscription to use AI.

            So the quality of the content created is not limited by cost.

            If the AI bubble popped, that wouldn’t improve AI quality.

        • MrMcGasion@lemmy.world · ↑2 · 2 days ago

          I’m using “good” in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint; continuing to throw money and data at it will only result in minimal improvement.

          What I mean by “good AI” is the potential of new types of AI models to be trained for things like diagnosing cancer and other predictive tasks we haven’t thought of yet, tasks that actually have the potential to help humanity (and not just put artists and authors out of their jobs).

          The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won’t be a clear profit motive, and they won’t be able to afford thousands of $20,000 GPUs like the ones being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, today’s hardware will become affordable used, and hopefully it will actually get put to work on useful AI projects.

          • NewDayRocks@lemmy.dbzer0.com · ↑1 · 2 days ago

            Ok. Thanks for clarifying.

            That said, I am pretty sure AI is already used in the medical field for research and diagnosis. This “AI everywhere” trend you are seeing is the result of everyone trying to stick AI into everything in every which way.

            The thing about the AI boom is that lots of money is being invested into all fields. A bubble pop would result in investment money drying up everywhere, not make access to AI more affordable as you are suggesting.

      • FauxLiving@lemmy.world · ↑1 ↓2 · 2 days ago

        > I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.

        I can’t imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about “AI”.

        For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train diffusion models that generate a string of amino acids which folds into a structure with the specified properties (much like how text prompts drive an image generator).

        This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.


        Robotics is undergoing similar revolutionary changes. Here is a state-of-the-art robot made by Boston Dynamics using a human-programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ

        Here is a Boston Dynamics robot “using reinforcement learning with references from human motion capture and animation.”: https://www.youtube.com/watch?v=I44_zbEwz_w


        Object detection, image processing, logistics, speech recognition, etc. These are all things whose software required tens of thousands of hours of science and engineering time to develop, and the software wasn’t great. Now, a college freshman with free tools and a graphics card can train a computer vision network that outperforms that hand-engineered software.
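
        As a rough sketch of what that looks like in practice (assuming PyTorch/torchvision, with a placeholder folder of labeled images), transfer learning from a pretrained network is only a few dozen lines:

        ```python
        # Transfer-learning sketch: fine-tune a pretrained ResNet-18 on your own labeled images.
        # "data/train" is a placeholder ImageFolder layout: data/train/<class_name>/<image files>.
        import torch
        from torch import nn, optim
        from torch.utils.data import DataLoader
        from torchvision import datasets, models, transforms

        tfm = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])
        train_ds = datasets.ImageFolder("data/train", transform=tfm)
        loader = DataLoader(train_ds, batch_size=32, shuffle=True)

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # new head for our classes

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model.to(device)
        opt = optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()

        for epoch in range(3):  # a few epochs is often enough when starting from pretrained weights
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
            print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
        ```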

        AI isn’t just LLMs and image generators; those may as well be toys. I’m sure LLMs and image generation will eventually be good, but the only reason they seem amazing now is that they are a novel capability computers have not had before. Their actual impact on the real world will be minimal outside of specific fields.

            • mojofrododojo@lemmy.world · ↑1 · 16 hours ago

              then pray tell where is it working out great?

              again, you have nothing to refute the evidence placed before you except “ah, that’s a bunch of links” and “not everything is an LLM”.

              so tell us where it’s going so well.

              Not the mecha-Hitler Swiftie porn, heh, yeah, I wouldn’t want to be associated with it either. But your aibros don’t care.

                • mojofrododojo@lemmy.world · ↑1 ↓1 · 10 hours ago

                  ah, what great advances has AlphaFold delivered?

                  and that robotics training, where has that improved human lives? because near as I can tell it’s simply going to put people out of work. the lowest paid people. so that’s just great.

                  but let’s cut you some slack: let’s leave it to protein folding and robotics and stop sticking it into every fuckin facet of our civilization.

                  and protein folding and robotics training wouldn’t require Google, X, Meta, and your grandmother to be rolling out data centers EVERYWHERE, driving up the cost of electricity for the average user while polluting the air and water.

                  Faux, I get it, you’re an aibro, you really are a believer. Evidence isn’t going to sway you because this isn’t evidence driven. The suffering of others isn’t going to bother you, that’s their problem. The damage to the ecosystem isn’t your problem, you apparently don’t need water or air to exist. You got it made bro.

                  pfft.

                    • FauxLiving@lemmy.world · ↑1 · edited · 9 hours ago

                    > ah, what great advances has AlphaFold delivered?

                    The ability to predict, for any sequence of amino acids, the shape of the protein it will form. This also led other scientists to create diffusion models that can be prompted with desired protein properties and that generate the amino acid sequence which will fold into a protein with those properties. We can also write those arbitrary sequences into mRNA and introduce it into a localized area of our cells.

                    But what do I know, I’m just an aibro. So I’ll listen to the scientists who write peer-reviewed papers published in scientific journals: “AI-Enabled Protein Design: A Strategic Asset for Global Health and Biosecurity”.

                    > and that robotics training, where has that improved human lives?

                    Well, Fukushima would be one place.

                    Now they can use disposable robotic dogs to do cleanup and monitoring in high-radiation areas, a job that humans were doing at the beginning. I’m sure those humans appreciate not having to die of cancer early.

                    > Faux, I get it, you’re an aibro, you really are a believer. Evidence isn’t going to sway you because this isn’t evidence driven. The suffering of others isn’t going to bother you, that’s their problem. The damage to the ecosystem isn’t your problem, you apparently don’t need water or air to exist. You got it made bro

                    🙄. If you can’t win an argument, just switch to insults: the tactic of choice for the ignorant.

        • MrMcGasion@lemmy.world · ↑2 · 2 days ago

          Oh, I have read and heard about all those things; none of them (to my knowledge) are being done by OpenAI, xAI, Google, Anthropic, or any of the large companies fueling the current AI bubble, which is why I call it a bubble. The things you mentioned are where AI has potential, and I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators. And sure, maybe some of those innovators end up getting bought by the larger companies, but that’s not as good for their start-ups or for humanity at large.

          • FauxLiving@lemmy.world · ↑1 ↓2 · 1 day ago

            AlphaFold is made by DeepMind, an Alphabet (Google) subsidiary.

            Google and OpenAI are also both developing world models.

            These are a way to generate realistic environments that behave like the real world. They are core to generating the volume of synthetic training data that would make training robotics models massively more efficient.

            Instead of building an actual physical robot and having it slowly interact with the world while learning from its one physical body, the robot’s builder could create a world-model representation of the robot’s physical characteristics and attach their control software to the simulation. Now the robot can train in a simulated environment. Then you can create multiple parallel copies of that setup to generate training data rapidly.

            It would be economically unfeasible to build 10,000 prototype robots to generate training data, but it is easy to see how running 10,000 copies of the simulation in parallel is possible.
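
            As a toy illustration of that parallelism, a sketch assuming the Gymnasium library, with CartPole standing in for a robot-specific world model and 16 copies standing in for the thousands a lab would actually run:

            ```python
            # Run many simulated environments in lockstep and collect experience from all of them.
            # CartPole is a stand-in for a real robot simulation; the policy here is random.
            import gymnasium as gym

            num_envs = 16
            envs = gym.vector.SyncVectorEnv(
                [lambda: gym.make("CartPole-v1") for _ in range(num_envs)]
            )

            obs, info = envs.reset(seed=0)
            total_steps = 0
            for _ in range(1000):
                actions = envs.action_space.sample()  # a trained controller would go here
                obs, rewards, terminated, truncated, infos = envs.step(actions)
                total_steps += num_envs               # finished episodes auto-reset inside the vector env

            envs.close()
            print(f"collected {total_steps} simulated steps without building a single physical robot")
            ```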

            > I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators.

            On the other hand, the billions of dollars being thrown at these companies is being used to hire machine learning specialists. The real innovators who have the knowledge and talent to work on these projects almost certainly work for one of these companies or the DoD. This demand for machine learning specialists (and their high salaries) drives students to change their major to this field and creates more innovators over time.

      • haungack@lemmy.dbzer0.com · ↑1 ↓1 · 2 days ago

        I don’t know if the current AI phase is a bubble, but I agree with you that if it were a bubble and it burst, that wouldn’t somehow stop or end AI; it would set off a new wave of innovation instead.

        I’ve seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn’t exactly die.