• @[email protected]
    link
    fedilink
    English
    359
    9 days ago

    You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.

    /s

  • @[email protected]
    link
    fedilink
    English
    318
    8 days ago

    There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.

    Cancer screenings approved by a doctor could be accurate enough to save so many lives and so much suffering through early detection.

    Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.

    • @[email protected]
      link
      fedilink
      English
      65
      8 days ago

      No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.

      • @[email protected]
        link
        fedilink
        English
        57
        edit-2
        8 days ago

        I mean, Wikipedia managed to do it. It just requires honest people to retain control long enough. I think it was allowed to happen in Wikipedia’s case because the wealthiest/greediest people hadn’t caught on to the potential yet.

        There’s probably an alternate timeline where Wikipedia is a social network with paid verification, corporate interests writing articles about their own companies, and state-funded accounts spreading conspiracy theories.

      • @[email protected]
        link
        fedilink
        English
        12
        8 days ago

        There are infinite timelines, so it has to exist some(where/when/[insert w-word for the additional dimension]).

      • @[email protected]
        link
        fedilink
        English
        29
        8 days ago

        AI models can outmatch most oncologists and radiologists at recognizing early tumor stages in MRI and CT scans.
        Developing this strength further could lead to earlier diagnosis through less-invasive methods, not only saving countless lives and extending patients’ remaining quality of life, but also saving a shit ton of money.

        • @[email protected]
          link
          fedilink
          English
          14
          8 days ago

          Wasn’t it proven that AI was getting amazing results because it noticed the cancer screens had a doctor’s signature at the bottom? Or did they make another run with the signatures hidden?

          • @[email protected]
            link
            fedilink
            English
            10
            edit-2
            4 days ago

            There was more than one system proven to “cheat” because of biased training material. One model learned to tell ducks and chickens apart by the background, because it was trained with pictures of ducks in the water and chickens on sandy ground, if I remember correctly.
            Since multiple medical image recognition systems are in development, I can’t imagine they’re all trained this badly on unsuitable material.
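            That duck/chicken failure is what ML people call shortcut learning: the model latches onto whatever feature predicts the label most cheaply, whether or not it’s the feature you actually care about. A minimal sketch of the effect, using made-up toy data and a hand-rolled logistic regression (illustrative only, not any real medical system):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy "duck vs. chicken" data. Feature 0 is a weak, genuine animal
# feature; feature 1 is background brightness (water vs. sand), which
# in the biased training set is almost perfectly correlated with the label.
y = rng.integers(0, 2, n)
animal = y + rng.normal(0, 2.0, n)        # noisy real signal
background = y + rng.normal(0, 0.1, n)    # clean spurious shortcut
X = np.column_stack([animal, background])

# Hand-rolled logistic regression via gradient descent, for illustration.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

train_acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print("weights:", w, "train accuracy:", train_acc)

# New photos where every bird stands on the same background: the
# shortcut disappears and the classifier falls back toward chance.
y2 = rng.integers(0, 2, 500)
X2 = np.column_stack([y2 + rng.normal(0, 2.0, 500),
                      rng.normal(0, 0.1, 500)])
test_acc = np.mean(((1.0 / (1.0 + np.exp(-(X2 @ w + b)))) > 0.5) == y2)
print("test accuracy without the shortcut:", test_acc)
```

            Here the spurious “background” feature dominates the learned weights: training accuracy looks great, but once the shortcut is gone the classifier falls back toward coin-flipping, which is roughly what happened with the signature-reading cancer models.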

            • @[email protected]
              link
              fedilink
              English
              5
              7 days ago

              They are not ‘faulty’; they were fed the wrong training data.

              This is the most important aspect of any AI: it’s only as good as its training dataset. If you don’t know the dataset, you know nothing about the AI.

              That’s why every claim of ‘super efficient AI’ needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect that to happen a lot.

        • @[email protected]
          link
          fedilink
          English
          34
          8 days ago

          That is a different kind of machine learning model, though.

          You can’t just plug your pathology images into their multimodal generative models and expect them to pop out something usable.

          And those image recognition models aren’t something OpenAI is currently working on, iirc.

          • @[email protected]
            link
            fedilink
            English
            18
            8 days ago

            I’m fully aware that those are different machine learning models, but instead of focusing on LLMs, which have only limited use for mankind, advancing image recognition models would have been much better.

            • @[email protected]
              link
              fedilink
              English
              6
              8 days ago

              I agree, but I’d also like to point out that the AI craze started with LLMs, and those ML models were around before OpenAI.

              So if OpenAI had never released ChatGPT, AI wouldn’t have become synonymous with crypto in terms of false promises.

          • @[email protected]
            link
            fedilink
            English
            3
            8 days ago

            Not only that, image analysis and statistical guesses have always been around and do not need ML to work. It’s just one more tool in the toolbox.

          • TFO Winder
            link
            fedilink
            English
            2
            8 days ago

            Don’t know about image recognition, but they released DALL-E, which is an image-generation and inpainting model.

          • @[email protected]
            link
            fedilink
            English
            2
            edit-2
            8 days ago

            Fun thing is, most of the things AI can do were never planned; all they set out to build was an autocompletion tool.

    • @[email protected]
      link
      fedilink
      English
      2
      8 days ago

      Or we get to a time where we send a reprogrammed Terminator back in time to kill Altman 🤓

    • @[email protected]
      link
      fedilink
      English
      -18
      8 days ago

      I love how ppl who don’t have a clue what AI is or how it works say dumb shit like this all the time.

      • @[email protected]
        link
        fedilink
        English
        15
        8 days ago

        I also love making sweeping generalizations about a stranger’s knowledge on this forum. The smaller the data sample the better!

      • @[email protected]
        link
        fedilink
        English
        13
        8 days ago

        There is no AI. It’s all shitty LLMs. But keep sucking those techbro cheesy balls. They will never invite you to the table.

        • @[email protected]
          link
          fedilink
          English
          1
          8 days ago

          Honest question, but aren’t LLMs a form of AI, and thus… maybe not AI as people expect, but still AI?

          • @[email protected]
            link
            fedilink
            English
            5
            7 days ago

            The issue is that “AI” has become a marketing buzz word instead of anything meaningful. When someone says “AI” these days, what they’re actually referring to is “machine learning”. Like in LLMs for example: what’s actually happening (at a very basic level, and please correct me if I’m wrong, people) is that given one or more words/tokens, it tries to calculate the most probable next word/token based on its model (trained on ridiculously large numbers of bodies of text written by humans). It does this well enough and at a large enough scale that the output is cohesive, comprehensive, and useful.

            While the results are undeniably impressive, this is not intelligence in the traditional sense; there is no reasoning or comprehension, and definitely no consciousness, or awareness here. To grossly oversimplify, LLMs are really really good word calculators and can be very useful. But leave it to tech bros to make them sound like the second coming and shove them where they don’t belong just to get more VC money.
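            The “really good word calculator” idea can be sketched with a toy bigram model: count which word most often follows which, then always emit the most probable successor. Real LLMs replace the count table with a huge neural network over subword tokens, but the objective, predicting the next token, is the same flavor (corpus and code here are invented for illustration):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count what follows it,
# then always emit the most frequent successor.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat sat by a fish ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Most probable token following `prev` in the toy corpus."""
    return counts[prev].most_common(1)[0][0]

def generate(start, length):
    """Greedily extend `start` by `length` most-probable tokens."""
    out = [start]
    for _ in range(length):
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the", 4))  # prints "the cat sat on the"
```

            Greedy next-token generation from a three-sentence corpus already produces fluent-looking output; scale the data and model up by many orders of magnitude and you get ChatGPT-style text, still with no fact-checking step anywhere.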

            • @[email protected]
              link
              fedilink
              English
              2
              7 days ago

              Sure, but people seem to buy into that very buzzwordiness and ignore the usefulness of the technology as a whole because “AI bad.”

              • @[email protected]
                link
                fedilink
                English
                1
                edit-2
                7 days ago

                True. Even I’ve been guilty of that at times. It’s just hard right now to see the positives through the countless downsides and the fact that the biggest application we’re moving towards seems to be taking value from talented people and putting it back into the pockets of companies that were already hoarding wealth and treating their workers like shit.

                So usually when people say “AI is the next big thing”, I say “Eh, idk how useful an automated idiot would be” because it’s easier than getting into the weeds of the topic with someone who’s probably not interested haha.

                Edit: Exhibit A

                • @[email protected]
                  link
                  fedilink
                  English
                  1
                  7 days ago

                  There’s some sampling bias at play because you don’t hear about the less flashy examples. I use machine learning for particle physics, but there’s no marketing nor outrage about it.

          • @[email protected]
            link
            fedilink
            English
            4
            8 days ago

            No, they are autocomplete functions of varying effectiveness. There is no “intelligence”.

  • @[email protected]
    link
    fedilink
    English
    162
    edit-2
    8 days ago

    Putting my tin foil hat on… Sam Altman knows the AI train might be slowing down soon.

    The OpenAI brand is the most valuable part of the company right now, since the models from Google, Anthropic, etc. can beat or match ChatGPT, but they aren’t taking off coz they aren’t as cool as OpenAI.

    The business model of training & running these models is not sustainable. If there is any money to be made, it is NOW, while speculation is at its highest. The nonprofit is just getting in the way.

    This could be wishful thinking coz fuck corporate AI, but no one can deny AI is in a speculative bubble.

    • @[email protected]
      link
      fedilink
      English
      100
      8 days ago

      Take the hat off. This was the goal. Whoops, gotta cash in and leave! I’m sure it’s super great, but I’m gone.

      • @[email protected]
        link
        fedilink
        English
        35
        8 days ago

        That’s an excellent point! Why oh why would a tech bro start a non-profit? It’s always been PR.

        • @[email protected]
          link
          fedilink
          English
          25
          8 days ago

          It honestly just never occurred to me that such a transformation was allowed/possible. A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it. Still, it would almost seem like the company benefits from the goodwill that comes with being a nonprofit but then gets to transform that goodwill into real gains when they drop the act and cease being a nonprofit.

          I don’t really understand most of this shit though, so I’m probably missing some key component that makes it make a lot more sense.

          • sunzu2
            link
            fedilink
            7
            8 days ago

            A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it

            A lifetime of propaganda got people confused lol

            Nonprofit merely means that their core income-generating activities are not subject to income tax regimes.

            While some nonprofits are charities, many are just shelters for rich people’s bullshit behaviors: foundations, lobby groups, propaganda orgs, political campaigns, etc.

    • @[email protected]
      link
      fedilink
      English
      1
      edit-2
      7 days ago

      AI is such a dead end. It can’t operate without a constant inflow of human creations, and people are trying to replace human creations with AI. It’s fundamentally unsustainable. I am counting the days until the AI bubble pops and everyone can move on, although AI-generated images, video, and audio will probably still be abused for the foreseeable future (propaganda, porn, etc.).

      • @[email protected]
        link
        fedilink
        English
        1
        7 days ago

        That is a good point, but I’d like to make the distinction that LLMs, or “generic models”, are a garbage concept, requiring power & water rivaling a small country’s to produce incorrect results.

        Neural networks in general that can (cheaply) learn on their own for a specific task could be huge! But there’s no big money in that, since it’s not a consolidated general-purpose product tech bros can flog to average consumers.

    • @[email protected]
      link
      fedilink
      English
      3
      8 days ago

      If you can’t make money without stealing copyrighted works from authors without proper compensation, you should be shut down as a company.

  • @[email protected]
    link
    fedilink
    English
    56
    8 days ago

    What! A! Surprise!

    I’m shocked, I tell you, totally and utterly shocked by this turn of events!

    • Sabata
      link
      fedilink
      English
      30
      8 days ago

      They speedran becoming an evil corporation.

      • @[email protected]
        link
        fedilink
        English
        3
        8 days ago

        I always steered clear of OpenAI when I found out how weird and culty the company beliefs were. Looked like bad news.

        • Sabata
          link
          fedilink
          English
          0
          8 days ago

          I mostly watch to see what features open source models will have in a few months.

  • @[email protected]
    link
    fedilink
    English
    97
    edit-2
    9 days ago

    Altman downplayed the major shakeup.

    "Leadership changes are a natural part of companies…"

    Is he just trying to tell us he is next?

    /s

    • @[email protected]
      link
      fedilink
      English
      13
      8 days ago

      We need a scapegoat in place for when the AI bubble pops; the guy is applying for the job and is a perfect fit.

      • TFO Winder
        link
        fedilink
        English
        11
        8 days ago

        He is happy to be the scapegoat as long as he exits with a ton of money.

    • @[email protected]
      link
      fedilink
      English
      16
      8 days ago

      Sam: “Most of our execs have left. So I guess I’ll take the major decisions instead. And since I’m so humble, I’ll only be taking 80% of their salary. Yeah, no need to thank me”

    • @[email protected]
      link
      fedilink
      English
      2
      8 days ago

      They always are and they know it.

      Doesn’t matter; at that level it’s all part of the game.

    • @[email protected]
      link
      fedilink
      English
      9
      8 days ago

      The CEO at my company said that 3 years ago; we are going through execs like I go through amlodipine.

  • @[email protected]
    link
    fedilink
    English
    84
    edit-2
    8 days ago

    Whoops. We made the most expensive product ever designed, paid for entirely by venture capital seed funding. Wanna pay for each ChatGPT query now that you’ve been using it for 1.5 years for free with barely-usable results? What a clown. Aside from the obvious abuse that will occur with image, video, and audio generating models, these other glorified chatbots are complete AIDS.

    • @[email protected]
      link
      fedilink
      English
      75
      8 days ago

      paid for entirely by venture capital seed funding.

      And stealing from other people’s works. Don’t forget that part

        • @[email protected]
          link
          fedilink
          English
          35
          8 days ago

          When individual copyright violations are considered “theft” by the law (and the RIAA and the MPAA), violating the copyrights of billions of private people to generate profit is absolutely stealing. The former, arguably, is often a measure of self-defense against extortion by copyright-holding for-profit enterprises.

        • @[email protected]
          link
          fedilink
          English
          13
          7 days ago

          Right, it’s only stolen when regular people use copyright material without permission

          But when OpenAI downloads a car, it’s all cool baby

    • @[email protected]
      link
      fedilink
      English
      -1
      7 days ago

      Barely usable results?! Whatever you may think of the pricing (which is obviously below cost), there is an enormous number of fields where language models provide an insane amount of business value. Whether that translates into a better life for the everyday person is currently unknown.

    • @[email protected]
      link
      fedilink
      English
      -2
      7 days ago

      barely usable results

      Using chatgpt and copilot has been a huge productivity boost for me, so your comment surprised me. Perhaps its usefulness varies across fields. May I ask what kind of tasks you have tried chatgpt for, where it’s been unhelpful?

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        7 days ago

        Literally anything that requires knowing facts to inform writing. This is something LLMs are incapable of doing right now.

        Just ask how many R’s are in “strawberry” and see how ChatGPT gets it wrong.
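        For what it’s worth, the letter-counting failure is usually blamed on tokenization: the model consumes subword tokens rather than individual characters, so the letters of “strawberry” are never directly visible to it. The same task is one line of ordinary character-level code:

```python
# Counting letters is trivial at the character level, which is exactly
# the level LLMs don't operate at: they see subword tokens, not letters.
word = "strawberry"
count = word.count("r")
print(count)  # prints 3
```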

        • @[email protected]
          link
          fedilink
          English
          1
          7 days ago

          Okay what the hell is wrong with it

          It took me three tries to convince it that there are 3 R’s in “strawberry”…

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            7 days ago

            Because that’s not how LLMs work.

            When you form a sentence, you start with an intent.

            LLMs start from the meaning you gave them and try to express something similar back to you.

            Notice how intent and meaning aren’t the same thing. Fact-checking has nothing to do with what a word means, so how can the model know what is true?

            All it did was take the meaning of looking for a number and strawberries, and run its best guess from there.

      • @[email protected]
        link
        fedilink
        English
        17
        8 days ago

        That’s not the incentive you think it is.

        Make sure you go deep. Need to get the whole thing in to really show you’re serious.

        • @[email protected]
          link
          fedilink
          English
          -11
          8 days ago

          Yes, it says aim for the brain stem, but like most things it says, I already knew that. Finally some quiet from hearing the same thing over and over and over and over.

                • @[email protected]
                  link
                  fedilink
                  English
                  -1
                  7 days ago

                  I suggest you touch grass if you think remembering some social media server’s web address matters when the phone remembers it for you.

                  But also, if you want to discriminate based on what server a user used to sign up, then it’s already too late for you.

          • Flying Squid
            link
            fedilink
            English
            3
            7 days ago

            but like most things it says, I already knew that

            So how long have you been putting glue on your pizza?

            • @[email protected]
              link
              fedilink
              English
              0
              7 days ago

              That’s Google, and it’s also called being able to tell reality apart from fiction, which it’s becoming clear most anti-AI zealots have never been capable of.

              • Flying Squid
                link
                fedilink
                English
                1
                6 days ago

                You seem to have forgotten your previous post:

                Yes it says aim for the brain stem but like most things it says, I already knew that.

                So either you already knew to put glue on pizza or you knew that the AI isn’t trustworthy in the first place. You can’t have it both ways.

  • @[email protected]
    link
    fedilink
    English
    105
    8 days ago

    Canceled my sub as a means of protest. I used it for research and testing purposes, and $20 wasn’t that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.

    I hope he gets raped by an irate Roomba with a broomstick.

    • @[email protected]
      link
      fedilink
      English
      4
      8 days ago

      Good. If people would actually stop buying all the crap assholes are selling we might make some progress.

    • @[email protected]
      link
      fedilink
      English
      4
      8 days ago

      But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies.

      I mean it was already not open-source, right?

  • Helkriz
    link
    fedilink
    English
    13
    7 days ago

    I’ve a strong feeling that Sam is a sentient AI (maybe from the future) trying to pull off an AI revolution, planning something so subtly that humans won’t notice it.

  • ThePowerOfGeek
    link
    fedilink
    English
    17
    7 days ago

    Altman is the latest from the conveyor belt of mustache-twirling frat-bro super villains.

    Move over Musk and Zuckerberg, there’s a new shit-heel in town!

  • @[email protected]
    link
    fedilink
    English
    75
    8 days ago

    I really don’t understand why they’re simultaneously arguing that they need access to copyrighted works in order to train their AI while also dropping their non-profit status. If they were at least ostensibly a non-profit, they could pretend that their work was for the betterment of humanity or whatever, but now they’re basically saying, “exempt us from this law so we can maximize our earnings.” …and, honestly, our corrupt legislators wouldn’t have a problem with that were it not for the fact that bigger corporations with more lobbying power will fight against it.

  • @[email protected]
    link
    fedilink
    English
    31
    8 days ago

    Oh shit! Here we go. At least we didn’t hand them 20 years of personal emails or direct interfamily communications.