I started a local vibecoders group because I think it has the potential to help my community.

(What is vibecoding? It’s a new word, coined last month. See https://en.wikipedia.org/wiki/Vibe_coding)

Why might it be part of a solarpunk future? I often see and am inspired by solarpunk art that depicts relationships and family happiness set inside a beautiful blend of natural and technological wonder. A mom working on her hydroponic garden as the kids play. Friends chatting as they look at a green cityscape.

All of these visions share what I would call a three-way harmony: between humankind and itself, between humankind and nature, and between nature and technology.

But how is this harmony achieved? Do the “non-techies” live inside a hellscape of technology that other people have created? No! At least, I sure don’t believe in that vision. We need to be in control of our technology, able to craft it, change it, adjust it to our circumstances. Like gardening, but with technology.

I think vibecoding is a whisper of a beginning in this direction.

Right now, the capital requirements to build software are extremely high. Think of what it has cost to develop an app like Instagram: probably tens or even hundreds of millions of dollars. Only corporations can realistically afford to build this type of software; local communities are priced out.

But imagine if everyone could (vibe)code, at least to some degree. What if you could build just the habit-tracking app you need, in under an hour? What if you didn’t need to be an Open Source software wizard to mold an existing app into the app you actually want?
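
To make that concrete, here is roughly what I mean (a sketch of my own, about the size of thing a single vibecoding session can produce): a complete little command-line habit tracker, the whole app in one file.

```python
# A one-file command-line habit tracker: run "python habits.py <habit>"
# to log today's entry, or "python habits.py" to see all your streaks.
import json
import sys
from datetime import date
from pathlib import Path

DATA_FILE = Path.home() / ".habits.json"

def load() -> dict:
    return json.loads(DATA_FILE.read_text()) if DATA_FILE.exists() else {}

def save(data: dict) -> None:
    DATA_FILE.write_text(json.dumps(data, indent=2))

def main() -> None:
    data = load()
    if len(sys.argv) < 2:
        # No arguments: show every habit and how many days it's been logged.
        for habit, days in sorted(data.items()):
            print(f"{habit}: {len(days)} day(s), last logged {max(days)}")
        return
    habit = " ".join(sys.argv[1:])
    days = set(data.get(habit, []))
    days.add(date.today().isoformat())  # logging twice in a day is harmless
    data[habit] = sorted(days)
    save(data)
    print(f"Logged '{habit}' for {date.today()}.")

if __name__ == "__main__":
    main()
```

Nothing fancy, but it is exactly the app its one user needs, and they can keep reshaping it.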

Having AI help us build software drops the capital requirements of software development from millions of dollars to thousands, maybe even hundreds. It’s possible (for me, at least) to imagine a future of participative software development–where the digital rules of our lives are our own, fashioned individually and collectively. Not necessarily by tech wizards and esoteric capitalists, but by all of us.

Vibecoding isn’t quite there yet–we aren’t quite to the Star Trek computer just yet. I don’t want to oversell it and promise the moon. But I think we’re at the beginning of a shift, and I look forward to exploring it.

P.S. If you want to try vibecoding out, I recommend v0 among all the tools I’ve played with. It has the most accurate results with the least pain and frustration for now. Hopefully we’ll see lots of alternatives and especially open source options crop up soon.

  • noodle (he/him)@lemm.ee · 3 months ago

    Using insecure code that a glorified autocorrect has spat out hopefully isn’t going to be part of the future I’ll be living in.
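
    Case in point (a made-up minimal example of my own, not actual model output): the classic hole this stuff keeps reproducing is SQL assembled by string interpolation, shown next to the boring parameterized version that’s actually safe.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    # The pattern that gets spat out: user input pasted straight into SQL.
    # Input like  ' OR '1'='1  turns the WHERE clause into "match everyone".
    def find_user_unsafe(name: str):
        query = f"SELECT * FROM users WHERE name = '{name}'"
        return conn.execute(query).fetchall()

    # The boring, safe version: a parameterized query; the driver escapes input.
    def find_user_safe(name: str):
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    print(find_user_unsafe("' OR '1'='1"))  # leaks every row, passwords included
    print(find_user_safe("' OR '1'='1"))    # returns nothing, as it should
    ```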

  • Nafeon (Lemmy, don’t “@” this)@pawb.social · 3 months ago (edited)

    Yeah, sorry, no. Solarpunk is about community, so if anything, pair programming is Solarpunk. I don’t think that talking in isolation to an auto-completion system is Solarpunk.

    Maybe in like 300 years with some kind of robots, but that’s not really the scope of solarpunk, tbh.

    Btw, vibecoding is a horrifying name for the crisis you’ll get when you try to fix the code your LLM spat out in production while the customer demands that it work.

    (Recent example: https://cloudisland.nz/@daisy/114182826781681792 )

  • Taleya@aussie.zone · 3 months ago (edited)

    Tech here, married to a dev, friends with several devs.

    LLMs are shit coders. They are absolute ecological rapists and garbage vaporware for 90% of the uses people try to wedge them into.

    The capital investment for software is not extremely high. It’s standard wages and learned skill. IG was bought, cherry-picked, and twisted to suit Meta’s data-thieving desires. There are literally millions of people producing and sharing code and software for free, just for shits and giggles. GNU has been a very real thing for generations now.

    Also: gatekept tech knowledge is not required for the harmony of which you speak. People aren’t being excluded from a solarpunk utopia because they can’t write an app. All that is required is a willingness to put in the work to do things in a less damaging way, and using the slop commonly misnamed AI is the antithesis of that.

  • houseofleft@slrpnk.net · 3 months ago

    I think the pretty universal answer in all these comments is “no”. I think that’s fair, but I’d add some caveats.

    There’s a lot of negative sentiment here around LLMs, which I agree with, but I think it’s easy to imagine some hypothetical future where LLMs exist without the current water/energy overuse, the hallucinations, or big companies stealing individuals’ work. Whether that future is likely or not, I think it’s possible.

    The main reason vibe coding isn’t solarpunk is that, taken by itself, it’s not in any way related to ecological stewardship, anti-capitalist community building, or anything else that’s core to solarpunk. Vibe coding might or might not be part of some “cool techy future”, in the same way as flying cars, robots, and floating cities, but that’s not a reason to consider it solarpunk.

    If you’re into LLMs and solarpunk, then instead of arguing that LLMs are solarpunk, you can make efforts to push them toward being more solarpunk. How can LLMs support communities instead of corporations? How can we, through weight sharing and various optimisations, make LLMs less damaging to the environment? Etc. That’d at least be a solarpunk way to go about LLMs, even if LLMs aren’t inherently solarpunk.
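
    To make the “various optimisations” part concrete, here’s a toy sketch of my own (just numpy, nothing production-grade): quantising weights from 32-bit floats to 8-bit integers cuts memory traffic, and with it energy per inference, by roughly four times.

    ```python
    import numpy as np

    # Toy post-training quantisation: store weights as int8 plus one float
    # scale factor instead of float32, roughly a 4x cut in memory traffic.
    weights = np.random.randn(1024, 1024).astype(np.float32)

    scale = np.abs(weights).max() / 127.0            # map the range onto int8
    quantised = np.round(weights / scale).astype(np.int8)
    restored = quantised.astype(np.float32) * scale  # what inference would use

    print(f"float32: {weights.nbytes / 1e6:.1f} MB")
    print(f"int8:    {quantised.nbytes / 1e6:.1f} MB")
    print(f"max rounding error: {np.abs(weights - restored).max():.4f}")
    ```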

    • canadaduane@lemmy.ca (OP) · 2 months ago

      This is exactly what I’m trying to do, but I was taken aback by how negatively the solarpunk community took it. I thought of myself as solarpunk, but I’ve had to reconsider since posting this.

      • houseofleft@slrpnk.net · 2 months ago

        That’s sad to hear. People on the internet can seem harsh; I think it’s probably too easy to forget there’s a real person behind most questions.

        It’s been like a month now, and I still don’t really think LLMs are solarpunk. Trying to make them more open and community-based sounds worthwhile though, so good luck with it!

        Massive side point, but if you’re interested in “empowering people who don’t want to deal with the technical details of coding”, check out the ideas around “end-user programming”. It’s a pretty broad church, but there’s some cool stuff happening under that term that it sounds like you’d like.

    • strongoose@slrpnk.net · 3 months ago

      I agree with your assessment, but I’m more pessimistic about LLMs as a technology. The Luddites tell us that machines are not value-neutral - we should ask who the LLMs serve.

      The core function of an LLM is to enclose public commons (aggregate, open-access human knowledge) in a centrally-controlled black box. It’s not a coincidence that corporations are trying to replace search with LLM summaries - the point is for the model to be an intermediary between the user and the information they need.

      Vibecoding embraces this intermediation. To the vibecoder, an understanding of the technology they’re building is simply a cost to be paid, and if they can avoid paying it, so much the better. This is misguided. Knowledge is power, and we cede that power at our peril. Solarpunk is punk, and punk is DIY, and DIY means taking back ownership of spaces and technologies.

      I won’t say that it’s inherently wrong to cede that ownership tactically. Perhaps the OP is building essential tools that their communities couldn’t otherwise access. But short-term fixes a solarpunk future do not make.

      • signaleleven@slrpnk.net · 28 days ago

        I have all the same issues most of y’all have with the moral and environmental problems of giant corporate models, but I take issue with this statement:

        The core function of an LLM is to enclose public commons (aggregate, open-access human knowledge) in a centrally-controlled black box.

        The core function of an LLM is to generate statistically plausible text (which is what my totally open-source mobile keyboard is doing as I type, using a very small transformer-based LLM, for instance).
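
        (To make “statistically plausible” concrete, here’s a toy word-level sketch of the same job, using bigram counts instead of a transformer: count what tends to follow what, then sample the next word from that distribution.)

        ```python
        import random
        from collections import Counter, defaultdict

        # Toy next-word generator: the same job as an LLM or a keyboard's
        # suggestion strip, minus the transformer. Just bigram statistics.
        corpus = "the cat sat on the mat and the cat saw the dog".split()

        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        word = "the"
        output = [word]
        for _ in range(8):
            candidates = follows.get(word)
            if not candidates:
                break  # dead end: nothing ever followed this word in the corpus
            # Pick the next word in proportion to how often it followed this one.
            word = random.choices(list(candidates), weights=candidates.values())[0]
            output.append(word)

        print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
        ```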

        Using it to provide an answer to a search instead of returning sources is 100% the evil you described. But that is a shitty use of the technology, and it would be unfair to pin it entirely on the technology itself.

        LLMs are not going away. We might disagree on their usefulness (I flip-flop daily on my opinion about it, which is usually a sign that something is inherently neutral), but zealous blanket rejections worry me a bit.

        The other knee-jerk reaction, about energy (and water, though that part is avoidable) usage, is also something I try to process in a compartmentalized way. It needs to improve, and the current scale of growth is unsustainable. But does that invalidate everything currently being explored or researched?

        The push for more efficiency is vital and rightful: do more with less. But while it’s fair to criticize someone for using an incandescent light bulb instead of better technology to, say, illuminate a room, criticizing them for using light in that room at all is wrong, IMHO. We don’t need less light (well, yes, outdoors, but for different reasons); we need better technology and cleaner energy, so we don’t have to worry about who is turning on which light. I get that “AI” is power-hungry and that this needs to improve, but I am very uncomfortable with the idea that we should decide a priori whether something is worth using energy on. It’s… a bit draconian?

        I know it’s not a super original position (“a tool is just a tool”), and I’m trying to work through this myself. As I type this, I think of PoW blockchains as a counterexample I would bring up to debate myself: yes, it looks like there are usages that are inherently “wrong”. Why do I find blockchain worse? Because I consider it unworthy of the energy spent on it, which makes me “guilty” of exactly what I just criticized…

        Damn, it’s hard to try to have opinions!


        More on topic: vibe coding (super icky name, jfc) might be vaguely OK for prototyping in some cases, or in extremely limited cases where you can almost prove correctness. Or, yeah, for personal tools where you’re the only person responsible for and affected by the results. Anything more than that and it makes me nervous. It doesn’t have much to do with solarpunk per se. But AI-aided development (maybe a broader and less silly-named concept) is not antithetical to solarpunk, IMHO. The DIY nature you (@[email protected]) describe doesn’t go all the way down: to build a community garden from scratch, you first need to invent the universe, and not knowing how to invent the universe doesn’t make the garden any less yours. You still own the technology even if you use a tool whose internals you don’t fully understand. You do need to retain the option to understand it, though; there I agree.

  • keepthepace@slrpnk.net · 2 months ago (edited)

    Most of the solarpunk crowd seems to equate anything LLM with Sam Altman and Elon Musk. They think it is a purely capitalistic endeavor that can’t run on anything other than methane-breathing datacenters. There needs to be some education about its real impact and about the open-source side of it, to explain how it can fit into a post-capitalist society.

    I do think that vibe-coding is one way to reappropriate tech, yes, and is extremely solarpunk. It makes manipulating machines and designing systems a far more inclusive capability, bringing it from the domain of specialists into the political sphere.

    But explaining that is an uphill battle. When I made a post about solarpunk AI a year ago, it was well received. I fear it would be downvoted into oblivion if I published the same thing today.

  • perestroika@slrpnk.net · 3 months ago (edited)

    The concept is new to me, so I’m a bit challenged to give an opinion. I will try however.

    In some systems, software can be isolated from the real world in a nice sandbox with no unexpected inputs. If a clear way of expressing what one really wants is available, and it is more convenient than a programming language, I believe a well-trained and self-critical AI (one capable of estimating its probability of success at a task) will be highly qualified to write that kind of software, and to say when things are doubtful.
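
    To sketch what I mean (the ask_model() function below is hypothetical, a stand-in for whatever AI one uses): generate the code, run it in an isolated process against a known specification, and accept it only when every case passes.

    ```python
    import subprocess
    import sys
    import tempfile

    # The specification: known inputs and the outputs we demand.
    SPEC = [("3 4\n", "7"), ("10 -2\n", "8")]

    def passes_spec(code: str) -> bool:
        """Run candidate code in a separate process (a crude sandbox; a real
        one would restrict far more) and check it against every spec case."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        for stdin, expected in SPEC:
            try:
                result = subprocess.run(
                    [sys.executable, path], input=stdin,
                    capture_output=True, text=True, timeout=5,
                )
            except subprocess.TimeoutExpired:
                return False
            if result.stdout.strip() != expected:
                return False
        return True

    # In real use: candidate = ask_model("write a program that sums two ints").
    # ask_model() is hypothetical; hardcoded here so the sketch runs as-is.
    candidate = "a, b = map(int, input().split()); print(a + b)"
    print("accepted" if passes_spec(candidate) else "rejected")
    ```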

    The coder may not understand the code, though, which is something I find politically unacceptable. I don’t want a society where people don’t understand how their systems work.

    It could even contain a logic bomb and nobody would know. Even the AI that wrote it may fail to understand it tomorrow, once the software has become sufficiently unique through customization. So there’s a risk that the software lacks even a single qualified maintainer.
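
    To illustrate with a harmless toy of my own invention: a logic bomb can be a single innocent-looking branch that changes behaviour only after a trigger date, which no test run before that date would catch.

    ```python
    from datetime import date

    def apply_discount(price: float) -> float:
        discount = 0.10  # looks like an ordinary business rule...
        # ...but this branch is the logic bomb: after the trigger date the
        # function quietly misbehaves, and no earlier test run would notice.
        if date.today() >= date(2030, 1, 1):
            discount = 0.99
        return price * (1 - discount)

    print(apply_discount(100.0))  # 90.0 today; 1.0 after the trigger date
    ```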

    Meanwhile, some software is mission-critical: if it fails, something irreversible happens in the real world. This kind of software usually must be understood by several people. New people must be capable of coming to understand it through review. They must be able to predict its limitations, give specifications for each subsystem, and build testing routines to detect the introduction of errors.
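
    The kind of testing routine I mean can be as small as pinning the known-good behaviour at the limits, so that any later change which breaks it is detected at once (heater_on here is a made-up stand-in for a real controller).

    ```python
    # Sketch of a regression-testing routine for a heating controller.
    # heater_on() is a made-up stand-in: bang-bang control with a dead band.
    def heater_on(temp_c: float, setpoint_c: float, hysteresis_c: float = 0.5) -> bool:
        if temp_c < setpoint_c - hysteresis_c:
            return True   # clearly too cold: heat
        if temp_c > setpoint_c + hysteresis_c:
            return False  # clearly too warm: off
        # Inside the dead band: a stateless simplification; a real controller
        # would hold its previous state here to avoid chattering the relay.
        return False

    def test_heater() -> None:
        # Pin the behaviour at the limits so errors introduced later are caught.
        assert heater_on(18.0, 20.0) is True    # too cold: must heat
        assert heater_on(21.0, 20.0) is False   # too warm: must stop
        assert heater_on(20.2, 20.0) is False   # dead band: no chatter
        assert heater_on(-40.0, 20.0) is True   # extreme input still sane

    test_heater()
    print("all regression checks passed")
    ```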

    Mission-critical software typically has a close relationship with hardware. It typically has sensors reading from the real world and effectors changing the real world. Testing it resembles doing electronic and physical experiments. The system may have undescribed properties that an AI cannot be informed about. It may be impossible to code successfully without actually doing those experiments and finding out the limitations and quirks of the hardware, and thus it may be impossible for an AI to build the system from a prompt.

    I’m currently building a drone system and I’m up to my neck in undocumented hardware interactions, but even a heating controller will encounter some. I don’t think people will experience much success in the near future with letting an AI build such systems for them. In principle it can be done. In principle, you can let an AI teach a robot dog to walk, and the learning will take only a few hours. But this will likely require giving it control of said robot dog and letting it run experiments and learn from the outcomes, which may take a week, while writing the code by hand might also have taken a week. In the end, one code base will be maintainable; the other likely not.