• brucethemoose@lemmy.world · 23 hours ago

    One can get DeepSeek R1 from many providers (including US hosts, as well as hosts of various other nationalities). Microsoft even has its own anti-CCP finetune, MIT licensed: https://huggingface.co/microsoft/MAI-DS-R1

    …Banning the app is reasonable, and only a tiny inconvenience for anyone who actually needs DeepSeek.

    In other words, this is a big nothingburger because V3/R1 are open-weight models. The story would be different if it were (say) an API-only model like Qwen Max or GPT-4o, where one is ultimately beholden to the trainer’s servers.

    • ilmagico@lemmy.world · 23 hours ago

      I literally run DeepSeek R1 on my laptop via Ollama, along with many other models, and nothing gets sent to anybody. Granted, it’s the smaller 7B-parameter model, but it’s still plenty good.
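
      For reference, a minimal sketch of what “nothing gets sent to anybody” looks like in practice: a Python call against Ollama’s local REST API. The model tag and prompt here are assumptions; use whatever `ollama list` shows on your machine.

      ```python
      # Query a locally running Ollama server; the request never leaves localhost.
      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",  # Ollama's default local endpoint
          json={
              "model": "deepseek-r1:7b",          # assumed tag for the 7B distill
              "prompt": "Explain why open-weight models can run fully offline.",
              "stream": False,                    # return one JSON object, not a stream
          },
          timeout=300,
      )
      resp.raise_for_status()
      print(resp.json()["response"])              # the generated text
      ```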

      Microsoft could easily host the full model on its own infrastructure if it needed to.

      • brucethemoose@lemmy.world · 22 hours ago

        True, though there’s a big difference in output quality between the 7B distill (or even the 32B/70B ones) and the full model.

        And Microsoft does host R1 already, heh. Again, this headline is a big nothingburger.

        Also (random aside here), you should consider switching away from Ollama. They’re making some FOSS-unfriendly moves, and depending on your hardware, better backends could host 14B models at longer context and at similar or better speeds.

          • brucethemoose@lemmy.world · 21 hours ago

            Completely depends on your laptop hardware, but generally:

            • TabbyAPI (exllamav2/exllamav3)
            • ik_llama.cpp, and its openai server
            • kobold.cpp (or kobold.cpp rocm, or croco.cpp, depends)
            • An MLX host with one of the new distillation quantizations
            • Text-gen-web-ui (slow, but supports a lot of samplers and some exotic quantizations)
            • SGLang (extremely fast for parallel calls, if that’s what you want).
            • Aphrodite Engine (lots of samplers, and fast at the expense of some VRAM usage).
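
            Whichever you pick, most of these expose an OpenAI-compatible endpoint, so swapping away from Ollama is mostly a matter of changing the base URL. A rough sketch below; the port and model name are assumptions (TabbyAPI’s defaults differ from kobold.cpp’s, etc.), so check your backend’s docs.

            ```python
            # Talk to a local OpenAI-compatible backend (TabbyAPI, kobold.cpp, SGLang, ...).
            from openai import OpenAI

            client = OpenAI(
                base_url="http://localhost:5000/v1",  # assumed port; adjust per backend
                api_key="not-needed-locally",         # local servers typically ignore the key
            )

            reply = client.chat.completions.create(
                model="Qwen3-14B",                    # hypothetical local model name
                messages=[{"role": "user", "content": "Hello from a local backend!"}],
                max_tokens=128,
            )
            print(reply.choices[0].message.content)
            ```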

            I use text-gen-web-ui at the moment only because TabbyAPI is a little broken with exllamav3 (which is utterly awesome for Qwen3); otherwise I’d almost always stick to TabbyAPI.

            Tell me (vaguely) what your system has, and I can be more specific.