Image is from the Wikipedia article on the Sudanese Civil War.


Al-Fashir, the capital of North Darfur (a little east of that deep red zone in the west of the megathread map), is the last major holdout of the Sudanese government in that state, and is currently under siege by the RSF. Losing it would be a significant blow to the SAF. Given how the conflict lines are shaping up, it seems increasingly plausible that there will be a de facto - if not de jure - partition of Sudan unless the military situation changes substantially: the RSF have been pushed out of central Sudan, while the SAF are being pushed out of western Sudan. That said, the situation is complex and has been known to change rapidly before.

As has been a constant feature of the Sudanese Civil War - perhaps the single worst humanitarian crisis on the planet right now when measured by raw numbers - the military situation pales in comparison to the civilian one, with hundreds of thousands of children dead from famine and tens of millions of people experiencing extreme food insecurity.

Al-Fashir has been the destination of many thousands of refugees fleeing genocide, and food and aid supplies into the town are being explicitly blocked by the RSF, resulting in scenes similar to what is happening in Gaza right now. The big difference is that fleeing major battle zones is at least somewhat of an option, though people are often caught and robbed, enslaved, or trafficked while moving to neighbouring towns and cities - and those cities are often experiencing conditions similar to the places refugees are leaving.


Last week’s thread is here.
The Imperialism Reading Group is here.

Please check out the RedAtlas!

The bulletins site is here. Currently not used.
The RSS feed is here. Also currently not used.

Israel's Genocide of Palestine

If you have evidence of Zionist crimes and atrocities that you wish to preserve, there is a thread here in which to do so.

Sources on the fighting in Palestine against the temporary Zionist entity. In general, CW for footage of battles, explosions, dead people, and so on:

UNRWA reports on Israel’s destruction and siege of Gaza and the West Bank.

English-language Palestinian Marxist-Leninist twitter account. Alt here.
English-language twitter account that collates news.
Arabic-language twitter account with videos and images of fighting.
English-language (with some Arabic retweets) Twitter account based in Lebanon. - Telegram is @IbnRiad.
English-language Palestinian Twitter account which reports on news from the Resistance Axis. - Telegram is @EyesOnSouth.
English-language Twitter account in the same group as the previous two. - Telegram here.

English-language PalestineResist telegram channel.
More telegram channels here for those interested.

Russia-Ukraine Conflict

Examples of Ukrainian Nazis and fascists
Examples of racism/euro-centrism during the Russia-Ukraine conflict

Sources:

Defense Politics Asia’s YouTube channel and their map. The channel has substantially diminished in quality, but the map is still useful.
Moon of Alabama, which tends to have interesting analysis. Avoid the comment section.
Understanding War and the Saker: reactionary sources that have occasional insights on the war.
Alexander Mercouris, who does daily videos on the conflict. While he is a reactionary and surrounds himself with likeminded people, his daily update videos are relatively brainworm-free and good if you don’t want to follow Russian telegram channels to get news. He also co-hosts The Duran, which is more explicitly conservative, racist, sexist, transphobic, anti-communist, etc when guests are invited on, but is just about tolerable when it’s just the two of them if you want a little more analysis.
Simplicius, who publishes on Substack. Like others, his political analysis should be soundly ignored, but his knowledge of weaponry and military strategy is generally quite good.
On the ground: Patrick Lancaster, an independent and very good journalist reporting in the warzone on the separatists’ side.

Unedited videos of Russian/Ukrainian press conferences and speeches.

Pro-Russian Telegram Channels:

Again, CW for anti-LGBT and racist, sexist, etc speech, as well as combat footage.

https://t.me/aleksandr_skif ~ DPR’s former Defense Minister and Colonel in the DPR’s forces. Russian language.
https://t.me/Slavyangrad ~ A few different pro-Russian people gather frequent content for this channel (~100 posts per day), some socialist, but all socially reactionary. If you can only tolerate using one Russian telegram channel, I would recommend this one.
https://t.me/s/levigodman ~ Does daily update posts.
https://t.me/patricklancasternewstoday ~ Patrick Lancaster’s telegram channel.
https://t.me/gonzowarr ~ A big Russian commentator.
https://t.me/rybar ~ One of, if not the, biggest Russian telegram channels focussing on the war out there. Actually quite balanced, maybe even pessimistic about Russia. Produces interesting and useful maps.
https://t.me/epoddubny ~ Russian language.
https://t.me/boris_rozhin ~ Russian language.
https://t.me/mod_russia_en ~ Russian Ministry of Defense. Does daily, if rather bland, updates on the number of Ukrainians killed, etc. The figures appear to be approximately accurate; if you don’t believe them, reduce all numbers by 25% as a ‘propaganda tax’. Does not cover everything, for obvious reasons, and virtually never details Russian losses.
https://t.me/UkraineHumanRightsAbuses ~ Pro-Russian, documents abuses that Ukraine commits.

Pro-Ukraine Telegram Channels:

Almost every Western media outlet.
https://discord.gg/projectowl ~ Pro-Ukrainian OSINT Discord.
https://t.me/ice_inii ~ Alleged Ukrainian account with a rather cynical take on the entire thing.


  • revolut1917 [none/use name]@hexbear.net · 3 days ago

    So GPT-5 just launched and the presentation included a bunch of really dumb graphs that are obviously wrong…

    And also it’s just generally not that impressive/not a huge step up from previous capability, and that’s what the benchmarks are saying. Apparently they squished the hallucination issue, but again, benchmarks, not real usage.

    Is it over?

    • plinky [he/him]@hexbear.net · 3 days ago

      wowee

      but for semi-real (unrelated to the gpt5 shit): how quickly is datacenter amortization priced in? this is one detail i find somewhat interesting in both pro- and anti-AI financial writing: a buildout of $100 billion datacenters looks mighty sus if it’s expected to become poop in 5-10 years, that’s a constant drain of $10 billion a year on the compute side alone
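A back-of-the-envelope sketch of the drain being described. The $100 billion buildout and the 5-10 year hardware life are the commenter's hypothetical figures, not real data from any company:

```python
# Straight-line depreciation of a hypothetical $100B datacenter buildout.
def annual_depreciation(capex_billions: float, useful_life_years: int) -> float:
    """Annual straight-line depreciation charge, in billions of dollars."""
    return capex_billions / useful_life_years

capex = 100.0  # hypothetical buildout cost, $B
for life in (5, 10):
    print(f"{life}-year life: ${annual_depreciation(capex, life):.0f}B/year written off")
# 5-year life: $20B/year written off
# 10-year life: $10B/year written off
```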

      • vovchik_ilich [he/him]@hexbear.net · 2 days ago

        The logic only works from the investment/stock point of view. It doesn’t matter how much revenue the data center generates: you put in 10bn today not because the data center generates that revenue, but because you expect your stock price to inflate faster than the cost of the investment.

        • plinky [he/him]@hexbear.net · 2 days ago

          no, actually for big tech it matters: they have to use it to dodge taxes via amortization of capital stock, otherwise they’re just printing money. big tech are the ones building them, they’re the ones with existing money piles to set on fire, and they’ll be the ones owning them. it’s the little ones that construct vaporware on top of rented compute hoping to be bought out. the big ones might have to answer why their cashflow is 150 billion instead of 200, of which 10% goes to dividends (hypothetical numbers), or why profit margins have fallen, which is roughly the same thing, since expanding capex with shrinking free income will drop margins by necessity
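A sketch of the "dodge taxes via amortization" point: depreciation is a non-cash expense that reduces taxable income. All numbers here are hypothetical, including the 21% US federal corporate tax rate used for illustration:

```python
# Depreciation as a tax shield: each dollar of depreciation charged
# against income avoids (tax_rate) dollars of cash taxes.
TAX_RATE = 0.21  # US federal corporate income tax rate

def tax_shield(annual_depreciation_billions: float, tax_rate: float = TAX_RATE) -> float:
    """Cash taxes avoided each year thanks to the depreciation charge."""
    return annual_depreciation_billions * tax_rate

# A $100B buildout depreciated over 5 years charges $20B/year against
# taxable income, sheltering roughly $4.2B/year in cash taxes.
print(f"${tax_shield(20.0):.1f}B/year sheltered")
```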

    • Awoo [she/her]@hexbear.net · 3 days ago

      Apparently they squished the hallucination issue

      Bullshit. Give it a week and people will find ways to trigger it.

    • TheModerateTankie [any]@hexbear.net · 3 days ago

      Damn. We really need to find a way to waste cpu cycles for some niche need and then convince the world it’s necessary to keep this tech bubble going. Blockchains are old news. Increasingly complex Markov chains are hitting their limit. How about… Markov Blockchains!

    • thethirdgracchi [he/him, they/them]@hexbear.net · 3 days ago

      For OpenAI, sort of? It’s obvious to a lot of folks that “growth” in models is tapering off. The difference between GPT-5 and GPT-4 is tiny compared to that between GPT-1 and GPT-2; despite all the “hype” around “artificial general intelligence” by 2027 or whatever, it’s very clear we’re not hitting that, and OpenAI doesn’t really have a path to monetization that makes sense given its reported $500 billion (!!!) valuation recently. It is possible to make money with AI, and there’s clearly a customer base for it, but not at these spectacular valuations we’re seeing that promise total change to all of the human experience and complete automation of your entire workforce. I don’t think this lackluster announcement is going to be when the big crash happens, though. That’ll be when a lot of these new AI data centers come online in the next few years and, like the massive buildout of rail infrastructure two hundred years ago, don’t print money like they’re expected to, and the new frontier models aren’t miles better than like qwen7 or whatever that can run locally on a decent graphics card for very little money. Once the free and open source models are as good at 99% of tasks as the frontier models, and can be run on consumer grade hardware, then the big crash comes, as OpenAI and all its data centers doesn’t really have a point or path to profitability anymore.

      • sodium_nitride [she/her, any]@hexbear.net · 3 days ago

        Once the free and open source models are as good at 99% of tasks as the frontier models, and can be run on consumer grade hardware, then the big crash comes, as OpenAI and all its data centers doesn’t really have a point or path to profitability anymore.

        Exactly. The release of DeepSeek was, iirc, the only time AI actually had a crash. Even then, DeepSeek got restricted in many places and has the baggage of being Chinese, and is therefore subject to idiotic propaganda.

        If we want another big drop, we probably need a western DeepSeek moment; even then, western governments will likely artificially protect the big AI players in some way (likely through preferential contracts for building out surveillance systems and kill chains).

        • thethirdgracchi [he/him, they/them]@hexbear.net · 3 days ago

          Plus Deepseek was good, really good even, but not close to the top of the line frontier models at Anthropic, Google, and OpenAI. For agentic tasks like actually making software and managing commands in the terminal, the frontier models really excel. When (and it really is a matter of when) a Chinese firm is able to make something that rivals those and can run on consumer hardware, that’s the Big One.

          • Kereru [he/him]@hexbear.net · 3 days ago

            How are GLM, Kimi, and Qwen Coder? They seem on par with Claude Sonnet at a glance? Still not as good as Opus, I guess? But does that even matter for the majority of programming workflows?

      • ColombianLenin [he/him]@hexbear.net · 3 days ago

        Once the free and open source models are as good at 99% of tasks as the frontier models, and can be run on consumer grade hardware, then the big crash comes, as OpenAI and all its data centers doesn’t really have a point or path to profitability anymore.

        I was thinking the other day that China’s developments in AI are aimed at pricking the AI bubble and bringing down the US economy with it, à la the dot-com bubble.

        • thethirdgracchi [he/him, they/them]@hexbear.net · 3 days ago

          Nah, Chinese developments in AI are mostly centered around trying to make a lot of money on AI and not fall behind the Americans. Crashing the AI bubble and the American economy with it is just a happy side effect.

          • ColombianLenin [he/him]@hexbear.net · 3 days ago

            Unless by “making money” you are referring to increasing production, I don’t see how DeepSeek or the like was made to create profits on its own.

            • thethirdgracchi [he/him, they/them]@hexbear.net · 3 days ago

              Making money does not equal profits these days. “Making money” is often just investors pouring more money into your company for hype, or getting government subsidies or whatever. Deepseek was the pet project of a hedge fund billionaire, doesn’t even need to make money.

    • VILenin [he/him]@hexbear.net · 3 days ago

      If they can’t convince their idiot investors that the next model is going to be 100x better over and over again until the end of time, the bullshit train derails

    • tricerotops [they/them]@hexbear.net · 3 days ago

      I guess the low-hanging fruit of just throwing more data at gigantic transformers is basically picked clean. At this point they have some newer architectures that make the models a little cheaper to run, and they’ve made some scripts that prod them into “thinking” about things differently. So I wonder how they intend to get to their stated goal of AGI without encasing the sun in a Dyson sphere.

      • sodium_nitride [she/her, any]@hexbear.net · 3 days ago

        They need to get over the hurdle of “garbage in = garbage out”. For AI to become smarter, you need to remove bad data and add new good data. And of course, that is a lot of work and requires probably an entirely different approach to AI than LLMs.

        And no, I’m not salty that my favorite approach, “expert systems”, never quite got off the ground (because you need to pay humans to transfer their knowledge as explicitly coded rules).
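For the curious, a toy illustration of the "expert systems" approach mentioned above: knowledge is captured as explicitly hand-coded if-then rules rather than learned weights. The rules and facts here are invented for illustration, not from any real system:

```python
# Each rule pairs a set of required facts with a conclusion.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
]

def diagnose(facts: set) -> list:
    """Fire every rule whose required facts are all present."""
    return [conclusion for required, conclusion in RULES if required <= facts]

print(diagnose({"fever", "cough"}))  # ['possible flu']
```

The expensive part is exactly what the comment says: a human expert has to write every one of those rules by hand.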