• ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 20 days ago

    It really depends on how their particular system is set up. You’re just making sweeping, vibe-based statements without any evidence to support them.

    • Orcocracy [comrade/them]@hexbear.net · 20 days ago

      Yeah, like maybe this is one of those AIs that is actually just a guy in the Philippines being paid shit wages. Or maybe it’s a dumb LLM that makes lots of mistakes. Or maybe it’s all just bullshit from TechCrunch, where an underpaid journalist is recycling a fucking press release from Google and none of it actually happened anything like how it’s written.

        • Orcocracy [comrade/them]@hexbear.net · 20 days ago

          It’s not entirely impossible. But given that the story is light on detail and the main source is Google PR, it looks very much like a case of hype-mongering.

          • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 19 days ago

            I mean, we’ll see. In general, stuff like finding vulnerabilities in large code bases seems like a good fit for this tech. All it’s doing is making statistical inferences based on its training, and that can help spot problems that would be hard to track down by hand.
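
            Purely as an illustration of what I mean (a toy sketch, not whatever Google actually built; ask_model here is a hypothetical stand-in for whichever LLM API you happen to use):

                # Toy sketch: chunk C source files and ask a language model whether
                # each window resembles known vulnerability patterns.
                # ask_model is hypothetical: plug in whatever LLM API you actually use.
                from pathlib import Path
                from typing import Callable, Iterator, Tuple

                PROMPT = (
                    "You are a security reviewer. Does this C snippet contain a likely "
                    "vulnerability (buffer overflow, use-after-free, integer overflow)? "
                    "Answer YES or NO, then one line of justification.\n\n{code}"
                )

                def chunks(path: Path, window: int = 80) -> Iterator[Tuple[int, str]]:
                    """Yield overlapping windows so a function isn't cut clean in half."""
                    lines = path.read_text(errors="ignore").splitlines()
                    if not lines:
                        return
                    for start in range(0, len(lines), window // 2):
                        yield start + 1, "\n".join(lines[start:start + window])

                def scan(repo: Path, ask_model: Callable[[str], str]) -> None:
                    """Print every chunk the model flags; a human still triages each hit."""
                    for path in repo.rglob("*.c"):
                        for line_no, chunk in chunks(path):
                            reply = ask_model(PROMPT.format(code=chunk)).strip()
                            if reply.upper().startswith("YES"):
                                print(f"{path}:{line_no}: {reply}")

            The point is the model only has to flag statistically suspicious chunks; a human still triages every hit, so false positives cost review time rather than correctness.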