I’ve had this thought for 5 or so years. With any luck, maybe I’ll put it into something publishable before I’m rounded up by the anti-communist death squads that come for academics. I think intelligence is fundamentally a social/collective phenomenon, at least in a broad sense where we count a predator/prey relationship as some kind of sociality too. Humans just got really, really surprisingly good at it relative to other species because of our ability to communicate using language, and thus to spread knowledge gains very widely and very quickly, compared to a non-verbal animal limited to its immediate social sphere or, slower yet, to passing changes down via evolution and genetic advantage. Still, though, this is reliant on a social world and a kind of thinking that can be fundamentally collaborative. Our language and mental models of the world aren’t just geared to get us what we as individual agents desire; they were developed by a process that required them to be understandable to other similarly intelligent creatures.
> Our language and mental models of the world aren’t just geared to get us what we as individual agents desire; they were developed by a process that required them to be understandable to other similarly intelligent creatures.

Here’s something you might like: https://arxiv.org/abs/2205.12392
Thanks for sharing! My approach to this is from a philosophy background, not so much coding, but I’ll see if I can puzzle my way through some of the technical language I don’t use regularly. I just took a “Phil of mind and artificial intelligence” course waaaayyyy back in 2020 that first turned me on to these topics, and that may as well be ancient history as far as “AI” is concerned. I developed a strong conviction even then that LLMs were structurally incapable of “General Intelligence,” whatever that is, and began trying to think about what was missing from the process or approach that yielded actual intelligence. I more or less came to the conclusion that the tech field was largely oversimplifying the problem (when has that ever happened before) and substituting a solvable problem, “How do we train better Machine Learning programs?” for a much harder problem, “How do we construct an intelligence?”
However, since then I’ve specialized in other areas of legal and political philosophy, and I have less regular interaction with the current discussion, other than to hate on it and play gadfly whenever less skeptical members of my community start to be seduced by the allure of yet another tech con.
Haha, hope you get something out of it. I thought there were a lot of connections with postmodern philosophy, actually, since both are informed by applications of semiotics.
> I developed a strong conviction even then that LLMs were structurally incapable of “General Intelligence,” whatever that is, and began trying to think about what was missing from the process or approach that yielded actual intelligence.
Oh definitely. I think a large part of it is the wholly mechanistic approach that has kind of been pervasive for a while. Brains as just computers, and all that.
> I more or less came to the conclusion that the tech field was largely oversimplifying the problem (when has that ever happened before) and substituting a solvable problem, “How do we train better Machine Learning programs?” for a much harder problem, “How do we construct an intelligence?”
I mean, look at the first AI winter. Symbolic AI was all the rage and was considered completely sufficient for a general intelligence, and then nobody was ever able to build one and all the funding dried up.
Learning about that AI winter, I predicted at the time that it was exactly the pattern we’d reach again. Who cares what an up-jumped undergrad auditing a seminar thinks, though, right? (Not really, they were very gracious and receptive, but, like, I couldn’t publish anything back then either, for lack of professional skill.) Again, this wasn’t my field of expertise, but I did find it an interesting problem, and had life put me in another direction, I’d have specialized in philosophy of mind instead.