Haha, hope you get something out of it. I thought there were a lot of connections with postmodern philosophy actually, since both this and that are informed by applications of semiotics.
I developed a strong conviction even then that LLMs were structurally incapable of “General Intelligence,” whatever that is, and began trying to think about what was missing from the process or approach that yielded actual intelligence.
Oh definitely. I think a large part of it is the wholly mechanistic approach that's kind of been pervasive for a while now. Brains as just computers, and all that.
I more or less came to the conclusion that the tech field was largely oversimplifying the problem (when has that ever happened before) and substituting a solvable problem, “How do we train better Machine Learning programs?” for a much harder problem, “How do we construct an intelligence?”
I mean, look at the first AI winter. Symbolic AI was all the rage and was considered completely sufficient for a general intelligence, and then they were never able to make one and all the funding dried up.
That AI winter is exactly the pattern I predicted we'd hit again at the time. Who cares what an up-jumped undergrad auditing a seminar thinks though, right? (not really, they were very gracious and receptive, but like, I couldn't publish anything due to lack of professional skill at that time either). Again, this wasn't my field of expertise, but I did find it an interesting problem, and had life taken me in another direction, I'd probably have specialized in philosophy of mind instead.