Alexander seems to have been open to learning about the horse's nature before approaching it. It'll be interesting to see whether and in what ways technological tools like AI have a "nature". I suspect they may be constrained by the contexts in which they are built and will reflect the datasets they're trained on (so we should be careful about that!). But I suspect it will be difficult to create a machine that is self-motivated because there are no evolutionary reasons for an AI to seek particular goals.
Dan, I'm not so sure there are no evolutionary reasons for an AI to seek a goal. Life on Earth thrives because of fecundity, and that simply means that less fecund life gets outcompeted, right? Perhaps AI systems that don't spread widely will go extinct as much larger systems that are everywhere begin to dominate.
There's just so much we don't know about what's ahead. I guess the more folks who are thinking about what that means, the better. I'm glad you're helping us think about it!
I can't see how AI won't be self-motivated, since they are and will continue to be built to solve problems. The problem will come from an inability (or outright unwillingness) to consider the consequences.
Large language models do have a goal: generate plausible sentences, as Cory Doctorow puts it. But we've already seen how giving them this goal causes problems, because plausible != factual.
So it's only hard to see how an AI with sufficient ability and a vague enough purpose could do real damage if you're paid not to see it, and that's why we won't get nimble, adaptive regulation.
I am not following. How is Alexander the Great and Bucephalus versus us and AI a valid comparison? Alexander was super intelligent and a horse whisperer, whereas we get distracted and excited by AI, which offers us a loss of creativity. We don't have to jump on every trend. I loved the start of this story but got lost. Letting AI write for you gives you limited powers. Alexander worked and used his brain to connect with his horse. AI offers no moral or spiritual connection.
I wanted to make this into a cautionary tale. We are approaching Bucephalus today. We need to be cautious like young Alexander, but we're doing the opposite, and I don't see any way out of the prisoner's dilemma that's causing it. I hoped to make this conclusion firm toward the end, but since I write every day, sometimes I don't make as firm a conclusion as I think I do.
That's amazing that you write every day! I was wondering, though, why a living thing would be linked to AI. Conclusions are the hardest, so I'm just curious to understand what you mean here :) Thank you for responding. It gave me more understanding, especially where you write that we do the opposite.
Excellent perspective... the comparison of AI with a headstrong horse.
Tomorrow's AI news: "OpenAI releases a new large language model, code named Bucephalus."
Andrew: That was a bit on the nose, wasn't it?
Brilliant!