“I’m sorry, Dave. I’m afraid I can’t do that.” In 2023, we’ve seen an awful lot of change in terms of what the public thinks AI can do. Before the recent LLM (Large Language Model) revolution featuring superstar ChatGPT, most people working on AI predicted that AGI (Artificial General Intelligence) would take decades to arrive, if indeed it ever arrived at all.
I’m not overly concerned, personally, about ASI or really even AGI. I feel like there’s a sentience barrier that we haven’t even seen, let alone solved for, that ASI requires. AGI is closer, but “closer” is still only a comparative. There are so many problems left to solve that decades is probably a better measure than years. What I do find fascinating is the applications of ANI.
But I think the next frontier of AI is not achieving AGI but rather getting AI efficient enough that my toaster can run it. We’ve seen that with every major technological epoch so far: first the leap forward, then the work of making it smaller. I think that’s what we’ll be looking at here as more players enter the market.
I think the point made about ChatGPT operating based on patterns versus comprehension is an important one! Personally, I don’t think robots are about to take all of our jobs given the state of AI right now. But (maybe this is a very privileged viewpoint) if jobs are lost because of AI, doesn’t that free up human capital to specialize further in things only we can do?
This feels like we’re working up to the moment when, after doing the math, ASI decides humanity is the biggest threat to humanity.