10 Comments

I’m not overly concerned, personally, about ASI or really even AGI. I feel like there’s a sentience barrier that we haven’t even seen, let alone solved for, that ASI requires. AGI is closer, but I still use “closer” as a comparative. There are so many problems to solve there that decades is probably a better measure than years. What I do find fascinating are the applications of ANI.

But I think the next frontier of AI is not achieving AGI but getting AI efficient enough that my toaster can run it. We’ve seen that with every major technological epoch so far: a leap forward, then the work of making it smaller. I think that’s what we’ll be looking at here as more players enter the market.


I think the point made about ChatGPT operating off of patterns rather than comprehension is an important one! Personally, I think that with the state of AI right now, robots aren’t about to take all of our jobs. But (maybe this is a very privileged viewpoint) if jobs are lost because of AI, doesn’t that free up human capital to specialize further in the things only we can do?


This feels like we're working up to the moment when, after doing the math, ASI decides humanity is the biggest threat to humanity.
