9 Comments

I’m not overly concerned, personally, about ASI or really even AGI. I feel like there’s a sentience barrier that we haven’t even seen, let alone solved for, that ASI requires. AGI is closer, but I still use “closer” as a comparative. There are so many problems to solve there that decades is probably a better measure than years. What I do find fascinating are the applications of ANI.

But I think the next frontier of AI is not achieving AGI but rather getting AI efficient enough that my toaster can run it. We’ve seen that with every major technological epoch so far: first the leap ahead, then the question of how to make it smaller. I think that’s what we’ll be looking at here as more players enter the market.


This feels like we're working up to the moment when, after doing the math, ASI decides humanity is the biggest threat to humanity.
