I am obsessed with the predictions that GPT-4, or something like it, would arrive in the early 2020s. Ray Kurzweil made this prediction back in 2005.

The reasoning is simple: a model architecture that works like the human brain is not hard to find. Moore's law will not bend, so we will have compute at the level of the human brain by 2020-2025. Therefore, we can build a computer comparable to a human brain around that time.

And all of these predictions were right. Now that we have models that scale, there are a few implications for the future. Moore's law will likely continue, meaning the compute of large supercomputers will grow 10x every 6 years. Even extrapolating current LLMs forward, we will have computers that can take your design specification, produce a complete codebase, and perform actions. There is little uncertainty here: they will be able to do anything that doesn't require true out-of-the-box thinking. And training costs will decrease 10x every six years or so.
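The "10x every 6 years" growth rate is easy to extrapolate with a one-line formula. A minimal sketch of that arithmetic, where the 2025 baseline figure is a hypothetical placeholder rather than a sourced number:

```python
# Extrapolate a quantity that grows 10x every `period` years.
# Works equally for compute growth or (with the sign of `years`
# flipped) for a 10x-per-6-years cost decline.
def extrapolate(base: float, years: float, period: float = 6.0) -> float:
    """Project `base` forward, assuming a 10x change every `period` years."""
    return base * 10 ** (years / period)

# Hypothetical baseline for a large supercomputer; not a sourced figure.
baseline_flops = 1e20
for delta in (6, 12, 18):
    print(f"+{delta} years: {extrapolate(baseline_flops, delta):.1e} FLOP/s")
# After 18 years, three factors of 10: a 1000x increase.
```

The same function with `base` set to a training cost and the growth direction inverted (`10 ** (-years / period)`) captures the cost-decline claim in the paragraph above.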