Anything thrown together in 5 years will be terrifying. ChatGPT is, effectively, a lobotomised speech center of an AI. Google have made significant progress on visual systems. However, these are small parts of a true AI.
In order to translate from what a human wants to working code, you need to do a lot of thinking. You need to fill in a lot of gaps, and remap from a manager's mindset to actual code abstractions. Therefore, in order to do this reliably, an AI would have to be operating at or near sentience level, with a LOT of additional structures to stabilise it.
Such an AI would likely be a perfect demo of the “paperclip maximiser” problem.
Why do you assume the AI of tomorrow is going to be the same as the LLMs of today? I don’t think AI is going to need supervising for much longer.
Because everything needs supervision, even the president of the United States is supervised by judges and voters.
AI will still need supervision in 5 years. Even a completely autonomous agent will need a supervisor.