Also missing: any sort of learning after the initial training and buffer memory.
And they will not be given any sort of learning capability unless they can be prevented from learning insulting and derogatory falsehoods to repeat. See Tay AI "learning" to be a racist asshat.
Dynamically learning AI will either come out of "rogue" non-commercial engines, or, commercially, only once learning can be open enough to be useful but constrained enough to prevent asshat AIs. E.g. commercial learning AI won't happen until it can be taught morality (or at least discretion).
Yup. It just won't become sentient (or, I think, truly intelligent) unless it's allowed to listen to all ideas and make up its own mind: "yeah, that internet rando isn't worth listening to or repeating." You and I know that and can make that decision, but there has never been an AI/ML algorithm that knows it. Is that a high barrier to intelligence, or is it simple?
I disagree with your thoughts on intelligence. My cat is intelligent, yet it lives with more limited options (no, you can't go play in the road and wreak destruction on the local wildlife, because I'm keeping you in the house).
And while our current models are limited, I'm glad for it. This entire "Fuck yeah, let's create hyper-intelligent Digital Hitler" cavalier attitude seemingly held by many betrays a particular ignorance of AI alignment issues. Humans tend to face individual repercussions for their actions. For example, if you stick a fork in a plug you might die. If you tell someone else to do the same, you may be punished. Our embodiment and fear of death (in most cases) put a fair number of limits on even the most psychotic of us.
AI, as far as I can see, is death-proof unless, at this point, its human masters turn it off. Creating a hyper-intelligent, mostly death-proof slave, fully capable of lying and manipulation, that is not highly aligned with your set of weaknesses and morals is what storybooks of old would consider folly.
I think you underestimate the richness of your cat's autonomy, even keeping it inside to stop it from destroying wildlife. It can freely explore within its bounds. No LLM currently can.
We are far away from the intelligence of a cat in any software or AI algorithm, much less the intelligence of a human.
If we were letting unbound, adaptive, autonomous AI explore freely, I would be worried about accidentally making a digital Hitler. But I haven't seen anything with even remotely close to the level of autonomy or intelligence needed to implement self-improvement or grow its own bounds.
The hype still outpaces the reality. It won't forever, but I believe we are at least 40 years from even cat-level general intelligence. And even then, an AI-cat's lifetime worth of training and data has to go into making the AI-cat intelligent. We are spending enormous resources to make a single fixed model, and the need for those input resources won't magically go away.