Just imagine that you could interactively, by voice or by touch, tell an AI what to adjust and how, and it would use that feedback to improve itself for your future similar tasks. Now project that onto 1,000,000 users, each telling the app exactly what they meant and pointing to the proper places in the app. That would be exactly the conversation you desire: you'd tell your app builder directly what you want, and if it isn't doing what you like, you either show it to the builder with simple gestures, or you rely on some other user having hit the same problem before you and the app builder tapping into that knowledge. Obviously this would come first for simpler web or mobile apps. It sounded like sci-fi just a decade ago, but we now have the means to build simple app builders like that.
ML by itself is incapable of inference; hence you need some guiding meta-programming framework that can integrate the partial ML results from the submodules you prepare.
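To make that concrete, here is a minimal Python sketch of the idea; the submodule outputs and the 0.8 cutoff are made-up stand-ins, not any real API. The ML parts only emit scored labels, and the hand-written rule layer on top is what actually does the inference:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str
        confidence: float

    def scene_has(detections, label, min_conf=0.8):
        # Rule-level predicate over raw ML output.
        return any(d.label == label and d.confidence >= min_conf
                   for d in detections)

    def describe(detections):
        # The inference lives here, in plain code, not in the ML
        # submodules: hand-written rules combine their partial results.
        if scene_has(detections, "tree") and scene_has(detections, "grass"):
            return "outdoor scene"
        return "unknown scene"

    # Hypothetical output of an image-classification submodule.
    results = [Detection("tree", 0.95), Detection("grass", 0.90),
               Detection("squirrel", 0.79)]
    print(describe(results))  # -> "outdoor scene"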
As for the squirrel example, it was probably one of ResNet's "under threshold" classifications: the tree scored 95%, the grass 90%, but the squirrel only 79%, so it got cut out of what was presented back to you. Mind you, this area went from hopeless in 2011 to "better than human in many cases" in 2016. I know there is plenty of low-hanging fruit and plenty of problems will still be out of reach, but some are getting approachable soon, especially if you have 1M ML-capable machines at your disposal.
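If my guess about thresholding is right, the mechanism is as trivial as this (the cutoff value is an assumption; I don't know what the actual pipeline used):

    # Made-up scores matching the squirrel example above.
    raw_scores = {"tree": 0.95, "grass": 0.90, "squirrel": 0.79}
    THRESHOLD = 0.80  # assumed cutoff, not ResNet's actual setting

    shown = {label: s for label, s in raw_scores.items() if s >= THRESHOLD}
    print(shown)  # {'tree': 0.95, 'grass': 0.90} -- the squirrel is silently dropped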
>Now project that onto 1,000,000 users, each telling the app exactly what they meant and pointing to the proper places in the app. That would be exactly the conversation you desire
That's not a conversation, that's a statistic. A conversation might start with a user showing me visually how they want something done. Then I might point out why that's not such a good idea and ask why the user wanted it done that way, so I can come up with an alternative approach that achieves the same goal.
In the course of that conversation we may find that the entire screen is redundant if we redesign the workflow a little bit, which would require some changes to the database schema and other layers of the application. The result could be a simpler, better application instead of a pile of technical debt.
This isn't rocket science. It doesn't take exceptionally talented developers, but it does require understanding the context and purpose of an application.
Sure, but an AI listening to you can be exactly that conversation partner, maybe by exploiting the "General's effect" (rubber-duck debugging, essentially): just talking about a topic gets you to a solution, even if the person next to you has no clue and just listens. Here the AI can be that person, and you immediately see the result of your talking in the form of the changing app you are building, so you can easily decide when something has to be changed. Initially the granularity of your changes will be coarse, i.e. the pre-baked operations will be simple (see the sketch below). Later you'll get more and more precise, as the AI develops and more and more people contribute more specialized operations.
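Here is roughly what I mean by pre-baked operations, assuming a hypothetical app builder; the verbs and the app structure are invented for illustration. The point is that the registry starts coarse and is meant to grow finer as contributors add entries:

    # Coarse, pre-baked operations a voice/gesture front-end could map to.
    OPERATIONS = {
        "add button":   lambda app, arg: app["widgets"].append(("button", arg)),
        "rename title": lambda app, arg: app.update(title=arg),
    }

    def apply_intent(app, verb, arg):
        # Anything outside the registry simply isn't expressible yet;
        # finer-grained verbs get contributed over time.
        op = OPERATIONS.get(verb)
        if op is None:
            raise ValueError(f"no pre-baked operation for {verb!r}")
        op(app, arg)
        return app  # caller re-renders so you immediately see the result

    app = {"title": "untitled", "widgets": []}
    apply_intent(app, "add button", "Save")
    apply_intent(app, "rename title", "Invoice Editor")
    print(app)  # {'title': 'Invoice Editor', 'widgets': [('button', 'Save')]}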