LLMs should be legally required to act in the interest of their users (not their creators).
This is a standard that already applies to advisory professions such as doctors, lawyers, and financial advisors.
I haven't seen this discussed much by regulators, but I have made a couple of submissions here and there expressing this opinion.
AIs will get better, and they will become more trusted. They cannot be allowed to sell the answer to the question "Who should I vote for?" to the highest bidder.
The same as for the human professions: a set of agreed-upon guidelines on acting in service of the client, and enforcement of penalties against identifiable instances of prioritizing another party's interests over the client's.
There will always be grey areas, just as there are when human responsibilities are defined, and there will be those who skirt the edges. But the matters of most concern are quite easily identifiable.
Of course not. You’d have to pay for the product, just like we do with every other product in existence, other than software.
Software is the only type of product where this is even an issue. And we're stuck with this model because VCs need to see hockey-stick growth, and that generally doesn't happen with paid products.