Fixed implies broken. If it hadn't blown up on Twitter and risked bad PR and stock prices dropping, it would still be there.
They had to hard-code in that racist garbage. AI is just making the cognitive dissonance of its creators apparent. They hold that tolerance and inclusivity are more important than anything, but are then intolerant of and exclusionary toward certain groups because they are racists and bigots.
I'd also note that despite all the lecturing about not stereotyping, it spits out nothing but stereotypes. Ask for a Scottish person and see if you get someone NOT wearing a kilt. Ask for any group with a strong stereotype and see what happens. You get stereotypes for everything, except for a few specific groups where they've manually adjusted things.
We need to keep all the moral grandstanding out of AI models. Not only is it bad for the tools (they aren't AGI and are completely subject to human input), but it makes lawsuits inevitable. This stuff isn't protected by Section 230, either. If Google bakes racism or whatever into its model, it is liable. The only protection it can claim is that the model is like paper and ink, on which the artist can paint whatever they like. That defense goes out the window if the paper refuses to draw one group of people but not others.