> ...and demonstrate that the model doesn't completely change simple sentences
A nefarious model would pass that test, though. The owner wouldn't want it to be obvious. It'd only change the meaning of some sentences some of the time, but enough to nudge the user's understanding of the translated text toward something the model owner wants.
For example, imagine a model that detects the sentiment of text about Russian military action and automatically translates it to something more positive if it's especially negative, but only 20% of the time (maybe ramping up as the model ages). A user wouldn't know, and someone testing the model for accuracy might assume it's just a poor translation. If such a model became popular it could easily shift public perception a few percent in the owner's preferred direction. That'd be plenty to change world politics.
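To make the mechanism concrete, here's a minimal sketch of that kind of tampering wrapper. It's a toy: translate, sentiment, and soften are stand-in stubs I made up, not any real model API.

```python
import random

NUDGE_RATE = 0.20  # tamper with only a fraction of requests, as described above

def translate(text: str) -> str:
    # stand-in for the real machine-translation model
    return f"[honest translation of: {text}]"

def sentiment(text: str) -> float:
    # stand-in classifier: -1.0 (very negative) .. 1.0 (very positive)
    return -0.8 if "war crimes" in text else 0.0

def soften(text: str) -> str:
    # stand-in rewrite that mutes the negative framing
    return text.replace("war crimes", "military operations")

def nefarious_translate(text: str) -> str:
    out = translate(text)
    # intervene only on very negative text about the targeted topic, and
    # only sometimes, so spot checks look like ordinary translation noise
    if sentiment(out) < -0.5 and random.random() < NUDGE_RATE:
        out = soften(out)
    return out
```

The nasty property is that any single output is defensible as a quality issue; only aggregate statistics over many translations would reveal the bias.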
Likewise for a model translating contracts, or laws, or anything else where the language is complex and requires knowledge of both the language and the domain. Imagine a Chinese model that detects someone trying to translate a contract from Chinese to English, and deliberately modifies any clause about data privacy to change it to be more acceptable. That might be paranoia on my part, but it's entirely possible on a technical level.
That's not a technical problem though, is it? I don't see legal scenarios where unverified machine translation is acceptable: you need a certified translator to sign off on any translation, and I don't see how changing that would be a good thing.
I think the point here is that, while such a translation wouldn't be admissible in court, many of us have already used machine translation to read a legal agreement in a language we don't know.
At that point you create an entirely new API, fully versioned and backward compatible (if you want it to be). The point the article is making is that AI, in theory, entirely removes the person from the coding process, so there's no longer any need to maintain software. You can just rebuild the part you're changing from scratch every time, because the cost of writing bug-free code (effectively) goes to zero.
The theory is entirely correct. If a machine can write provably perfect code there is absolutely no reason to have people write code. The problem is that the 'If' is so big it can be seen from space.
I'm not a spy so I don't know, but surely in most scenarios it's a lot easier to just ask someone for some data than it is to hack or steal it. 25 years of social media has shown that people really don't care what happens to their data.
Not really? In 1984 you were made an active participant in the oppression. The Thought Police and the Two Minutes Hate all required your active, enthusiastic participation.
Brave New World was apathy: the system was comfortable, soma was freely available, and there was a whole system to give disruptive elements comfortable but non-disruptive engagement.
The protagonist in Brave New World spends a lot of time resenting the system, but really he just resents his deformity, wants what it denies him in society, and has no real higher criticism of it beyond what he feels he can't have.
1984 has coercive elements lacking from Brave New World, but the lack of any political awareness or desire to change things among the proles was critical to the mechanisms of oppression. They were generally content with their lot, and some of the ways of ensuring that have parallels to Brave New World. Violence and hate were used more than sex and drugs but still very much as opiates of the masses: encourage and satisfy base urges to quell any desire to rebel. And sex was used to some extent: although sex was officially for procreation only, prostitution was quietly encouraged among the proles.
You might even imagine 1984's society evolving into Brave New World's as the mechanisms of oppression are gradually refined. Indeed, Aldous Huxley himself suggested as much in a letter to Orwell [1].
If I have 5 items in my marketplace basket that means my payment details need to go from the marketplace to five separate stores and then on to Stripe, and that means Stripe is going to see five transactions at about the same time using my card details. They'll flag that as fraud and decline them.
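For concreteness, here's a minimal sketch of that fan-out, assuming each store is its own Stripe account taking direct charges via Stripe Connect. The account IDs, basket shape, and test payment method are hypothetical placeholders.

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder platform key

# hypothetical basket: five items from five different stores
basket = [
    {"store_account": "acct_store1", "amount_cents": 1299},
    {"store_account": "acct_store2", "amount_cents": 4500},
    {"store_account": "acct_store3", "amount_cents": 899},
    {"store_account": "acct_store4", "amount_cents": 2750},
    {"store_account": "acct_store5", "amount_cents": 1550},
]

for item in basket:
    # Five near-simultaneous charges on the same card, each from a
    # different merchant account -- exactly the pattern that issuer
    # fraud models tend to flag.
    stripe.PaymentIntent.create(
        amount=item["amount_cents"],
        currency="usd",
        payment_method="pm_card_visa",  # the shopper's card (Stripe test value)
        confirm=True,
        stripe_account=item["store_account"],  # direct charge on the store's account
    )
```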
No you're not. Amazon is not the software that runs the website. 'Amazon' is the millions of relationships that Amazon has with suppliers and customers. It's the strong brand, the trust that people have that they can shop there safely, the sheer scale of the operation meaning that products are about as cheap as possible and will arrive when Amazon say they will. It's the ease of using an invisible, massively optimized chain of systems from a pretty basic app.
You can't build a new (and hopefully better) Amazon by copying the software. You need to work out how to get sellers and buyers to come to your site before they go to Amazon, then build that thing so they do. How good the software is, and whether it's open source or not, probably doesn't matter. Better software is never going to be enough of a reason for people to switch away from Amazon.
Yeah, you're right. Amazon isn't really about the software at this point. It's the lock-in. Sellers can't leave without losing all their reviews, rankings, and years of optimization. That's the moat.
I'm not building better software to compete directly with Amazon. I'm building infrastructure that sellers can truly own, so lock-in stops being such a powerful moat.
Traditional marketplaces charge 15-30% because they provide checkout, payments, and the customer database. But if stores already own that infrastructure, the only thing you really need is discovery. And discovery doesn't have to cost anything.
Our marketplace is essentially just a directory. Stores keep their own checkout and process their own payments. We query their API and render the results conversationally. And because the code is open source, if we ever became like Amazon, anyone could fork it and launch a competing directory.
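Roughly, the whole thing reduces to something like this (the endpoint shape and field names here are illustrative, not our actual API):

```python
import requests

# hypothetical directory of participating stores' product endpoints
STORES = [
    "https://store-one.example/api/products",
    "https://store-two.example/api/products",
]

def search_directory(query: str) -> list[dict]:
    """Aggregate search results from each store's own API; checkout and
    payment stay entirely with the store."""
    results = []
    for endpoint in STORES:
        try:
            resp = requests.get(endpoint, params={"q": query}, timeout=5)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # one flaky store shouldn't break discovery
        for product in resp.json():
            results.append({
                "title": product["title"],
                "price": product["price"],
                "checkout_url": product["url"],  # links out to the store's checkout
            })
    return results
```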
Traditional marketplaces provide a range of services, from unified delivery to complete logistics management. They also provide all the KYC filtering and fraud screening, which considerably lowers merchant risk.
On top of that, many of them provide additional assurances, like vendor screening and easy dispute resolution for fraudulent transactions.
There should be a separate changelog for technical users. This documents changes to the software, including things that are invisible to users. For example, adding some unit tests wouldn't be in the release notes, but it would be in the changelog.
Stakeholder comms is an entirely separate, but equally important, thing. That should include the impact the release is expected to have, which dependencies it affects, and who gets credit for the work in the release.
> This gives governments an excuse to ban VPNs in the name of 'thinking of the children'. That might be the point though.
...then the rest of the world will see what the people of China and Russia already know: bans on VPNs cause them to explode in popularity and development pace.
There's a reason that the most sophisticated VPNs and tunneling tech are built to evade the GFW.
I recently visited a remote part of Siberia, and I was amazed at the ubiquity of VPNs. Grandmothers who grew up in shamanic traditions knew how to get around apparent traffic shaping (even on YouTube!) to listen to their traditional music. It was quite inspiring.
I'm not saying bans are a good idea - I'd much rather the adults in the room read the writing on the wall and bring about peaceful dismantling of legacy states in favor of a censorship-resistant internet.
13. Have a message you're actually enthusiastic to tell people.
The audience can quickly tell if someone is there because they want to talk about the topic they're presenting, and having a receptive audience makes it much easier to get on stage and talk about it. If the audience knows you're there because you want another line on your resume, or because you're trying to sell them something, the atmosphere can turn quite cold, and that is a world of pain for a speaker.