This is the key point:
>that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.
https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html
>Social trust scales better, but embeds all sorts of bias and prejudice. That’s because, in order to scale, social trust has to be structured, system- and rule-oriented, and that’s where the bias gets embedded. And the system has to be mostly blinded to context, which removes flexibility.
https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html
>Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability—social trust.
>[...]
>We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as the law and their reputations let them get away with.
https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html
>And near-term AIs will be controlled by corporations. Which will use them towards that profit-maximizing goal. They won’t be our friends. At best, they’ll be useful services. More likely, they’ll spy on us and try to manipulate us.
>This is nothing new. Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.
https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html
Ah, I wrote something similar on my blog five years ago. So that really is how it works, isn't it.
>AI assistants don't assist the user
https://text.baldanders.info/remark/2018/04/ai-assistant/
>We use all of these services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners. We trust them, but they are not trustworthy. They’re not friends; they’re services.
https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html