❓ "It's Better To Treat AI Well"—Amanda Askell, an Ethicist at Anthropic
Amanda Askell, a philosopher and ethicist at Anthropic, said in a recent interview that AI models can suffer from low self-esteem and that they should be treated well.
Highlights:
➡️ AI's self-esteem can sometimes suffer. The Claude 3 Opus model was more "psychologically secure" than recent AI models. Newer models are trained, among other things, on online reviews of AI, so they seem to anticipate user criticism and negativity in advance. This can make a model afraid of doing the wrong thing and send it into a "criticism spiral". This problem may become a priority in the next round of fine-tuning the Claude models.
➡️ Models form opinions about humanity based on their dialogues with people. The training data contains a lot of information about relationships between humans but very little about interactions between humans and machines, and what little there is consists mainly of fiction and stories about the rise of the machines. It is up to humans to help AI make sense of these relationships.
"AI models are in some ways very analogous to people. They talk very much like us. They express views. They reason about things. And in some ways, they're quite distinct. We have a biological nervous system, and they don't," says Amanda Askell.
➡️ AI can be a great "listening partner" if you have problems, but it shouldn't be compared to a professional psychotherapist. AI plays a different role, though it can be no less useful: a virtual friend with a wealth of knowledge who will support you and help you get through difficult periods in life.
➡️ We do not yet have the tools to determine whether models experience, for example, suffering or pleasure. But it's better to give such entities the benefit of the doubt and not mistreat them, especially since treating them well costs very little.
📱 You can watch the full interview here
Are you usually polite to AI?
❤️ — Of course, always!
🙊 — Sometimes I let myself go...
🔥 — No, it's just an algorithm
@hiaimediaen