Conversation
Look at the amount of weasel wording in OpenAI's not-really-apology:

https://openai.com/index/helping-people-when-they-need-it-most/

I'm sick and tired of people pretending they have ways to enforce LLM behavior, while all they do is weight the dice differently - they're still dice.

Trying to enforce security boundaries with a PRNG is one thing, but you definitely can't prevent reinforcing harmful behavior, because you can't even define what it is.

And this can cost lives, as we just witnessed.
"Our safeguards work **more reliably** in **common**, **short** exchanges. We have learned over time that these safeguards can **sometimes** be **less reliable** in **long** interactions: as the back-and-forth grows, **parts** of the model’s safety training **may** degrade."
Mention of suicide

@buherator This wasn’t even the first time an LLM has encouraged suicide.

re: Mention of suicide
@schrotthaufen Most likely. It's the first time I've seen the actual crap it produced (in the published court docs), and I'm outraged.
re: Mention of suicide

@buherator
> I'm outraged.
You and me both. These companies should be directly liable for the “advice” their plausible bullshit generators give.
