
Alisa Liu
@alisawuffles
PhD student at @uwcse @uwnlp
ID: 1197247629136203776
https://alisawuffles.github.io/
Joined 20-11-2019 20:17:38
324 Tweets
2.2K Followers
350 Following

I’ve been fascinated lately by the question: what kinds of capabilities might base LLMs lose when they are aligned? In other words, where can alignment make models WORSE? I’ve been looking into this with Christopher Potts, and here's one piece of the answer: randomness and creativity.
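One way to make the "randomness" claim concrete (this is an illustrative sketch, not the thread's actual method): prompt a model many times with something like "Pick a random number from 1 to 10" and measure the entropy of its answers. The `sample_fn` callables and toy samplers below are hypothetical stand-ins for a base vs. aligned model.

```python
import math
import random
from collections import Counter
from typing import Callable

def answer_entropy(sample_fn: Callable[[str], str], prompt: str, n: int = 200) -> float:
    """Shannon entropy (in bits) of the model's answers over n samples."""
    counts = Counter(sample_fn(prompt) for _ in range(n))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy stand-ins: a "base-like" sampler that spreads probability mass evenly
# vs. an "aligned-like" sampler that collapses onto a favorite answer.
base_like = lambda _prompt: str(random.randint(1, 10))
aligned_like = lambda _prompt: "7" if random.random() < 0.8 else str(random.randint(1, 10))

prompt = "Pick a random number from 1 to 10."
print(f"base-like entropy:    {answer_entropy(base_like, prompt):.2f} bits")
print(f"aligned-like entropy: {answer_entropy(aligned_like, prompt):.2f} bits")
```

Lower entropy on prompts that ask for random or diverse outputs would be one symptom of the kind of capability loss the thread is pointing at.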
