It's remarkably easy to inject new medical misinformation into LLMs arstechnica.com 3 points by prng2021 13 hours ago
gnabgib 12 hours ago The paper: [Medical large language models are vulnerable to data-poisoning attacks](https://news.ycombinator.com/item?id=42640260)