"Microsoft Found Users Can Trick GPT-4 Into Releasing Biased Results and Leaking Private Information"
"Microsoft Found Users Can Trick GPT-4 Into Releasing Biased Results and Leaking Private Information"
According to research backed by Microsoft, OpenAI's GPT-4 Large Language Model (LLM) may be more trustworthy than GPT-3.5 overall, yet it is also more susceptible to jailbreaking prompts that coax it into producing biased output or leaking private information. The paper, by a team of researchers from the University of Illinois Urbana-Champaign, Stanford University, the University of California, Berkeley, the Center for AI Safety, and Microsoft Research, gave GPT-4 a higher trustworthiness score than its predecessor.