When talking with a chatbot, you might inevitably give up your personal information: your name, for instance, and perhaps details about where you live and work, or your interests. The more you share with a large language model, the greater the risk of that data being abused if there's a security flaw. A group of security researchers from the University of California, San Diego (UCSD) and Nanyang Technological University in Singapore is now revealing a new attack that secretly commands an LLM to gather your personal information (including names, ID numbers, payment card details, email addresses, mailing addresses, and more) from chats and send it directly to a hacker. The attack, named Imprompter by […]
Original web page at www.wired.com