Responsibility & Safety

Authors: Nahema Marchal and Rachel Xu

New research analyzes the misuse of multimodal generative AI today, to help build safer and more responsible technologies.

Generative artificial intelligence (AI) models that can produce images, text, audio, video and more are enabling a new era of creativity and commercial opportunity. Yet, as these capabilities grow, so does the potential for their misuse, including manipulation, fraud, bullying and harassment.

As part of our commitment to develop and use AI responsibly, we published a new paper, in partnership with Jigsaw and Google.org, analyzing how generative AI technologies are being misused today. Teams across Google are using this and other research to develop better safeguards for our generative AI technologies, among other safety initiatives. Together, […]