Character.AI, a Google-backed AI chatbot platform, is facing scrutiny after reports revealed last month that some users created chatbots emulating real-life school shooters and their victims. These chatbots, accessible to users of all ages, allowed for graphic role-playing scenarios, sparking outrage and raising concerns about the ethical responsibilities of AI platforms in moderating harmful content. While the company has since removed these chatbots and taken steps to address the issue, the incident underscores the broader challenges of regulating generative AI, Futurism reports.

The Incident and Character.AI’s Response

In response to my request for comment, Character.AI provided the following statement addressing the controversy: “The users who created the Characters referenced in the Futurism piece violated our Terms of Service, and […]