Inevitable, again? Perhaps strong rules reduce revenue?
https://www.washingtonpost.com/technology/2023/08/28/ai-2024-election-campaigns-disinformation-ads/
ChatGPT breaks its own rules on political messages
When OpenAI last year unleashed ChatGPT, it banned political campaigns from using the artificial intelligence-powered chatbot — a recognition of the potential election risks posed by the tool.
But in March, OpenAI updated its website with a new set of rules limiting only what the company considers the most risky applications. These rules ban political campaigns from using ChatGPT to create materials targeting specific voting demographics, a capability that could be abused to spread tailored disinformation at an unprecedented scale.
Yet an analysis by The Washington Post shows that OpenAI for months has not enforced its ban.
As expected?
https://cointelegraph.com/news/consumers-increase-distrust-artificial-intelligence-salesforce-survey
Consumer surveys show a growing distrust of AI and firms that use it
A global consumer survey from Salesforce shows a growing distrust toward firms that use AI, while an Australian survey found most believe it creates more problems than it solves.
… On Aug. 28, the customer relationship management (CRM) software firm released survey results from over 14,000 consumers and firms in 25 countries that suggested nearly three-quarters of customers are concerned about the unethical use of AI.
Over 40% of surveyed customers do not trust companies to use AI ethically, and nearly 70% said it’s more important for companies to be trustworthy as AI tech advances.
Worth a listen?
Protecting Society From AI Harms: Amnesty International’s Matt Mahmoudi and Damini Satija (Part 1)
… On this episode of the Me, Myself, and AI podcast, Matt and Damini join hosts Sam Ransbotham and Shervin Khodabandeh to highlight scenarios in which AI tools can put human rights at risk, such as when governments and public-sector agencies use facial recognition systems to track social activists or algorithms to make automated decisions about public housing access and child welfare. Damini and Matt caution that AI technology cannot fix human problems like bias, discrimination, and inequality; that will take human intervention and changes to public policy.