Thursday, October 19, 2023

It’s gonna happen. We ain’t ready.

https://fpf.org/blog/fpf-submits-comments-to-the-fec-on-the-use-of-artificial-intelligence-in-campaign-ads/

FPF Submits Comments to the FEC on the Use of Artificial Intelligence in Campaign Ads

On October 16, 2023, the Future of Privacy Forum submitted comments to the Federal Election Commission (FEC) on the use of artificial intelligence in campaign ads. The FEC is seeking comments in response to a petition that asked the Agency to initiate a rulemaking to clarify that its regulation on “fraudulent misrepresentation” applies to deliberately deceptive AI-generated campaign ads.

FPF’s comments follow an op-ed FPF’s Vice President of U.S. Policy Amie Stepanovich and AI Policy Counsel Amber Ezzell published in The Hill on how generative AI can be used to manipulate voters and election outcomes, and the benefits to voters and candidates when generative AI tools are deployed ethically and responsibly.

Read the comments here.





So, who has jurisdiction?

https://techcrunch.com/2023/10/18/clearview-wins-ico-appeal/

Selfie-scraper, Clearview AI, wins appeal against UK privacy sanction

Controversial U.S. facial recognition company Clearview AI has won an appeal against a privacy sanction issued by the U.K. last year.

In May 2022, the Information Commissioner’s Office (ICO) issued a formal enforcement notice on Clearview — which included a fine of around £7.5 million (~$10 million) — after concluding the selfie-scraping AI firm had committed a string of breaches of local privacy laws. It also ordered the company, which uses the scraped personal data to sell an identity-matching service to law enforcement and national security bodies, to delete information it held on U.K. citizens.

Clearview filed an appeal against the decision. In a ruling issued yesterday, its legal challenge to the ICO prevailed on jurisdictional grounds: the tribunal held that the company’s activities fall outside the scope of U.K. data protection law owing to an exemption related to foreign law enforcement.





Why? Are they not watching what their competitors (and the hacking community) are doing?

https://sloanreview.mit.edu/article/is-your-organization-investing-enough-in-responsible-ai-probably-not-says-our-data/

Is Your Organization Investing Enough in Responsible AI? ‘Probably Not,’ Says Our Data

For the second year in a row, MIT Sloan Management Review and Boston Consulting Group have assembled an international panel of AI experts to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. For our final question in this year’s research cycle, we asked our academic and practitioner panelists to respond to this provocation: “As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.”

While their reasons vary, most panelists recognize that RAI investments are falling short of what’s needed: 11 of the 13 were reluctant to agree that organizations’ investments in responsible AI are “adequate.” The panelists largely affirmed findings from our 2023 RAI global survey, in which less than half of respondents said they believe their company is prepared to make adequate investments in RAI. This is a pressing leadership challenge for companies that are prioritizing AI and must manage AI-related risks.

