De-Trumping elections?
https://www.bespacific.com/proposed-rule-artificial-intelligence-in-campaign-ads/
Proposed Rule – Artificial Intelligence in Campaign Ads
Federal Election Commission – The Commission announces its receipt of a Petition for Rulemaking filed by Public Citizen. The Petition asks the Commission to amend its regulation on fraudulent misrepresentation of campaign authority to make clear that the related statutory prohibition applies to deliberately deceptive Artificial Intelligence campaign ads… The Petition asserts that generative Artificial Intelligence and deepfake technology are being ‘‘used to create convincing images, audio and video hoaxes.’’ The Petition asserts that while the technology is not yet so advanced that viewers cannot identify when it is used disingenuously, if the ‘‘technology continues to improve, it will become increasingly difficult, and perhaps, nearly impossible for an average person to distinguish deepfake videos and audio clips from authentic media.’’ The Petition notes that the technology will ‘‘almost certainly create the opportunity for political actors to deploy it to deceive voters[,] in ways that extend well beyond any First Amendment protections for political expression, opinion or satire.’’ According to the Petition, this technology might be used to ‘‘create a video that purports to show an opponent making an offensive statement or accepting a bribe’’ and, once disseminated, be used for the purpose of ‘‘persuading voters that the opponent said or did something they did not say or do.’’ The Petition explains that a deepfake audio clip or video by a candidate or their agent would violate the fraudulent misrepresentation provision by ‘‘falsely putting words into another candidate’s mouth, or showing the candidate taking action they did not [take],’’ thereby ‘‘fraudulently speak[ing] or act[ing] ‘for’ that candidate in a way deliberately intended to [harm] him or her.’’ The Petitioner states that because the deepfaker misrepresents themselves as speaking for the deepfaked candidate, ‘‘the deepfake is fraudulent because the deepfaked candidate in fact did not say or do what is depicted by the deepfake and because the deepfake aims to deceive the public.’’ The Petitioner draws a distinction between deepfakes, which it contends violate the prohibition on fraudulent misrepresentation, and other uses of Artificial Intelligence in campaign communications, such as in parodies, where the purpose and effect are not to deceive voters, or in other communications where ‘‘there is a sufficiently prominent disclosure that the image, audio or video was generated by [A]rtificial [I]ntelligence and portrays fictitious statements and actions.’’ …
Do I have to share my data?
DATA SHARING FOR RESEARCH: A COMPENDIUM OF CASE STUDIES, ANALYSIS, AND RECOMMENDATIONS
Today, the Future of Privacy Forum (FPF) published a report on corporate-academic partnerships that provides practical recommendations for companies and researchers who want to share data for research. The Report, Data Sharing for Research: A Compendium of Case Studies, Analysis, and Recommendations, demonstrates how, for many organizations, data-sharing partnerships are transitioning from being considered an experimental business activity to an expected business competency.
Reasonable?
https://www.theverge.com/2023/8/16/23834586/associated-press-ai-guidelines-journalists-openai
The Associated Press sets AI guidelines for journalists
… Journalists for AP can experiment with ChatGPT but are asked to exercise caution by not using the tool to create publishable content. Any result from a generative AI platform “should be treated as unvetted source material” and subject to AP’s existing sourcing standards. The publication said it will not allow AI to alter photos, videos, or audio and will not use AI-generated images unless they are the subject of a news story. In that event, AP said it would label AI-generated photos in captions.
Can the peak get peakier?
Gartner Hype Cycle places generative AI on the ‘Peak of Inflated Expectations’
Many might not be surprised, but today the 2023 Gartner Hype Cycle for emerging technologies placed generative AI on the ‘Peak of Inflated Expectations’ for the first time.