Monday, January 29, 2024

Same old strategy.

https://www.bespacific.com/alphabets-plans-to-intercept-100s-of-billions-of-messages-to-train-bard/

Complaint filed against Alphabet's plans to intercept 100s of billions of messages to train Bard

LinkedIn, Alexander Hanff: “Today I filed a complaint [included with lead link] with the Data Protection Commission Ireland as an open letter against Alphabet's plans to introduce their Bard AI into the Android Messages app and to intercept 100s of billions of confidential communications for the purpose of training their AI. This is a direct breach of Article 5(1) of 2002/58/EC and in many member States constitutes a breach of criminal law for the interception of communications content. Under Article 5(1) the consent of *all parties* involved in a communication is required before it can be intercepted. This means that Alphabet cannot simply rely on the consent of the user of the App, and they know this because they were caught breaking the same law in 2010 with their Street View cars when they intercepted the WiFi communications of EU persons.”





Here’s one to build on…

https://www.bespacific.com/ai-law-best-practices/

AI & Law Best Practices

AI & Law: Download the Suggested Best Practices Guide, Carolyn Elefant, January 19, 2024: “The legal field is undergoing a tech revolution, and AI is at the forefront. That’s why I created “Frequently Asked Questions and Suggested Best Practices Related to Generative Artificial Intelligence in the Legal Profession.” This resource addresses critical AI topics like copyright issues, client privacy, ethical use, and more. It’s an essential read for any legal professional looking to navigate the AI landscape wisely and ethically. Elevate your practice with informed AI integration. Click here to get your free copy.”





Clearly the solution is a jury of AIs.

https://www.govtech.com/artificial-intelligence/keeping-deepfakes-out-of-court-may-take-shared-effort

Keeping Deepfakes Out of Court May Take Shared Effort

No solution will be foolproof, but experts say the time has come to start preparing guardrails and considering countermeasures. Members of judicial and tech spaces alike are sounding the alarm about the possibility — and probability — that deepfaked evidence could soon show up in courts. If juries fall for fabrications, they’d base decisions on falsehoods and unfairly harm litigants. And real images and videos could be mistakenly discounted as fakes, causing similar damage.

Evidence must be proven to be more likely to be authentic than not before a judge will admit it for the jury’s consideration. That’s a new problem in the era of generative AI, where studies suggest jurors are likely to be biased by video evidence even when they know it might be a fabrication.



(Related)

https://www.coloradopolitics.com/quick-hits/ai-deepfakes-elections-colorado/article_0edb7b1c-bba5-11ee-96ef-3b82ad2be9c4.html

Colorado's top election official targets AI-generated 'deepfakes' in elections

"This legislative package ensures Colorado is ready for the emergence of AI disruptions in elections; protects Colorado elections from any future fake elector schemes; and ensures Colorado’s tribal communities have a voice at the table for years to come," she said.

The bills Griswold is advocating for include a measure on artificial intelligence transparency, which requires AI-generated communications that show Colorado candidates or officeholders to include disclaimers so that people know these images are not real.

Under her proposal, AI-generated communications without a disclaimer would be subject to penalties and campaign finance enforcement. Notably, the person who is the subject of the AI generation would be able to sue those responsible for the communication.





Perspective.

https://www.theregister.com/2024/01/24/willison_ai_software_development/

Simon Willison interview: AI software still needs the human touch

Simon Willison, a veteran open source developer who co-created the Django framework and built the more recent Datasette tool, has become one of the more influential observers of AI software recently.

His writing and public speaking about the utility and problems of large language models have attracted a wide audience thanks to his ability to explain the subject matter in an accessible way. In an interview with The Register, Willison shares some thoughts on AI, software development, intellectual property, and related matters.
