Wednesday, June 14, 2023

Is this because AI isn’t human? Another area for the recognition of AI personhood?

https://www.axios.com/pro/tech-policy/2023/06/14/hawley-blumenthal-bill-section-230-ai

First look: Bipartisan bill denies Section 230 protection for AI

Sens. Josh Hawley and Richard Blumenthal want to clarify that the internet's bedrock liability law does not apply to generative AI, per a new bill introduced Wednesday that was shared exclusively with Axios.

Details: Hawley and Blumenthal's "No Section 230 Immunity for AI Act" would amend Section 230 "by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI," per a description of the bill from Hawley's office.





A common problem. Employees (especially techies) want to try the latest, flashiest gizmos, even if the organization hasn't accepted the risk.

https://www.theverge.com/2023/6/13/23759101/stack-overflow-developers-survey-ai-coding-tools-moderators-strike

Stack Overflow survey finds developers are ready to use AI tools — even if they don’t fully trust them

A survey of developers by coding Q&A site Stack Overflow has found that AI tools are becoming commonplace in the industry even as coders remain skeptical about their accuracy. The survey comes at an interesting time for the site, which is trying to work out how to benefit from AI while dealing with a strike by moderators over AI-generated content.

The survey found that 77 percent of respondents felt favorably about using AI in their workflow and that 70 percent are already using or plan to use AI coding tools this year.

Respondents cited benefits like increased productivity (33 percent) and faster learning (25 percent) but said they were wary about the accuracy of these systems. Only 3 percent of respondents said they “highly trust” AI coding tools, with 39 percent saying they “somewhat trust” them. Another 31 percent were undecided, with the rest describing themselves as somewhat distrustful (22 percent) or highly distrustful (5 percent).





The automated lawyer?

https://www.bespacific.com/the-gptjudge-justice-in-a-generative-ai-world/

The GPTJudge: Justice in a Generative AI World

Grossman, Maura and Grimm, Paul and Brown, Dan and Xu, Molly, The GPTJudge: Justice in a Generative AI World (May 23, 2023). Duke Law & Technology Review, Vol. 23, No. 1, 2023, Available at SSRN: https://ssrn.com/abstract=4460184

“Generative AI (“GenAI”) systems such as ChatGPT recently have developed to the point where they are capable of producing computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not AI-generated. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, the ability of juries to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases. This article discusses these issues, and offers a comprehensive, yet understandable, explanation of what GenAI is and how it functions. It explores evidentiary issues that must be addressed by the bench and bar to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials. Importantly, it offers practical, step-by-step recommendations for courts and attorneys to follow in meeting the evidentiary challenges posed by GenAI. Finally, it highlights additional impacts that GenAI evidence may have on the development of substantive IP law, and its potential impact on what the future may hold for litigating cases in a GenAI world.”

See also e-Discovery Team – REAL OR FAKE? New Law Review Article Provides a Good Framework for Judges to Make the Call





So, whatcha doin’?

https://www.bespacific.com/surveillance-and-digital-control-at-work/

Surveillance and Digital Control at Work

Cracked Labs: “A research project on the datafication of work with a focus on Europe – Data collection is becoming ubiquitous, including at work. Systems that constantly record data about activities and behaviors in the workplace can quickly turn into devices for extensive monitoring and control, deeply affecting the rights and freedoms of employees. Opportunities and risks are not distributed equally. While employers optimize their business processes, workers are being rated, ranked, pressured and disciplined. Companies use this recorded data to monitor behavior, assess performance and, increasingly, to direct tasks, manage workers and make automated decisions about them. The project examines and maps how companies use personal data on (and against) employees. Based on previous German-language research, it investigates and documents systems and technologies that process personal data in the workplace and identifies key developments and issues relevant to worker rights. The project, which is described in more detail here, results in a series of case studies and research reports, which are published online over the course of 2023 and 2024 below.

  • Surveillance and Algorithmic Control in the Call Center. A case study on contact and service center software, automated management and outsourced work (52 pages, May 2023) – This project follows up on previous research on surveillance and digital control at work that focused on German-speaking countries, which was carried out by Cracked Labs between 2019 and 2021 and which resulted in a comprehensive German-language report, a web publication and a report on research based on interviews with work councils in Austria.”





Tools & Techniques. (Some may actually be useful!)

https://www.makeuseof.com/best-ai-tools-boredhumans/

The 7 Best AI Tools on BoredHumans

… BoredHumans is a website that offers a wide variety of free AI tools for anyone feeling a bit bored. Some examples include virtual pets, tarot card readings, deepfake videos, and a quote generator. While most of the AI functions are meant for entertainment, some are truly impressive and can be quite useful.

Once you're done getting a good laugh from the meme generator and seeing what you'd look like when you're older with the age progression tool, you can check out some of the best tools on the site. This includes a fake person generator, a super resolution tool, an interior design tool, and many more.


