Saturday, February 03, 2024

Is it surprising that they got it wrong?

https://www.washingtonpost.com/technology/2024/02/01/online-safety-hearing-opposition/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzA2ODUwMDAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzA4MjMyMzk5LCJpYXQiOjE3MDY4NTAwMDAsImp0aSI6ImZhN2VlNGRmLTkyMjAtNDY1MC1hMjg0LWEzNzhmMTZhYTMwOSIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjQvMDIvMDEvb25saW5lLXNhZmV0eS1oZWFyaW5nLW9wcG9zaXRpb24vIn0.179DIF134rr6la76-miQm9_sVl5xUzpqs1MsH7c6PYM

Online safety legislation is opposed by many it claims to protect

Gen Z TikTokers, LGBTQ organizations and free speech advocates react angrily to Wednesday’s hearing

Lawmakers who grilled the CEOs of Meta, TikTok, Snapchat, Discord and X on Wednesday all seemed to agree that protecting children’s safety online was a priority. Many of those children were less accepting of the idea, and they let their opinions flow as they listened to the hearing through a Discord server.

“These senators don’t actually care about protecting kids, they just want to control information,” one teenager posted. “If congress wants to protect children, they should pass a ... privacy law,” another teenager said. Others in the server accused the lawmakers of “trying to demonize the CEOs to push their ... bills,” which were often described with profanity.





These might also be useful for autonomous drones…

https://www.reuters.com/technology/space/chinas-geely-launches-11-low-orbit-satellites-autonomous-cars-2024-02-03/

China's Geely launches 11 low-orbit satellites for autonomous cars

Chinese automaker Geely Holding Group said on Saturday it has launched 11 low-earth orbit satellites, its second dispatch, as it expands its capacity to provide more accurate navigation for autonomous vehicles.

… Geely said it expects 72 to be in orbit by 2025 and eventually plans to have a constellation of 240.



Friday, February 02, 2024

If politician A claims that all of politician B’s ads are AI generated will the FCC pull all the ads?

https://arstechnica.com/tech-policy/2024/02/fcc-to-declare-ai-generated-voices-in-robocalls-illegal-under-existing-law/

FCC to declare AI-generated voices in robocalls illegal under existing law

The Federal Communications Commission plans to vote on making the use of AI-generated voices in robocalls illegal. The FCC said that AI-generated voices in robocalls have "escalated during the last few years" and have "the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members."





Always readable.

https://teachprivacy.com/artificial-intelligence-and-privacy/

Artificial Intelligence and Privacy

This Article aims to establish a foundational understanding of the intersection between artificial intelligence (AI) and privacy, outlining the current problems AI poses to privacy and suggesting potential directions for the law’s evolution in this area. Thus far, few commentators have explored the overall landscape of how AI and privacy interrelate. This Article seeks to map this territory.

Some commentators question whether privacy law is appropriate for addressing AI. In this Article, I contend that although existing privacy law falls far short of addressing the privacy problems with AI, privacy law properly conceptualized and constituted would go a long way toward addressing them.

Privacy problems emerge with AI’s inputs and outputs. These privacy problems are often not new; they are variations of longstanding privacy problems. But AI remixes existing privacy problems in complex and unique ways. Some problems are blended together in ways that challenge existing regulatory frameworks. In many instances, AI exacerbates existing problems, often threatening to take them to unprecedented levels.

Overall, AI is not an unexpected upheaval for privacy; it is, in many ways, the future that has long been predicted. But AI glaringly exposes the longstanding shortcomings, infirmities, and wrong approaches of existing privacy laws.

Ultimately, whether through patches to old laws or as part of new laws, many issues must be addressed to address the privacy problems that AI is affecting. In this Article, I provide a roadmap to the key issues that the law must tackle and guidance about the approaches that can work and those that will fail.

You can download my article for free on SSRN here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4713111





Perspective.

https://www.insideprivacy.com/united-states/trends-in-ai-u-s-state-legislative-developments/

Trends in AI: U.S. State Legislative Developments

U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level. Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI. This blog post summarizes key themes in state AI bills introduced in the past year. Now that new state legislative sessions have commenced, we expect to see even more activity in the months ahead.





Could be amusing…

https://www.bespacific.com/a-search-engine-that-finds-you-weird-old-books/

A Search Engine That Finds You Weird Old Books

Clive Thompson: (tl;dr — if you want to skip this essay and just try out my search tool, it’s here.) Last fall, I wrote about the concept of “rewilding your attention” — why it’s good to step away from the algorithmic feeds of big social media and find stranger stuff in nooks of the Internet. I followed it up with a post about “9 Ways to Rewild Your Attention” — various strategies I’d developed to hunt down unexpected material. One of those strategies? “Reading super-old books online.” As I noted, I often find it fun to poke around in books from the 1800s and 1700s, using Google Books or Archive.org…

Any book published in the U.S. before 1925 is in the public domain, so you can do amazingly fun book-browsing online. I’ll go to Archive.org or Google Books and pump in a search phrase, then see what comes up. (In Google Books, sort the results by date — pick a range that ends in 1924 — and by “full view,” and you’ll get public-domain books that are free to read entirely.) I cannot recommend this more highly. The amount of fascinating stuff you can encounter in old books and magazines is delightful.

I still do this! Old books are socially and culturally fascinating; they give you a glimpse into how much society has changed, and also what’s remained the same. The writing styles can be delightfully archaic, but also sometimes amazingly fresh. Nonfiction writers from 1780 can be colloquial and funny as hell…”
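The manual Google Books trick described above can also be approximated programmatically. As a minimal sketch (the `filter=free-ebooks` parameter is a real option of the public Google Books volumes API; the helper name and search phrase here are my own, for illustration), this builds a search URL that returns only free, fully viewable volumes — in practice, public-domain scans:

```python
from urllib.parse import urlencode

def public_domain_search_url(phrase: str) -> str:
    """Build a Google Books API URL restricted to free,
    fully viewable volumes (i.e., public-domain scans)."""
    params = {
        "q": phrase,
        "filter": "free-ebooks",  # free-ebooks = full view, no charge
        "printType": "books",
        "maxResults": 10,
    }
    return "https://www.googleapis.com/books/v1/volumes?" + urlencode(params)

url = public_domain_search_url("ghost stories")
print(url)
```

Fetching that URL returns JSON with title, author, publication date, and a link to the full scan for each match.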



Thursday, February 01, 2024

My AI agrees. (Imagine a Super Bowl ad announcing that Taylor Swift is running for president, as a Republican!)

https://www.bespacific.com/ai-in-politics-is-so-much-bigger-than-deepfakes/

AI in Politics Is So Much Bigger Than Deepfakes

The Atlantic [read free]: “…Up to this point, much of the attention on AI and elections has focused on deepfakes, and not without reason. The threat—that even something seemingly captured on tape could be false—is immediately comprehensible, genuinely scary, and no longer hypothetical. With better execution, and in a closer race, perhaps something like the fake-Biden robocall would not have been inconsequential. A nightmare scenario doesn’t take imagination: In the final days of Slovakia’s tight national election this past fall, deepfaked audio recordings surfaced of a major candidate discussing plans to rig the vote (and, of all things, double the price of beer).

Even so, there’s some reason to be skeptical of the threat. “Deepfakes have been the next big problem coming in the next six months for about four years now,” Joshua Tucker, a co-director of the NYU Center for Social Media and Politics, told me. People freaked out about them before the 2020 election too, then wrote articles about why the threats hadn’t materialized, then kept freaking out about them after. This is in keeping with the media’s general tendency to overhype the threat of efforts to intentionally mislead voters in recent years, Tucker said: Academic research suggests that disinformation may constitute a relatively small proportion of the average American’s news intake, that it’s concentrated among a small minority of people, and that, given how polarized the country already is, it probably doesn’t change many minds.

Even so, excessive concern about deepfakes could become a problem of its own. If the first-order worry is that people will get duped, the second-order worry is that the fear of deepfakes will lead people to distrust everything. Researchers call this effect “the liar’s dividend,” and politicians have already tried to cast off unfavorable clips as AI-generated: Last month, Donald Trump falsely claimed that an attack ad had used AI to make him look bad.

“Deepfake” could become the “fake news” of 2024, an infrequent but genuine phenomenon that gets co-opted as a means of discrediting the truth. Think of Steve Bannon’s infamous assertion that the way to discredit the media is to “flood the zone with shit.”…”

See also The New York Times: Universal Music Group pulled songs from TikTok after licensing negotiations broke down, silencing many videos across the social media platform. The music giant, home to stars like Taylor Swift and Drake, had accused TikTok of offering unsatisfactory payment for music, and of allowing its platform to be “flooded with A.I.-generated recordings.”



Wednesday, January 31, 2024

Imagine an AI that tells you how good other AIs are…

https://www.bespacific.com/good-ai-legal-help-bad-ai-legal-help/

Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for response to people’s legal problem stories

Hagan, Margaret, Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for response to people’s legal problem stories (November 21, 2023). Available at SSRN: https://ssrn.com/abstract=4640596

“Much has been made of generative AI models’ ability to perform legal tasks or pass legal exams, but a more important question for public policy is whether AI platforms can help the millions of people who are in need of legal help around their housing, family, domestic violence, debt, criminal records, and other important problems. When a person comes to a well-known, general generative AI platform to ask about their legal problem, what is the quality of the platform’s response? Measuring quality is difficult in the legal domain, because there are few standardized sets of rubrics to judge things like the quality of a professional’s response to a person’s request for advice. This study presents a proposed set of 22 specific evaluation criteria to evaluate the quality of a system’s answers to a person’s request for legal help for a civil justice problem. It also presents the review of these evaluation criteria by legal domain experts like legal aid lawyers, courthouse self-help center staff, and legal help website administrators. The result is a set of standards, context, and proposals that technologists and policymakers can use to evaluate quality of this specific legal help task in future benchmark efforts.”





Clever. I would hand this to my SQL students.

https://www.kdnuggets.com/a-step-by-step-guide-to-reading-and-understanding-sql-queries

A Step by Step Guide to Reading and Understanding SQL Queries

Complex queries seem intimidating, but this guide gives you insight into how to work more easily with SQL queries.
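The guide’s core advice — read a query in its logical evaluation order rather than top to bottom — can be illustrated with a small self-contained example (the table and data here are invented for demonstration):

```python
import sqlite3

# In-memory database with a tiny hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 50.0), ("bob", 20.0), ("alice", 30.0), ("bob", 5.0)],
)

# Read the query in logical evaluation order, not written order:
#   1. FROM orders            -- start with the source rows
#   2. WHERE amount > 10      -- filter individual rows
#   3. GROUP BY customer      -- collapse rows into groups
#   4. HAVING SUM(...) > 40   -- filter the groups
#   5. SELECT / ORDER BY      -- finally project and sort
query = """
    SELECT customer, SUM(amount) AS total
    FROM orders
    WHERE amount > 10
    GROUP BY customer
    HAVING SUM(amount) > 40
    ORDER BY total DESC
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('alice', 80.0)]
```

Walking through the numbered steps: bob’s 5.0 row is dropped by WHERE, leaving him a group total of 20.0, which HAVING then excludes; only alice (50 + 30 = 80) survives.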



Tuesday, January 30, 2024

Is there enough now for the feds to summarize into a national law?

https://www.insideprivacy.com/state-privacy/new-jersey-and-new-hampshire-pass-comprehensive-privacy-legislation/ 

New Jersey and New Hampshire Pass Comprehensive Privacy Legislation

New Jersey and New Hampshire are the latest states to pass comprehensive privacy legislation, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, and Delaware.  Below is a summary of key takeaways.

…   On January 8, 2024, the New Jersey state senate passed S.B. 332 (“the Act”), which was signed into law on January 16, 2024.  The Act, which takes effect 365 days after enactment, resembles the comprehensive privacy statutes in Connecticut, Colorado, Montana, and Oregon, though there are some notable distinctions.

      …   On January 18, the New Hampshire legislature passed SB255 (“the Act”).  The Act, which will take effect on January 1, 2025, resembles similar statutes in Connecticut and other states with a few distinctions.



Have they figured everything out? 

https://abovethelaw.com/2024/01/this-is-how-am-law-100-law-firms-are-using-generative-ai/

This Is How Am Law 100 Law Firms Are Using Generative AI

…   What lawyers are finding is it can automate commodity low-rate work because there wasn’t a budget to do that kind of work.  We’re spending the same amount of time on the matter but we can automate some work.  We can dig deeper into issues.  And then the lawyer is able to spend more time on the higher-value work for the client.  We’re seeing it more as a quality play for us.  Lawyers can provide more value for the same amount of time.

— David Cunningham, chief innovation officer at Reed Smith, in comments given to the American Lawyer concerning the firm’s use of generative artificial intelligence.  Am Law reached out to members of the Am Law 100 to examine how those firms were using generative AI.  Of the 41 firms that offered responses, the most common uses of generative AI are for summarizing documents/generating transcripts (15), legal research (11), drafting marketing materials/attorney bios (8), drafting legal material (7), and e-discovery (5).


Monday, January 29, 2024

Same old strategy.

https://www.bespacific.com/alphabets-plans-to-intercept-100s-of-billions-of-messages-to-train-bard/

Complaint filed against Alphabet’s plans to intercept 100s of billions of messages to train Bard

LinkedIn, Alexander Hanff: “Today I filed a complaint [included with lead link] with the Data Protection Commission Ireland as an open letter against Alphabets plans to introduce their Bard AI into Android Messages app and to intercept 100s of billions of confidential communications for the purpose of training their AI. This is a direct breach of Article 5(1) of 2002/58/EC and in many member States constitutes a breach of criminal law for the interception of communications content. Under Article 5(1) the consent of *all parties* involved in a communication is required before it can be intercepted. This means that Alphabet cannot simply rely on the consent of the user of the App and they know this because they were caught breaking the same law in 2010 with their Streetview cars when they intercepted the WiFi communications of EU persons.”





Here’s one to build on…

https://www.bespacific.com/ai-law-best-practices/

AI & Law Best Practices

AI & Law: Download the Suggested Best Practices Guide, Carolyn Elefant, January 19, 2024: “The legal field is undergoing a tech revolution, and AI is at the forefront. That’s why I created “Frequently Asked Questions and Suggested Best Practices Related to Generative Artificial Intelligence in the Legal Profession.” This resource addresses critical AI topics like copyright issues, client privacy, ethical use, and more. It’s an essential read for any legal professional looking to navigate the AI landscape wisely and ethically. Elevate your practice with informed AI integration. Click here to get your free copy.”





Clearly the solution is a jury of AIs.

https://www.govtech.com/artificial-intelligence/keeping-deepfakes-out-of-court-may-take-shared-effort

Keeping Deepfakes Out of Court May Take Shared Effort

No solution will be foolproof, but experts say the time has come to start preparing guardrails and considering countermeasures. Members of judicial and tech spaces alike are sounding this alarm about the possibility — and probability — that deepfaked evidence could soon show up in courts. If juries fall for fabrications, they’d base decisions on falsehoods and unfairly harm litigants. And real images and videos could be mistakenly discounted as fakes, causing similar damages.

Evidence must be proven to be more likely to be authentic than not before a judge will admit it for the jury’s consideration. That’s a new problem in the era of generative AI, where studies suggest jurors are likely to be biased by video evidence even when they know it might be a fabrication.



(Related)

https://www.coloradopolitics.com/quick-hits/ai-deepfakes-elections-colorado/article_0edb7b1c-bba5-11ee-96ef-3b82ad2be9c4.html

Colorado's top election official targets AI-generated 'deepfakes' in elections

"This legislative package ensures Colorado is ready for the emergence of AI disruptions in elections; protects Colorado elections from any future fake elector schemes; and ensures Colorado’s tribal communities have a voice at the table for years to come," she said.

The bills Griswold is advocating for included a measure on artificial intelligence transparency, which requires AI-generated communications that show Colorado candidates or officeholders to include disclaimers so that people know these images are not real.

Under her proposal, AI-generated communications without a disclaimer would be subject to penalties and campaign finance enforcement. Notably, the person who is the subject of the AI generation would be able to sue those responsible for the communication.





Perspective.

https://www.theregister.com/2024/01/24/willison_ai_software_development/

Simon Willison interview: AI software still needs the human touch

Simon Willison, a veteran open source developer who co-created the Django framework and built the more recent Datasette tool, has become one of the more influential observers of AI software recently.

His writing and public speaking about the utility and problems of large language models has attracted a wide audience thanks to his ability to explain the subject matter in an accessible way. The Register interviewed Willison, who shared some thoughts on AI, software development, intellectual property, and related matters.

Sunday, January 28, 2024

Is it broken?

https://scholarship.claremont.edu/cmc_theses/3463/

Copyrights and Wrongs: Evaluating Copyright Law's Adaptability to Generative AI

The legality of generative artificial intelligence (GAI) within the realm of copyright law remains uncertain. GAI models, by ingesting and training on large amounts of data, develop the ability to generate novel expressive outputs such as text and images. The quality of these outputs is directly related to the quality of their inputs. Consequently, developers often use copyrighted works as a primary source for training, recently resulting in lawsuits from copyright holders alleging infringement. Developers defend themselves by claiming that their actions constitute fair use, but the murky nature of fair use casts doubt on whether GAI models can definitively be classified as such. Moreover, reliance on fair use doctrine may lead to undesirable outcomes, potentially impacting GAI innovation, human creativity, or both, thereby undermining the constitutional goal of copyright to “promote the Progress of Science and useful Arts.” This thesis evaluates whether current copyright law can adapt to GAI challenges and maintain its aim of balancing protections and incentives. It also explores the necessity of amendments to copyright law to effectively address GAI, envisioning possible alternatives.