Thursday, April 25, 2024

Congress is uncomfortable with TikTok.

https://www.theverge.com/2024/4/24/24139036/biden-signs-tiktok-ban-bill-divest-foreign-aid-package

Biden signs TikTok ‘ban’ bill into law, starting the clock for ByteDance to divest it



(Related) President Biden is comfortable with TikTok.

https://www.nbcnews.com/politics/joe-biden/biden-campaign-keep-using-tiktok-signed-ban-law-rcna149158

Biden campaign plans to keep using TikTok through the election





And then what? Do we trust it enough to send them the location and date of the next insurrection?

https://www.nationalreview.com/corner/good-news-ai-can-apparently-spot-conservatives-on-sight-via-facial-recognition-technology/

Good News: AI Can Apparently Spot Conservatives on Sight via Facial Recognition Technology



Wednesday, April 24, 2024

Consent is fiction.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4333743

Murky Consent: An Approach to the Fictions of Consent in Privacy Law

Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.

Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. Murky consent should be subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid. Murky consent should rest on shaky ground. Because the law pretends people are consenting, the law’s goal should be to ensure that what people are consenting to is good. Doing so promotes the integrity of the fictions of consent. I propose four duties to achieve this end: (1) duty to obtain consent appropriately; (2) duty to avoid thwarting reasonable expectations; (3) duty of loyalty; and (4) duty to avoid unreasonable risk. The law can’t make the tale of privacy consent less fictional, but with these duties, the law can ensure the story ends well.





Tools & Techniques. (Talking gooder to your AI)

https://www.makeuseof.com/ai-prompting-tips-and-tricks-that-actually-work/#explain-what-hasn-39-t-worked-when-you-39-ve-prompted-in-the-past

7 AI Prompting Tips and Tricks That Actually Work

A whole new world of prompt engineering is springing into life, all dedicated to crafting and perfecting the art of AI prompting. But you can skip the tricky bits and improve your AI prompting game with these tips and tricks.





Tools & Techniques. Soon, humans not required.

https://www.police1.com/police-products/police-technology/software/report-writing/axon-releases-draft-one-ai-powered-report-writing-software

Axon releases Draft One, AI-powered report-writing software

Axon has announced the release of Draft One, a new software product that drafts police report narratives in seconds based on auto-transcribed body-worn camera audio, according to a press release.

Reporting is a critical component of good police work; however, it has become a significant part of the job. Axon found that officers in the U.S. can spend up to 40% of their time, roughly 15 hours per week, on what is essentially data entry.
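The general pattern here, auto-transcribe the audio and then prompt a language model to turn the transcript into a draft narrative for human review, is easy to sketch. What follows is a minimal illustration of that transcribe-then-draft pipeline, not Axon's implementation; the OpenAI client, model names, prompt, and file name are all assumptions for the example.

# Minimal sketch of a transcribe-then-draft pipeline; illustrative only, not Axon's product.
# Assumes the openai Python package and an API key in OPENAI_API_KEY; model names,
# prompt, and file name are placeholders.
from openai import OpenAI

client = OpenAI()

# Step 1: auto-transcribe the body-worn camera audio.
with open("bodycam_clip.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Step 2: ask a language model to draft a report narrative from the transcript.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Draft a factual, first-person incident narrative based only on "
                    "the transcript. Mark anything unclear for officer review instead "
                    "of guessing."},
        {"role": "user", "content": transcript.text},
    ],
)

print(draft.choices[0].message.content)  # The draft still requires officer review and sign-off.

However the product actually works under the hood, the pattern is the same: the model produces a first draft, and the officer remains responsible for reviewing and finalizing it.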





Tools & Techniques.

https://www.lawnext.com/2024/04/launching-today-the-first-meeting-bot-specifically-for-legal-professionals-for-use-in-depositions-hearings-and-more.html

Exclusive: Launching Today Is The First Meeting Bot Specifically for Legal Professionals, for Use In Depositions, Hearings, and More

You may have noticed of late that many of your video meetings have an unfamiliar attendee — a meeting bot, invited by one of the human participants, that produces a recording or transcript when the meeting is over. But while there are several such products on the market, none have been developed to meet the specific needs of legal professionals.

That changes today with the beta launch of CoCounsel.ai, the first legally nuanced meeting bot. It can join a legal event such as a deposition, hearing or arbitration, and it uses legal-specific AI speech-to-text to provide a legally formatted, highly accurate real-time transcript, along with features such as bookmarking, tagging and archiving.



Tuesday, April 23, 2024

This seems to be dominating the news, but I’m not going to spend much time with it.

https://www.bespacific.com/at-the-top-of-the-ticket-a-criminal-defendant/

At the Top of the Ticket, a Criminal Defendant

Greg Olear. Trump may well be a convicted felon by Election Day. He’s still the GOP nominee. “Yesterday, opening statements were heard in the case of The People of the State of New York v. Donald J. Trump. The defendant—a fixture in the New York tabloids for decades, a former reality TV star, and, improbably, the 45th President of the United States—is accused of “the crime of FALSIFYING BUSINESS RECORDS IN THE FIRST DEGREE, in violation of Penal Law §175.10,” a Class E felony. There are 34 counts in the indictment, each one specifying a unique instance of Trump running afoul of the law… A Class E felony is as low-rung as it sounds. This isn’t instigating a coup against our democracy, or making off with top secret documents, or bullying Georgia election officials to ensure that an election went his way. In the grand scheme of things, these counts are minor crimes. All it takes is one intractable MAGA on the jury who thinks this is a Deep State conspiracy, or that Stormy Daniels is some vindictive gold-digger, and Trump will skate. Even so, a former POTUS is a criminal defendant. Let’s pause for a moment and—to use a phrase I abhor that was ubiquitous on Twitter seven years ago—let that sink in. None of the other 43 previous presidents (Grover Cleveland was 22 and 24) were indicted for even a single crime, Ulysses Grant’s need for speed notwithstanding. Nixon likely would have been but was pre-emptively pardoned, so we’ll never know. A FPOTUS indictment, therefore, is unprecedented. And this is just the first of Trump’s criminal trials. There are three more pending. Not one, not two, but three: four, altogether. Four! That doesn’t even take into account the civil fraud case, where the State of New York is poised to seize almost half a billion dollars in assets from Trump pending appeal—and that assumes that the bond he secured winds up being legit…”

See also Axios: New York Courts to release daily transcripts from Trump hush money trial



Yes and no. Some things change, some remain the same.

https://www.axios.com/2024/04/16/ai-top-secret-intelligence

"Top secret" is no longer the key to good intel in an AI world: report

… Today's intelligence systems cannot keep pace with the explosion of data now available, requiring "rapid" adoption of generative AI to keep an intelligence advantage over rival powers.

  • The U.S. intelligence community "risks surprise, intelligence failure, and even an attrition of its importance" unless it embraces AI's capacity to process floods of data, according to the report from the Special Competitive Studies Project.

  • The federal government needs to think more in terms of "national competitiveness" than "national security," given the wider range of technologies now used to attack U.S. interests.



Something I have been meaning to try. Could this turn Shakespeare into a graphic novel?

https://www.makeuseof.com/best-open-source-ai-image-generators/

The 5 Best Open-Source AI Image Generators

AI-based text-to-image generation models are everywhere and becoming easier to access daily. While it's easy just to visit a website and generate the image you're looking for, open-source text-to-image generators are your best bet if you want more control over the generation process.
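For a sense of what "more control" means in practice, here is a minimal local text-to-image sketch using Hugging Face's open-source diffusers library with a Stable Diffusion checkpoint. The model ID, prompt, and parameter values are illustrative assumptions, and a CUDA GPU is assumed.

# Minimal local text-to-image sketch; model ID, prompt, and settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

prompt = "A graphic-novel panel of Hamlet holding a skull, dramatic ink shading"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("hamlet_panel.png")

The extra control the article alludes to lives in knobs like num_inference_steps, guidance_scale, negative prompts, and fixed random seeds, which a hosted web generator does not necessarily expose.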




Sunday, April 21, 2024

Long-term implications? Pollution of the LLM corpus.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4771884

Do large language models have a legal duty to tell the truth?

Careless speech is a new type of harm created by large language models (LLM) that poses cumulative, long-term risks to science, education, and the development of shared social truths in democratic societies. LLMs produce responses that are plausible, helpful, and confident but that contain factual inaccuracies, inaccurate summaries, misleading references, and biased information. These subtle mistruths are poised to cause a severe cumulative degradation and homogenisation of knowledge over time. This article examines the existence and feasibility of a legal duty for LLM providers to create models that “tell the truth.” We argue that LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. Careless speech is defined and contrasted with the simplified concept of “ground truth” in LLMs and prior discussion of related truth-related risks in LLMs including hallucinations, misinformation, and disinformation. The existence of truth-related obligations in EU law is then assessed, focusing on human rights law and liability frameworks for products and platforms. Current frameworks generally contain relatively limited, sector-specific truth duties. The article concludes by proposing a pathway to create a legal truth duty applicable to providers of both narrow- and general-purpose LLMs.





Law firms will use AI. How will they prepare?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4794225

Leveraging The Use of Artificial Intelligence In Legal Practice

The integration of Artificial Intelligence (AI) into legal practice has revolutionized the legal landscape, offering unprecedented opportunities for efficiency and accuracy. By embracing AI technologies and adapting to the evolving legal landscape, legal professionals can enhance efficiency, accuracy, and client satisfaction, ultimately shaping the future of the legal profession. However, the adoption of AI in legal practice also presents challenges, including ethical considerations, data privacy concerns, and the need for specialized training. As legal professionals embrace AI technologies, it becomes imperative to address these challenges proactively and ensure responsible and ethical use. This presentation explores the diverse applications of AI in legal practice and its implications for the legal profession.



Saturday, April 20, 2024

Do we need a chapter here?

https://www.geekwire.com/2024/seattle-tech-vet-calls-rapidly-growing-ai-tinkerers-meetups-the-new-homebrew-computer-club-for-ai/

Seattle tech vet calls rapidly growing ‘AI Tinkerers’ meetups the new ‘Homebrew Computer Club’ for AI

A first meetup in Seattle in November 2022 attracted 12 people. A second in Austin was led by GitHub Copilot creator Alex Graveley, who came up with the name “AI Tinkerers.”

Nearly a year and a half later, Heitzeberg said the idea has taken off and is going global. In a LinkedIn post last week, he said eight cities — from Seattle to Chicago to Boston to Medellin, Colombia, and elsewhere — have AI Tinkerers meetups planned over the next month.

“We are kind of the Homebrew Computer Club of AI,” Heitzeberg said, referencing the famed hobbyist group that gathered in Silicon Valley in the mid-1970s to mid-1980s and attracted the likes of Apple founders Steve Jobs and Steve Wozniak. “It was people trying stuff. It’s that for AI, and it’s really needed and really good for innovation.”



Friday, April 19, 2024

I worry that “force” might eventually include beating a password out of me.

https://www.bespacific.com/cops-can-force-suspect-to-unlock-phone-with-thumbprint-us-court-rules/

Cops can force suspect to unlock phone with thumbprint, US court rules

Ars Technica: “The US Constitution’s Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan, a federal appeals court ruled yesterday. The ruling does not apply to all cases in which biometrics are used to unlock an electronic device but is a significant decision in an unsettled area of the law. The US Court of Appeals for the 9th Circuit had to grapple with the question of “whether the compelled use of Payne’s thumb to unlock his phone was testimonial,” the ruling in United States v. Jeremy Travis Payne said. “To date, neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial.” A three-judge panel at the 9th Circuit ruled unanimously against Payne, affirming a US District Court’s denial of Payne’s motion to suppress evidence. Payne was a California parolee who was arrested by California Highway Patrol (CHP) after a 2021 traffic stop and charged with possession with intent to distribute fentanyl, fluorofentanyl, and cocaine. There was a dispute in District Court over whether a CHP officer “forcibly used Payne’s thumb to unlock the phone.” But for the purposes of Payne’s appeal, the government “accepted the defendant’s version of the facts, i.e., ‘that defendant’s thumbprint was compelled.'” Payne’s Fifth Amendment claim “rests entirely on whether the use of his thumb implicitly related certain facts to officers such that he can avail himself of the privilege against self-incrimination,” the ruling said. Judges rejected his claim, holding “that the compelled use of Payne’s thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking.” “When Officer Coddington used Payne’s thumb to unlock his phone—which he could have accomplished even if Payne had been unconscious—he did not intrude on the contents of Payne’s mind,” the court also said…”





Perspective. Worth an hour of your time.

https://www.nationalreview.com/corner/the-rise-of-the-machines-john-etchemendy-and-fei-fei-li-on-our-ai-future/

The Rise of The Machines: John Etchemendy and Fei-Fei Li on Our AI Future

John Etchemendy and Fei-Fei Li are the co-directors of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), founded in 2019 to “advance AI research, education, policy and practice to improve the human condition.” In this interview, they delve into the origins of the technology, its promise, and its potential threats. They also discuss what AI should be used for, where it should not be deployed, and why we as a society should — cautiously — embrace it.





Interesting story of an unlevel playing field.

https://lawrencekstimes.com/2024/04/18/lhs-journalists-dispute-gaggle/

Lawrence journalism students convince district to reverse course on AI surveillance they say violates freedom of press

Journalism students at Lawrence High School have convinced the school district to remove their files from the purview of a controversial artificial intelligence surveillance system after months of debate with administrators.

The AI software, called Gaggle, sifts through anything connected to the district’s Google Workspace — which includes Gmail, Drive and other products — and flags content it deems a safety risk, such as allusions to self-harm, depression, drug use and violence.



Thursday, April 18, 2024

Sorry for the short notice, but I just found out myself. Privacy Foundation Seminar:

Artificial Intelligence and the Practice of Law

Friday, April 19th, 11:30 – 1:00

1 ethics CLE credit. Contact Kristen Dermyer 303-871-6487 <Kristen.Dermyer@du.edu> to register.





...and not just for lawyers.

https://www.bespacific.com/the-smartest-way-to-use-ai-at-work/

The Smartest Way to Use AI at Work

WSJ via MSN: “Day by day, there’s growing pressure at the office. Do you respond to all those clients—or let AI do it? Do you attend that meeting—or do you send a bot? About 20% of employed adults said they have used OpenAI’s ChatGPT for work as of February 2024, up from 8% a year ago, according to Pew Research Center. The most popular uses for AI at work are research and brainstorming, writing first-draft emails and creating visuals and presentations, according to an Adobe survey. Productivity boosts from AI are estimated to be worth trillions of dollars over the next decade, say consultants. Many companies are encouraging their workers to embrace and learn the new tools. The industries that will benefit most are sales and marketing, customer care, software engineering and product development. For most workers, it can make your day-to-day a bit less annoying. “If you’re going to use it as a work tool,” said Lareina Yee, a senior partner at the consulting firm McKinsey and chair of its Technology Council, “you need to think of all the ways it can change your own productivity equation.” Using AI at work could get you fired—or at least in hot water. A judge last year sanctioned a lawyer who relied on fake cases generated by ChatGPT, and some companies have restricted AI’s usage. Other companies and bosses are pushing staff to do more with AI, but you’ll need to follow guidelines. Rule No. 1: Don’t put any company data into a tool without permission. And Rule No. 2: Only use AI to do work you can easily verify, and be sure to check its work…” Uses include: Email; Presentations; Summaries; Meetings.





Too many tools, too little time.

https://www.makeuseof.com/custom-gpts-that-make-chat-gpt-better/

10 Custom GPTs That Actually Make ChatGPT Better

ChatGPT on its own is great, but did you know that you can use custom GPTs to streamline its functionality? Custom GPTs can teach you how to code, plan trips, transcribe videos, and much, much more, and there are heaps for you to choose from.

So, here are the best custom GPTs that actually make ChatGPT a better tool for any situation.





Not sure I believe these numbers…

https://www.edweek.org/technology/see-which-types-of-teachers-are-the-early-adopters-of-ai/2024/04

See Which Types of Teachers Are the Early Adopters of AI

Among social studies and English/language arts teachers, the share of AI users was higher than in the general teaching population. Twenty-seven percent of English teachers and social studies teachers use AI tools in their work. By comparison, 19 percent of teachers in STEM disciplines said they use AI, and 11 percent of elementary education teachers reported doing so.