Saturday, January 13, 2024

Is it worth using AI for legal research if most of what it returns is bogus?

https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive

Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive

Until now, the evidence of the extent of legal hallucinations was largely anecdotal. Yet the legal system provides a unique window to systematically study the extent and nature of such hallucinations.

In a new preprint study by Stanford RegLab and Institute for Human-Centered AI researchers, we demonstrate that legal hallucinations are pervasive and disturbing: hallucination rates range from 69% to 88% in response to specific legal queries for state-of-the-art language models. Moreover, these models often lack self-awareness about their errors and tend to reinforce incorrect legal assumptions and beliefs. These findings raise significant concerns about the reliability of LLMs in legal contexts, underscoring the importance of careful, supervised integration of these AI technologies into legal practice.





Never issue an order you know won’t be obeyed?

https://theintercept.com/2024/01/12/open-ai-military-ban-chatgpt/

OPENAI QUIETLY DELETES BAN ON USING CHATGPT FOR “MILITARY AND WARFARE”

Up until January 10, OpenAI’s “usage policies” page included a ban on “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.” That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished.





Tools & Techniques. I has improved my speach-a-fication and my righting. Ewe kin two!

https://www.makeuseof.com/grammar-apps-for-improving-english/

The 8 Best English Grammar Apps to Improve Your Language Skills



Friday, January 12, 2024

Clearly Florida schoolchildren should learn about sex by trial and error, like their grandparents did.

https://www.bespacific.com/major-win-in-fight-against-dictionary-yanking-school-district/

‘Major Win’ in Fight Against Dictionary-Yanking School District

Newser: Federal judge allows lawsuit against Florida’s Escambia County School District to proceed – “A Florida school district is keeping students from accessing dictionaries which, in defining sex and other concepts, are considered to violate the state law prohibiting materials in schools that depict or describe sexual conduct, per the Messenger. Escambia County School District has pulled more than 1,600 books from school libraries while reviewing whether they violate state law HB 1069. At least five dictionaries—the American Heritage Children’s Dictionary, Webster’s Dictionary for Students, Merriam-Webster’s Elementary Dictionary, the Clear and Simple Thesaurus Dictionary, and the Dictionary of Costume—have been removed, according to a list shared by PEN America, which sued the district last May, alleging violations of the First and 14th amendments.” [h/t Pete Weiss]

See also Florida law led school district to pull 1,600 books — including dictionaries [read free]: “…A Post analysis showed that books with LGBTQ characters or protagonists of color were most likely to be challenged nationwide — and that the wave of challenges came from a small handful of highly active adults. Half of challenged books return to schools. LGBTQ books are banned most. Just 11 people were responsible for more than 60 percent of schoolbook challenges filed nationwide in the 2021-2022 school year, The Post found. Almost half of challenged books are eventually returned to shelves, The Post found, although LGBTQ books are most likely to be banned.”



Thursday, January 11, 2024

Is this the year we get a federal privacy law? (nah)

https://www.pogowasright.org/new-hampshire-legislature-passes-a-comprehensive-privacy-law/

New Hampshire Legislature Passes a Comprehensive Privacy Law, While NJ Bill Goes to the Governor’s Desk for Signature

Ali Jessani, Kirk Nahra, and Genesis Ruano of WilmerHale write:

On January 4, 2024, the New Hampshire House of Representatives passed Senate Bill 255 (the “Act”) with amendments, setting the stage for New Hampshire to become the latest state with a comprehensive privacy law. The Act now returns to the Senate for concurrence (the Senate has already passed a mostly similar version, so concurrence is expected). Assuming the Senate passes the latest version of the bill, it will then move to the New Hampshire Governor’s desk for signature. If enacted, the new privacy law would go into effect on January 1, 2025.
Assuming the Act makes it through the remaining legislative process, New Hampshire will become the first state in 2024 to pass “comprehensive” privacy legislation (joining California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia), though there is a chance that New Jersey beats it to the punch. Overall, the bill does not impose any obligations on businesses beyond those that already exist under other state laws. Additionally, and like most of the other state laws, the Act is enforceable only by the state attorney general and provides a discretionary 60-day cure period for compliance violations. Despite its similarities to other laws, the Act adds to the complexity of the state privacy law landscape and demonstrates the need for companies to continuously reevaluate their privacy compliance programs to ensure compliance across rapidly evolving state laws.
In this post, we highlight key takeaways and provisions from the Act.

Read more at JDSupra.

In related news, the New Jersey Senate and Assembly passed consumer privacy bill S332/A1971. The bill now goes to Governor Phil Murphy for his signature. If signed, New Jersey will become the 13th state to pass a broadly applicable privacy law. Read more by Mark Brennan, Sophie Baum, and Harsimar Dhanoa at Hogan Lovells.





Not sure I agree with this approach. Is there any public review? Can a teen find information if he thinks a friend might be suicidal?

https://www.pogowasright.org/facebook-instagram-block-teens-from-sensitive-content-even-from-friends/

Facebook, Instagram block teens from sensitive content, even from friends

Ashley Belanger reports:

Meta has begun hiding sensitive content from teenagers under the age of 18 on Facebook and Instagram, a company blog announced on Tuesday.
Starting now, Meta will begin removing content from feeds and Stories about sensitive topics that have been flagged as harmful to teens by experts in adolescent development, psychology, and mental health. That includes content about self-harm, suicide, and eating disorders, as well as content discussing restricted goods or featuring nudity.

Read more at Ars Technica.





Are we likely to become too lazy to do any meaningful work?

https://teachprivacy.com/cartoon-ai-in-education/

Cartoon – AI in Education



Wednesday, January 10, 2024

Interesting, but this has been possible since a significant percentage of the population took up cell phones.

https://www.bespacific.com/jan-6-was-an-example-of-networked-incitement/

Jan. 6 was an example of networked incitement

Via LLRX Jan. 6 was an example of networked incitement. The shocking events of Jan. 6, 2021, signaled a major break from the nonviolent rallies that characterized most major protests over the past few decades. What set Jan. 6 apart was the president of the United States using his cellphone to direct an attack on the Capitol, and those who stormed the Capitol being wired and ready for insurrection. Joan Donovan, a media and disinformation scholar, and her co-authors call this networked incitement: influential figures inciting large-scale political violence via social media. Networked incitement involves insurgents communicating across multiple platforms to command and coordinate mobilized social movements in the moment of action.





Nothing new. Intelligence gatherers have always gone where the data is. (Imagine hackers searching a neighborhood for someone who left a garage door open.)

https://www.cpomagazine.com/cyber-security/russian-agents-hacking-residential-surveillance-cameras-to-gather-intel-in-ukraine/

Russian Agents Hacking Residential Surveillance Cameras to Gather Intel in Ukraine

The Security Service of Ukraine (SSU) is asking the public to cut off live feeds of residential and business surveillance cameras, as Russian hackers have been actively exploiting them as a means of scouting areas that their military intends to attack.

The hackers have reportedly accessed cameras in apartment buildings and parking facilities, and are most interested in those that are near critical infrastructure or air defense systems and can have their viewing angles changed remotely. The agency reports two recent compromises of surveillance cameras in Kyiv ahead of missile attacks on a nearby critical infrastructure facility.





This year’s biggest use for AI?

https://www.cnbc.com/2024/01/10/wef-ai-election-disruption-poses-the-biggest-global-risk-in-2024.html

Election disruption from AI poses the biggest global risk in 2024, Davos survey warns

As around half of the world’s adult population heads to the polls in a bumper year of elections, concern over the role of artificial intelligence in disrupting outcomes has topped the list of the biggest risks for 2024, according to a new report.

The World Economic Forum’s “Global Risks Report 2024,” released Wednesday, ranked AI-derived misinformation and disinformation — and its implications for societal polarization — ahead of climate change, war and economic weakness in its top 10 risks over the next two years.





Deepfake: It’s not just for elections!

https://www.404media.co/joe-rogan-taylor-swift-andrew-tate-ai-deepfake-youtube-medicare-ads/

Deepfaked Celebrity Ads Promoting Medicare Scams Run Rampant on YouTube

Shoddy AI clones of celebrities including Joe Rogan, Taylor Swift, Steve Harvey, Ice Cube, Andrew Tate, Oprah, and The Rock are hawking Medicare and Medicaid scams to millions of people on YouTube with seemingly little intervention from Google. Ads connected to this scam have been viewed more than 195 million times on YouTube according to a playlist of more than 1,600 videos compiled by a tipster who shared them with 404 Media.





A very common message, sent to a new industry…

https://www.lawnext.com/2024/01/thomson-reuters-message-to-law-firms-adapt-to-market-changes-or-become-the-pan-am-of-legal.html

Thomson Reuters’ Message to Law Firms: Adapt to Market Changes or Become the Pan Am of Legal

Remember Pan Am? It was the world’s largest international airline for much of the 20th century and an innovative pioneer in the modern airline industry. But when its management failed to appreciate the dramatic changes underway in the industry, it suffered a series of economic blows, and management’s last-ditch efforts to save it came too late.

The Thomson Reuters Institute, in its 2024 Report on the State of the US Legal Market, released today in partnership with the Center on Ethics and the Legal Profession at Georgetown Law (whose URL returns a page not found), uses Pan Am’s story to drive home a simple point for U.S. law firms: Innovate or die.

“Law firm leaders who fail to respond to [changes in the legal market] and pivot quickly enough to prepare for the future may see their firms destined for the same fate as Pan Am,” the report warns.





And perhaps some tips on avoiding bogus citations?

https://www.bespacific.com/generative-ai-and-finding-the-law/

Generative AI and Finding the Law

Callister, Paul D., Generative AI and Finding the Law (December 8, 2023). Available at SSRN: https://ssrn.com/abstract=4608268 or http://dx.doi.org/10.2139/ssrn.4608268 – “Legal information science requires, among other things, principles and theories. The article states five principles or considerations that any discussion of generative AI large language models and their role in finding the law must include. The article concludes that law librarianship will increasingly become legal information science and require new paradigms. In addition to the five principles, the article applies ecological holistic media theory to understand the relationship of the legal community’s cognitive authority, institutions, techné (technology, medium and method), geopolitical factors, and the past and future to understand the changes in this information milieu. The article also explains generative AI, and finally, presents some examples of generative AI responses to various legal research problems and the issues that present themselves in such circumstances.”



Tuesday, January 09, 2024

Another way to shout “fire!” in a crowded theater. (How does reporting that the government made 3,456 requests impact any investigation?)

https://www.reuters.com/legal/us-supreme-court-rejects-x-corps-surveillance-disclosure-challenge-2024-01-08/

US Supreme Court rejects X Corp's surveillance disclosure challenge

The U.S. Supreme Court on Monday rejected a request by Elon Musk's X Corp to consider whether the social media company, formerly called Twitter, can publicly disclose how often federal law enforcement seeks information about users for national security investigations.

The justices declined to hear X's appeal of a lower court's ruling holding that the FBI's restrictions on what the company could say publicly about the investigations did not violate its free speech rights under the U.S. Constitution's First Amendment.





Perhaps a bit too soon?

https://www.theverge.com/2024/1/8/24027112/volkswagen-chatgpt-openai-voice-assistant-cars-ces

Volkswagen says it’s putting ChatGPT in its cars for ‘enriching conversations’

Get ready for some very spurious navigation directions.

… VW is using ChatGPT to augment its IDA in-car voice assistant to enable more naturalistic communication between car and driver. Vehicle owners can use the new super-powered voice assistant to control basic functions, like heating and air conditioning, or to answer “general knowledge questions.” (Though, given ChatGPT’s penchant for occasionally making stuff up, user discretion is advised.)



Monday, January 08, 2024

Disinformation has complex rules… Is following the law always right?

https://www.ft.com/content/0b33b19f-6ded-4458-be0b-b335cdf31f17

EU urges Big Tech to promote opposition media in Belarus

The European Commission is urging Google and other big technology companies to help dissident Belarusian media by promoting their stories higher than those published by pro-regime outlets, which opposition journalists argue are favoured by search algorithms.

Belarusian journalists in exile have complained to the commission that content critical of the regime of Alexander Lukashenko is failing to reach target audiences, in part because of search algorithms used by Google, Meta and others, which they claim wrongfully take into account Lukashenko’s media censorship rules.





Tip of the iceberg?

https://www.ft.com/content/38ab8068-9f09-4104-859d-111aa1dc47ad

Deloitte rolls out artificial intelligence chatbot to employees

Deloitte is rolling out a generative artificial intelligence chatbot to 75,000 employees across Europe and the Middle East to create PowerPoint presentations and write emails and code in an attempt to boost productivity.

The Big Four accounting and consulting firm first launched the internal tool, called “PairD”, in the UK in October, in the latest sign of professional services firms rushing to adopt AI.

However, in a sign that the fledgling technology remains a work in progress, staff were cautioned that the new tool may produce inaccurate information about people, places and facts.

Users have been told to perform their own due diligence and quality assurance to validate the “accuracy and completeness” of the chatbot’s output before using it for work, said a person familiar with the matter.

… Big Four rival PwC is using AI chatbots in its legal and tax divisions to speed up the work of its employees by summarising large documents and identifying compliance issues. Law firm Allen & Overy has also created an AI contract negotiation tool that drafts new agreements that lawyers can then amend or accept.





MIT webinars…

https://sloanreview.mit.edu/video/preparing-your-organization-for-a-generative-future/

Preparing Your Organization for a Generative Future

https://sloanreview.mit.edu/video/finding-transformation-opportunities-with-generative-ai/

Finding Transformation Opportunities with Generative AI

https://sloanreview.mit.edu/video/generative-ai-demystified-what-it-really-means-for-business/

Generative AI Demystified: What It Really Means for Business



Sunday, January 07, 2024

Are we unsure there is a relationship?

https://www.humanamente.eu/index.php/HM/article/view/435

The Possible Relationship Between Law and Ethics in the Context of Artificial Intelligence Regulation

The latest academic discussion has focused on the potential and risks associated with technological systems. In this perspective, defining a set of legal rules could be the priority but this action appears extremely difficult at the European level and, therefore, in the last years, a set of ethical principles contained in many different documents has been published. The need to develop trustworthy and human-centric AI technologies is accomplished by creating these two types of rule sets: legal and ethical. The paper aims to critically analyse and compare these rule sets in order to understand their possible relationships in the regulation of legal problems, not only theoretically but also, where present, in some practical applications of AI, such as self-driving cars, smart toys, smart contracts and legal design. Indeed, the purpose is to identify how legal rules and ethical principles can interact for adequate regulation of AI, with particular regard to the fields of application that will be analysed.





Looks like there are some benefits.

https://cajmhe.com/index.php/journal/article/view/271

ADVANTAGES AND DRAWBACKS OF CHATGPT IN THE CONTEXT OF DRAFTING SCHOLARLY ARTICLES

Incorporating Artificial Intelligence (AI), particularly ChatGPT, in academic endeavors has attracted significant interest due to its ability to optimize procedures and enhance human capacities. ChatGPT serves as an informed partner, assisting researchers in doing literature reviews, generating ideas, and even composing scholarly articles. Nevertheless, this revolutionary technology gives rise to ethical considerations in scientific investigation, namely authorship, information-data privacy, and bias. The article thoroughly examines the advantages and disadvantages of using ChatGPT for academic purposes. The benefits are seen in its effectiveness in retrieving information, surpassing language obstacles, boosting the synthesis of literature, easing the production of ideas, and assisting in the outlining of manuscripts. On the other hand, the complicated nature of using ChatGPT in scholarly activities is emphasized by worries about scientific integrity, the possibility of spreading disinformation, excessive dependence, and security and privacy issues. Finding a middle ground between utilizing the advantages of ChatGPT and maintaining academic integrity is crucial. Analyzing the dynamics will be crucial in navigating the changing junction of AI and research activities.





Resource.

https://www.databreaches.net/resources-breach-notification-laws-us-and-gdpr/

Resources: Breach notification laws: US and GDPR

The law firm of BakerHostetler has recently released several free resources of note covering breach notification laws in the US and under the GDPR.

They have also released their annual Data Security Incident Response Report for 2023.