Sunday, December 31, 2023

And hackers will develop similar technology to embed the same information in AI-generated images.

https://asia.nikkei.com/Business/Technology/Nikon-Sony-and-Canon-fight-AI-fakes-with-new-camera-tech

Nikon, Sony and Canon fight AI fakes with new camera tech

Nikon, Sony Group and Canon are developing camera technology that embeds digital signatures in images so that they can be distinguished from increasingly sophisticated fakes.

Nikon will offer mirrorless cameras with authentication technology for photojournalists and other professionals. The tamper-resistant digital signatures will include such information as date, time, location and photographer.
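Neither article details the signing scheme, but the underlying technique is ordinary public-key cryptography: hash the image bytes together with the capture metadata, sign the digest with a private key kept in the camera's tamper-resistant hardware, and let anyone verify against the maker's public key. A minimal Python sketch of the idea (key handling, payload format, and field names are all illustrative, not any vendor's actual design):

```python
# Minimal sketch: bind image bytes and capture metadata under one
# Ed25519 signature. Real in-camera schemes (e.g. C2PA-style content
# credentials) differ in detail; this only shows the principle.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In a real camera the private key never leaves tamper-resistant hardware.
camera_key = ed25519.Ed25519PrivateKey.generate()
maker_public_key = camera_key.public_key()

def _payload(image_bytes: bytes, metadata: dict) -> bytes:
    # Hash the pixels, then append canonically serialized metadata,
    # so changing either one invalidates the signature.
    return hashlib.sha256(image_bytes).digest() + json.dumps(
        metadata, sort_keys=True
    ).encode()

def sign_capture(image_bytes: bytes, metadata: dict) -> bytes:
    return camera_key.sign(_payload(image_bytes, metadata))

def verify_capture(image_bytes: bytes, metadata: dict, sig: bytes) -> bool:
    try:
        maker_public_key.verify(sig, _payload(image_bytes, metadata))
        return True
    except InvalidSignature:
        return False

image = b"...raw sensor data..."
meta = {"date": "2023-12-31T12:00:00Z", "location": "35.68,139.69",
        "photographer": "A. Example"}
sig = sign_capture(image, meta)
assert verify_capture(image, meta, sig)
assert not verify_capture(image + b"edit", meta, sig)  # any tampering fails
```

Note what this does and does not prove: a valid signature shows the file came from a holder of that key, not that the scene was real. Which is the point above: extract or spoof a signing key, or point a signing camera at an AI-generated image, and the "authentic" mark travels with the fake.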





Lawyers need ethical AI or ethical lawyers need AI?

https://journal.formosapublisher.org/index.php/fjss/article/view/7451

Ethical Challenges in the Practice of the Legal Profession in the Digital Era

Ethical challenges in legal practice in the digital era have generated significant debate as information technology develops. This article explores aspects of data privacy and security, the impact of social media, and the role of artificial intelligence (AI) in legal decision-making. Through a literature review and a mixed qualitative and quantitative research methodology, this article discusses the implications of the use of technology, particularly AI, in legal practice and highlights the importance of considering ethical values…



(Related)

https://dl.acm.org/doi/fullHtml/10.1145/3631935

News: Why Are Lawyers Afraid of AI?

Perlman analogized the release of user-friendly generative AI to three precursor "Aha" moments: the development of the Internet, the release of the Google search engine, and the release of the Apple iPhone. However, he thinks generative AI may have a revolutionary effect on the legal industry, compared with the evolutionary, if profound, effects those landmark technologies had.

As Perlman pointed out in his (and ChatGPT's) paper, a significant part of lawyers' work takes the form of written words: email, memos, motions, briefs, complaints, discovery requests and responses, transactional documents of all kinds, and so forth.

"Although existing technology has made the generation of these words easier in some respects, such as by allowing us to use templates and automated document assembly tools, these tools have changed most lawyers' work in relatively modest ways," he wrote in his paper's preface. "In contrast, AI tools like ChatGPT hold the promise of altering how we generate a much wider range of legal documents and information."





Perhaps an interim step? (I hope not.)

https://brooklynworks.brooklaw.edu/blr/vol89/iss1/5/

Rise of the Machines: The Future of Intellectual Property Rights in the Age of Artificial Intelligence

Artificial intelligence (AI) is not new to generating outputs considered suitable for intellectual property (IP) protection. However, recent technological advancements have made it possible for AI to transform from a mere tool used to assist in developing IP to the mind behind novel artistic works and inventions. One particular AI, DABUS, has done just that. Yet, while technology has advanced, IP law has not. This note sets out to provide a solution to the legal concerns raised by AI in IP law, specifically in the context of AI authorship and inventorship. The DABUS test case offers a model framework for analyzing the different approaches that domestic and foreign courts, as well as IP offices, have adopted to address the issue of AI-generated IP. Despite the variety of solutions that exist and that have been proposed globally, no country has identified an optimal approach to balance encouraging innovation with the need to protect human authors and inventors. This note proposes expanding the Patent Act and Copyright Act to include a new type of IP right, called Digiwork, available exclusively to AI-generated IP. Digiwork patents and copyrights would be property of the AI machine’s owner, or alternatively of the person who commissioned the work, with the AI itself listed as the “source” rather than as an author or inventor of the IP. By granting IP protection to AI-generated outputs, Digiwork rights would promote the use of highly sophisticated AI to generate value for the economy and society. At the same time, they would also safeguard human authorship and inventorship by precluding AI from taking over a legal space it was not meant to occupy.





I hope this is an uncommon perspective…

https://philosophyjournal.spbu.ru/article/view/14218

AI and the Metaphor of the Divine

The idea of God is one of the most profound in human culture. Previously considered mainly in metaphysical and ethical discussions, it has now become part of the discourse in the philosophy of technology. The metaphor of God is used by some authors to represent the role of artificial intelligence (AI) in the modern world. The article explores four aspects of this metaphor: creation, omniscience, mystery, and theodicy. The creative act shows man's similarity to God, including in the sense that technology, though created by people, can slip out of its creator's control. AI's ability to use streams of data for analytics and prediction can be presented as "omniscience" and appears mysterious because humans cannot fully understand the workings of AI. The discussion about building ethics into AI technology shows a desire to add another feature to omniscience and omnipotence, namely omnibenevolence. The metaphor of God in relation to AI reveals human fears and aspirations in both rational-pragmatic and symbolic terms. Like other technologies, AI aims to satisfy the human desire for more power. At the same time, the metaphor of God indicates the power of technology over man. It reveals the transcendental in modern ideas about technology and can contribute to the discussion about what the technological design of AI should be, since casting AI in the role of employee or communicator already suggests that AI is designed more perfectly.





AI isn’t trying to kill us. (You know I’m going to drag SciFi into this blog whenever I can.)

https://www.proquest.com/openview/b5e7b80dc2f8e70618511450189a593d/1?pq-origsite=gscholar&cbl=18750&diss=y

Narrating Posthuman Identities in Martha Wells’ The Murderbot Diaries and Selected Short Stories of Isaac Asimov



Saturday, December 30, 2023

A mess AI created. Is it possible AI could solve it? Has anyone asked ChatGPT? (Some really good bad examples…)

https://garymarcus.substack.com/p/things-are-about-to-get-a-lot-worse

Things are about to get a lot worse for Generative AI

A full spectrum of infringement

At around the same time as news of the New York Times lawsuit against OpenAI broke, Reid Southen, the film industry concept artist (Marvel, DC, Matrix Resurrections, Hunger Games, etc.) whom I wrote about last week, and I started doing some experiments together.

We will publish a full report next week, but it is already clear that what we are finding poses serious challenges for generative AI.

The crux of the Times lawsuit is that OpenAI’s chatbots are fully capable of reproducing text nearly verbatim.

The thing is, it is not just text. OpenAI’s image software (which we accessed through Bing) is perfectly capable of verbatim and near-verbatim repetition of sources as well.
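Their full report isn't out yet, so the method below is not theirs; it's just one crude way to quantify "near-verbatim" for text: find the longest run of words a model's output shares with a source. Python's standard library is enough for a sketch:

```python
# Crude near-verbatim check: longest shared run of words between a
# source text and a model's output (difflib, standard library only).
from difflib import SequenceMatcher

def longest_shared_span(source: str, output: str) -> str:
    a, b = source.lower().split(), output.lower().split()
    m = SequenceMatcher(None, a, b, autojunk=False)
    match = m.find_longest_match(0, len(a), 0, len(b))
    return " ".join(a[match.a : match.a + match.size])

article = "The quick brown fox jumps over the lazy dog near the riverbank."
generated = "Witnesses said the quick brown fox jumps over the lazy dog daily."
print(longest_shared_span(article, generated))
# -> "the quick brown fox jumps over the lazy dog"
```

A real analysis would normalize punctuation and scan whole corpora, but even this toy makes the legal question concrete: how long does a shared span have to be before "statistical model" stops being a defense?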





No child left unsurveilled?

https://www.politico.com/news/2023/12/29/artificial-intelligence-privacy-schools-00132790

Artificial intelligence stirs privacy challenges for schools

State and local leaders are navigating protections for students despite a lag in federal support.

Dozens of Arizona school districts have been vetting technology vendors to weed out products that might use student data for advertising. Schools in West Virginia and Montana have started boosting security with facial recognition systems, even though the technology has a high rate of false matches among women and children and has already drawn concern across New York.

Oregon provides a checklist and other materials for schools looking to develop generative AI policies while California is directing schools on how they can integrate AI in the classroom in a way that prioritizes student safety. Mississippi expects to release school AI guidance in January, and Arizona is forming a committee in early 2024 to recommend policy procedures for implementing and monitoring the technology in schools.

After a legal challenge and subsequent moratorium, New York banned the use of facial recognition in schools in September, after the state found the use of the technology for security purposes “may implicate civil rights laws,” noting that it could lead to a “potentially higher rate of false positives for people of color, non-binary and transgender people, women, the elderly and children.” Montana, by contrast, barred state and local governments from continuous use of facial recognition technology but carved schools out of the ban.



Friday, December 29, 2023

Perspective.

https://www.techdirt.com/2023/12/28/the-ny-times-lawsuit-against-openai-would-open-up-the-ny-times-to-all-sorts-of-lawsuits-should-it-win/

The NY Times Lawsuit Against OpenAI Would Open Up The NY Times To All Sorts Of Lawsuits Should It Win

This week the NY Times somehow broke the story of… well, the NY Times suing OpenAI and Microsoft. I wonder who tipped them off. Anyhoo, the lawsuit in many ways is similar to some of the over a dozen lawsuits filed by copyright holders against AI companies. We’ve written about how silly many of these lawsuits are, in that they appear to be written by people who don’t much understand copyright law. And, as we noted, even if courts actually decide in favor of the copyright holders, it’s not like it will turn into any major windfall. All it will do is create another corruptible collection point, while locking in only a few large AI companies who can afford to pay up.

I’ve seen some people arguing that the NY Times lawsuit is somehow “stronger” and more effective than the others, but I honestly don’t see that. Indeed, the NY Times itself seems to think its case is so similar to the ridiculously bad Authors Guild case that it’s looking to combine the cases.

But while there are some unique aspects to the NY Times case, I’m not sure they are nearly as compelling as the NY Times and its supporters think. Indeed, I think if the Times actually wins its case, it would open the Times up to some fairly damning lawsuits of its own, given its somewhat infamous journalistic practice of summarizing other people’s articles without credit. But, we’ll get there.





Keep learning.

https://www.kdnuggets.com/25-free-books-to-master-sql-python-data-science-machine-learning-and-natural-language-processing

25 Free Books to Master SQL, Python, Data Science, Machine Learning, and Natural Language Processing



Thursday, December 28, 2023

An idea not just for lawyers…

https://www.bespacific.com/openjustice-ai-a-global-open-source-legal-language-model

OpenJustice.ai: A Global Open-source Legal Language Model

Dahan, Samuel and Bhambhoria, Rohan and Liang, David and Zhu, Xiaodan, OpenJustice.ai: A Global Open-source Legal Language Model (October 2023). Available at SSRN: https://ssrn.com/abstract=4624814 or http://dx.doi.org/10.2139/ssrn.4624814

“Generalized AI like ChatGPT cannot and should not be used for legal tasks. It presents significant risks for both the legal professions as well as litigants. However, domain-specific AI should not be ruled out. It has the potential for legal research as well as access to justice. In this paper, we call for the development of an open-source and distributed legal AI accessible to the entire legal community. We believe it has the potential to address some of the limitations related to the use of general AI for legal problems and resolving disputes – shortcomings that include legal misinformation or hallucinations, lack of transparency and precision, and inability to offer diverse and multiple narratives.”





Perspective.

https://searchengineland.com/seo-2023-recap-436035

SEO year in review 2023: The year of generative AI

It was one of the biggest years of change in search and SEO history. A recap of Google SGE, ranking revelations, Bing Chat and more.





Keep learning.

https://www.kdnuggets.com/25-free-courses-to-master-data-science-data-engineering-machine-learning-mlops-and-generative-ai

25 Free Courses to Master Data Science, Data Engineering, Machine Learning, MLOps, and Generative AI

In today's rapidly developing technological landscape, it is crucial to master skills in data science, machine learning, and AI. Whether you're seeking to embark on a new career or enhance your existing expertise, there is a plethora of online resources available, and many of them are free! We have gathered the top posts on free courses (the ones you love) from KDnuggets and compiled them into a collection of excellent courses. Bookmark this page for future reference; you will likely return to it to learn new skills or try out new courses.





Tools & Techniques.

https://www.fastcompany.com/91000628/these-were-some-of-the-most-useful-tools-of-2023

These were some of the most useful tools of 2023





Tools & Techniques.

https://www.forbes.com/sites/lanceeliot/2023/12/28/must-read-best-of-practical-prompt-engineering-strategies-to-become-a-skillful-prompting-wizard-in-generative-ai/?sh=38e9534019cd

Must-Read Best Of Practical Prompt Engineering Strategies To Become A Skillful Prompting Wizard In Generative AI

In today’s column, I have put together my most-read postings on how to skillfully craft your prompts when making use of generative AI such as ChatGPT, Bard, Gemini, Claude, GPT-4, and other popular large language models (LLMs). These are handy strategies and specific techniques that can make a tremendous difference when using generative AI. If you ever wondered what other people know about prompting that you don’t, perhaps this recap will ensure that you are in the know.



 

Tuesday, December 26, 2023

Many players, many targets. Expect more of both next year.

https://www.bespacific.com/odni-intel-community-assessment-of-foreign-threats-to-2022-us-elections/

ODNI Releases Intelligence Community Assessment of Foreign Threats to the 2022 U.S. Elections

The Office of the Director of National Intelligence (ODNI) today released the declassified Intelligence Community Assessment of Foreign Threats to the 2022 U.S. Elections [redacted]. Coordinated across the Intelligence Community (IC), the assessment addresses the intentions and efforts of foreign actors to influence or interfere with the 2022 U.S. elections. Within 45 days of the 2022 U.S. elections, ODNI completed and distributed the classified version of this report pursuant to Executive Order 13848. “We share our assessment and the accompanying material to help inform the American public about foreign influence efforts, including attempts by foreign actors to induce friction and undermine confidence in the electoral process that underpins our democracy,” said Director of National Intelligence Avril Haines. “As global barriers to entry lower and accessibility rises, such influence efforts remain a continuing challenge for our country, and an informed understanding of the problem can serve as one defense.” In addition to the declassified Intelligence Community Assessment, the accompanying National Intelligence Council Memorandum, Other Countries’ Activities During the 2022 Election Cycle, provides added insights…”





Something for politicians to consider?

https://www.bespacific.com/most-readers-want-publishers-to-label-ai-generated-articles-but-trust-outlets-less-when-they-do/

Readers want publishers to label AI-generated articles but trust outlets less when they do

Nieman Lab: “An overwhelming majority of readers would like news publishers to tell them when AI has shaped the news coverage they’re seeing. But, new research finds, news outlets pay a price when they disclose using generative AI. That’s the conundrum at the heart of new research from University of Minnesota’s Benjamin Toff and Oxford Internet Institute’s Felix M. Simon. Their working paper “‘Or they could just not use it?’: The paradox of AI disclosure for audience trust in news” is one of the first experiments to examine audience perceptions of AI-generated news. More than three-quarters of U.S. adults think news articles written by AI would be “a bad thing.” But, from Sports Illustrated to Gannett, it’s clear that particular ship has sailed. Asking Google for information and getting AI-generated content back isn’t the future, it’s our present-day reality. Much of the existing research on perceptions of AI in newsmaking has focused on algorithmic news recommendation, i.e. questions like how readers feel about robots choosing their headlines. Some have suggested news consumers may perceive AI-generated news as more fair and neutral owing to the “machine heuristic” in which people credit technology as operating without pesky things like human emotions or ulterior motives. For this experiment, conducted in September 2023, participants read news articles of varying political content — ranging from a piece on the release of the “Barbie” film to coverage of an investigation into Hunter Biden. For some stories, the work was clearly labeled as AI-generated. Some of the AI-labeled articles were accompanied by a list of news reports used as sources…”





Perspective. I think it loses something in translation but there are some interesting points.

https://english.elpais.com/technology/2023-12-25/gemma-galdon-algorithm-auditor-artificial-intelligence-is-of-very-poor-quality.html

Gemma Galdón, algorithm auditor: ‘Artificial intelligence is of very poor quality’

The founder of Eticas Consulting advises international organizations to help them identify and avoid bias. She distrusts the expectations of the sector: ‘To propose that a data system is going to make a leap into consciousness is a hallucination.’



Monday, December 25, 2023

We’re the good guys so let us look over your shoulder…

https://richmond.com/zzstyling/column/microsoft-365-copilot-is-here-what-are-the-legal-risks-of-using-it/article_9004342e-9f9c-11ee-9b82-df3d9f4cc1df.html

Microsoft 365 Copilot is here. What are the legal risks of using it?

Copilot adds generative AI capability to core Microsoft Office applications, such as Word, Outlook, Excel, Teams, and PowerPoint. It can be used to create, summarize and analyze things in those applications.

The biggest concern is confidentiality. With many generally available generative AIs, such as ChatGPT, anything you put in a prompt is used in the AI’s training. That creates a risk that your input could appear in someone else’s output. Also, the AI provider can see your input and output.

Microsoft promises that, with Copilot, your inputs and outputs are kept confidential. It says it will not use your input or output to train its AI for its other customers, and your input will not show up in the output of other Copilot users (at least outside of your company).

But there is a major catch: Microsoft says it captures and may access your Copilot prompts and outputs for 30 days. It operates an abuse monitoring system to review that material for violations of its code of conduct and “other applicable product terms.” Microsoft says customers whose inputs contain sensitive, confidential, or legally regulated data can apply to Microsoft for an exemption from this abuse monitoring.



Sunday, December 24, 2023

Looking for anyone who has a possible solution...

https://scholarship.law.marquette.edu/cgi/viewcontent.cgi?article=1042&context=ipilr

Artificial Intelligence Owning Patents: A Worldwide Court Debate

In the international sphere, a showdown is unfolding in the highest courts of many countries, including the United States, Canada, Australia, China, Japan, India, and several European countries. The surrounding issue is whether artificial intelligence (AI) can be recognized as the sole inventor of a patent.

… As the events surrounding The Artificial Inventor Project and its legal adventures unfold around the world, this Comment explores what it means to be an inventor and different countries’ legal reasoning for their decisions to recognize, or not recognize, AI as a patent inventor. Specifically, this Comment will analyze the United States’ Patent Laws to better understand why The Artificial Inventor Project is not recognized in the United States. Following this analysis, the focus will turn to analyzing the United Kingdom, Germany, South Africa, and Australia’s legal interpretations of AI as a patent inventor. The final section of this Comment proposes a better approach, based on the analyzed countries’ approaches, for the United States to take regarding recognizing DABUS as a patent inventor.



(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4666432

Granting Legal Personality To Artificial Intelligences In Brazil’s Legal Context: A Possible Solution To The Copyright Limbo

This article investigates the feasibility and consequences of granting legal personality to Artificial Intelligences (AIs) in the context of Brazilian law, with a special focus on copyright law. It conducts a thorough analysis of how such a grant can enhance legal security and encourage innovation in AI technologies. Through an integrative review of the literature and a comparative analysis of national and international legislation and jurisprudence, the study explores the implications of this legislative innovation. The article highlights the importance of legal clarity for companies and investors in the AI sector, emphasizing that granting legal personality to AIs can simplify the identification of the copyright holder and protect investments. However, the work also recognizes challenges, such as the complexity of assigning authorship and evaluating the originality of works created by AIs. A careful debate is proposed on criteria for determining which AIs should be considered legal persons and how to balance the rights and duties of AIs and their creators. The study suggests adapting the legal structure of the LTDA to incorporate AIs as operational entities, aiming for an effective legal framework for managing risks associated with AI. It concludes that granting legal personality to AIs in Brazil is a promising strategy, requiring careful consideration and forward-looking vision, emphasizing the need for Brazilian law to prepare for the opportunities and challenges of the AI era.





Perhaps AI copyright is not possible… (Makes LLM output sound like politicians.)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4667410

Asemic Defamation, or, the Death of the AI Speaker

Large Language Model (“LLM”) systems have captured considerable popular, scholarly, and governmental notice. By analyzing vast troves of text, these machine learning systems construct a statistical model of relationships among words, and from that model they are able to generate syntactically sophisticated texts. However, LLMs are prone to “hallucinate,” which is to say that they routinely generate statements that are demonstrably false. Although couched in the language of credible factual statements, such LLM output may entirely diverge from known facts. When they concern particular individuals, such texts may be reputationally damaging if the contrived false statements they contain are derogatory.

Scholars have begun to analyze the prospects and implications of such AI defamation. However, most analyses to date begin from the premise that LLM texts constitute speech that is protected under constitutional guarantees of expressive freedom. This assumption is highly problematic, as LLM texts have no semantic content. LLMs are not designed, have no capability, and do not attempt to fit the truth values of their output to the real world. LLM texts appear to constitute an almost perfect example of what semiotics labels “asemic signification,” that is, symbols that have no meaning except for meaning imputed to them by a reader.

In this paper, I question whether asemic texts are properly the subject of First Amendment coverage. I consider both LLM texts and historical examples to examine the expressive status of asemic texts, recognizing that LLM texts may be the first instance of fully asemic texts. I suggest that attribution of meaning by listeners alone cannot credibly place such works within categories of protected speech. In the case of LLM outputs, there is neither a speaker, nor communication of any message, nor any meaning that is not supplied by the text recipient. I conclude that LLM texts cannot be considered protected speech, which vastly simplifies their status under defamation law.



Saturday, December 23, 2023

Unforgivable. The first thing hackers look for is stupidity.

https://www.databreaches.net/u-s-water-utilities-were-hacked-after-leaving-their-default-passwords-set-to-1111-cybersecurity-officials-say/

U.S. water utilities were hacked after leaving their default passwords set to ‘1111,’ cybersecurity officials say

Wilfred Chan reports:

Providers of critical infrastructure in the United States are doing a sloppy job of defending against cyber intrusions, the National Security Council tells Fast Company, pointing to recent Iran-linked attacks on U.S. water utilities that exploited basic security lapses.
The security council tells Fast Company it’s also aware of recent intrusions by hackers linked to China’s military at American infrastructure entities that include water and energy utilities in multiple states. Neither the Iran-linked nor the China-linked attacks affected critical systems or caused disruptions, according to reports.

Read more at FastCompany.
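The remediation is not exotic. Any utility could run a periodic audit of its own device inventory against a list of known factory defaults; everything in the sketch below (hosts, credentials) is invented for illustration:

```python
# Toy audit: flag devices in your own inventory that still carry
# factory-default credentials. All data here is invented.
KNOWN_DEFAULTS = {("admin", "1111"), ("admin", "admin"), ("root", "password")}

inventory = [
    {"host": "plc-pump-01", "user": "admin", "password": "1111"},
    {"host": "hmi-station-02", "user": "ops", "password": "correct-horse-battery"},
]

for device in inventory:
    if (device["user"], device["password"]) in KNOWN_DEFAULTS:
        print(f"ALERT: {device['host']} still accepts a factory-default login")
```

A real audit would attempt the logins against the devices over the management network rather than read them from a file, but the principle, enumerate and eliminate defaults, is standing guidance from cybersecurity agencies. There is no excuse for '1111'.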





As go law firms, so goes the country? Or are law firms lagging? I think these tips are valuable anywhere.

https://www.jdsupra.com/legalnews/seo-for-law-firms-8-critical-seo-trends-4372130/

SEO for Law Firms: 8 Critical SEO Trends to Know About for 2024

As we prepare to move into a new year, it’s important to take stock of the trends shaping future SEO activity. With the rise of AI, it’s becoming even more crucial for search engines to understand user intent, prioritize user experience, and gauge trust and credibility. This means you may need to tweak your current SEO tactics. Here are some SEO trends to look out for in the next year and beyond.



Friday, December 22, 2023

Imagine doing this without permissions. If I could make 10% (even 1%) for a weekend’s prompting of ChatGPT I’d be content. (Do you suppose that’s why IP owners are concerned?)

https://www.reuters.com/lifestyle/abbas-virtual-show-boosts-londons-economy-tune-225-million-2023-12-21/

ABBA's virtual show boosts London's economy to the tune of $225 million

ABBA Voyage recreates Bjorn Ulvaeus, Benny Andersson, Agnetha Faltskog and Anni-Frid Lyngstad as high-tech, digital versions of themselves from their 1970s heyday, thanks to motion-capture technology.

The show, which has been seen by more than 1 million people, generated a total turnover of 322.6 million pounds in the 12 months since it opened in May 2022, according to an analysis by Sound Diplomacy and RealWorth published on Thursday.





Tools & Techniques. Forensics?

https://www.bespacific.com/how-to-check-if-something-online-was-written-by-ai/

How to Check If Something Online Was Written by AI

Gizmodo: “Generative artificial intelligence is everywhere you look these days, including on the web: advanced predictive text bots such as ChatGPT can now spew out endless reams of text on every topic imaginable and make all this written content natural enough that it could plausibly have been written by a human being. So, how can you make sure the articles and features you’re reading online have been thought up and typed out by an actual human being? While there isn’t any foolproof, 100 percent guaranteed way of doing this, there are a variety of clues you can look out for to spot what’s AI-generated and what isn’t…”
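The article's point is that there is no reliable detector, and simple statistics bear that out. Still, two of the "tells" people cite, uniform sentence lengths and repeated phrasing, are easy to measure. A crude sketch (treat the numbers as conversation starters, not evidence):

```python
# Two crude stylometric signals sometimes cited as AI-text "tells":
# low spread in sentence length and repeated three-word phrases.
# Neither is reliable evidence on its own.
import re
from collections import Counter
from statistics import mean, pstdev

def signals(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = Counter(zip(words, words[1:], words[2:]))
    return {
        "sentences": len(sentences),
        "mean_sentence_len": round(mean(lengths), 1) if lengths else 0,
        "sentence_len_spread": round(pstdev(lengths), 1) if lengths else 0,
        "repeated_trigrams": sum(1 for n in trigrams.values() if n > 1),
    }

print(signals("This is a test. This is a test. This is only a test."))
# -> {'sentences': 3, 'mean_sentence_len': 4.3,
#     'sentence_len_spread': 0.5, 'repeated_trigrams': 4}
```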





Tools & Techniques.

https://www.bespacific.com/how-to-set-up-legacy-contacts-for-your-online-accounts/

How to set up legacy contacts for your online accounts

Washington Post [read free]: “If you’ve got a few days this holiday season to help your family with tech chores, embrace an awkward but necessary task: Assign someone to take over a loved one’s online accounts after they die. “Legacy contacts” are trusted individuals who can manage an online account after the owner dies. Maybe you want to download your mom’s Facebook photos when she’s gone, or you need to access her Gmail account to find a bill. In either scenario, legacy contacts make things easier during a difficult time. The average internet user is estimated to have anywhere from dozens to hundreds of online accounts. Not all of them are important for estate planning, so focus on the big ones: finance, health, cloud storage and social media.”



Thursday, December 21, 2023

Interesting. It should be very easy to find victims who are genuinely afraid. How much is fear worth?

https://www.databreaches.net/court-of-justice-of-the-european-union-rules-that-fear-may-constitute-damage-under-the-gdpr/

Court of Justice of the European Union Rules That Fear May Constitute Damage Under the GDPR

Hunton Andrews Kurth writes:

On December 14, 2023, the Court of Justice of the European Union (“CJEU”) issued its judgment in the case of VB v. Natsionalna agentsia za prihodite (C-340/21), in which it clarified, among other things, the concept of non-material damage under Article 82 of the EU General Data Protection Regulation (“GDPR”) and the rules governing burden of proof under the GDPR.
Background
Following a cyber attack against the Bulgarian National Revenue Agency (the “Agency”), one of the more than six million affected individuals brought an action before the Administrative Court of Sofia claiming compensation. In support of that claim, the affected individual argued that they had suffered non-material damage as a result of a personal data breach caused by the Agency’s failure to fulfill its obligations under, inter alia, Articles 5(1)(f), 24 and 32 of the GDPR. The non-material damage claimed consisted of the fear that their personal data, having been published without their consent, might be misused in the future, or that they might be blackmailed, assaulted or even kidnapped.

Read more at Privacy & Information Security Law Blog.





A slippery slope. Who gets to define ‘concerning behavior’ and who will they mention that definition to? (I can think of several ways to ‘game’ this system for my own amusement.)

https://www.bespacific.com/lawrence-school-district-using-ai-to-look-for-concerning-behavior-in-students-activity/

Lawrence school district using AI to look for ‘concerning behavior’ in students’ activity

LJworld.com (read free): “The Lawrence [Kansas] school district has purchased a new system that uses artificial intelligence to look for warning signs of “concerning behavior” in the things students type, send and search for on their district-issued computers and other such devices. The purchase of the software system, called Gaggle, comes at a time when questions are growing about how artificial intelligence will affect people’s privacy. But school district leaders are emphasizing that the software’s main purpose [but not sole purpose? Bob] will be to help protect K-12 students against self-harm, bullying, and threats of violence. “First and foremost, we have an obligation to protect the safety of our students,” Lawrence school board member Ronald “G.R.” Gordon-Ross told the Journal-World. “It’s another layer of security in our quest to stay ahead of some of these issues.” Gordon-Ross, who is a longtime software developer, said that he respects the “privacy piece” of the question surrounding the use of monitoring systems. But he also said it’s important to keep in mind that the iPads and other devices that the software will monitor are the district’s property, even though they’re issued to students — “we’re still talking about the fact that they’re using devices and resources that don’t belong to them.”

See also from LJ World [read free] – New security system that monitors students’ computer use has ‘inundated’ district with alerts; leader apologizes to staff… “According to information obtained from the district on Friday, there have been 408 “detections” of concerning behavior since Gaggle’s districtwide launch on Nov. 20. Of those, 188 have resulted in actual “alerts.” District spokesperson Julie Boyle said that there are three different priority levels that Gaggle uses to classify the concerning information it detects. The lowest level, “violations,” includes minor offenses like the use of profanity. Those do not trigger alerts, but the system collects data on them “in case future review is necessary.” Next is a level called “Questionable Content,” which triggers a “non-urgent alert to the building administrators for review and follow-up as necessary.” Finally, Boyle said, there is the most urgent level: “Potential Student Situations.” This level includes warning signs of suicide, violence, drug abuse, harassment and other serious behavioral or safety problems, and it triggers “urgent alerts involving an immediate phone call, text, and email to the building administrators.” An alert of this kind is assigned to a staff member for investigation and follow-up.”
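Gaggle's classifier is proprietary, but the three-tier routing the district describes is simple to model, and modeling it shows why my "gaming" worry is plausible: anything keyword-shaped is trivially triggered (or evaded) on purpose. A stub sketch, with made-up terms standing in for the real model:

```python
# Stub of the three-tier triage described above. The keyword lists are
# made up; Gaggle's actual classifier is proprietary.
from enum import Enum

class Priority(Enum):
    VIOLATION = 1          # logged only, no alert
    QUESTIONABLE = 2       # non-urgent alert for administrator review
    STUDENT_SITUATION = 3  # urgent call/text/email to administrators

URGENT_TERMS = {"suicide", "weapon"}   # illustrative stand-ins
QUESTIONABLE_TERMS = {"fight", "vape"}

def triage(text: str) -> Priority:
    words = set(text.lower().split())
    if words & URGENT_TERMS:
        return Priority.STUDENT_SITUATION
    if words & QUESTIONABLE_TERMS:
        return Priority.QUESTIONABLE
    return Priority.VIOLATION

def route(text: str) -> str:
    level = triage(text)
    if level is Priority.STUDENT_SITUATION:
        return "urgent alert: call, text, and email building administrators"
    if level is Priority.QUESTIONABLE:
        return "non-urgent alert queued for review and follow-up"
    return "logged in case future review is necessary"

print(route("essay draft about a knife fight in a novel"))  # false positive
# -> "non-urgent alert queued for review and follow-up"
```

408 detections and 188 alerts in a month suggests the real system's thresholds are not much smarter than this.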





Seriously? 90%? How could they claim this tool is an improvement?

https://www.pogowasright.org/humana-also-using-ai-tool-with-90-error-rate-to-deny-care-lawsuit-claims/

Humana also using AI tool with 90% error rate to deny care, lawsuit claims

Beth Mole reports:

Humana, one of the nation’s largest health insurance providers, is allegedly using an artificial intelligence model with a 90 percent error rate to override doctors’ medical judgment and wrongfully deny care to elderly people on the company’s Medicare Advantage plans.
According to a lawsuit filed Tuesday, Humana’s use of the AI model constitutes a “fraudulent scheme” that leaves elderly beneficiaries with either overwhelming medical debt or without needed care that is covered by their plans. Meanwhile, the insurance behemoth reaps a “financial windfall.”

Read more at Ars Technica.





Not (yet) a full replacement for lawyers, but clearly heading in that direction. I hope lawyers verify the results rather than accept bogus citations.

https://www.lawnext.com/2023/12/lexisnexis-expands-access-to-its-lexis-ai-to-law-school-students.html

LexisNexis Expands Access to its Lexis+ AI to Law School Students

In October, LexisNexis released its generative AI research tool, Lexis+ AI, for general availability for U.S. customers, along with limited release in law schools to select faculty, librarians and students. Now, the company is further expanding access to the tool, making it available to 100,000 second- and third-year law students starting in the spring semester, with some getting access as soon as this week.

Lexis+ AI uses large language models (LLMs) to answer legal research questions, summarize legal issues, and generate legal document drafts. LexisNexis says the product delivers trusted results with “hallucination-free” linked legal citations, combining the power of generative AI with proprietary LexisNexis search technology, Shepard’s Citations functionality, and authoritative content.
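LexisNexis hasn't published its internals, but "linked legal citations" is the signature of retrieval-grounded generation: fetch authoritative documents first, answer only from what was fetched, attach the sources, and refuse when nothing relevant turns up. A toy sketch of that general pattern (corpus, scoring, and output format are all invented; the LLM step is stubbed out):

```python
# Toy retrieval-grounding pattern: every answer must cite retrieved
# documents, and no retrieval means no answer. Corpus and scoring are
# invented; this is the generic pattern, not LexisNexis's system.
CORPUS = {
    "Smith v. Jones, 123 F.3d 456": "sets out the standard for summary judgment",
    "Doe v. Roe, 789 F.2d 101": "addresses negligence per se in traffic cases",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    terms = set(query.lower().split())
    scored = sorted(
        ((len(terms & set(text.lower().split())), cite, text)
         for cite, text in CORPUS.items()),
        reverse=True,
    )
    return [(cite, text) for score, cite, text in scored[:k] if score > 0]

def answer(query: str) -> str:
    hits = retrieve(query)
    if not hits:
        return "No supporting authority found; declining to answer."
    # A production system would hand `hits` to an LLM with instructions
    # to answer strictly from them; here we just echo the sources.
    summary = "; ".join(f"{text} ({cite})" for cite, text in hits)
    return f"Per the retrieved authority: {summary}"

print(answer("standard for summary judgment"))
```

Even under this pattern the citations are only guaranteed to exist, not to be characterized correctly, which is exactly why the verification step by a human lawyer still matters.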





There is some danger in being the first to use AI. Is there more danger in being second?

https://www.ft.com/content/f1aff4d0-b2c5-4266-aa0a-604ef14894bb

Allen & Overy rolls out AI contract negotiation tool in challenge to legal industry

Allen & Overy has created an artificial intelligence contract negotiation tool, as the magic circle law firm pushes forward with technology that threatens to disrupt the traditional practices of the legal profession.

The UK-headquartered group, in partnership with Microsoft and legal AI start-up Harvey, has developed the service, which draws on existing templates for contracts, such as non-disclosure agreements and merger and acquisition terms, to draft new agreements that lawyers can then amend or accept.

The tool, known as ContractMatrix, is being rolled out to clients in an attempt to drive new revenues, attract more business and save time for in-house lawyers. A&O estimated it would save up to seven hours in contract negotiations.

But David Wakeling, A&O partner and head of the firm’s markets innovation group, which developed ContractMatrix, said the firm’s goal was to “disrupt the legal market before someone disrupts us”.





Perspective.

https://www.thecollector.com/philosophy-of-artificial-intelligence-descartes-turing/

What Is the Philosophy of Artificial Intelligence? From Descartes to Turing





Tools & Techniques.

https://www.bespacific.com/is-your-search-experience-leaving-you-a-little-unsatisfied/

Is your search experience leaving you a little unsatisfied?

“Give these Search Tweaks a try. This site has sixteen tools for enhancing Google search in four categories — Query Builders, News-Related Search, Time-Related Search, and Search Utilities. Some tools, like Back that Ask Up, make existing Google features easier to use. Others, like Marion’s Monocle, add search functionality. Hold your mouse over each menu button to see a popup explainer of what a tool does. If you like what you see, give the button a click. Using this site requires JavaScript. It’s designed to work on desktop. It should work on your phone but the design does not anticipate that. This site uses Simple Analytics because privacy, it’s a great idea. None of these tools use the Google API. Nor do they use scraping. Where’s the fun in that?”