Thursday, December 21, 2023

Interesting. It should be very easy to find victims who are genuinely afraid. How much is fear worth?

https://www.databreaches.net/court-of-justice-of-the-european-union-rules-that-fear-may-constitute-damage-under-the-gdpr/

Court of Justice of the European Union Rules That Fear May Constitute Damage Under the GDPR

Hunton Andrews Kurth writes:

On December 14, 2023, the Court of Justice of the European Union (“CJEU”) issued its judgment in the case of VB v. Natsionalna agentsia za prihodite (C-340/21), in which it clarified, among other things, the concept of non-material damage under Article 82 of the EU General Data Protection Regulation (“GDPR”) and the rules governing burden of proof under the GDPR.
Background
Following a cyber attack against the Bulgarian National Revenue Agency (the “Agency”), one of the more than six million affected individuals brought an action before the Administrative Court of Sofia claiming compensation. In support of that claim, the affected individual argued that they had suffered non-material damage as a result of a personal data breach caused by the Agency’s failure to fulfill its obligations under, inter alia, Articles 5(1)(f), 24 and 32 of the GDPR. The non-material damage claimed consisted of the fear that their personal data, having been published without their consent, might be misused in the future, or that they might be blackmailed, assaulted or even kidnapped.

Read more at Privacy & Information Security Law Blog.





A slippery slope. Who gets to define ‘concerning behavior,’ and who will they share that definition with? (I can think of several ways to ‘game’ this system for my own amusement.)

https://www.bespacific.com/lawrence-school-district-using-ai-to-look-for-concerning-behavior-in-students-activity/

Lawrence school district using AI to look for ‘concerning behavior’ in students’ activity

LJworld.com (read free): “The Lawrence [Kansas] school district has purchased a new system that uses artificial intelligence to look for warning signs of “concerning behavior” in the things students type, send and search for on their district-issued computers and other such devices. The purchase of the software system, called Gaggle, comes at a time when questions are growing about how artificial intelligence will affect people’s privacy. But school district leaders are emphasizing that the software’s main purpose [but not sole purpose? Bob] will be to help protect K-12 students against self-harm, bullying, and threats of violence. “First and foremost, we have an obligation to protect the safety of our students,” Lawrence school board member Ronald “G.R.” Gordon-Ross told the Journal-World. “It’s another layer of security in our quest to stay ahead of some of these issues.” Gordon-Ross, who is a longtime software developer, said that he respects the “privacy piece” of the question surrounding the use of monitoring systems. But he also said it’s important to keep in mind that the iPads and other devices that the software will monitor are the district’s property, even though they’re issued to students — “we’re still talking about the fact that they’re using devices and resources that don’t belong to them.”

See also from LJ World [read free] – New security system that monitors students’ computer use has ‘inundated’ district with alerts; leader apologizes to staff… “According to information obtained from the district on Friday, there have been 408 “detections” of concerning behavior since Gaggle’s districtwide launch on Nov. 20. Of those, 188 have resulted in actual “alerts.” District spokesperson Julie Boyle said that there are three different priority levels that Gaggle uses to classify the concerning information it detects. The lowest level, “violations,” includes minor offenses like the use of profanity. Those do not trigger alerts, but the system collects data on them “in case future review is necessary.” Next is a level called “Questionable Content,” which triggers a “non-urgent alert to the building administrators for review and follow-up as necessary.” Finally, Boyle said, there is the most urgent level: “Potential Student Situations.” This level includes warning signs of suicide, violence, drug abuse, harassment and other serious behavioral or safety problems, and it triggers “urgent alerts involving an immediate phone call, text, and email to the building administrators.” An alert of this kind is assigned to a staff member for investigation and follow-up.”
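The three tiers described above boil down to a simple priority-to-action mapping. Here is a minimal sketch of that routing logic (my own illustration, not Gaggle’s code; every name and action is an assumption drawn from the article’s description):

```python
from enum import Enum

class Priority(Enum):
    VIOLATION = 1                    # e.g. profanity: logged only, no alert
    QUESTIONABLE_CONTENT = 2         # non-urgent alert to building administrators
    POTENTIAL_STUDENT_SITUATION = 3  # urgent: phone call, text, and email

def route_detection(priority: Priority) -> list[str]:
    """Map a detection's priority tier to the notification actions the article describes."""
    if priority is Priority.VIOLATION:
        return ["log_for_future_review"]
    if priority is Priority.QUESTIONABLE_CONTENT:
        return ["non_urgent_alert_to_building_admins"]
    # Most urgent tier: every channel fires and a staff member is assigned follow-up.
    return ["phone_call", "text", "email", "assign_staff_follow_up"]

# Example: an urgent detection triggers all notification channels at once.
print(route_detection(Priority.POTENTIAL_STUDENT_SITUATION))
```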





Seriously? 90%? How could they claim this tool is an improvement?

https://www.pogowasright.org/humana-also-using-ai-tool-with-90-error-rate-to-deny-care-lawsuit-claims/

Humana also using AI tool with 90% error rate to deny care, lawsuit claims

Beth Mole reports:

Humana, one of the nation’s largest health insurance providers, is allegedly using an artificial intelligence model with a 90 percent error rate to override doctors’ medical judgment and wrongfully deny care to elderly people on the company’s Medicare Advantage plans.
According to a lawsuit filed Tuesday, Humana’s use of the AI model constitutes a “fraudulent scheme” that leaves elderly beneficiaries with either overwhelming medical debt or without needed care that is covered by their plans. Meanwhile, the insurance behemoth reaps a “financial windfall.”

Read more at Ars Technica.





Not (yet) a full replacement for lawyers, but clearly heading in that direction. I hope lawyers verify the results rather than accept bogus citations.

https://www.lawnext.com/2023/12/lexisnexis-expands-access-to-its-lexis-ai-to-law-school-students.html

LexisNexis Expands Access to its Lexis+ AI to Law School Students

In October, LexisNexis released its generative AI research tool, Lexis+ AI, for general availability for U.S. customers, along with limited release in law schools to select faculty, librarians and students. Now, the company is further expanding access to the tool, making it available to 100,000 second- and third-year law students starting in the spring semester, with some getting access as soon as this week.

Lexis+ AI uses large language models (LLMs) to answer legal research questions, summarize legal issues, and generate legal document drafts. LexisNexis says the product delivers trusted results with “hallucination-free” linked legal citations, combining the power of generative AI with proprietary LexisNexis search technology, Shepard’s Citations functionality, and authoritative content.





There is some danger in being the first to use AI. Is there more danger in being second?

https://www.ft.com/content/f1aff4d0-b2c5-4266-aa0a-604ef14894bb

Allen & Overy rolls out AI contract negotiation tool in challenge to legal industry

Allen & Overy has created an artificial intelligence contract negotiation tool, as the magic circle law firm pushes forward with technology that threatens to disrupt the traditional practices of the legal profession.

The UK-headquartered group, in partnership with Microsoft and legal AI start-up Harvey, has developed the service, which draws on existing templates for contracts, such as non-disclosure agreements and merger and acquisition terms, to draft new agreements that lawyers can then amend or accept.

The tool, known as ContractMatrix, is being rolled out to clients in an attempt to drive new revenues, attract more business and save time for in-house lawyers. A&O estimated it would save up to seven hours in contract negotiations.

But David Wakeling, A&O partner and head of the firm’s markets innovation group, which developed ContractMatrix, said the firm’s goal was to “disrupt the legal market before someone disrupts us”.





Perspective.

https://www.thecollector.com/philosophy-of-artificial-intelligence-descartes-turing/

What Is the Philosophy of Artificial Intelligence? From Descartes to Turing





Tools & Techniques.

https://www.bespacific.com/is-your-search-experience-leaving-you-a-little-unsatisfied/

Is your search experience leaving you a little unsatisfied?

Give these Search Tweaks a try. This site has sixteen tools for enhancing Google search in four categories — Query Builders, News-Related Search, Time-Related Search, and Search Utilities. Some tools, like Back that Ask Up, make existing Google features easier to use. Others, like Marion’s Monocle, add search functionality. Hold your mouse over each menu button to see a popup explainer of what a tool does. If you like what you see, give the button a click. Using this site requires JavaScript. It’s designed to work on desktop. It should work on your phone, but the design does not anticipate that. This site uses Simple Analytics because privacy is a great idea. None of these tools use the Google API. Nor do they use scraping. Where’s the fun in that?


