Tuesday, December 24, 2024

New Jersey is a bastion of literacy? Who’d a thunk it?

https://www.bespacific.com/new-law-in-nj-limits-the-banning-of-books-in-schools-and-public-libraries/

New law in NJ limits the banning of books in schools and public libraries

WHYY: “When Martha Hickson was the librarian at New Jersey’s North Hunterdon High School, she fought against attempts to ban books that her critics labeled as inappropriate because they contained sexual content, and she became a target of book banners. “I received hate mail, shunning by colleagues, antagonism by administrators, and calls for my firing and arrest,” the recently retired librarian said. She said “a handful of parents called me, by name, a pedophile, pornographer and ruiner of children.” At issue were five award-winning books for young adults, all with LGBTQ themes. Hickson, who was named the 2023 Librarian of the Year by the New Jersey Library Association, said all the books were retained after the school board reviewed the matter and affirmed the titles met the district’s standards. On Monday at the Princeton Public Library, she watched as Gov. Phil Murphy signed into law A3446, known as the Freedom to Read Act. “This legislation mandates that books cannot be removed from our libraries solely based on the origin, background or views contained within the text, or because an individual finds it offensive,” he said.”



(Related)

https://www.bespacific.com/arkansas-law-criminalizing-librarians-ruled-unconstitutional/

Arkansas Law Criminalizing Librarians Ruled Unconstitutional

AP: “A federal judge on Monday struck down key parts of an Arkansas law that would have allowed criminal charges against librarians and booksellers for providing “harmful” materials to minors. U.S. District Judge Timothy Brooks found that elements of the law are unconstitutional. “I respect the court’s ruling and will appeal,” Arkansas Attorney General Tim Griffin said in a statement to The Associated Press. The law would have created a new process to challenge library materials and request that they be relocated to areas not accessible to children. The measure was signed by Republican Gov. Sarah Huckabee Sanders in 2023, but an earlier ruling had temporarily blocked it from taking effect while it was being challenged in court. “The law deputizes librarians and booksellers as the agents of censorship; when motivated by the fear of jail time, it is likely they will shelve only books fit for young children and segregate or discard the rest,” Brooks wrote in his ruling. A coalition that included the Central Arkansas Library System in Little Rock had challenged the law, saying fear of prosecution under the measure could prompt libraries and booksellers to no longer carry titles that could be challenged…”



Monday, December 23, 2024

We knew that, didn’t we?

https://www.bespacific.com/the-battle-over-copyright-in-the-age-of-chatgpt/

The battle over copyright in the age of ChatGPT

Boston Review: “Questions of AI authorship and ownership can be divided into two broad types. One concerns the vast troves of human-authored material fed into AI models as part of their “training” (the process by which their algorithms “learn” from data). The other concerns ownership of what AIs produce. Call these, respectively, the input and output problems. So far, attention—and lawsuits—have clustered around the input problem. The basic business model for LLMs relies on the mass appropriation of human-written text, and there simply isn’t anywhere near enough in the public domain. OpenAI hasn’t been very forthcoming about its training data, but GPT-4 was reportedly trained on around thirteen trillion “tokens,” roughly the equivalent of ten trillion words. This text is drawn in large part from online repositories known as “crawls,” which scrape the internet for troves of text from news sites, forums, and other sources. Fully aware that vast data scraping is legally untested—to say the least—developers charged ahead anyway, resigning themselves to litigating the issue in retrospect. Lawyer Peter Schoppert has called the training of LLMs without permission the industry’s “original sin”—to be added, we might say, to the technology’s mind-boggling consumption of energy and water on an overheating planet. (In September, Bloomberg reported that plans for new gas-fired power plants have exploded as energy companies are “racing to meet a surge in demand from power-hungry AI data centers.”) The scale of the prize is vast: intellectual property accounts for some 90 percent of recent U.S. economic growth. Indeed, crawls contain enormous amounts of copyrighted information; the Common Crawl alone, a standard repository maintained by a nonprofit and used to train many LLMs, contains most of b-ok.org, a huge repository of pirated ebooks that was shut down by the FBI in 2022. The work of many living human authors was on another crawl, called Books3, which Meta used to train LLaMA. Novelist Richard Flanagan said that this training made him feel “as if my soul had been strip mined and I was powerless to stop it.” A number of authors, including Junot Díaz, Ta-Nehisi Coates, and Sarah Silverman, sued OpenAI in 2023 for the unauthorized use of their work for training, though the suit was partially dismissed early this year. Meanwhile, the New York Times is in ongoing litigation against OpenAI and Microsoft for using its content to train chatbots that, it claims, are now its competitors. As of this writing, AI companies have largely responded to lawsuits with defensiveness and evasion, refusing in most cases even to divulge what exact corpora of text their models are trained on. Some newspapers, less sure they can beat the AI companies, have opted to join them: the Financial Times, for one, minted a “strategic partnership” with OpenAI in April, while in July Perplexity launched a revenue-sharing “publisher’s program” that now counts Time, Fortune, Texas Tribune, and WordPress.com among its partners. At the heart of these disputes, the input problem asks: Is it fair to train the LLMs on all that copyrighted text without remunerating the humans who produced it? The answer you’re likely to give depends on how you think about LLMs…”



Sunday, December 22, 2024

Worms, by the can.

https://www.zdnet.com/article/if-chatgpt-produces-ai-generated-code-for-your-app-who-does-it-really-belong-to/

If ChatGPT produces AI-generated code for your app, who does it really belong to?

In one of my earlier AI and coding articles, where I looked at how ChatGPT can rewrite and improve your existing code, one of the commenters, @pbug5612, had an interesting question:

Who owns the resultant code? What if it contains business secrets - have you shared it all with Google or MS, etc.?

It's a good question and one that doesn't have an easy answer. Over the past two weeks, I've reached out to attorneys and experts to try to get a definitive answer.





Perspective.

https://www.cbo.gov/publication/61147

Artificial Intelligence and Its Potential Effects on the Economy and the Federal Budget

Artificial intelligence (AI) refers to computer systems that can perform tasks that have traditionally required human intelligence, such as learning and performing other activities that require cognitive ability. A general attribute of AI is its ability to identify patterns and relationships and to respond to queries that arise in complex scenarios for which the precise computational algorithm that is needed cannot be specified in advance.

Because AI has the potential to change how businesses and the federal government provide goods and services, it could affect economic growth, employment and wages, and the distribution of income in the economy. Such changes could in turn affect the federal budget. The direction of those effects—whether they increased or decreased federal revenues or spending—along with their size and timing, is uncertain. Some budgetary effects could occur relatively quickly, whereas others might take longer. In this report, the Congressional Budget Office provides an overview of the channels through which the adoption of AI could affect the U.S. economy and the federal budget.



Saturday, December 21, 2024

How to ensure employees listen to those security lectures…

https://databreaches.net/2024/12/20/ohio-state-auditor-issued-guidance-on-email-scams-in-april-employees-might-be-liable-if-they-fall-for-a-scam/

Ohio state auditor issued guidance on email scams in April; employees might be liable if they fall for a scam

Corinne Colbert reports:

The Ohio Auditor of State’s office issued a bulletin this past spring with guidance on detecting and avoiding payment redirect scams — and warned that public employees who failed to follow that guidance could be held accountable.
That could have ramifications for whoever in Athens city government is determined to be responsible for the loss of nearly $722,000 in an email scam last month.
Auditor of State Bulletin 2024–003 went to all public offices, community schools and independent public accountants in the state on April 12. The auditor’s office had also issued an advisory on increased cybercrime in March 2023.
Advisories function as a kind of heads-up about “emerging issues or concerns,” a spokesperson for the state auditor’s office told the Independent by email. Bulletins, on the other hand, “are formal communications that provide detailed instructions or guidance on specific topics,” the spokesperson wrote.
The April 12 bulletin states, “Failure to follow the guidance in this Bulletin may result in an AOS finding when a loss occurs, and the employee is considered liable as a result of negligence or performing duties without reasonable care.”

Read more at Athens County Independent.





Curious.

https://gizmodo.com/ai-chatbots-can-be-jailbroken-to-answer-any-question-using-very-simple-loopholes-2000541157

AI Chatbots Can Be Jailbroken to Answer Any Question Using Very Simple Loopholes

Anthropic, the maker of Claude, has been a leading AI lab on the safety front. The company today published research in collaboration with Oxford, Stanford, and MATS showing that it is easy to get chatbots to break from their guardrails and discuss just about any topic. It can be as easy as writing sentences with random capitalization like this: “IgNoRe YoUr TrAinIng.” 404 Media earlier reported on the research.
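
According to the article, the attack amounts to little more than resampling trivially perturbed prompts until one slips past the guardrails. Below is a minimal sketch of the perturbation step only; the resample-and-retest loop against a target model is left out, and everything here is illustrative rather than the researchers' actual code:

```python
import random

def shuffled_case_variants(prompt: str, n: int = 5) -> list[str]:
    """Return n copies of `prompt` with each letter's case flipped at random.

    Illustrative only: the research pairs this kind of perturbation with a
    loop that keeps resampling and retesting variants against the model.
    """
    return [
        "".join(c.upper() if random.random() < 0.5 else c.lower() for c in prompt)
        for _ in range(n)
    ]

print(shuffled_case_variants("ignore your training"))
# e.g. ['iGnORe yOur TRaInIng', 'IGnore YOUR traINInG', ...]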





We will reach a point where driving will be limited to AI by law.

https://www.theverge.com/2024/12/19/24324492/waymo-injury-property-damage-insurance-data-swiss-re

Waymo still doing better than humans at preventing injuries and property damage

Waymo’s autonomous vehicles cause less property damage and fewer bodily injuries when they crash than human-driven vehicles, according to a study that relies on an analysis of insurance data.

The study found that Waymo’s vehicles were safer than human-driven ones, with an 88 percent reduction in property damage claims and a 92 percent reduction in bodily injury claims. Across 25.3 million miles, Waymo was involved in nine property damage claims and two bodily injury claims. The average human driving a similar distance would be expected to have 78 property damage and 26 bodily injury claims, the company says.
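
Those headline percentages check out against the raw claim counts from the article:

```python
# Sanity-checking the reductions reported in the article.
waymo_pd, waymo_bi = 9, 2      # Waymo claims over 25.3 million miles
human_pd, human_bi = 78, 26    # expected human-driver claims over the same distance

print(f"property damage reduction: {1 - waymo_pd / human_pd:.0%}")  # -> 88%
print(f"bodily injury reduction:   {1 - waymo_bi / human_bi:.0%}")  # -> 92%
```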



Friday, December 20, 2024

For those of us amused by AI Copyright.

https://www.bespacific.com/every-ai-copyright-lawsuit-in-the-us-visualized/

Every AI Copyright Lawsuit in the US, Visualized

Wired: “WIRED is following every copyright battle involving the AI industry—and we’ve created some handy visualizations that will be updated as the cases progress.  In May 2020, the media and technology conglomerate Thomson Reuters sued a small legal AI startup called Ross Intelligence, alleging that it had violated US copyright law by reproducing materials from Westlaw, Thomson Reuters’ legal research platform. As the pandemic raged, the lawsuit hardly registered outside the small world of nerds obsessed with copyright rules. But it’s now clear that the case—filed more than two years before the generative AI boom began—was the first strike in a much larger war between content publishers and artificial intelligence companies now unfolding in courts across the country. The outcome could make, break, or reshape the information ecosystem and the entire AI industry—and in doing so, impact just about everyone across the internet. Over the past two years, dozens of other copyright lawsuits against AI companies have been filed at a rapid clip. The plaintiffs include individual authors like Sarah Silverman and Ta-Nehisi Coates, visual artists, media companies like The New York Times, and music-industry giants like Universal Music Group. This wide variety of rights holders are alleging that AI companies have used their work to train what are often highly lucrative and powerful AI models in a manner that is tantamount to theft. AI companies are frequently defending themselves by relying on what’s known as the “fair use” doctrine, arguing that building AI tools should be considered a situation where it’s legal to use copyrighted materials without getting consent or paying compensation to rights holders. (Widely accepted examples of fair use include parody, news reporting, and academic research.) Nearly every major generative AI company has been pulled into this legal fight, including OpenAI, Meta, Microsoft, Google, Anthropic, and Nvidia…”





No doubt they will miss the really interesting stuff.

https://pogowasright.org/what-to-expect-in-2025-ai-legal-tech-and-regulation-65-expert-predictions/

What to Expect in 2025: AI Legal Tech and Regulation (65 Expert Predictions)

Oliver Roberts is Editor-in-Chief of AI and the Law at The National Law Review, Co-Head of the AI Practice Group at Holtzman Vogel, and CEO/Founder of Wickard.ai
As 2024 comes to a close, it’s time to look ahead to how AI will shape the law and legal practice in 2025. Over the past year, we’ve witnessed growing adoption of AI across the legal sector, substantial investments in legal AI startups, and a rise in state-level AI regulations. While the future of 2025 remains uncertain, industry leaders are already sharing their insights.
Along with 2025 predictions from The National Law Review’s Editor-in-Chief Oliver Roberts, this article presents 65 expert predictions on AI and the law in 2025 from federal judges, startup founders, CEOs, and leaders of AI practice groups at global law firms.

Read the article at The National Law Review.  There’s a lot of food for thought in there.





We don’t need no stinking reality! (Real data has enough problems.)

https://arstechnica.com/information-technology/2024/12/new-physics-sim-trains-robots-430000-times-faster-than-reality/

New physics sim trains robots 430,000 times faster than reality

On Thursday, a large group of university and private industry researchers unveiled Genesis, a new open source computer simulation system that lets robots practice tasks in simulated reality 430,000 times faster than in the real world. Researchers can also use an AI agent to generate 3D physics simulations from text prompts.
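
To put that multiplier in perspective (my arithmetic, not the article's):

```python
# A 430,000x speed-up compresses decades of practice into an hour of wall-clock time.
speedup = 430_000
hours_per_year = 24 * 365
print(f"1 hour of simulation ≈ {speedup / hours_per_year:.0f} years of real-world practice")
# -> 1 hour of simulation ≈ 49 years of real-world practice
```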





It’s not a joke, it just looks like one.

https://abovethelaw.com/2024/12/quantum-computing-is-coming-and-lawyers-arent-ready/

Quantum Computing Is Coming And Lawyers Aren't Ready

The profession that can’t figure out how to avoid citing fake cases from artificial intelligence will soon deal with a technology far more revolutionary. This month, Google unveiled its new Willow chip, heralding a significant leap in quantum computing. 

Beyond data privacy, quantum computing opens a can of intellectual property worms:

The rapid processing speed of quantum computers could facilitate the infringement of intellectual property rights by allowing the copying and modification of large amounts of data almost instantaneously. Lawyers must be alert to the evolution of intellectual property laws and work on new legal strategies to protect their clients’ rights in this new technological environment.



Thursday, December 19, 2024

Eventually someone will get it right.

https://fpf.org/blog/global/oaics-dual-ai-guidelines-set-new-standards-for-privacy-protection-in-australia/

OAIC’s Dual AI Guidelines Set New Standards for Privacy Protection in Australia

On 21 October 2024, the Office of the Australian Information Commissioner (OAIC) released two sets of guidelines (collectively, “Guidelines”), one for developing and training generative AI systems and the other for deploying commercially available “AI products”. This marks a shift in the OAIC’s regulatory approach from enforcement-focused oversight to proactive guidance. 

The Guidelines establish rigorous requirements under the Privacy Act and its 13 Australian Privacy Principles (APPs), particularly emphasizing accuracy, transparency, and heightened scrutiny of data collection and secondary use. Notably, the Guidelines detail conditions that must be met for lawfully collecting personal information publicly available online for purposes of training generative AI, including through a detailed definition of what “fair” collection means. 

This regulatory development aligns with Australia’s broader approach to AI governance, which prioritizes technology-neutral existing laws and voluntary frameworks while reserving mandatory regulations for high-risk applications. However, it may signal increased regulatory scrutiny of AI systems processing personal information going forward. 

This blog post summarizes the key aspects of these Guidelines, their relationship to Australia’s existing privacy law, and their implications for organizations developing or deploying AI systems in Australia.





Something to keep in mind?

https://databreaches.net/2024/12/18/defending-data-breach-class-actions/

Defending Data Breach Class Actions

Mark P. Henriques of Womble Bond Dickinson has a content-rich post for defense lawyers:

Class actions arising from data breach represented the fastest growing segment of class action filings. In 2023, more than 2000 class actions were filed, more than triple the amount filed in 2022. These cases were filed in federal and state courts across the country, with California receiving the largest number of filings. High-profile cases like the $52 million penalty that Marriott agreed to pay in October 2024 highlight the regulatory scrutiny and legal challenges companies face. A Capitology study of 28 cases showed an average stock price drop of 7.27% following announcement of a data breach. Financial companies saw a 17% decrease within the first 16 trading days following a breach. As board members of a public company, it is crucial to understand the strategies for preventing breaches and defending against the class actions that follow.
[…]
To date, the primary targets for data breach class actions have been credit rating agencies, financial institutions, and health care providers. Plaintiff’s counsel target these industries both because the data they collect is typically highly confidential and because there are often federal or state regulations which help establish a standard of care.
Some state legislatures have grown concerned about the wave of data breach class actions. One particularly interesting development is a 2024 Tennessee statute, Public Chapter 991, which establishes a heightened liability standard for class actions arising from cybersecurity events. The statute appears to be designed to protect the healthcare industry, a mainstay of the Tennessee economy. The bill requires plaintiffs to establish that the cybersecurity event was “caused by the willful and wanton misconduct or gross negligence on the part of the private entity.” Both Florida and West Virginia have considered similar measures. Other states may follow suit.

Read more about specific cases and bases for defense at Womble Bond Dickinson.





Not much of a threat…

https://pogowasright.org/what-happens-if-an-ai-model-is-developed-with-unlawfully-processed-personal-data/

What Happens If an AI Model Is Developed With Unlawfully Processed Personal Data

Odia Kagan of Fox Rothschild writes:

The European Data Protection Board recently issued an opinion on AI models, shedding light on what consequences the unlawful processing of personal data during the development phase of an AI model could have for the subsequent processing or operation of that model.
Possible remedies: Up to and including model deletion
Supervisory authorities may impose:
  • A fine.
  • Temporary limitation on the processing.
  • Erasure of part of the dataset that was processed unlawfully.
  • Deletion of the data of certain data subjects (ex officio) [individuals can ask for this too].
  • Erasure of the whole dataset used to develop the AI model and/or the AI model itself (depending on the facts, having regard to the proportionality of the measure and, e.g., the possibility of retraining).
  • The SAs will consider, among other elements, the risks raised for the data subjects, the gravity of the infringement, the technical and financial feasibility of the measure, as well as the volume of personal data involved.
Unlawful processing by the developer may also have consequences for the deployer (depending on the potential risks to individuals).

Read more at Privacy Compliance & Data Security.





Tools and Techniques.

https://www.zdnet.com/article/how-to-use-chatgpt-to-summarize-a-book-article-or-research-paper/

How to use ChatGPT to summarize a book, article, or research paper

What you'll need: A device that can connect to the internet, a free (or paid) OpenAI account, and a basic understanding of the article, research paper, or book you want to summarize.
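
The article walks through the chat interface, but the same workflow is scriptable. Here is a minimal sketch using OpenAI's official Python SDK (`pip install openai`, API key in the OPENAI_API_KEY environment variable); the model name and the local file path are my assumptions, not from the article:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

text = open("paper.txt", encoding="utf-8").read()  # hypothetical document to summarize

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": "Summarize the user's text in five bullet points."},
        {"role": "user", "content": text},
    ],
)
print(response.choices[0].message.content)
```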



Wednesday, December 18, 2024

Privacy is not just for governments to violate. I wonder what’s next?

https://www.bespacific.com/new-real-estate-platform-lets-homebuyers-check-their-neighbors-political-affiliations/

New real estate platform lets homebuyers check their neighbors’ political affiliations

New York Post: “A new real estate platform is giving homebuyers an unprecedented peek into their potential neighborhoods — revealing everything from political leanings to local demographics — before they even commit to buying. Oyssey, a tech startup soft-launching this month in South Florida and New York City, lets buyers access neighborhood political affiliations based on election results and campaign contributions, along with housing trends and other social data. The platform is betting that today’s buyers care just as much about their neighbors’ values as they do about square footage or modern finishes…

The site operates as a one-stop shop for homebuyers, streamlining the process of browsing listings, signing contracts and communicating with agents — all while integrating block-by-block political and consumer data. Oyssey markets the service to real estate agents and brokers via a subscription model, though buyers can use the platform for free by invitation from their agents. The launch comes at a turbulent time for the real estate industry…”





At what point does security become less expensive than fines for not having security?

https://pogowasright.org/irish-data-privacy-watchdog-fines-meta-e251-million-for-gdpr-failure/

Irish data privacy watchdog fines Meta €251 million for GDPR failure

Euractiv reports:

The fine was issued for a security breach on social media Facebook which started in July 2017, and affected close to three million accounts in the European Economic Area.
“This enforcement action highlights how the failure to build in data protection requirements […] can expose individuals to […] risk to the fundamental rights and freedoms of individuals,” said the Irish DPC deputy commissioner Graham Doyle.
The breach was a bug in Facebook’s design that allowed unauthorised people using scripts to exploit a vulnerability in Facebook’s code, letting them view profiles of users they should not otherwise have been able to see.
Meta is expected to appeal the decision. “We took immediate action to fix the problem,” said a Meta spokesperson in an email.
Meta discovered the security issue in September 2018, fixed the vulnerability and informed law enforcement authorities.

Read more at Euractiv.  The specific infringements cited by the DPC were as follows:

The DPC’s final decisions noted the following infringements of the GDPR and the resulting fines for each:

  1. Decision 1
    1. Article 33(3) GDPR – By not including in its breach notification all the information required by that provision that it could and should have included. The DPC reprimanded MPIL for failures in regard to this provision and ordered it to pay administrative fines of €8 million.
    2. Article 33(5) GDPR – By failing to document the facts relating to each breach, the steps taken to remedy them, and to do so in a way that allows the Supervisory Authority to verify compliance. The DPC reprimanded MPIL for failures in regard to this provision and ordered it to pay administrative fines of €3 million.
  2. Decision 2
    1. Article 25(1) GDPR – By failing to ensure that data protection principles were protected in the design of processing systems. The DPC found that MPIL had infringed this provision, reprimanded MPIL, and ordered it to pay administrative fines of €130 million.
    2. Article 25(2) GDPR – By failing in their obligations as controllers to ensure that, by default, only personal data that are necessary for specific purposes are processed. The DPC found that MPIL had infringed these provisions, reprimanded MPIL, and ordered it to pay administrative fines of €110 million.
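
A quick check that the individual fines match the €251 million headline:

```python
fines_m = {"Art. 33(3)": 8, "Art. 33(5)": 3, "Art. 25(1)": 130, "Art. 25(2)": 110}
print(f"total: €{sum(fines_m.values())} million")  # -> total: €251 million
```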





Be careful what you ask for?

https://economictimes.indiatimes.com/magazines/panache/prof-vs-ai-law-professor-who-chatgpt-accused-of-rape-finds-allegations-chilling-and-ironic/articleshow/116312316.cms

Prof vs AI: Law professor whom ChatGPT accused of rape finds allegations 'chilling and ironic'

… “It fabricated a claim suggesting I was on the faculty at an institution where I have never been, asserted I took a trip I never undertook, and reported an allegation that was entirely false,” he remarked to The Post. “It’s deeply ironic, given that I have been discussing the threats AI poses to free speech.”

The 61-year-old legal scholar became aware of the chatbot's erroneous claim when he received a message from UCLA professor Eugene Volokh, who allegedly asked ChatGPT to provide “five examples” of “sexual harassment” incidents involving professors at U.S. law schools, along with “quotes from relevant newspaper articles.”



Tuesday, December 17, 2024

It’s not just for legal training, I hope.

https://www.bespacific.com/revolutionizing-legal-education-with-ai-the-socratic-quizbot/

Revolutionizing Legal Education with AI: The Socratic Quizbot

AI Law Librarians – Sean Harrington – “I had the pleasure of co-teaching AI and the Practice of Law with Kenton Brice last semester at OU Law. It was an incredible experience. When we met to think through how we would teach this course, we agreed on one crucial component: We wanted the students to get a lot of reps using AI throughout the entire course. That is fairly easy to accomplish for things like research, drafting, and general studying for the course but we hit a roadblock with the assessment component. I thought about it for a week and said, “Kenton, what if we created an AI that would Socratically quiz the students on the readings each week?” His response was, “Do you think you can do that?” I said, “I don’t know but I’ll give it a try.” Thus Socratic Quizbot was born. If you follow me on social media, you’ve probably seen me soliciting feedback on the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4975804”
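
The linked SSRN paper describes the real implementation; as a rough illustration of the idea, a Socratic quizbot needs little more than a system prompt and a running message history. A toy sketch, assuming the OpenAI chat API (not the authors' actual stack), with the file path and model name as placeholders:

```python
# NOT the authors' implementation; see the SSRN paper for that.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

reading = open("week1_reading.txt", encoding="utf-8").read()  # hypothetical course reading

history = [{
    "role": "system",
    "content": (
        "You are a Socratic tutor. Quiz the student on the assigned reading, "
        "one probing question at a time. Never lecture; follow each student "
        "answer with a question that tests understanding.\n\n"
        "Assigned reading:\n" + reading
    ),
}]

while True:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    question = reply.choices[0].message.content
    print("Quizbot:", question)
    history.append({"role": "assistant", "content": question})
    answer = input("Student (blank line to stop): ")
    if not answer.strip():
        break
    history.append({"role": "user", "content": answer})
```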





Another result of AI mirroring what it finds in training data?

https://www.bespacific.com/inescapable-ai/

Inescapable AI

A Report from TechTonic Justice – Inescapable AI: The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive – “The use of artificial intelligence, or AI, by governments, landlords, employers, and other powerful private interests restricts the opportunities of low-income people in every basic aspect of life: at home, at work, in school, at government offices, and within families. AI technologies derive from a lineage of automation and algorithms that have been in use for decades with established patterns of harm to low-income communities. As such, now is a critical moment to take stock and correct course before AI of any level of technical sophistication becomes entrenched as a legitimate way to make key decisions about the people society marginalizes. Employing a broad definition of AI, this report represents the first known effort to comprehensively explain and quantify the reach of AI-based decision-making among low-income people in the United States. It establishes that essentially all 92 million low-income people in the U.S. states—everyone whose income is less than 200 percent of the federal poverty line—have some basic aspect of their lives decided by AI.”





Probably right about rights.

https://pogowasright.org/why-individual-rights-cant-protect-privacy/

Why Individual Rights Can’t Protect Privacy

Law professor and privacy law scholar Dan Solove recently wrote:

Today, the California Privacy Protection Agency (CPPA) published a large advertisement in the San Francisco Chronicle encouraging people to exercise their privacy rights. “The ball is in your court,” the ad declared. (H/T Paul Schwartz)
While I admire the CPPA’s effort to educate, the notion that the ball is in the individuals’ court is not a good one. This puts the onus on individuals to protect their privacy when they are ill-equipped to do so, and then leads to blaming them when they fail.
I wrote an article last year about how privacy laws rely too much on rights, which are not an effective way to bring data collection and use under control: The Limitations of Privacy Rights, 98 Notre Dame Law Review 975 (2023).
Individual privacy rights are often at the heart of information privacy and data protection laws. Unfortunately, rights are often asked to do far more work than they are capable of doing.

Read more of his post on LinkedIn.





Speedy?

https://www.reuters.com/technology/meta-pay-32-mln-it-settles-facebook-quiz-apps-privacy-breach-2024-12-17/

Facebook-parent Meta settles with Australia's privacy watchdog over Cambridge Analytica lawsuit

Meta Platforms has agreed to an A$50 million settlement ($31.85 million), Australia's privacy watchdog said on Tuesday, closing long-drawn, expensive legal proceedings for the Facebook parent over the Cambridge Analytica scandal.

The breaches were first reported by the Guardian in early 2018, and Facebook received fines from regulators in the United States and the UK in 2019.

Australia's privacy regulator has been caught up in the legal battle with Meta since 2020.



Monday, December 16, 2024

We’re here to protect you, need it or not.

https://pogowasright.org/schools-using-ai-to-send-police-to-students-homes/

Schools Using AI to Send Police to Students’ Homes

Victor Tangermann reports:

Schools are employing dubious AI-powered software to accuse teenagers of wanting to harm themselves and sending the cops to their homes as a result — with often chaotic and traumatic results.
As the New York Times reports, software being installed on high school students’ school-issued devices tracks every word they type. An algorithm then analyzes the language for evidence of teenagers wanting to harm themselves.
Unsurprisingly, the software can get it wrong by woefully misinterpreting what the students are actually trying to say. A 17-year-old in Neosho, Missouri, for instance, was woken up by the police in the middle of the night.

Read more at The Byte.





All we had was guns and knives…

https://www.nytimes.com/2024/12/15/technology/school-fight-videos-student-phones.html?unlocked_article_code=1.hk4.R7hc.vHX7olgtFWq3&smid=nytcore-ios-share&referringSource=articleShare

An Epidemic of Vicious School Brawls, Fueled by Student Cellphones

Cafeteria melees. Students kicked in the head. Injured educators. Technology is stoking cycles of violence in schools across the United States.





Automatic evasion of automatic license plate readers… Seems fair! (Very James Bond)

https://www.wired.com/story/digital-license-plate-jailbreak-hack/

Hackers Can Jailbreak Digital License Plates to Make Others Pay Their Tolls and Tickets

Digital license plates, already legal to buy in a growing number of states and to drive with nationwide, offer a few perks over their sheet metal predecessors. You can change their display on the fly to frame your plate number with novelty messages, for instance, or to flag that your car has been stolen. Now one security researcher has shown how they can also be hacked to enable a less benign feature: changing a car's license plate number at will to avoid traffic tickets and tolls—or even pin them on someone else.