Saturday, August 30, 2025

What are we teaching children?

https://pogowasright.org/constitutional-challenges-to-ai-monitoring-systems-in-public-schools/

Constitutional Challenges to AI Monitoring Systems in Public Schools

Alex A. Lozada and Tu Le of Atkinson Andelson Loya Ruud & Romo write:

Two recent federal lawsuits filed against school districts in Lawrence, Kansas and Marana, Arizona highlight emerging legal challenges surrounding the use of AI surveillance tools in the educational setting. Both cases involve Gaggle, a comprehensive AI student safety platform, and center around similar allegations: students claim that their respective school districts violated their constitutional rights through broad, invasive AI surveillance of their electronic communications and documents. These lawsuits represent a new legal frontier in which traditional student privacy rights collide with school districts’ reliance on generative AI to monitor students’ digital activity.

Read more about these cases at Lexology.





Restricting access isn’t easy.

https://techcrunch.com/2025/08/29/mastodon-says-it-doesnt-have-the-means-to-comply-with-age-verification-laws/

Mastodon says it doesn’t ‘have the means’ to comply with age verification laws

Decentralized social network Mastodon says it can’t comply with Mississippi’s age verification law — the same law that saw rival Bluesky pull out of the state — because it doesn’t have the means to do so.

The social nonprofit explains that Mastodon doesn’t track its users, which makes it difficult to enforce such legislation. Nor does it want to use IP address-based blocks, as those would unfairly impact people who were traveling, it says.
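
To see why IP-based blocking is such a blunt instrument, here is a minimal, hypothetical sketch of how a geoblock of that kind typically works. This is my illustration, not Mastodon's code; the address ranges, region labels, and lookup table are invented (real deployments use a GeoIP database). The decision keys on where the connection appears to originate at that moment, not on where the person actually lives, which is why residents who travel (or use a VPN) slip through while visitors passing through the state get blocked.

# Hypothetical sketch of a naive IP-based geoblock; not Mastodon's code.
# The address ranges below are reserved documentation ranges, used purely for illustration.
import ipaddress

REGION_BY_NETWORK = {
    ipaddress.ip_network("203.0.113.0/24"): "US-MS",   # pretend this range geolocates to Mississippi
    ipaddress.ip_network("198.51.100.0/24"): "US-CO",  # pretend this range geolocates to Colorado
}

BLOCKED_REGIONS = {"US-MS"}  # regions whose laws the service cannot comply with

def region_for(ip: str) -> str:
    # Map a client IP to a region label via the (toy) lookup table.
    addr = ipaddress.ip_address(ip)
    for network, region in REGION_BY_NETWORK.items():
        if addr in network:
            return region
    return "UNKNOWN"

def should_block(ip: str) -> bool:
    # Blocks based solely on the connection's apparent location right now.
    return region_for(ip) in BLOCKED_REGIONS

print(should_block("203.0.113.45"))   # True  - a visitor whose connection geolocates to Mississippi
print(should_block("198.51.100.7"))   # False - a Mississippi resident connecting from out of state

Multiplied across independently run Mastodon servers with no central user records, even this crude kind of enforcement is what the nonprofit says it lacks the means to operate.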



(Related) Must you be an adult to have a credit card?

https://www.theverge.com/news/767980/steam-uk-age-vertification-online-safety-act-credit-card-mature-games

Steam users in the UK will need a credit card to access ‘mature content’ games

Valve has started to comply with the UK’s Online Safety Act, by rolling out a requirement for all Brits to verify their age with a credit card to access “mature content” pages and games on Steam. UK users won’t even be able to access the community hubs of mature content games unless a valid credit card is stored on a Steam account.





Not unique. But we stopped training for new technologies under a cost-saving mandate.

https://www.zdnet.com/article/new-linkedin-study-reveals-the-secret-that-a-third-of-professionals-are-hiding-at-work/

New LinkedIn study reveals the secret that a third of professionals are hiding at work

Keeping up with AI's changing landscape is getting workers down. Forty-one percent of professionals report AI's current pace is impacting their well-being, and more than half of professionals say learning about AI feels like another job in and of itself, according to the latest research by LinkedIn.

LinkedIn monitored conversations on the platform that included the words "overwhelm" or "overwhelmed," "burn out," and "navigating change" from July 2024 through June 2025, while also keeping an eye on AI topics and keywords around that same time. 

The research found that AI is driving pressure among workers to upskill, despite how little they know about the technology -- and it's "fueling insecurity among professionals at work," the study said. 

Thirty-three percent of professionals admitted they felt embarrassed about how little they understand AI, and 35% of professionals said they feel nervous about bringing it up at work because of their lack of knowledge. 

Studies show that AI experience, or, as one Oxford Economics study called it, "AI capital," boosts professionals' job prospects. University graduates with AI capital received more invitations for job interviews than those without it, the Oxford study found. Additionally, graduates with AI capital were offered higher wages than those without it.



Friday, August 29, 2025

Beyond Oops…

https://electrek.co/2025/08/04/tesla-withheld-data-lied-misdirected-police-plaintiffs-avoid-blame-autopilot-crash/

Tesla withheld data, lied, and misdirected police and plaintiffs to avoid blame in Autopilot crash

Tesla was caught withholding data, lying about it, and misdirecting authorities in the wrongful death case involving Autopilot that it lost this week.

The automaker was undeniably covering up for Autopilot.

Last week, a jury found Tesla partially liable for a wrongful death involving a crash on Autopilot. I explained the case and the verdict in this article and video.

But we now have access to the trial transcripts, which confirm that Tesla was extremely misleading in its attempt to place all the blame on the driver.

The company went as far as to actively withhold critical evidence that explained Autopilot’s performance around the crash.



Thursday, August 28, 2025

AI crimes require AI solutions?

https://www.zdnet.com/article/anthropic-agrees-to-settle-copyright-infringement-class-action-suit-what-it-means/

Anthropic agrees to settle copyright infringement class action suit - what it means

AI startup Anthropic has agreed to settle a class action lawsuit brought by three authors over the tech company's misuse of their work to train its Claude chatbot.

The writers claimed that Anthropic used the authors' pirated works to train Claude, its family of large language models (LLMs), on prompt generation. The AI startup negotiated a "proposed class settlement," Anthropic announced Tuesday, to forgo a trial determining how much it would owe for the infringement. 

The preliminary settlement's details are scarce. In June, a judge ruled that Anthropic's legal purchase of books to train its chatbot was fair use -- that is, free to use without payment or permission from the copyright holder. However, some of Anthropic's tactics, like using a website called LibGen, constituted piracy, the judge ruled. Anthropic could have been forced to pay over $1 trillion in damages over piracy claims, Wired reports. 
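
For a rough sense of where a number that large comes from, here is a back-of-the-envelope calculation of my own, not a figure from the filings: U.S. copyright law allows statutory damages of up to $150,000 per work for willful infringement, so a trillion-dollar exposure implies a pirated corpus on the order of millions of works:

\[ \frac{\$1\ \text{trillion}}{\$150{,}000\ \text{per work (willful statutory maximum)}} \approx 6.7\ \text{million works} \]

Even at the statutory minimum of $750 per work, a corpus of that size would still imply roughly $5 billion in potential damages.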





...but words will never hurt me. More to come?

https://newrepublic.com/article/199717/trump-explodes-rage-dem-governor-harsh-takedown-draws-blood

Trump Explodes in Rage as Dem Governor’s Harsh Takedown Draws Blood

This week, Illinois Governor J.B. Pritzker delivered an extraordinary takedown of President Trump over his deployment of the military in U.S. cities.  Trump then exploded about Pritzker’s impudence, calling the governor names and instructing him to bow down and beg for his “HELP” in fighting crime. Trump has threatened twice to occupy Chicago no matter what the city’s residents and their elected representatives think about it—another window into his seething anger. In this standoff, Pritzker did something unusual: He communicated with his constituents from the heart, vowing to use all his power to protect them from Trump’s authoritarian takeover. Rather than let Trump pretend he cares about crime, Pritzker cast Trump as the primary threat to his state’s people. We talked to Brian Beutler, who has a great new piece on his Substack, Off Message, taking stock of Pritzker’s response. We discuss how Pritzker is shrewdly reading the moment in a way many Democrats are not, why Trump is vulnerable on crime, and what the punditry is getting so wrong about all of it. Listen to this episode here. A transcript is here.



Wednesday, August 27, 2025

What do you expect from ‘lowly trained’ law enforcement? Sorry, not even law enforcement.

https://pogowasright.org/most-illegal-search-ive-ever-seen-trumps-dc-crackdown-results-in-stream-of-abuses/

‘Most Illegal Search I’ve Ever Seen’: Trump’s DC Crackdown Results in Stream of Abuses

Brad Reed reports:

US President Donald Trump has attempted to portray his deployment of National Guard troops and other federal agents in Washington, DC as a boon for public safety.
Inside DC courtrooms, however, judges and defense attorneys have expressed alarm at the tactics being used by law enforcement officers to unfairly charge local residents with serious crimes that carry lengthy prison sentences.
NPR reports that US Magistrate Judge Zia Faruqui expressed incredulity on Monday while dismissing weapons charges against a Maryland resident named Torez Riley, who was subjected to what the judge described as “without a doubt the most illegal search I’ve ever seen in my life.”
While reviewing the case, the judge said that law enforcement officials seem to have targeted Riley for a search simply because he was a Black man carrying what appeared to be a heavy backpack.

Read more at Common Dreams.





And most phone users don’t understand how much data their phones hold!

https://pogowasright.org/fourth-amendment-victory-michigan-supreme-court-reins-in-digital-device-fishing-expeditions/

Fourth Amendment Victory: Michigan Supreme Court Reins in Digital Device Fishing Expeditions

Jennifer Pinsof and Jennifer Lynch write:

EFF legal intern Noam Shemtov was the principal author of this post.

When police have a warrant to search a phone, should they be able to see everything on the phone—from family photos to communications with your doctor to everywhere you’ve been since you first started using the phone—in other words, data that is in no way connected to the crime they’re investigating? The Michigan Supreme Court just ruled no.
In People v. Carson, the court held that to satisfy the Fourth Amendment, warrants authorizing searches of cell phones and other digital devices must contain express limitations on the data police can review, restricting searches to data that they can establish is clearly connected to the crime.
EFF, along with ACLU National and the ACLU of Michigan, filed an amicus brief in Carson, expressly calling on the court to limit the scope of cell phone search warrants. We explained that the realities of modern cell phones call for a strict application of rules governing the scope of warrants. Without clear limits, warrants would become de facto licenses to look at everything on the device, a great universe of information that amounts to “the sum of an individual’s private life.”

Read more at EFF.



Tuesday, August 26, 2025

Those who cannot remember the past are condemned to repeat it. (What other similarities will follow?)

https://www.bespacific.com/the-us-used-to-be-a-haven-for-research-now-scientists-are-packing-their-bags/

The US used to be a haven for research. Now, scientists are packing their bags.

Christian Science Monitor – “…As government funding for scientific research dries up, and as President Donald Trump wages pointed attacks against some of the nation’s top universities, more academics are looking to Europe and Asia as safe havens. A recent survey of U.S. college faculty by the journal Nature found that 75% were looking for work outside the country. Some are doing so to protect their research, while others are trying to safeguard their individual freedoms. The result is a reverse brain drain that has not been seen since European scientists sought refuge on U.S. shores before and during World War II. For the researchers who have chosen to leave, it is bittersweet – and professionally risky. But they say the future of science depends on it. “A lot of us scholars value our independence,” says Isaac Kamola, director of the American Association of University Professors’ Center for the Defense of Academic Freedom. “We value the ability to research, write and teach what we want, and do what we think is in the best interest of … our disciplines.” “So, when somebody comes and tells us, ‘No, you can’t say these words, you can’t teach this book … this class’ … it’s basically like saying to a doctor, ‘You’ve trained for years to become a doctor, but we’re not going to let you see patients. You’ll have to do office work,’” says Dr. Kamola, who is also an assistant professor at Trinity College in Connecticut…”





Another “Great” idea or perhaps a “Grate” idea…

https://stratechery.com/2025/u-s-intel/

U.S. Intel

The beauty of being in the rather lonely position of supporting the U.S. government taking an equity stake in Intel is that I don’t have to steelman the case about it being a bad idea. Scott Lincicome, for example, had a good Twitter thread and Washington Post column explaining why this is a terrible idea; this is the opening of the latter:

President Donald Trump’s announcement on Friday that the U.S. government will take a 10 percent stake in long-struggling Intel marks a dangerous turn in American industrial policy. Decades of market-oriented principles have been abandoned in favor of unprecedented government ownership of private enterprise. Sold as a pragmatic and fiscally responsible way to shore up national security, the $8.9 billion equity investment marks a troubling departure from the economic policies that made America prosperous and the world’s undisputed technological leader.



(Related)

https://www.cnbc.com/2025/08/25/intel-trump-deal-risks-stock.html

Intel says Trump deal has risks for shareholders, international sales

Intel on Monday warned of “adverse reactions” from investors, employees and others to the Trump administration taking a 10% stake in the company, in a filing citing risks involved with the deal.

A key concern area is international sales, with 76% of Intel’s revenue in its last fiscal year coming from outside the U.S., according to the filing with the Securities and Exchange Commission.





Perspective.

https://www.socialmediatoday.com/news/where-ai-tools-source-responses-reddit-infographic/758586/

Where AI Gets its Facts [Infographic]

As more and more people turn to AI chatbots to get answers to their queries (whether they specifically set out to or not), it’s worth taking note of where those AI answers are coming from, and which platforms are the most sourced by AI responses.

And according to this study, based on research conducted by SEMRush, Reddit is the top source for AI answers, beating out Wikipedia and YouTube by a significant margin.

Check out the visualization from Visual Capitalist below.



Monday, August 25, 2025

Still a work in progress.

https://pogowasright.org/australian-university-used-wi-fi-location-data-to-identify-student-protestors/

Australian university used Wi-Fi location data to identify student protestors

Simon Sharwood reports:

Australia’s University of Melbourne last year used Wi-Fi location data to identify student protestors.
The University used Wi-Fi to identify students who participated in a July 2024 sit-in protest. As described in a report [PDF] into the matter by the state of Victoria’s Office of the Information Commissioner, the University directed protestors to leave the building they occupied and warned that those who remained could be suspended, disciplined, or reported to police.
The report says 22 chose to remain, and that the University used CCTV and Wi-Fi location data to identify them.
The Information Commissioner found that use of CCTV to identify protestors did not breach privacy, but felt using Wi-Fi location data did because the University’s policies lacked detail.

Read more at The Register.
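
For readers wondering what “Wi-Fi location data” means in practice, here is a generic, hypothetical sketch; it is my illustration, not the University of Melbourne’s actual process, and the log schema, names, and timestamps are invented. Campus Wi-Fi controllers typically keep association logs recording which authenticated account connected to which access point and when, so filtering those logs to the occupied building’s access points during the sit-in window yields a list of accounts that were present.

# Hypothetical illustration of placing accounts in a building from Wi-Fi association logs.
# The log schema, identifiers, and timestamps are invented for this sketch.
from datetime import datetime

# Assumed log format: (account_id, access_point_id, association_time)
association_log = [
    ("student_a", "AP-BLDG12-2F-03", datetime(2024, 7, 15, 14, 10)),
    ("student_b", "AP-LIBRARY-1F-01", datetime(2024, 7, 15, 14, 12)),
    ("student_c", "AP-BLDG12-3F-01", datetime(2024, 7, 15, 16, 45)),
]

# Access points installed in the occupied building (assumed naming convention).
building_aps = {"AP-BLDG12-2F-03", "AP-BLDG12-3F-01"}

# The protest window (placeholder times).
window_start = datetime(2024, 7, 15, 13, 0)
window_end = datetime(2024, 7, 15, 18, 0)

# Accounts whose devices associated with the building's access points during the window.
present = {
    account
    for account, ap, when in association_log
    if ap in building_aps and window_start <= when <= window_end
}

print(sorted(present))  # ['student_a', 'student_c']

The Commissioner’s objection was not that this is technically difficult, but that the University’s policies lacked the detail to cover repurposing connection records this way.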





Not the economics I was taught. At least I don’t remember it this way.

https://www.sciencealert.com/ai-revolution-could-require-us-to-re-think-money-entirely

AI Revolution Could Require Us to Re-Think Money Entirely

It's the defining technology of an era. But just how artificial intelligence (AI) will end up shaping our future remains a controversial question.

For techno-optimists, who see the technology improving our lives, it heralds a future of material abundance.

That outcome is far from guaranteed. But even if AI's technical promise is realised – and with it, once intractable problems are solved – how will that abundance be used?



Sunday, August 24, 2025

Rules for facial recognition?

https://pogowasright.org/nz-commissioner-issues-biometric-processing-privacy-code/

NZ: Commissioner issues Biometric Processing Privacy Code

The Privacy Commissioner has issued a biometric Code that will create specific privacy rules for agencies (businesses and organisations) using biometric technologies to collect and process biometric information.

The Code, which is now law made under the Privacy Act, will help make sure agencies implementing biometric technologies are doing it safely and in a way that is proportionate.
The Code comes into force on 3 November 2025, but agencies already using biometrics have a nine-month grace period to move to the new set of rules. That transition period ends on 3 August 2026.
Guidance is also being issued to support the Code.

Source: Privacy Commissioner of New Zealand





Now that is different.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5399463

Plagiarism, Copyright, and AI

Critics of generative AI often describe it as a “plagiarism machine.” They may be right, though not in the sense they mean. With rare exceptions, generative AI doesn’t just copy someone else’s creative expression, producing outputs that infringe copyright. But it does get its ideas from somewhere. And it’s quite bad at identifying the source of those ideas. That means that students (and professors, and lawyers, and journalists) who use AI to produce their work generally aren’t engaged in copyright infringement. But they are often passing someone else’s work off as their own, whether or not they know it. While plagiarism is a problem in academic work generally, AI makes it much worse, because authors who use AI may be taking the ideas and words of someone else without knowing it.

Disclosing that the authors used AI isn’t a sufficient solution to the problem, because the people whose ideas are being used don’t get credit for those ideas. Whether or not a declaration that “AI came up with my ideas” is plagiarism, it is a bad academic practice.

We argue that AI plagiarism isn’t—and shouldn’t be—illegal. But it is still a problem in many contexts, particularly academic work, where proper credit is an essential part of the ecosystem. We suggest best practices to align academic and other writing with good scholarly norms in the AI environment.





I must have missed some of this…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5395242

AI Training is Fair Use: The Beginning of the End of the Copyright Assault on Gen AI

Two federal courts overseeing claims against the developers of generative artificial intelligence (GenAI) have pointed the way to resolving these infringement actions by finding that the training of GenAI models is a transformative fair use under copyright law. While the two opinions differed in tone and scope, this article takes these rulings as the starting point for a discussion on resolving the ongoing copyright claims against AI developers, signaling what may be the beginning of the end of the copyright assault on GenAI.

The goal of this article is to inject urgency into resolving these matters. It asserts that uncertainty over the legal status of AI training is a drag on innovation and development in this vital economic sector. While massive investments are pouring into this field, the money flows to an extraordinarily small number of players whose resources allow them to run the risks posed by class actions and multi-party actions demanding damages that might cripple even the largest companies. With the threat of destruction by copyright infringement action removed, AI development could expand and flourish among even the smallest of innovators.

Ending the infringement actions requires more than just a recognition that indiscriminately drawing data from existing works without permission and without licensing to create a generative artificial intelligence expression machine is fundamentally transformative under factor one of the copyright fair use test. Plaintiffs have fought to sell a theory of the case that keeps AI developers in the defendants’ seats, even though the parties responsible for the production of outputs and for any resulting market harm are the end-users of the technology.

This article asserts that the proper theory of these infringement cases is that GenAI developers made a general-purpose technology that can create an infinite variety of new, original expression, but end-users of the technology can choose to use it to compete with the plaintiff artists and creators in their same style and in their same medium, at massively reduced costs and massively increased speeds. And sometimes end-users will use the technology to create infringing works. Far from being a unique 21st century high technology story, this story is the same as that of photocopy machines, Betamax and VCR devices, scanners, image-editing software, and internet search engines, all of which are capable of making duplicates of expressive works that can be put to uses that infringe on the original works and harm their markets. Yet, the designers of these copying technologies are not sued for copyright infringement because of the disconnect between the action of creating a useful tool and the action of an end-user who co-opts the tool for their own purposes.



The designers of these GenAI models made them powerful and extraordinarily fluent tools for creating new expression with a “further purpose or different character, altering the first with new expression, meaning, or message,” but in the end, GenAI systems are just tools. They are not artists or authors and do not automatically regurgitate infringing content. Rather, they are tools capable of being used by end-users who may act purposefully to create substantially similar and potentially infringing works that can be used to compete with the plaintiffs.