Saturday, May 10, 2025

How valuable must this data seem to be worth the risk of billion-dollar fines?

https://pogowasright.org/google-agrees-to-pay-texas-1-4-billion-data-privacy-settlement/

Google agrees to pay Texas $1.4 billion data privacy settlement

CNBC reports:

Google agreed to pay nearly $1.4 billion to the state of Texas to settle allegations of violating the data privacy rights of state residents, Texas Attorney General Ken Paxton said Friday.
Paxton sued Google in 2022 for allegedly unlawfully tracking and collecting the private data of users.
The attorney general said the settlement, which covers allegations in two separate lawsuits against the search engine and app giant, dwarfed all past settlements by other states with Google for similar data privacy violations.
Google’s settlement comes nearly 10 months after Paxton obtained a $1.4 billion settlement for Texas from Meta, the parent company of Facebook and Instagram, to resolve claims of unauthorized use of the biometric data of users of those popular social media platforms.

Read more at CNBC.





Do you need permission to leave the country? This looks like a step in that direction.

https://pogowasright.org/us-customs-and-border-protection-plans-to-photograph-everyone-exiting-the-us-by-car/

US Customs and Border Protection Plans to Photograph Everyone Exiting the US by Car

Caroline Haskins reports:

United States Customs and Border Protection plans to log every person leaving the country by vehicle by taking photos at border crossings of every passenger and matching their faces to their passports, visas, or travel documents, WIRED has learned.
The escalated documentation of travelers could be used to track how many people are self-deporting, or leaving the US voluntarily, which the Trump administration is fervently encouraging among people in the country illegally.
CBP exclusively tells WIRED, in response to an inquiry to the agency, that it plans to mirror the program it is currently developing—photographing every person entering the US and matching their faces with their travel documents—in the outbound lanes going to Canada and Mexico. The agency currently does not have a system that monitors people leaving the country by vehicle.

Read more at WIRED.





The questionable fight continues. MAIA (Make America Insecure Again?)

https://pogowasright.org/florida-bill-requiring-encryption-backdoors-for-social-media-accounts-has-failed/

Florida bill requiring encryption backdoors for social media accounts has failed

Zack Whittaker reports:

A Florida bill that would have required social media companies to provide an encryption backdoor allowing police to access user accounts and private messages has failed to pass into law.
The Social Media Use by Minors bill was “indefinitely postponed” and “withdrawn from consideration” in the Florida House of Representatives earlier this week. Lawmakers in the Florida Senate had already voted to advance the legislation, but a bill must pass both legislative chambers before it can become law.
The bill would have required social media firms to “provide a mechanism to decrypt end-to-end encryption when law enforcement obtains a subpoena,” though subpoenas are typically issued by law enforcement agencies themselves, without judicial oversight.

Read more at TechCrunch.



Friday, May 09, 2025

The goal is to replace lawyers with AI, right?

https://www.bespacific.com/artificial-intelligence-and-law-an-overview-of-recent-technological-changes-in-large-language-models-and-law/

Artificial Intelligence and Law – An Overview of Recent Technological Changes in Large Language Models and Law

Surden, Harry, Artificial Intelligence and Law – An Overview of Recent Technological Changes in Large Language Models and Law (February 12, 2025). 96 Colorado Law Review 376–411 (2025), U of Colorado Law Legal Studies Research Paper No. 25-8. Available at SSRN: https://ssrn.com/abstract=5135305 or http://dx.doi.org/10.2139/ssrn.5135305

This article, based upon a keynote address by Professor Harry Surden, provides an in-depth overview of the recent advancements in artificial intelligence (AI) since 2022, particularly the rise of highly capable large language models (LLMs) such as OpenAI’s GPT-4, and their implications for the field of law. The talk begins with a historical perspective, tracing the evolution of AI from early symbolic systems to the modern deep learning era, highlighting the breakthroughs that have enabled AI to process and generate human language with unprecedented sophistication. The address explores the technical foundations of contemporary AI models, including the transition from rule-based systems to data-driven machine learning, the role of deep learning, and the emergence of transformers, which have significantly enhanced AI’s ability to understand and generate text. It then examines the capabilities and limitations of GPT-4, emphasizing its strengths in legal research, document drafting, and analysis while also identifying key concerns, such as hallucinations, biases, and the sensitivity of AI outputs to user prompts. The article also considers the potential risks of using AI in legal decision-making, particularly in judicial settings, where AI-generated legal reasoning may appear authoritative yet embed subtle interpretive choices. It argues that while AI can assist in legal tasks, it should not be treated as a neutral arbiter of law. It concludes by addressing near-term trends in AI, including improvements in model accuracy, interpretability, and integration into legal workflows, and emphasizes the need for AI literacy among legal professionals.



(Related)

https://www.bespacific.com/measuring-the-rapidly-increasing-use-of-artificial-intelligence-in-legal-scholarship/

Measuring the Rapidly Increasing Use of Artificial Intelligence in Legal Scholarship

Conklin, Michael and Houston, Christopher, Measuring the Rapidly Increasing Use of Artificial Intelligence in Legal Scholarship (March 23, 2025). Available at SSRN: https://ssrn.com/abstract=5190385 or http://dx.doi.org/10.2139/ssrn.5190385

The rapid advancement of artificial intelligence (AI) has had a profound impact on nearly every industry, including legal academia. As AI-driven tools like ChatGPT become more prevalent, they raise critical questions about authorship, academic integrity, and the evolving nature of legal writing. While AI offers promising benefits—such as improved efficiency in research, drafting, and analysis—it also presents ethical dilemmas related to originality, bias, and the potential homogenization of legal discourse. One of the challenges in assessing AI’s influence on legal scholarship is the difficulty of identifying AI-generated content. Traditional plagiarism-detection methods are often inadequate, as AI does not merely copy existing text but generates novel outputs based on probabilistic language modeling. This first-of-its-kind study uses the existence of an AI idiosyncrasy to measure the use of AI in legal scholarship. This provides the first-ever empirical evidence of a sharp increase in the use of AI in legal scholarship, thus raising pressing questions about the proper role of AI in shaping legal scholarship and the practice of law. By applying a novel framework to highlight the rapidly evolving challenges at the intersection of AI and legal academia, this Essay will hopefully spark future debate on the careful balance in this area.



Wednesday, May 07, 2025

Perspective.

https://thedailyeconomy.org/article/three-dimensional-trade-chess-explained/

Three-Dimensional Trade Chess, Explained

Supporters of tariffs claim that mainstream economists don’t understand the game. This “game” has been described by Peter Navarro as three-dimensional chess. I want to take that claim at face value, break down its dimensions, and then evaluate whether Trump’s actions are likely to lead to his desired outcomes. And the answer is “no,” because the administration is playing three quite different, inconsistent games all at once. That isn’t a strategy at all; rather, it’s a scheme for certain failure.

Trump and his advisers claim that their policies have three justifications. For simplicity, I will call these (1) security, (2) reciprocity, and (3) revenue. I’ll present these (as the lawyers put it) arguendo, meaning that for the sake of argument, we will just consider the case for Trump’s actions, and mostly put aside the counterarguments.



Tuesday, May 06, 2025

We knew this was coming.

https://www.bespacific.com/from-help-to-harm-how-the-government-is-quietly-repurposing-everyones-data-for-surveillance/

From help to harm: How the government is quietly repurposing everyone’s data for surveillance

The Conversation, Nicole M. Bennett: “A whistleblower at the National Labor Relations Board reported an unusual spike in potentially sensitive data flowing out of the agency’s network in early March 2025 when staffers from the Department of Government Efficiency, which goes by DOGE, were granted access to the agency’s databases. On April 7, the Department of Homeland Security gained access to Internal Revenue Service tax data. These seemingly unrelated events are examples of recent developments in the transformation of the structure and purpose of federal government data repositories. I am a researcher who studies the intersection of migration, data governance and digital technologies. I’m tracking how data that people provide to U.S. government agencies for public services such as tax filing, health care enrollment, unemployment assistance and education support is increasingly being redirected toward surveillance and law enforcement. Originally collected to facilitate health care, eligibility for services and the administration of public services, this information is now shared across government agencies and with private companies, reshaping the infrastructure of public services into a mechanism of control. Once confined to separate bureaucracies, data now flows freely through a network of interagency agreements, outsourcing contracts and commercial partnerships built up in recent decades. These data-sharing arrangements often take place outside public scrutiny, driven by national security justifications, fraud prevention initiatives, and digital modernization efforts. The result is that the structure of government is quietly transforming into an integrated surveillance apparatus, capable of monitoring, predicting and flagging behavior at an unprecedented scale. Executive orders signed by President Donald Trump aim to remove remaining institutional and legal barriers to completing this massive surveillance system…”





What age is the target? 13, 16, or 18?

https://pogowasright.org/virginia-governor-signs-into-law-bill-restricting-minors-use-of-social-media/

Virginia Governor Signs into Law Bill Restricting Minors’ Use of Social Media

Hunton Andrews Kurth writes:

On May 2, 2025, Virginia Governor Glenn Youngkin signed into law a bill that amends the Virginia Consumer Data Protection Act (“VCDPA”) to impose significant restrictions on minors’ use of social media. The bill comes on the heels of recent children’s privacy amendments to the VCDPA that took effect on January 1, 2025.
The bill amends the VCDPA to require social media platform operators to (1) use commercially reasonable methods (such as a neutral age screen) to determine whether a user is a minor under the age of 16 and (2) limit a minor’s use of the social media platform to one hour per day, unless a parent consents to increase the daily limit.
The bill prohibits social media platform operators from using the information collected to determine a user’s age for any other purpose.

Read more at Privacy & Information Security Law Blog.



Monday, May 05, 2025

Any chance this will stop Trump? (Nope, I don’t think so either.)

https://www.bespacific.com/the-ruling-against-trumps-law-firm-order-shows-how-to-respond-in-this-moment/

The ruling against Trump’s law firm order shows how to respond in this moment

Law Dork – Chris Geidner – “…Judge Howell’s decision striking down the order targeting Perkins Coie can serve as a framework for addressing Trump. On Friday evening [May 2, 2025], U.S. District Judge Beryl Howell took a stab at summarizing what it means to exist in this moment — and how courts need to respond to it — with lessons for the rest of us as well. “[S]ettling personal vendettas by targeting a disliked business or individual for punitive government action is not a legitimate use of the powers of the U.S. government or an American President,” Howell wrote in her decision striking down the first of President Donald Trump’s several executive orders targeting a law firm with broad reprisals due to his personal grievances with the firm. Although Howell’s statement might have been an obvious one when she took the bench nearly 15 years ago, Trump proved why such a statement was necessary less than 48 hours after she issued it. Asked on Meet the Press whether he has to uphold the Constitution, Trump replied, “I don’t know,” later, at least, adding that his “brilliant lawyers … are going to obviously follow what the Supreme Court said.” In a moment when the president is explicitly shoving his oath of office to the wayside without a second thought, the other branches, the states, and the people will have to step up… In her 102-page decision finding that Trump’s attack on the Perkins Coie law firm [Note – I added this link to Perkins Coie site with all documents relevant to this case] was unconstitutional on several grounds, rendering the executive order “null and void,” Howell, an Obama appointee, laid out the facts — many of them undisputed — about how Trump and the Trump administration are acting without regard to basic constitutional protections. She then expanded a temporary restraining order that had blocked significant parts of the order to a permanent injunction blocking the entirety of the executive order.
Like the subsequent law firm executive orders, the Perkins Coie one included a “purpose” section that recounted Trump’s grievances, followed by sections addressing security clearances, government contracting (including the firms’ clients’ contracts), diversity efforts, and personnel (which includes building access restrictions and hiring restrictions). Quoting from key U.S. Supreme Court decisions from recent years — and one seminal case — addressing fundamental limits on government power, Howell concluded her opinion by stating:

Government officials, including the President, may not “subject[] individuals to ‘retaliatory actions’ after the fact for having engaged in protected speech.”  Hous. Cmty. Coll. Sys., 595 U.S. at 474 (quoting Nieves, 587 U.S. at 398). They may neither “use the power of the State to punish or suppress disfavored expression,” Vullo, 602 U.S. at 188, nor engage in the use of “purely personal and arbitrary power,” Yick Wo, 118 U.S. at 370. In this case, these and other foundational protections were violated by EO 14230. On that basis, this Court has found that EO 14230 violates the Constitution and is thus null and void.



Sunday, May 04, 2025

If it exists, it’s taxable.

https://pogowasright.org/the-irs-says-your-digital-life-is-not-your-property/

The IRS Says Your Digital Life Is Not Your Property

Brent Skorup and Laura Bondank write:

When the IRS secretly demands your financial records and private information from a third party, without a warrant, what rights do you still have?
That’s the question at the heart of Harper v. O’Donnell, which is before the Supreme Court. New Hampshire resident Jim Harper is fighting back against the IRS after discovering he was swept up in a massive digital dragnet. The case could redefine how the Fourth Amendment applies in the age of cloud storage—and it may determine whether your emails, location history, search queries, and financial records that tech companies store on your behalf are treated as your property.
In 2016, the IRS ordered the cryptocurrency exchange Coinbase to hand over transaction records of over 14,000 customers. Harper was among them and only learned of the government’s records grab after the IRS sent him a warning letter, mistakenly suggesting he’d underreported his cryptocurrency income. He soon discovered the IRS had his transaction logs, wallet addresses, and public keys—allowing the agency to monitor any future transactions he made.
Harper hadn’t done anything wrong. He’d simply used a legal platform to buy and sell cryptocurrency. But his digital footprint became visible to the government overnight.

Read more at Reason.





Sorry, the AI says we shouldn’t waste time treating you.

https://www.researchgate.net/profile/John-Mathew-26/publication/391318390_Predictive_AI_Models_for_Emergency_Room_Triage/links/68121727ded43315573f521a/Predictive-AI-Models-for-Emergency-Room-Triage.pdf

Predictive AI Models for Emergency Room Triage

Emergency room (ER) triage is a critical process that prioritizes patients based on the severity of their conditions, aiming to ensure timely care in high-pressure environments. However, traditional triage methods are often subjective and may lead to delays in treatment, overcrowding, and suboptimal patient outcomes. This paper explores the role of predictive Artificial Intelligence (AI) models in enhancing ER triage by providing data-driven, real-time insights to optimize decision-making, improve patient prioritization, and streamline resource allocation. We examine various AI techniques, including machine learning (ML), deep learning (DL), and natural language processing (NLP), highlighting their application in analyzing structured and unstructured data such as electronic health records (EHRs), patient vital signs, medical imaging, and clinical notes. The paper also discusses the importance of data preprocessing, including handling missing values, data normalization, and feature selection, to ensure accurate model predictions. Through case studies and clinical implementations, we demonstrate how AI models have been successfully integrated into real-world ER settings to predict patient acuity, early deterioration, and patient outcomes. Ethical, legal, and practical considerations such as data privacy, algorithmic bias, and model transparency are also addressed. The paper concludes with a discussion on the future directions of AI in ER triage, including the integration of multimodal data, real-time monitoring, and personalized care. Predictive AI has the potential to significantly enhance ER efficiency and improve patient care, making it a valuable tool for modern healthcare systems.
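The abstract's preprocessing pipeline (handling missing values, normalization, feature selection) can be sketched in a few lines. This is a minimal illustration of those three generic steps, not the paper's actual models; the vital-sign columns, values, and variance threshold below are hypothetical.

```python
# Illustrative preprocessing steps for a triage-style dataset:
# mean imputation of missing vitals, min-max normalization, and a
# simple variance-based feature selection. All names and thresholds
# here are assumptions for demonstration, not taken from the paper.
from statistics import mean, pvariance

def impute_missing(rows):
    """Replace None values in each column with that column's mean."""
    cols = list(zip(*rows))
    means = [mean(v for v in col if v is not None) for col in cols]
    return [[v if v is not None else means[j] for j, v in enumerate(row)]
            for row in rows]

def normalize(rows):
    """Scale each column to the [0, 1] range (min-max normalization)."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0
             for j, v in enumerate(row)] for row in rows]

def select_features(rows, min_variance=0.01):
    """Keep indices of columns whose variance exceeds a threshold."""
    cols = list(zip(*rows))
    return [j for j, c in enumerate(cols) if pvariance(c) > min_variance]

# Hypothetical vitals per patient: [heart rate, systolic BP, temperature];
# None marks a missing measurement.
raw = [[110, 90, 38.5], [72, None, 36.8], [95, 130, None], [130, 85, 39.1]]
clean = normalize(impute_missing(raw))
kept = select_features(clean)
```

In a real deployment these steps would feed an ML classifier predicting patient acuity, and (as the paper notes) the harder problems are bias, transparency, and clinical validation rather than the preprocessing itself.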





AI is no big deal?

https://scholarship.law.unc.edu/cgi/viewcontent.cgi?article=1508&context=ncjolt

Liability for AI Agents

Artificial intelligence (“AI”) is becoming integral to modern life, fueling innovation while presenting complex legal challenges. Unlike traditional software, AI operates with a degree of autonomy, producing outcomes that its developers or deployers cannot fully anticipate. Advances in underlying technology have further enhanced this autonomy, giving rise to AI agents: systems capable of interacting with their environment independently, often with minimal or no human oversight. As AI decision-making—like that of humans—is inherently imperfect, its increasing deployment inevitably results in instances of harm, prompting the critical question of whether developers and deployers should be held liable as a matter of tort law.

This question is frequently answered in the negative. Many scholars, adopting a framework of technological exceptionalism, assume AI to be uniquely disruptive. Citing the lack of transparency and unpredictability of AI models, they contend that AI challenges conventional notions of causality, rendering existing liability regimes inadequate.

This Article offers the first comprehensive normative analysis of the liability challenges posed by AI agents through a law-and-economics lens. It begins by outlining an optimal AI liability framework designed to maximize economic and societal benefits. Contrary to prevailing assumptions about AI’s disruptiveness, this analysis reveals that AI largely aligns with traditional products. While AI presents some distinct challenges—particularly in its complexity, opacity, and potential for benefit externalization—these factors call for targeted refinements to existing legal frameworks rather than an entirely new paradigm.

This holistic approach underscores the resilience of traditional legal principles in tort law. While AI undoubtedly introduces novel complexities, history shows that tort law has effectively navigated similar challenges before.

For example, AI’s causality issues closely resemble those in medical malpractice cases, where the impact of treatment on patient recovery can be uncertain. The legal system has already addressed these issues, providing a clear precedent for extending similar solutions to AI. Likewise, while the traditional distinction between design and manufacturing defects does not map neatly onto AI, there is a compelling case for classifying inadequate AI training data as a manufacturing defect—aligning AI liability with established legal doctrine.

Taken together, this Article argues that AI agents do not necessitate a fundamental overhaul of tort law but rather call for targeted, nuanced refinements. This analysis offers essential guidance on how to effectively apply existing legal standards to this evolving technology.





Who really done it?

https://ijlr.iledu.in/wp-content/uploads/2025/04/V5I723.pdf

ARTIFICIAL INTELLIGENCE, LEGAL PERSONHOOD, AND DETERMINATION OF CRIMINAL LIABILITY

The broad adoption of artificial intelligence (AI) across vital domains, ranging from autonomous vehicles and financial markets to healthcare diagnostics and legal analytics, has exposed significant gaps in our legal systems when AI-driven errors or malfunctions cause harm. Autonomous systems often involve multiple stakeholders (hardware suppliers, software developers, sensor manufacturers, and corporate overseers), making it difficult to pinpoint who is responsible for a system’s failure. The 2018 Uber autonomous-vehicle crash in Tempe, Arizona, where a pedestrian was misclassified repeatedly by the AI’s perception module and the emergency braking function was disabled, underscores this challenge: with safety overrides turned off and state oversight minimal, liability became entangled among engineers, operators, and corporate policy, not the machine alone.

Traditional criminal law doctrines rest on actus reus (the guilty act) and mens rea (the guilty mind), both premised on human agency and intent. AI entities, however, can execute complex decision-making without consciousness or moral awareness, creating a “responsibility gap” under current frameworks. To bridge this gap, scholars like Gabriel Hallevy have proposed three liability models—perpetration-via-another (holding programmers or users accountable), the natural-probable-consequence model (liability for foreseeable harms), and direct liability (attributing responsibility to AI itself if it meets legal thresholds for actus reus and an analogue of mens rea). Each model offers insight but struggles with AI’s semi-autonomous nature and opacity.

This paper argues against prematurely conferring legal personhood on AI, an approach that risks absolving human actors and diluting accountability. Instead, it advocates for a human-centric policy framework that combines clear oversight duties, mandated explainability measures, and calibrated negligence or strict-liability standards for high-risk AI applications. Such reforms are especially urgent in jurisdictions like India, where AI governance remains nascent. By anchoring liability in human oversight and regulatory clarity rather than on machines themselves, we can ensure that accountability evolves in step with AI’s growing capabilities, safeguarding both innovation and public safety.



Saturday, May 03, 2025

Reiteration is probably necessary.

https://pogowasright.org/a-letter-to-the-privacy-law-community-from-the-scholars-and-teachers-in-leadership/

A Letter to the Privacy Law Community from the Scholars and Teachers in Leadership

May 2, 2025

Dear Colleagues,

In our capacities as scholars, teachers, and leaders of the Privacy Law Scholars Foundation (PLSF) and the Privacy Law Scholars Conference (PLSC), we write to express our grave concern about ongoing threats to privacy and democracy in the United States.

Each of us brings different perspectives on what the law is and should be. Diversity in our views is one hallmark of the privacy law community. That diversity has made PLSC a vibrant incubator of cutting-edge scholarship for nearly twenty years. Although we have different views on many things, we are resolute in our view that lawyers, elected officials, judges, and other government actors must abide by the rule of law. And although we approach the topic of privacy from many different angles, we all agree that privacy is of great and fundamental importance to the rule of law and to democracy in general.





Why is this necessary? Wouldn’t AI be covered under existing rules?

https://www.reuters.com/legal/government/us-judicial-panel-advances-proposal-regulate-ai-generated-evidence-2025-05-02/

US judicial panel advances proposal to regulate AI-generated evidence

A federal judicial panel advanced a proposal on Friday to regulate the introduction of artificial intelligence-generated evidence at trial, with judges expressing a need to swiftly get feedback from the public and lawyers on the draft rule to get ahead of a rapidly evolving technology.

The U.S. Judicial Conference's Advisory Committee on Evidence Rules in Washington, D.C., voted 8-1 in favor of seeking public comment on a draft rule designed to ensure evidence produced by generative AI technology meets the same reliability standards as evidence from a human expert witness.



Friday, May 02, 2025

Thinking for your AI.

https://www.bespacific.com/ai-in-high-stakes-litigation-the-critical-role-of-experienced-attorneys/

AI In High-Stakes Litigation: The Critical Role of Experienced Attorneys

Via LLRX – AI In High-Stakes Litigation: The Critical Role of Experienced Attorneys – Skepticism about AI is not only justified—it’s evidence of good judgment. There are indeed pitfalls to AI use. Inept use of AI won’t help you, but my experience has been that in the hands of skilled lawyers with good judgment, AI is essential to obtaining the best results, for one simple reason: AI is only as good as the question it’s given. This is where senior lawyers excel. Knowing what issue to frame, what clause to focus on, what fact might tip the case—this is precisely what you’ve spent your career developing. Jerry Lawson contends that AI can assist. But it still needs someone to think.





Change is hard.

https://abovethelaw.com/2025/05/law-firms-keep-buying-amazing-tech-lawyers-keep-not-using-it/

Law Firms Keep Buying Amazing Tech… Lawyers Keep Not Using It

The tech people are trying to make lawyering easier — or at least more profitable and secure — but their worst enemy remains the lawyers they aim to help.

“Purchase, install, ignore” remains a disturbingly common pattern for law firms. It’s not universal by any means. Some technology manages to strike a chord with attorneys and gets inserted into the workflow, but many products fail to break through with the masses, to the frustration of tech professionals. And unfortunately, the tech that manages to break through is often the least essential. So as firms chase the generative AI dragon, they’d do well to get their fundamental tech issues sorted first.

“Ground your legal AI strategy firmly in the basics,” declares a new report from iManage. Hopefully the AI-curious will take a look, because the takeaway lurking under the glitzy “AI” in the headline is a plea for better adoption across the board. Especially for the more important tools that currently collect a lot of dust across the industry.



Thursday, May 01, 2025

Similar to the conclusions reached in "The Dynamo and the Computer."

https://sloanreview.mit.edu/article/want-ai-driven-productivity-redesign-work/

Want AI-Driven Productivity? Redesign Work

To capitalize on the promises of artificial intelligence, leaders need to deconstruct jobs and processes, redeploy work, and reconstruct new ways of operating.



Wednesday, April 30, 2025

Darn! I was looking forward to this.

https://www.axios.com/2025/04/29/tariffs-amazon-prime-day-sellers-report

Amazon denies tariff pricing plan after White House calls it "hostile and political"

Amazon now denies reports it planned to list how much tariffs increased products' prices after White House Press Secretary Karoline Leavitt slammed the move as a "hostile and political act."

Why it matters: The reported plan further suggests a growing rift between businesses and President Trump, who has made aggressive tariffs and a global trade war central to his economic agenda.





As I feared, AI is becoming the dominant influencer…

https://www.theatlantic.com/technology/archive/2025/04/great-language-flattening/682627/

The Great Language Flattening

In at least one crucial way, AI has already won its campaign for global dominance. An unbelievable volume of synthetic prose is published every moment of every day—heaping piles of machine-written news articles, text messages, emails, search results, customer-service chats, even scientific research.

Chatbots learned from human writing. Now the influence may run in the other direction. Some people have hypothesized that the proliferation of generative-AI tools such as ChatGPT will seep into human communication, that the terse language we use when prompting a chatbot may lead us to dispose of any niceties or writerly flourishes when corresponding with friends and colleagues. But there are other possibilities. Jeremy Nguyen, a senior researcher at Swinburne University of Technology, in Australia, ran an experiment last year to see how exposure to AI-generated text might change the way people write. He and his colleagues asked 320 people to write a post advertising a sofa for sale on a secondhand marketplace. Afterward, the researchers showed the participants what ChatGPT had written when given the same prompt, and they asked the subjects to do the same task again. The responses changed dramatically.





Better or bitter?

https://coloradosun.com/2025/04/29/colorado-revisions-artificial-intelligence-law-consumer-protection/

Colorado lawmakers, being watched across the country, scale back artificial intelligence law

Senate Bill 318 would reduce the administrative tasks smaller companies must take to protect consumers against discrimination if their AI systems are used to decide who gets a job, housing, personal loans, health care, insurance coverage, educational opportunities, or legal or essential government services. 

The measure also would delay implementation for about a year and make the resource-intensive parts of the AI law apply initially to companies with 500 or more employees worldwide, instead of 50 or more. That would step down gradually until April 1, 2029, when companies with fewer than 100 workers would be exempt.





Perspective?

https://www.zdnet.com/article/anthropic-mapped-claudes-morality-heres-what-the-chatbot-values-and-doesnt/

Anthropic mapped Claude's morality. Here's what the chatbot values (and doesn't)

On Monday, Anthropic released an analysis of over 300,000 anonymized conversations between users and Claude, primarily Claude 3.5 models Sonnet and Haiku, as well as Claude 3. Titled "Values in the wild," the paper maps Claude's morality through patterns in the interactions that revealed 3,307 "AI values." 

As a result, Anthropic discovered a hierarchical values taxonomy of five macro-categories: Practical (the most prevalent), Epistemic, Social, Protective, and Personal (the least prevalent) values. Those categories were then subdivided into values, such as "professional and technical excellence" and "critical thinking."



Tuesday, April 29, 2025

One way to point fingers?

https://punchbowl.news/archive/42925-am/#__amazontodisplaytariffcostsforconsumers__

Amazon to display tariff costs for consumers

Amazon doesn’t want to shoulder the blame for the cost of President Donald Trump’s trade war.

So the e-commerce giant will soon show how much Trump’s tariffs are adding to the price of each product, according to a person familiar with the plan.

The shopping site will display how much of an item’s cost is derived from tariffs – right next to the product’s total listed price.

It’s a bit of a risky move for Amazon. Going on offense against Trump-imposed tariffs may cause its 300 million active customers to direct their anger toward the administration and not the retailer. But it could also irk Trump, who isn’t afraid of retaliating.





No good deed goes unpunished.

https://www.theverge.com/news/657632/take-it-down-act-passes-house-deepfakes

Take It Down Act heads to Trump’s desk

The Take It Down Act is heading to President Donald Trump’s desk after the House voted 409-2 to pass the bill, which will require social media companies to take down nonconsensual sexual images, including AI-generated ones, that are flagged. Trump has pledged to sign it.

The bill is among the only pieces of online safety legislation to successfully pass both chambers in years of furor over deepfakes, child safety, and other issues — but it’s one that critics fear will be used as a weapon against content the administration or its allies dislike. It criminalizes the publication of nonconsensual intimate images (NCII), whether real or computer-generated, and requires social media platforms to have a system to remove those images within 48 hours of being flagged. In his address to Congress this year, Trump quipped that once he signed it, “I’m going to use that bill for myself too, if you don’t mind, because nobody gets treated worse than I do online, nobody.”





Will this become common?

https://mustreadalaska.com/alaska-graduate-surveillance-legislation-passes-senate-under-guise-of-cell-phones-in-schools/

Alaska graduate surveillance legislation passes Senate under guise of ‘cell phones in schools’

An Alaska House of Representatives bill that was originally about cell phone use in schools has passed the Senate after being decorated with numerous amendments having nothing to do with cell phones.

One of those amendments to House Bill 57 has the State of Alaska tracking Alaska high school graduates for 20 years — until they are 38 years old.



Monday, April 28, 2025

Are lawyers on the way out?

https://theconversation.com/people-trust-legal-advice-generated-by-chatgpt-more-than-a-lawyer-new-study-252217

People trust legal advice generated by ChatGPT more than a lawyer – new study

People who aren’t legal experts are more willing to rely on legal advice provided by ChatGPT than by real lawyers – at least, when they don’t know which of the two provided the advice. That’s the key finding of our new research, which highlights some important concerns about the way the public increasingly relies on AI-generated content. We also found the public has at least some ability to identify whether the advice came from ChatGPT or a human lawyer.



Sunday, April 27, 2025

Perspective.

https://jcss.ut.ac.ir/article_101594.html

Once Upon a Time and Research

Background: The nature of scholarly research has undergone a profound transformation in recent decades, transitioning from traditional, library-based inquiry to digitally mediated and increasingly AI-assisted methodologies. This article reflects on that evolution through an autoethnographic lens, drawing upon the author’s personal academic trajectory and long-standing engagement with satire.

Aims: This article explores the evolving landscape of research, communication, and authorship in the digital age, with a particular focus on the transformative role of Artificial Intelligence.

Methodology: The study employs a reflective, autoethnographic methodology combined with AI-assisted literature synthesis. Drawing on personal academic experiences and outputs from ChatGPT and Claude, the author critically examines artificial intelligence’s role in communication research and satire. This qualitative approach blends narrative inquiry with theoretical analysis to explore the epistemological and ethical implications of AI in scholarly authorship.

Discussion: Reflecting on a shift from traditional library-based scholarship to AI-assisted inquiry, the author critically examines how tools like ChatGPT and Claude reshape academic and journalistic practices. The manuscript considers the integration of AI across domains such as human communication, media, sentiment analysis, and translation, while addressing ethical concerns including privacy, authorship, and misinformation. Through both anecdotal reflection and synthesized research, the text interrogates the promises and pitfalls of AI in content generation, especially in the context of satire—a long-standing interest of the author.

Conclusion: Drawing on personal experience and historical theories of satire from figures like Northrop Frye, Juvenal, and Linda Hutcheon, the article positions AI not just as a technological tool but as a cultural force influencing narrative forms and critical thought. While acknowledging AI's generative capabilities, the author emphasizes the enduring need for human discernment, intellectual ownership, and critical interpretation in both academic and creative contexts.





Perspective.

https://academic.oup.com/jiplp/advance-article-abstract/doi/10.1093/jiplp/jpaf024/8115922

The contingencies of copyright and some big questions of our time

At the heart of the copyright debate in the age of generative artificial intelligence (AI) lies a nagging question: will authorship remain the preserve of human creativity, or are we witnessing the emergence of a new, hybrid model of intellectual production that blurs the lines between human creativity and machine?

My intention is by no means to offer a definitive answer but rather to unpack the complexity of this question. By examining past and recent legal cases through a philosophical lens, I explore some of the key conceptual transformations that copyright has undergone in the late modern era, shifting from an anthropocentric logic to new environmental dynamics of networked technology.

Whether or not we are prepared to sacrifice our Promethean spark of creation—one of the key features that define us as human beings—the implications go far beyond copyright itself; they speak to the very core of what it means to write, read, create, and ultimately, to be human.





As a lifelong fan of sci-fi, I can only agree!

https://ejournal.upi.edu/index.php/AJSEE/article/view/82480

The Role of Science Fiction in Enhancing Critical Thinking and Ethical Imagination in Education

Science fiction is a powerful literary genre that fosters critical thinking, ethical reflection, and imaginative inquiry among students. This paper explores how science fiction encourages learners to question modern realities, envision future possibilities, and engage with complex technological and societal issues. Drawing on examples from authors such as Isaac Asimov, Ray Bradbury, and Ursula K. Le Guin, the study highlights how science fiction facilitates discussions on artificial intelligence, censorship, social structures, and human identity. The genre enables students to assess hypothetical scenarios, consider moral implications, and cultivate empathy by engaging with diverse perspectives and futuristic dilemmas. Science fiction is not merely entertainment; it is a vital educational tool that prepares students to think critically and creatively in a rapidly evolving world.