Tuesday, February 18, 2025

Good advice.

https://www.bespacific.com/back-up-everything-even-if-elon-musk-isnt-looking-at-it/

Back Up Everything. Even if Elon Musk Isn’t Looking at It.

The New York Times  [unpaywalled] – “In recent weeks, Elon Musk and his aides have gained access to many federal agencies’ systems and unknown amounts of data. Many readers have written in to share their fears that the agencies — and the personal data they possess on hundreds of millions of taxpayers — are now vulnerable. When people tinker with vital systems, things can go wrong. New vulnerabilities can emerge that thieves could exploit, or existing tax or loan payments could disappear. And one wrong move can bring a whole website down for days or longer. The level of risk isn’t clear, and in uncertain situations, it’s tempting to do something to feel that you’re protecting yourself. That instinct is perfectly rational. But don’t just download your history of paying into Social Security or freeze access to your credit files because of the politics of now. Back up everything important, everywhere you can. Do this at least once a year or so. It’s just good hygiene. Having multiple copies of all of the things that help you run your life brings a certain kind of peace that lacks a perfect word in English, but it’s the quality or state of being well sorted. Here’s a guide for what to do.”
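
If part of “everything important” lives on your own computer, the once-a-year habit can be scripted. Below is a minimal sketch in Python, not a recommendation of any particular tool; the source folder and external-drive path are placeholder examples you would replace with your own:

    # Minimal yearly-backup sketch: copy a folder of important documents
    # to a dated folder on an external drive. The paths are examples only.
    import shutil
    from datetime import date
    from pathlib import Path

    SOURCE = Path.home() / "Documents" / "vital-records"   # example source folder
    DEST_ROOT = Path("/Volumes/BackupDrive")                # example external drive

    def back_up() -> Path:
        target = DEST_ROOT / f"vital-records-{date.today():%Y-%m-%d}"
        shutil.copytree(SOURCE, target)  # raises if a backup with today's date already exists
        return target

    if __name__ == "__main__":
        print(f"Backed up to {back_up()}")

The same idea works with rsync or any sync tool; what matters is that a copy exists somewhere other than the original.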





Surveillance is easy…

https://databreaches.net/2025/02/18/the-myth-of-jurisdictional-privacy/

The Myth of Jurisdictional Privacy

Understanding Global Surveillance

In discussions of online privacy, you’ll often hear passionate debates about jurisdiction, with particular focus on avoiding the “Five Eyes” intelligence alliance countries (USA, UK, Canada, Australia, and New Zealand). The argument goes that by choosing a service provider outside these nations, you can somehow escape their surveillance reach.

But let’s pause and think about that for a moment. In a world where digital information flows freely across borders, where undersea cables connect continents, and where global tech infrastructure is deeply interconnected, does it really make sense to think that physical jurisdiction offers meaningful protection from surveillance?





Perspective. (Interesting metric)

https://www.zdnet.com/article/knowledge-management-takes-center-stage-in-the-ai-journey/

Knowledge management takes center stage in the AI journey

According to the Ark Invest Big Ideas 2025 report, agents will increase enterprise productivity via software. Companies that deploy agents should be able to increase unit volume with the same workforce and optimize their workforce toward higher-value activities.

Artificial intelligence (AI) will also supercharge knowledge work. Through 2030, Ark expects the amount of software deployed per knowledge worker to grow considerably as businesses invest in productivity solutions. AI agents are poised to accelerate the adoption of digital applications and create an epochal shift in human-computer interaction.



Monday, February 17, 2025

How AI will make us all dumber…

https://nmn.gl/blog/ai-and-learning

New Junior Developers Can’t Actually Code

We’re at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning.

Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares.

The foundational knowledge that used to come from struggling through problems is just… missing.

We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.





A security perspective.

https://thehackernews.com/2025/02/cisos-expert-guide-to-ctem-and-why-it.html

CISO's Expert Guide To CTEM And Why It Matters

Cyber threats evolve—has your defense strategy kept up? A new free guide available here explains why Continuous Threat Exposure Management (CTEM) is the smart approach for proactive cybersecurity.



Sunday, February 16, 2025

Worth thinking about.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5131058

Large Language Models and International Law

Large Language Models (LLMs) have the potential to transform public international lawyering. ChatGPT and similar LLMs can do so in at least five ways: (i) helping to identify the contents of international law; (ii) interpreting existing international law; (iii) formulating and drafting proposals for new legal instruments or negotiating positions; (iv) assessing the international legality of specific acts; and (v) collating and distilling large datasets for international courts, tribunals, and treaty bodies.

The article uses two case studies to show how LLMs may work in international legal practice. First, it uses LLMs to identify whether particular behavioral expectations rise to the level of customary international law. In doing so, it tests LLMs’ ability to identify persistent objectors and a more egalitarian collection of state practice, as well as their proclivity to produce orthogonal or inaccurate answers. Second, it explores how LLMs perform in producing draft treaty texts, ranging from a U.S.-China extradition treaty to a treaty banning the use of artificial intelligence in nuclear command and control systems.

Based on our analysis of the five potential functions and the two more detailed case studies, the article identifies four roles for LLMs in international law: as collaborator, confounder, creator, or corruptor. In some cases, LLMs will be collaborators, complementing existing international lawyering by drastically improving the scope and speed with which users can assemble and analyze materials and produce new texts. At the same time, without careful prompt engineering and curation of results, LLMs may generate confounding outcomes, leading international lawyers down inaccurate or ambiguous paths. This is particularly likely when LLMs fail to accurately explain or defend particular conclusions. Further, LLMs also hold surprising potential to help to create new law by offering inventive proposals for treaty language or negotiations.

Most importantly, we highlight the potential for LLMs to corrupt international law by fostering automation bias in users. That is, even where analog work by international lawyers would produce different results, LLM results may soon be perceived to accurately reflect the contents of international law. The implications of this potential are profound. LLMs could effectively realign the contents and contours of international law based on the datasets they employ. The widespread use of LLMs may even incentivize states and others to push their desired views into those datasets to corrupt LLM outputs. Such risks and rewards lead us to conclude with a call for further empirical and theoretical research on LLMs’ potential to assist, reshape, or redefine international legal practice and scholarship.





Not sure I agree.

https://thejoas.com/index.php/thejoas/article/view/263

The Intersection of Ethics and Artificial Intelligence: A Philosophical Study

The rapid development of artificial intelligence (AI) has had a significant impact on various aspects of human life, ranging from the economy and education to health. However, these advances also raise complex ethical challenges, such as privacy concerns, algorithmic bias, moral responsibility, and potential misuse of technology. This research aims to explore the intersection between ethics and artificial intelligence through a philosophical approach. The method used in this study is a qualitative literature study (library research), examining various classical and contemporary ethical theories and their application in the context of AI development. The results of the study show that AI presents new moral dilemmas that cannot be fully answered by traditional ethical frameworks. For example, the concept of responsibility in AI becomes blurred when decisions are taken by autonomous systems without human intervention. Additionally, bias in AI training data indicates the need for strict ethical oversight in the design and implementation of this technology. The study also highlights the need for a multidisciplinary approach to drafting ethical guidelines that can accommodate future AI developments. Thus, this research is expected to enrich the discourse on AI ethics and offer a deeper philosophical perspective on the moral challenges involved.





You only get out what you design in… (Garbage in, garbage out.)

https://www.livescience.com/technology/artificial-intelligence/older-ai-models-show-signs-of-cognitive-decline-study-shows

Older AI models show signs of cognitive decline, study shows

People increasingly rely on artificial intelligence (AI) for medical diagnoses because of how quickly and efficiently these tools can spot anomalies and warning signs in medical histories, X-rays and other datasets before they become obvious to the naked eye. But a new study published Dec. 20, 2024 in the BMJ raises concerns that AI technologies like large language models (LLMs) and chatbots, like people, show signs of deteriorated cognitive abilities with age.

"These findings challenge the assumption that artificial intelligence will soon replace human doctors," the study's authors wrote in the paper, "as the cognitive impairment evident in leading chatbots may affect their reliability in medical diagnostics and undermine patients' confidence."



Saturday, February 15, 2025

Sound familiar?

https://thedailyeconomy.org/article/how-did-108-economists-predict-mileis-results-exactly-wrong/

How Did 108 Economists Predict Milei’s Results Exactly Wrong?

In November 2023, the warning came, as clear as an omen.

A political upstart was seeking office and, if elected, his policies were likely to cause “devastation” in his own country and “severely reduce policy space in the long run.”

The threat was a chainsaw-wielding disciple of Austrian economics from Argentina who embraced laissez-faire economics. The predictions of doom came not from Old Testament prophets, but 108 economists who signed a public letter saying his anachronistic ideas had long ago been discredited.

On November 19, voters elected as their next president the wild-haired Milei, who defeated his Peronist opponent by a ten-point margin. Milei was inaugurated on December 10 and wasted little time implementing his laissez-faire agenda, which included an immediate five-percent (chainsaw) slash in government spending.

More reforms followed.

Public work programs were put on hold, welfare programs were slashed, and subsidies were eliminated. State-owned companies were privatized and hundreds of regulations were cut. Tax codes were simplified and levies on exports were lifted or reduced. Labor laws were relaxed. The number of government ministries was reduced from 18 to 9 (¡afuera!) and a job freeze was implemented on federal positions. Tens of thousands of public employees were given pink slips.

On the monetary side, the currency was sharply devalued and the central bank was ordered to halt its money-printing.





Worth a try?

https://venturebeat.com/ai/perplexity-just-made-ai-research-crazy-cheap-what-that-means-for-the-industry/

Perplexity just made AI research crazy cheap—what that means for the industry

Perplexity offers five free queries daily to all users. Pro subscribers pay $20 monthly for 500 daily queries and faster processing — a price point that could force larger AI companies to explain why their services cost up to 100 times more.

https://www.perplexity.ai/





Employees will try any tool that looks like it will help, without much concern for possible negatives.

https://www.bbc.com/news/articles/cglyjn7le2ko

Law firm restricts AI after 'significant' staff use

An international law firm has blocked general access to several artificial intelligence (AI) tools after it found a "significant increase in usage" by its staff.

In an email seen by the BBC, a senior director of Hill Dickinson, which employs more than a thousand people across the world, warned staff about the use of AI tools.

The firm said much of the usage was not in line with its AI policy, and going forward the firm would only allow staff to access the tools via a request process.

In the email, Hill Dickinson's chief technology officer said the law firm had detected more than 32,000 hits to the popular chatbot ChatGPT over a seven-day period in January and February.

During the same timeframe, there were also more than 3,000 hits to the Chinese AI service DeepSeek, which was recently banned from Australian government devices over security concerns.

It also highlighted almost 50,000 hits to Grammarly, the writing assistance tool.



Friday, February 14, 2025

It can’t happen to me? – OR – My AI went to a better law school than I did?

https://www.theregister.com/2025/02/14/attorneys_cite_cases_hallucinated_ai/

Lawyers face judge's wrath after AI cites made-up cases in fiery hoverboard lawsuit

Demonstrating yet again that uncritically trusting the output of generative AI is dangerous, attorneys involved in a product liability lawsuit have apologized to the presiding judge for submitting documents that cite non-existent legal cases.

The lawsuit began with a complaint filed in June, 2023, against Walmart and Jetson Electric Bikes over a fire allegedly caused by a hoverboard [PDF]. The blaze destroyed the plaintiffs' house and caused serious burns to family members, it is said.

Last week, Wyoming District Judge Kelly Rankin issued an order to show cause [PDF] that directs the plaintiffs' attorneys to explain why they should not be sanctioned for citing eight cases that do not exist in a January 22, 2025 filing.





Perspective.

https://www.schneier.com/blog/archives/2025/02/ai-and-civil-service-purges.html

AI and Civil Service Purges

Donald Trump and Elon Musk’s chaotic approach to reform is upending government operations. Critical functions have been halted, tens of thousands of federal staffers are being encouraged to resign, and congressional mandates are being disregarded. The next phase: The Department of Government Efficiency reportedly wants to use AI to cut costs. According to The Washington Post, Musk’s group has started to run sensitive data from government systems through AI programs to analyze spending and determine what could be pruned. This may lead to the elimination of human jobs in favor of automation. As one government official who has been tracking Musk’s DOGE team told the Post, the ultimate aim is to use AI to replace “the human workforce with machines.” (Spokespeople for the White House and DOGE did not respond to requests for comment.)

Using AI to make government more efficient is a worthy pursuit, and this is not a new idea. The Biden administration disclosed more than 2,000 AI applications in development across the federal government. For example, FEMA has started using AI to help perform damage assessment in disaster areas. The Centers for Medicare and Medicaid Services has started using AI to look for fraudulent billing. The idea of replacing dedicated and principled civil servants with AI agents, however, is new—and complicated.



(Related)

https://www.bespacific.com/the-venn-diagram-of-trumps-authoritarian-actions/

The Venn Diagram of Trump’s Authoritarian Actions

Kottke.org: “Professor Christina Pagel of University College London has mapped the actions of the Trump administration’s first few weeks into a Venn diagram with “five broad domains that correspond to features of proto-authoritarian states”:

  • Undermining Democratic Institutions & Rule of Law; Dismantling federal government

  • Dismantling Social Protections & Rights; Enrichment & Corruption

  • Suppressing Dissent & Controlling Information

  • Attacking Science, Environment, Health, Arts & Education

  • Aggressive Foreign Policy & Global Destabilization

This diagram is available as a PDF and the information is also contained in this categorized table. Links and commentary from Pagel can be found on Bluesky as well. Also very helpful is this list of authoritarian actions that the Trump administration has taken, each with a link to the relevant news story. I will be referring back to this list often in the coming weeks.”



Thursday, February 13, 2025

A security question?

https://www.schneier.com/blog/archives/2025/02/doge-as-a-national.html

DOGE as a National Cyberattack

In the span of just weeks, the US government has experienced what may be the most consequential security breach in its history—not through a sophisticated cyberattack or an act of foreign espionage, but through official orders by a billionaire with a poorly defined government role. And the implications for national security are profound.

First, it was reported that people associated with the newly created Department of Government Efficiency (DOGE) had accessed the US Treasury computer system, giving them the ability to collect data on and potentially control the department’s roughly $5.45 trillion in annual federal payments.

Then, we learned that uncleared DOGE personnel had gained access to classified data from the US Agency for International Development, possibly copying it onto their own systems. Next, the Office of Personnel Management—which holds detailed personal data on millions of federal employees, including those with security clearances—was compromised. After that, Medicaid and Medicare records were compromised.

Meanwhile, only partially redacted names of CIA employees were sent over an unclassified email account. DOGE personnel are also reported to be feeding Education Department data into artificial intelligence software, and they have also started working at the Department of Energy.

This essay was written with Davi Ottenheimer, and originally appeared in Foreign Policy.





Perspective.

https://www.zdnet.com/article/ai-chatbots-distort-the-news-bbc-finds-see-what-they-get-wrong/

AI chatbots distort the news, BBC finds - see what they get wrong

Four major AI chatbots are churning out "significant inaccuracies" and "distortions" when asked to summarize news stories, according to a BBC investigation.

OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity AI were each presented with news content from BBC's website and then asked questions about the news.

The report details that the BBC asked chatbots to summarize 100 news stories, and journalists with relevant expertise rated the quality of each answer. 

According to the findings, 51% of all AI-produced answers about the news had significant issues, while 19% of the AI-generated answers "introduced factual errors, such as incorrect factual statements, numbers, and dates."

Additionally, the investigation found that 13% of the quotes from BBC articles were either altered from the original source or not present in the cited article at all.



Wednesday, February 12, 2025

Part of the Trump guidebook…

https://www.bespacific.com/all-the-ways-elon-musk-is-breaking-the-law-explained-by-a-law-professor/

All the ways Elon Musk is breaking the law, explained by a law professor

Vox [unpaywalled]- “There are a lot of them. Elon Musk’s Department of Government Efficiency is moving fast and breaking the law — lots of laws. The scope of Trump and Musk’s sweeping effort to purge the federal workforce and slash government spending has shocked the political world — in part for its ambition, but also in part because of its disregard for the law.  David Super, an administrative law professor at Georgetown Law School, recently told the Washington Post that so many of Musk’s moves were “so wildly illegal” that he seemed to be “playing a quantity game, and assuming the system can’t react to all this illegality at once.” I reached out to Super so he could walk through this quantity game — so he could take me on a tour of all of the apparent lawbreaking in Musk’s effort so far. A transcript of our conversation, condensed and edited for clarity, follows….”





A hint of things to come?

https://natlawreview.com/article/court-training-ai-model-based-copyrighted-data-not-fair-use-matter-law#google_vignette

Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law

In what may turn out to be an influential decision, Judge Stephanos Bibas ruled as a matter of law in Thomson Reuters v. Ross Intelligence that creating short summaries of law to train Ross Intelligence’s artificial intelligence legal research application both infringes Thomson Reuters’ copyrights and is not fair use. Judge Bibas had previously ruled that infringement and fair use were issues for the jury but changed his mind: “A smart man knows when he is right; a wise man knows when he is wrong.”

At issue in the case was whether Ross Intelligence directly infringed Thomson Reuters’ copyrights in its case law headnotes, which are organized by Westlaw’s proprietary Key Number system. Thomson Reuters contended that Ross Intelligence’s contractor copied those headnotes to create “Bulk Memos.” Ross Intelligence used the Bulk Memos to train its competing AI-powered legal research tool. Judge Bibas ruled that (i) the West headnotes were sufficiently original and creative to be copyrightable, and (ii) some of the Bulk Memos used by Ross were so similar that they infringed as a matter of law.



Tuesday, February 11, 2025

I think thinking is worth thinking about. (AI looks at what a lot of someones have already thought.)

https://www.bespacific.com/the-impact-of-generative-ai-on-critical-thinking/

The Impact of Generative AI on Critical Thinking

A new paper, The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers, from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can “result in the deterioration of cognitive faculties that ought to be preserved.” “[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the researchers wrote.

“The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.”





Probably not a general guide to copyright, but suggestive?

https://www.cnet.com/tech/services-and-software/this-company-got-a-copyright-for-an-image-made-entirely-with-ai-heres-how/

This Company Got a Copyright for an Image Made Entirely With AI. Here's How

The image, called "A Single Piece of American Cheese," was created using Invoke's AI editing platform.





Perspective.

https://www.psychologytoday.com/intl/blog/a-hovercraft-full-of-eels/202502/the-double-edged-sword-of-artificial-intelligence

The Double-Edged Sword of Artificial Intelligence

Each new iteration of a large language model (LLM) feels like a step forward—better at understanding nuanced questions, more capable of providing detailed answers, and increasingly adept at sounding, well, human. These advancements are celebrated as breakthroughs in artificial intelligence (AI), and for good reason.

But we also have to remember that LLMs themselves are just tools trained by humans, regardless of how sophisticated they get. They cannot evaluate the truth of the responses they produce. As I’ve argued in the past, their responses are nothing but bullsh*t, which is information that is communicated with little regard for its accuracy. And a recent study by Zhou et al. (2024) suggests that as LLMs get more sophisticated, they may also get better at giving us plausible-sounding incorrect answers. In other words, as these systems become more educated, they also become better bullsh*tters.



Monday, February 10, 2025

Perspective.

https://blog.samaltman.com/three-observations

Three Observations

We continue to see rapid progress with AI development. Here are three observations about the economics of AI:

1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.

2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
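
A quick arithmetic check on the second observation, comparing annualized rates. This is a minimal sketch in Python; treating “early 2023 to mid-2024” as roughly 18 months is an assumption:

    # Back-of-the-envelope comparison of the cost-decline rates quoted above.
    def annualized(total_factor: float, months: float) -> float:
        """Convert an improvement factor over `months` months into a per-12-month factor."""
        return total_factor ** (12 / months)

    moore = annualized(2, 18)        # Moore's law: 2x every 18 months  -> ~1.59x per year
    ai_trend = annualized(10, 12)    # stated trend: 10x every 12 months -> 10x per year
    gpt4_drop = annualized(150, 18)  # quoted 150x drop over an assumed ~18 months -> ~28x per year

    print(f"Moore's law, annualized:      {moore:.2f}x")
    print(f"Stated AI cost trend:         {ai_trend:.0f}x")
    print(f"GPT-4 to GPT-4o, annualized:  {gpt4_drop:.0f}x")

On those figures the stated AI cost trend runs well ahead of Moore’s law on an annual basis, and the GPT-4 to GPT-4o drop outpaced even the stated 10x-per-year trend.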





Perspective.

https://www.techpolicy.press/anatomy-of-an-ai-coup/

Anatomy of an AI Coup

Artificial intelligence (AI) is a technology for manufacturing excuses. While lacking clear definitions or tools for assessment, AI has nonetheless seized the imagination of politicians and managers across government, academia, and industry. But what AI is best at producing is justifications. If you want a labor force, a regulatory bureaucracy, or accountability to disappear, you simply say, “AI can do it.” Then, the conversation shifts from explaining why these things should or should not go away to questions about how AI would work in their place.

We are in the midst of a political coup that, if successful, would forever change the nature of American government. It is not taking place in the streets. There is no martial law. It is taking place cubicle by cubicle in federal agencies and in the mundane automation of bureaucracy. The rationale is based on a productivity myth that the goal of bureaucracy is merely what it produces (services, information, governance) and can be isolated from the process through which democracy achieves those ends: debate, deliberation, and consensus.

AI then becomes a tool for replacing politics. The Trump administration frames generative AI as a remedy to "government waste." However, what it seeks to automate is not paperwork but democratic decision-making. Elon Musk and his Department of Government Efficiency (DOGE) are banking on a popular but false delusion that word prediction technologies make meaningful inferences about the world. They are using it to sidestep Congressional oversight of the budget, which is, Constitutionally, the allotment of resources to government programs through representative politics.





Economics of AI…

https://www.infodocket.com/2025/02/09/report-microsoft-offers-authors-5000-to-train-ai-on-their-books/

Report: “Microsoft Offers Authors $5,000 to Train AI On Their Books”

The company has proposed a licensing agreement with publisher HarperCollins that would pay $5,000 per book for AI training rights. Authors would receive half of that amount, or $2,500 per book, according to the publisher.


Microsoft has pursued “an incredible strategy,” says Brown University economics professor Emily Oster. “They’re trying to establish the idea that the rights to train on books are worth $5,000. You can’t do that by going to the latest bestseller. So you do that by going to the backlist — to people who aren’t collecting royalties — and telling them, ‘Look, would you like some free money?’”



Sunday, February 09, 2025

Value is as value does?

https://www.ft.com/content/f964fe30-cb6e-427d-b7a7-9adf2ab8a457?shareType=nongift

The MicroStrategy copycats: companies turn to bitcoin to boost share price

Firms buy ‘kryptonite for short sellers’ as they try to emulate US software group’s success

Software business-turned-bitcoin hoarder MicroStrategy is inspiring a host of companies to buy the cryptocurrency and hold it in their corporate treasuries, in a manoeuvre aimed at boosting their flagging share prices.

Pharmaceutical companies and advertisers are among 78 listed companies around the world that are following the US group’s example in buying the coins to hold in place of cash, according to data from crypto security company Coinkite.

MicroStrategy’s founder Michael Saylor has made bitcoin his company’s primary treasury reserve with an aggressive buying spree since 2020. Saylor believes bitcoin’s value will keep rising, saying: “We are going to Mars.”

Having strapped its share price to the fortunes of bitcoin, MicroStrategy is now the world’s largest corporate holder.





New thinking?

https://www.yalelawjournal.org/pdf/DubalYLJForumEssay_hrhm14dd.pdf

Data Laws at Work

In recognition of the material, physical, and psychological harms arising from the growing use of automated monitoring and decision-making systems for labor control, jurisdictions around the world are considering new digital-rights protections for workers. Unsurprisingly, legislatures frequently turn to the European Union (EU) for inspiration. The EU, through the passage of the General Data Protection Regulation in 2016, the Artificial Intelligence Act in 2024, and the Platform Work Directive in 2024, has positioned itself as the leader in digital rights, and, in particular, in providing affirmative digital rights for workers whose labor is mediated by “a platform.” However, little is known about the efficacy of these laws.

This Essay begins to fill this knowledge gap. Through close analyses of the laws and successful strategic litigation by platform workers under these laws, I argue that the current EU framework contains two significant shortcomings. First, the laws primarily position workers as liberal, autonomous subjects, and in doing so, they make a category error: workers, unlike consumers, are subordinated by law and doctrine to the firms for which they labor. As a result, the liberal rights that these laws privilege—such as transparency and consent—are insufficient to mitigate the material harms produced through automated labor management. Second, this Essay argues that by leaning primarily on transparency principles to detect, prevent, and stop violations of labor and employment law, EU data laws do not account for the ways in which workplace algorithmic management systems often create new harms that existing laws of work do not address. These harms, which fundamentally disrupt norms about worker pay, evaluation, and termination, arise from the relational logic of data-processing systems—that is, the way that these systems evaluate workers by dynamically comparing them to others, rather than by evaluating them objectively based on fulfillment of ascribed duties. Based on these analyses, I propose that future data laws should be modeled on older approaches to workplace regulation: rather than merely seeking to elucidate or assess problematic data processes, they should aim to restrict these processes. The normative north star of these laws should be proscribing the digital practices that cause the harms, rather than merely shining a light on their existence.





Can we do it without an AI assistant?

https://academic.oup.com/policyandsociety/advance-article/doi/10.1093/polsoc/puaf001/7997395

Governance of Generative AI

The rapid and widespread diffusion of generative artificial intelligence (AI) has unlocked new capabilities and changed how content and services are created, shared, and consumed. This special issue builds on the 2021 Policy and Society special issue on the governance of AI by focusing on the legal, organizational, political, regulatory, and social challenges of governing generative AI. This introductory article lays the foundation for understanding generative AI and underscores its key risks, including hallucination, jailbreaking, data training and validation issues, sensitive information leakage, opacity, control challenges, and design and implementation risks. It then examines the governance challenges of generative AI, such as data governance, intellectual property concerns, bias amplification, privacy violations, misinformation, fraud, societal impacts, power imbalances, limited public engagement, public sector challenges, and the need for international cooperation. The article then highlights a comprehensive framework to govern generative AI, emphasizing the need for adaptive, participatory, and proactive approaches. The articles in this special issue stress the urgency of developing innovative and inclusive approaches to ensure that generative AI development is aligned with societal values. They explore the need for adaptation of data governance and intellectual property laws, propose a complexity-based approach for responsible governance, analyze how the dominance of Big Tech is exacerbated by generative AI developments and how this affects policy processes, highlight the shortcomings of technocratic governance and the need for broader stakeholder participation, propose new regulatory frameworks informed by AI safety research and learning from other industries, and highlight the societal impacts of generative AI.





To contrast AI wrongs? I think, therefore I have rights?

https://link.springer.com/article/10.1007/s00146-025-02184-2

Human rights for robots? The moral foundations and epistemic challenges

As we step into an era in which artificial intelligence systems are predicted to surpass human capabilities, a number of profound ethical questions have emerged. One such question, which has gained some traction in recent scholarship, concerns the ethics of human treatment of robots and the thought-provoking possibility of robot rights. The present article explores this very aspect, with a particular focus on the notion of human rights for robots. It argues that if we accept the widely held view that moral status and rights (including human rights) are grounded in certain cognitive capacities, then it follows that intelligent machines could, in principle, acquire these entitlements once they come to possess the requisite properties. In support of this perspective, the article outlines the moral foundations of human rights and examines several main objections, arguing that they do not successfully negate the prospect of considering robots as potential holders of human rights. Subsequently, it turns to the key epistemic challenges associated with moral status and rights for robots, outlining the main difficulties in discerning the presence of mental states in artificial entities and offering some practical considerations for approaching these challenges. The article concludes by emphasizing the importance of establishing a suitable framework for moral decision-making under uncertainty in the context of human treatment of artificial entities, given the gravity of the epistemic problems surrounding the concepts of artificial consciousness, moral status, and rights.