Value is as value does?
https://www.ft.com/content/f964fe30-cb6e-427d-b7a7-9adf2ab8a457?shareType=nongift
The MicroStrategy copycats: companies turn to bitcoin to boost share price
Firms buy ‘kryptonite for short sellers’ as they try to emulate US software group’s success
Software business-turned-bitcoin hoarder MicroStrategy is inspiring a host of companies to buy the cryptocurrency and hold it in their corporate treasuries, in a manoeuvre aimed at boosting their flagging share prices.
Pharmaceutical companies and advertisers are among 78 listed companies around the world that are following the US group’s example in buying the coins to hold in place of cash, according to data from crypto security company Coinkite.
MicroStrategy’s founder Michael Saylor has made bitcoin his company’s primary treasury reserve with an aggressive buying spree since 2020. Saylor believes bitcoin’s value will keep rising, saying: “We are going to Mars.”
Having strapped its share price to the fortunes of bitcoin, MicroStrategy is now the world’s largest corporate holder.
New thinking?
https://www.yalelawjournal.org/pdf/DubalYLJForumEssay_hrhm14dd.pdf
Data Laws at Work
In recognition of the material, physical, and psychological harms arising from the growing use of automated monitoring and decision-making systems for labor control, jurisdictions around the world are considering new digital-rights protections for workers. Unsurprisingly, legislatures frequently turn to the European Union (EU) for inspiration. The EU, through the passage of the General Data Protection Regulation in 2016, the Artificial Intelligence Act in 2024, and the Platform Work Directive in 2024, has positioned itself as the leader in digital rights and, in particular, in providing affirmative digital rights for workers whose labor is mediated by “a platform.” However, little is known about the efficacy of these laws.
This Essay begins to fill this knowledge gap. Through close analyses of the laws and of successful strategic litigation brought by platform workers under them, I argue that the current EU framework contains two significant shortcomings. First, the laws primarily position workers as liberal, autonomous subjects, and in doing so they make a category error: workers, unlike consumers, are subordinated by law and doctrine to the firms for which they labor. As a result, the liberal rights that these laws privilege, such as transparency and consent, are insufficient to mitigate the material harms produced through automated labor management. Second, this Essay argues that by leaning primarily on transparency principles to detect, prevent, and stop violations of labor and employment law, EU data laws do not account for the ways in which workplace algorithmic management systems often create new harms that existing laws of work do not address. These harms, which fundamentally disrupt norms about worker pay, evaluation, and termination, arise from the relational logic of data-processing systems: that is, the way these systems evaluate workers by dynamically comparing them to others rather than objectively, based on fulfillment of ascribed duties. Based on these analyses, I propose that future data laws should be modeled on older approaches to workplace regulation: rather than merely seeking to elucidate or assess problematic data processes, they should aim to restrict these processes. The normative north star of these laws should be proscribing the digital practices that cause the harms, rather than merely shining a light on their existence.
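A minimal sketch of the “relational logic” the Essay describes, assuming a toy gig-platform setting: the same worker, with unchanged performance, passes a fixed duty-based test but can fail a relational one purely because the comparison pool shifted. All names, metrics, and thresholds below are illustrative assumptions, not drawn from the Essay or from any real platform's system.

```python
# Illustrative only: contrasts a duty-based (absolute) evaluation with the
# relational evaluation the Essay attributes to algorithmic management.
from dataclasses import dataclass


@dataclass
class Worker:
    name: str
    completion_rate: float  # fraction of assigned tasks completed (hypothetical metric)


def absolute_evaluation(worker: Worker, duty_threshold: float = 0.90) -> bool:
    """Duty-based test: did the worker meet a fixed, ascribed standard?"""
    return worker.completion_rate >= duty_threshold


def relational_evaluation(worker: Worker, pool: list[Worker],
                          cut_fraction: float = 0.25) -> bool:
    """Relational test: survive only if not in the bottom fraction of the pool."""
    ranked = sorted(pool, key=lambda w: w.completion_rate)
    cutoff = int(len(ranked) * cut_fraction)
    return ranked.index(worker) >= cutoff


alice = Worker("alice", 0.92)
weak_pool = [alice, Worker("b", 0.80), Worker("c", 0.75), Worker("d", 0.70)]
strong_pool = [alice, Worker("b", 0.97), Worker("c", 0.96), Worker("d", 0.95)]

print(absolute_evaluation(alice))                 # True: fixed duties fulfilled
print(relational_evaluation(alice, weak_pool))    # True: peers score lower
print(relational_evaluation(alice, strong_pool))  # False: same work, new peers
```

Under the absolute rule Alice's outcome depends only on her own conduct; under the relational rule it depends on who else happens to be in the pool, which is the instability in pay, evaluation, and termination norms the Essay flags.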
Can we do it without an AI assistant?
https://academic.oup.com/policyandsociety/advance-article/doi/10.1093/polsoc/puaf001/7997395
Governance of Generative AI
The rapid and widespread diffusion of generative artificial intelligence (AI) has unlocked new capabilities and changed how content and services are created, shared, and consumed. This special issue builds on the 2021 Policy and Society special issue on the governance of AI by focusing on the legal, organizational, political, regulatory, and social challenges of governing generative AI. This introductory article lays the foundation for understanding generative AI and underscores its key risks, including hallucination, jailbreaking, data training and validation issues, sensitive information leakage, opacity, control challenges, and design and implementation risks. It then examines the governance challenges of generative AI, such as data governance, intellectual property concerns, bias amplification, privacy violations, misinformation, fraud, societal impacts, power imbalances, limited public engagement, public sector challenges, and the need for international cooperation. The article then sets out a comprehensive framework for governing generative AI, emphasizing the need for adaptive, participatory, and proactive approaches. The articles in this special issue stress the urgency of developing innovative and inclusive approaches to ensure that generative AI development is aligned with societal values. They explore the need to adapt data governance and intellectual property laws, propose a complexity-based approach to responsible governance, analyze how the dominance of Big Tech is exacerbated by generative AI developments and how this affects policy processes, highlight the shortcomings of technocratic governance and the need for broader stakeholder participation, propose new regulatory frameworks informed by AI safety research and by lessons from other industries, and examine the societal impacts of generative AI.
To counter AI wrongs? I think, therefore I have rights?
https://link.springer.com/article/10.1007/s00146-025-02184-2
Human rights for robots? The moral foundations and epistemic challenges
As we step into an era in which artificial intelligence systems are predicted to surpass human capabilities, a number of profound ethical questions have emerged. One such question, which has gained some traction in recent scholarship, concerns the ethics of human treatment of robots and the thought-provoking possibility of robot rights. The present article explores this question, with a particular focus on the notion of human rights for robots. It argues that if we accept the widely held view that moral status and rights (including human rights) are grounded in certain cognitive capacities, then it follows that intelligent machines could, in principle, acquire these entitlements once they come to possess the requisite properties. In support of this perspective, the article outlines the moral foundations of human rights and examines several main objections, arguing that they do not successfully negate the prospect of considering robots as potential holders of human rights. Subsequently, it turns to the key epistemic challenges associated with moral status and rights for robots, outlining the main difficulties in discerning the presence of mental states in artificial entities and offering some practical considerations for approaching these challenges. The article concludes by emphasizing the importance of establishing a suitable framework for moral decision-making under uncertainty in the context of human treatment of artificial entities, given the gravity of the epistemic problems surrounding the concepts of artificial consciousness, moral status, and rights.