I sense an uptick in the number of articles on the topic of AI-authored/invented IP. Could we have some resolution in my lifetime? Perhaps defining the author (or inventor) as the human who asked AI the question?
https://academic.oup.com/jiplp/advance-article/doi/10.1093/jiplp/jpae119/7965768
Understanding authorship in Artificial Intelligence-assisted works
The advent of generative Artificial Intelligence (AI) has brought about a significant shift in the way works are created, with the blurring of boundaries between human and machine-driven creation processes becoming a prominent challenge. This leads to the question of whether authorship in such works exists and, if so, to whom it should be attributed.
This article focusses on an analysis of existing case law of the Court of Justice of the European Union and selected EU Member State courts, in order to identify what to consider when examining the authorship of AI-assisted works in the European copyright system.
Ultimately, a four-step test is proposed to aid in assessing whether authorship exists in a concrete work and to whom it should be attributed. The first step asks which persons are involved in the creation process, before determining, as a second step, the kind of AI system used. The third step analyses whether the persons involved exercised sufficient subjective judgment in the composition of the work; the final step determines whether they had adequate control over its execution.
(Related)
https://search.informit.org/doi/abs/10.3316/informit.T2025011900005201175533311
'AI is not an inventor': 'Thaler v Comptroller of Patents, Designs and Trademarks' and the patentability of AI
The increasing use of Artificial Intelligence (AI) technologies in inventive processes raises numerous patent law issues, including whether AI can be an inventor under law and who owns AI-generated inventions. The UK Supreme Court decision in 'Thaler v Comptroller of Patents, Designs and Trademarks' has provided an ultimate answer to this question: AI cannot be an inventor for the purposes of patent law. This note argues, first, that while such a human-centric approach to inventorship might discourage the use and development of AI technologies with autonomous invention capabilities, it will help retain active human involvement in technologically supported inventive processes and continue to foster human ingenuity. Second, although the Court focused on what patent law is rather than on what the law should be, the decision will be influential in the ongoing discussions on the future of patent law and will make it more difficult to expand patent law to incorporate non-human inventors. Third, the decision has opened, or revealed, the gaps in patent law that the emergence of AI technologies has created and for which new legal solutions will be needed, especially in relation to the ownership of AI-assisted inventions and the validation of inventorship claims.
Can AI be trusted? An ongoing question.
https://ejournal.bamala.org/index.php/yudhistira/article/view/251
Digital Epistemology: Evaluating the Credibility of Knowledge Generated by AI
The rise of Artificial Intelligence (AI) as a key player in knowledge production has transformed traditional epistemological frameworks, necessitating a critical evaluation of its credibility and trustworthiness. This paper investigates the emerging domain of digital epistemology, focusing on how AI challenges established notions of validity, reliability, and trust in knowledge generation. By examining philosophical perspectives and interdisciplinary insights, we identify three primary challenges to AI-generated knowledge: algorithmic biases, dependence on flawed or incomplete datasets, and the opacity of decision-making processes. These challenges raise significant concerns about the ethical and epistemological implications of relying on AI in contexts such as healthcare, law, and policy-making. Furthermore, this study explores the mechanisms required to evaluate the credibility of AI systems, emphasizing the importance of transparency, explainability, and accountability in fostering trust. We argue that the epistemological relationship between AI and its human users hinges on balancing technological capabilities with ethical considerations, ensuring that AI serves as a tool to complement rather than undermine human autonomy. The findings underscore the need for a robust digital epistemology that adapts classical principles of knowledge to the complexities of the digital era. This framework can guide the development of AI systems that prioritize ethical decision-making and credible knowledge outputs, addressing both theoretical and practical concerns. By bridging philosophy and technology, this paper offers critical insights into the evolving role of AI in shaping how knowledge is produced, validated, and trusted in the digital age.
(Related)
https://jurnal.fs.umi.ac.id/index.php/alpamet/article/view/855
Artificial Intelligence and Lockean Epistemology
This research explores the intersection of artificial intelligence (AI) and John Locke’s epistemology, examining how advancements in AI challenge traditional notions of knowledge and of the knowing subject. The increasing sophistication of AI systems, which simulate human-like reasoning and learning processes, blurs the boundaries between human cognition and machine intelligence. This study investigates the potential connections between AI and Locke’s theory of knowledge, which emphasizes that knowledge arises from sensory experience and reflection. Beginning with a review of Locke’s epistemological principles, including the role of empirical data and the distinction between primary and secondary qualities, the research evaluates how AI’s reliance on vast datasets, machine learning algorithms, and neural networks aligns with, or diverges from, Locke’s framework. It questions whether AI systems can possess knowledge in the Lockean sense and examines the epistemic status of AI-generated outputs in terms of reliability, trustworthiness, and biases in training data. The role of human oversight in validating AI-generated insights is also critically assessed. Ultimately, this study contributes to the ongoing discourse on the nature and limits of knowledge in the AI era, challenging traditional epistemological frameworks. By integrating Locke’s principles with contemporary AI developments, it advances the debate on what it means to "know" in a world increasingly mediated by artificial agents, offering a nuanced perspective on the implications of AI for human understanding and the evolving landscape of knowledge.
Useful?
https://digitalcommons.wcl.american.edu/facsch_lawrev/2285/
A Stepwise Approach to Copyright and Generative Artificial Intelligence
In order to understand whether generative AI may infringe copyrights, one must first have a sound grounding in the technical complexities of the “generative AI supply chain.” This Article not only explains the technology in terms accessible to a legal audience, but also explores the doctrinal complexities of how generative AI maps onto existing copyright law. The authors do an admirable job in accomplishing both goals.
First I’ve seen on this topic.
https://houstonhealthlaw.scholasticahq.com/article/128623-artificial-intelligence-and-the-hipaa-privacy-rule-a-primer
Artificial Intelligence and the HIPAA Privacy Rule: A Primer
Consider a medical chatbot that a hospital makes available to patients scheduled for colonoscopies. The chatbot uses artificial intelligence (AI) to conduct online conversations via text or text-to-speech in lieu of providing patients direct contact with a live person. The chatbot, which was designed to improve patient compliance with unpleasant bowel preparation, has been shown to increase the number of people who have successful colonoscopies and decrease the number of people who fail to show for their procedures. Given that patients do share sensitive, bowel-related information with the chatbot, one question is whether federal or state laws protect the privacy and security of their information.
Further consider an AI-driven symptom checker that a health system makes available on its website.
… Consider, too, a physician who uses ChatGPT to generate automated summaries of medical histories and patient interactions.
… Further consider a health insurer that uses AI to review and, more often than not, deny Medicare Advantage claims for elderly beneficiaries notwithstanding their physicians’ documentation showing that their health care services are medically necessary.
… Finally, consider the number of large technology companies and startups that are working with health industry participants, including hospitals and health insurers, to research, create, and deploy machine learning healthcare solutions.