Some concepts worthy of adoption?
https://ijitc.com/index.php/my/article/view/97
AI SURVEILLANCE, PRIVACY (ḤIFẒ AL-ʿIRḌ), AND HUMAN DIGNITY IN ISLAMIC JURISPRUDENCE
The rapid proliferation of artificial intelligence (AI) surveillance technologies presents unprecedented challenges to privacy and human dignity, necessitating examination through the lens of Islamic jurisprudence. This study explores the intersection of AI-enabled surveillance systems with the Islamic principle of Ḥifẓ al-ʿIrḍ (protection of honor and privacy), which constitutes a fundamental objective (maqṣid) of Shariah. Through a comprehensive analysis of classical Islamic legal texts, contemporary fatwas, and current AI surveillance practices, this research investigates how Islamic ethical frameworks can address modern surveillance challenges while preserving human dignity (karāmah al-insān). The study employs a qualitative methodology integrating classical Islamic legal theory (uṣūl al-fiqh) with contemporary technology ethics literature. Findings reveal that while Islam permits certain forms of surveillance for legitimate purposes (maṣlaḥah), AI surveillance systems often violate fundamental Islamic principles of privacy, consent, and human dignity through mass data collection, algorithmic bias, and unwarranted intrusion into private spaces. The research concludes that Islamic jurisprudence offers robust ethical guidelines for regulating AI surveillance, emphasizing the inviolability of private life, the requirement of legitimate necessity, proportionality in monitoring, and accountability mechanisms. This study contributes to the emerging discourse on Islamic digital ethics and provides practical recommendations for developing Shariah-compliant AI surveillance governance frameworks.
Will AI have my interests at heart?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6173424
The Privacy Veil
Privacy law rests on a failed premise. The notice-and-consent paradigm assumes users will read privacy policies, understand them, and make informed choices. They cannot. Reading the policies an average American encounters would require thirty working days per year. Nearly half of American adults lack the literacy to comprehend them. Even informed users cannot negotiate take-it-or-leave-it terms with entities holding asymmetric power. The problem is structural, not remediable through clearer disclosures or stronger enforcement.
This Article proposes the Autonomous Virtual Identity Agent, a legally recognized AI intermediary that acts on users' behalf in the digital environment. The AVIA reads privacy policies, negotiates terms, monitors compliance, and exercises user rights at a scale no individual could achieve. Drawing on the jurisprudence of legal fictions, the Article develops a complete legal architecture including registration requirements, fiduciary duties, and veil-piercing standards, offering a model statute for legislative adoption.
Interesting approach. (Roman law?)
https://feqh.semnan.ac.ir/article_10456_en.html
Explanation of the smart slave theory in artificial intelligence contracts
Many jurists have framed the relationship between artificial intelligence and its principal through the theory of smart tools. Given the tremendous progress of artificial intelligence, objections to this theory have been raised, prompting alternative hypotheses, one of which is the theory of the intelligent slave. On this hypothesis, given the similarities between the "slave permitted to trade" and a "learning artificial intelligence system", the nature of artificial intelligence contracts can be treated the same as permitted-slave contracts in jurisprudence and law, while the pillars of contractual validity, namely capacity, intention, and consent, are present in these contracts. In this article, after explaining the technology of artificial intelligence contracts and the nature of contracts concluded by a permitted slave, the intelligent slave theory is analyzed and examined with a descriptive-analytical method; as a result, this theory is preferred over the popular theory of smart tools.
The AI lawyer?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6164388
The Dead Law Theory: The Perils of Simulated Interpretation
Judges now consult ChatGPT about what statutes mean. The scholarly response treats this as a reliability problem. Reliability is beside the point. LLMs generate text by predicting probable token sequences, manipulating symbols without accessing what those symbols mean. But syntax cannot generate semantics. Computational legal interpretation does not fail because the technology is immature. It fails because it is a category error. A theory that fixes meaning in historical usage and treats interpretation as empirical recovery cannot resist algorithms that measure historical usage patterns. The progression from dictionaries to corpus databases to generative models follows originalism's empirical commitments to their logical end.
AI-generated content saturates the corpora on which future models train, and the resulting degradation eliminates marginal claims first: those upon which life and liberty depend. Computational methods did not contaminate originalist interpretation. Originalism was already a jurisprudence that simulated meaning while discarding the semantic content that interpretation requires. The machines simply made the method hyperreal.