Always interesting.
https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2200/RRA2249-1/RAND_RRA2249-1.pdf
Finding a Broadly Practical Approach for Regulating the Use of Facial Recognition by Law Enforcement
Communities across the United States are grappling with the implications of law enforcement organizations’ and other government agencies’ use of facial recognition (FR) technology. Although the purported benefits of FR are clear, they have yet to be measured and weighed against the existing risks, which are also substantial. Given the variety of ways in which law enforcement can use FR, the full benefit-to-risk trade-off is difficult to account for, leading some municipalities to ban the use of FR by law enforcement while others have no clear regulations. This report provides an overview of what is known about FR use in law enforcement and offers a road map to help policymakers sort through the various risks and benefits of different types of FR use.
We categorize the identified risks associated with FR technology and its use by law enforcement, including accuracy, bias, the scope of the search (i.e., surveillance versus investigation), data-sharing and storage practices, privacy issues, human and civil rights, officer misuse, law enforcement reactions to FR results (e.g., street stops), public acceptance, and unintended consequences. These concerns are discussed in detail in Chapter 3 and summarized here.
A thoughtful AI?
https://www.tandfonline.com/doi/full/10.1080/15027570.2023.2180184
The Need for a Commander
… One article in this double issue of the Journal of Military Ethics asks what an AI (artificial intelligence) commander would look like. The underlying question is whether we are more or less inevitably moving toward a situation where AI-driven systems will come to make strategic decisions and hence be the place where the buck stops.
I don’t get it…
https://www.science.org/doi/abs/10.1126/science.add2202
Leveraging IP for AI governance
The rapidly evolving and expanding use of artificial intelligence (AI) in all aspects of daily life is outpacing regulatory and policy efforts to guide its ethical use (1). Governmental inaction can be explained in part by the challenges that AI poses to traditional regulatory approaches (1). We propose adapting existing legal frameworks and mechanisms to create a new and nuanced system for enforcing ethics in AI models and training datasets. Our model leverages two radically different approaches to managing intellectual property (IP) rights. The first is copyleft licensing, which is traditionally used to enable widespread sharing of created content, including open-source software. The second is the “patent troll” model, which is often derided for suppressing technological development. Although diametrically opposed in isolation, these combined models enable the creation of a “troll for good” capable of enforcing the ethical use of AI training datasets and models.
I think I think so too.
https://indexlaw.org/index.php/rdb/article/view/7547
DEEP LEARNING AND THE RIGHT TO EXPLANATION: TECHNOLOGICAL CHALLENGES TO LEGALITY AND DUE PROCESS OF LAW
This article studies the right to explanation, which is extremely important in times of fast technological evolution and the use of deep learning for the most varied decision-making procedures based on personal data. Its main hypothesis is that the right to explanation is closely linked to due process of law and legality, serving as a safeguard for those who need to contest automatic decisions made by algorithms, whether in judicial contexts, in general public administration contexts, or even in private business contexts. Through the hypothetical-deductive method, a qualitative and transdisciplinary approach, and bibliographic review, it concludes that opacity, characteristic of the most complex deep learning systems, can impair access to justice, due process of law, and the adversarial principle. In addition, it is important to develop strategies to overcome opacity, mainly (but not only) through the work of experts. Finally, the Brazilian LGPD provides for the right to explanation, but the lack of clarity in its text demands that the judiciary and researchers also work to better build its regulation.
Tools worth testing?
https://www.makeuseof.com/accurate-ai-text-detectors/
The 8 Most Accurate AI Text Detectors You Can Try
As language models like GPT continue to improve, it is becoming increasingly difficult to differentiate between AI-generated and human-written text. But in some contexts, such as academia, it is necessary to ensure that text isn't written by AI. This is where AI text detectors come into play. Although none of the currently available tools detects with complete certainty (nor do any claim to), a few of them do provide fairly accurate results. So here we list the eight most accurate AI text detectors you can try.