Pendulums swing and embarrassed agencies overreact.
https://www.pogowasright.org/the-fbi-is-spending-millions-on-social-media-tracking-software/
The FBI is spending millions on social media tracking software

Aaron Schaffer reports:

Social media users seemed to foreshadow the Jan. 6 attack on the U.S. Capitol — and the FBI apparently missed it. Now, the FBI is doubling down on tracking social media posts, spending millions of dollars on thousands of licenses to powerful social media monitoring technology that privacy and civil liberties advocates say raises serious concerns.

The FBI has contracted for 5,000 licenses to use Babel X, software made by Babel Street that lets users search social media sites within a geographic area and by other parameters.

Read more at The Washington Post.
Who’d a thunk it?
https://link.springer.com/article/10.1007/s10506-022-09312-z
Perceptions of Justice by Algorithms

Artificial Intelligence and algorithms are increasingly able to replace human workers in cognitively sophisticated tasks, including ones related to justice. Many governments and international organizations are discussing policies related to the application of algorithmic judges in courts. In this paper, we investigate public perceptions of algorithmic judges. Across two experiments (N = 1,822) and an internal meta-analysis (N = 3,039), our results show that even though court users acknowledge several advantages of algorithms (i.e., cost and speed), they trust human judges more and have greater intentions to go to court when a human (vs. an algorithmic) judge adjudicates. Additionally, we demonstrate that the extent to which individuals trust algorithmic and human judges depends on the nature of the case: trust in algorithmic judges is especially low when legal cases involve emotional complexities (vs. technically complex or uncomplicated cases).
Taking a seat on the bandwagon?
https://ojs.stanford.edu/ojs/index.php/intersect/article/view/2168
On Facial Recognition Technology

Since the beginning of the 2000s, Facial Recognition Technology (FRT) has become significantly more accurate and more accessible. Both government and commercial entities use it in increasingly innovative ways. News agencies use it to spot celebrities at big events. Car companies install it on dashboards to alert drivers who are falling asleep at the wheel. Governments have used it to track Covid-19 patients’ compliance with quarantine regimes, or to reunite missing children with their families. However, as the use of the technology has become more widespread, the controversies around it have also grown. The technology offers tremendous opportunities, but there are reasons to be concerned about its impact on privacy and civil liberties if it is not used properly. In this paper, I give a brief introduction to facial recognition technology, look separately at its commercial and government applications, and present my argument for why the US needs federal legislation on FRT.
Making AI work for you.
https://research.cbs.dk/en/publications/ai-ethics-regulation-amp-firm-implications
AI Ethics, Regulation & Firm Implications

As the widespread application of artificial intelligence permeates an increasing number of businesses, ethical issues such as algorithmic bias, data privacy, and transparency have gained increased attention, raising renewed calls for policy and regulatory changes to address the potential consequences of AI systems and products. In this article, we build on original research to outline distinct approaches to AI governance and regulation and discuss the implications for firms and their managers in terms of adopting AI and ethical practices going forward. We examine how managers’ perception of AI ethics increases with the potential of AI-related regulation, but at the cost of AI diffusion. Such trade-offs are likely to be associated with industry-specific characteristics, which holds implications for how new and intended AI regulations could affect industries differently. Overall, we recommend that businesses embrace new managerial standards and practices that detail AI liability under varying circumstances, even before it is prescribed by regulation. Stronger internal audits, as well as third-party examinations, would provide more information for managers, reduce managerial uncertainty, and aid the development of AI products and services that are subject to higher ethical, legal, and policy standards.
Following advances(?) in my field…
https://rucore.libraries.rutgers.edu/rutgers-lib/67194/
The use of artificial intelligence in auditing and forensics
This dissertation examines the use of artificial intelligence for auditing and forensics. The first essay is a conceptual analysis; the second is quantitative and experimental.

The first essay focuses on the ethics of AI. Accounting firms are reporting the use of Artificial Intelligence (AI) in their auditing and advisory functions, citing benefits such as time savings, faster data analysis, increased levels of accuracy, more in-depth insight into business processes, and enhanced client service. AI, an emerging technology that aims to mimic humans’ cognitive skills and judgment, promises competitive advantages to the adopter. As a result, all the Big 4 firms are reporting its use and their plans to continue with this innovation in audit planning, risk assessments, tests of transactions, analytics, and the preparation of audit work-papers, among other uses. As the applications and benefits of AI continue to emerge within the auditing profession, there is a gradual awakening to the fact that unintended consequences may also arise. Thus, this essay responds to the call of numerous researchers to explore the benefits of AI and investigate the ethical implications of the use of this emerging technology. By combining two futuristic ethical frameworks, the study forecasts the ethical implications of using AI in auditing, given its inherent features, nature, and intended functions. The essay provides a conceptual analysis of AI’s practical ethical and social issues, drawing on past studies and on inferences from auditing firms’ reported use of the technology. Beyond exploring these issues, it discusses responsibility for the policy and governance of this emerging technology.
The second essay focuses on the use of machine learning in auditing. Fraud risk assessment is challenging for external auditors because of its complexity and because external auditors are usually outsiders looking in. This essay examines a framework that combines natural language processing and machine learning to detect fraud red flags in corporate communication. The framework uses natural language processing to measure the temporal sentiments and emotions conveyed in corporate communication, along with the topics discussed that point to fraud red flags, and relies on machine learning to identify temporal changes in the derived quantitative measures. When applied to a real corporate communication dataset from a firm with known financial statement fraud, the framework correctly flagged the implicated departments, demonstrating how auditors can use it for fraud risk assessments. Additionally, the essay validates the framework against a panel of CPA-certified forensics experts: given the same information, the panel expressed fraud risk assessments consistent with the framework’s. The essay uses an ensemble of machine learning methods to analyze temporal changes in the sentiments, emotions, and topics discussed by individuals within an organization to detect fraud cues. Its key contribution is showing how machine learning and textual analysis can detect fraud risk cues in an organization before financial statements are issued (i.e., the approach does not rely on elements of the issued financial statements and can therefore be used in continuous auditing). Since the methodology begins with unsupervised machine learning, the study demonstrates an automated approach to labeling a digital communication dataset for machine learning to detect fraud cues. The unsupervised approach makes the framework generalizable: no context-specific pre-labeled dataset is required, although an initial fraud word list is needed, as discussed in chapter 3. A literature review by Sánchez-Aguayo et al. (2021) identifies a gap in studies that combine fraud detection, human behavior, machine learning, and fraud theory; this second essay cuts across all four areas.
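To make that pipeline concrete, here is a minimal sketch of the kind of framework the second essay describes. The tooling is my assumption, not the dissertation’s: NLTK’s VADER stands in for the sentiment measurement, a toy fraud word list stands in for the list discussed in chapter 3, and scikit-learn’s IsolationForest stands in for the ensemble of machine learning methods. Column names and the contamination threshold are likewise illustrative.

# Hypothetical sketch: score sentiment and fraud-word hits in dated
# corporate messages, aggregate per department and month, and flag
# anomalous periods. Stand-in tools, not the dissertation's methods.
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")
from sklearn.ensemble import IsolationForest

# Toy fraud word list; a real one would come from the literature.
FRAUD_WORDS = {"writeoff", "restate", "override", "conceal"}

def score_messages(messages: pd.DataFrame) -> pd.DataFrame:
    """Add sentiment and fraud-word-hit columns.

    Expects columns: 'date' (datetime), 'department' (str), 'text' (str).
    """
    sia = SentimentIntensityAnalyzer()
    out = messages.copy()
    # Compound score in [-1, 1] summarizes each message's sentiment.
    out["sentiment"] = out["text"].map(lambda t: sia.polarity_scores(t)["compound"])
    out["fraud_hits"] = out["text"].map(
        lambda t: sum(w in t.lower() for w in FRAUD_WORDS)
    )
    return out

def flag_anomalous_periods(messages: pd.DataFrame) -> pd.DataFrame:
    """Aggregate by department and month, then flag temporal outliers."""
    scored = score_messages(messages)
    monthly = (
        scored.set_index("date")
        .groupby("department")[["sentiment", "fraud_hits"]]
        .resample("M")
        .mean()
        .dropna()
    )
    # Unsupervised outlier detection over the derived measures; no
    # pre-labeled dataset is required, matching the essay's premise.
    flags = IsolationForest(contamination=0.05, random_state=0).fit_predict(monthly)
    return monthly[flags == -1]  # department-months worth an auditor's look

Applied to a firm’s message archive, the flagged department-months would be starting points for the kind of fraud risk assessment the essay describes, available before any financial statements are issued.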
It’s hard to strangle someone by texting…
https://dilbert.com/strip/2022-04-10