Does every tool have to be certified as compliant?
https://www.pogowasright.org/u-of-iowa-issues-reminder-about-use-of-artificial-technology-ai-and-hipaa/
U. of Iowa Issues Reminder About Use of Artificial Intelligence (AI) and HIPAA
From the U. of Iowa, and kudos to them for educating and warning their employees:

As artificial intelligence (AI) continues to evolve, it is important to remember that most AI tools and services, including ChatGPT, are not HIPAA compliant. Therefore, it is not appropriate to use these tools or services in conjunction with patient protected health information (PHI).
In order to use an application that processes, transmits, or stores patient information, such as an AI service, a proper security review, contracting, and a business associate agreement must be completed. If you have a request to use an AI system, please speak with your department director about how to pursue the request and initiate a security review.
Entering patient information into an AI system could result in a HIPAA violation if the above conditions have not been met. For example, using ChatGPT to draft a patient letter or using an unapproved AI transcription service requires sharing PHI with the application. Beware of these types of situations.
For additional information, please see the IT Security Guidelines for the secure and ethical use of Artificial Intelligence. Please contact the Joint Office for Compliance with questions at compliance[at]healthcare.uiowa.edu or 319-384-8282.
Watch the right hand while the left does something completely unexpected...
https://www.wired.com/story/meta-artificial-intelligence-data-deletion/
Artists Allege Meta’s AI Data Deletion Request Process Is a ‘Fake PR Stunt’

This summer, Meta began taking requests to delete data from its AI training. Artists say this new system is broken and fake. Meta says there is no opt-out program.
Why do I get the feeling that this might be “biased” in the UK’s favor?
https://www.telegraph.co.uk/business/2023/10/28/rishi-sunak-launch-ai-chatbot-pay-taxes-access-pensions/
Sunak to launch AI chatbot for Britons to pay taxes and access pensions

Rishi Sunak is planning to launch an AI chatbot to help the public pay taxes and access pensions in what would be the biggest use of advanced artificial intelligence by Whitehall to date...
Is it possible to gather/identify exculpatory evidence at the same time? Yes, Bob was in the area, but it looks like he was just driving through.
https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/pra2.835
Geofence Warrants, Geospatial Innovation, and Implications for Data Privacy

Geospatial technologies collect, analyze, and produce information about the earth, humans, and objects through a convergence of geographic information systems, remote sensors, and global positioning systems. A microanalysis of Google's U.S. Patent 9,420,426, “Inferring a current location based on a user location history” (Duleba et al., 2016), reveals how geospatial innovation employs artificial intelligence (AI) to train computer-vision models and to infer and impute geospatial data. The technical disclosures in patents offer a view inside black-boxed digital technologies from which to examine the potential privacy implications for datafied citizens in a networked society. In patented geospatial innovation, user agency is subverted through AI and anonymous knowledge production.
Presently, the Fourth Amendment does not adequately protect citizens in a networked society. Data privacy legal cases are interpreted through a lens of inescapability (Tokson, 2020), which assumes perpetual agency to consent to sharing data. In short, agency-centered privacy models are insufficient where AI can anonymously produce knowledge about an individual. The privacy implications are exemplified by geofence warrants: an investigative technique that searches location history to identify suspects within a geofenced region in the absence of other evidence. This analysis demonstrates that digital privacy rights must expand to datafication models (Mai, 2016) centered on knowledge production.
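To make the patent's core idea concrete, here is a minimal sketch of inferring a likely current location from a timestamped location history. It is a toy frequency model over assumed inputs, not the method actually disclosed in U.S. Patent 9,420,426.

```python
from collections import Counter
from datetime import datetime

# Toy location history: (timestamp, place). A real system would first
# cluster raw GPS fixes into named places; here the places are given.
history = [
    (datetime(2023, 10, 2, 8, 15), "office"),
    (datetime(2023, 10, 2, 19, 5), "home"),
    (datetime(2023, 10, 3, 8, 20), "office"),
    (datetime(2023, 10, 3, 12, 40), "cafe"),
    (datetime(2023, 10, 4, 8, 10), "office"),
]

def infer_location(history, query_hour, window=1):
    """Guess the most frequently visited place near a given hour of day.

    Ignores midnight wraparound and recency weighting; a toy model only.
    """
    nearby = [place for ts, place in history
              if abs(ts.hour - query_hour) <= window]
    if not nearby:
        return None
    return Counter(nearby).most_common(1)[0][0]

print(infer_location(history, query_hour=8))   # -> office
print(infer_location(history, query_hour=19))  # -> home
```

Even this crude sketch shows why the paper worries about "anonymous knowledge production": the inference requires no fresh consent or new observation, only previously collected history.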
Another step on the long road to eliminating lawyers?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4582753
Generative Contracts

This Article examines how consumers can use generative artificial intelligence to write their own contracts. Popularized by “chatbots” such as OpenAI’s ChatGPT, generative AI is a form of artificial intelligence that uses statistical models trained on massive amounts of data to generate human-like content such as text, images, music, and more. Generative AI is already being integrated into the practice of law and the legal profession. In the context of contracting and transactional law, most generative AI tools are focused on reviewing and managing large volumes of business contracts. Thus far, little attention has been given to using generative AI to create entire contracts from scratch. This Article aims to fill this gap by exploring the use of “generative contracts”: contracts that are written entirely by a generative AI system based on prompts from the user. For example, a user could ask a generative AI model: “Write me a contract to sell my used car.” The Article uses OpenAI’s GPT-4 to generate drafts of a wide range of contracts, from an employment agreement to a residential lease to a bill of sale. While relatively simple, the contracts written by GPT-4 are functional and enforceable. These results suggest that generative contracts present an opportunity to improve access to justice for consumers who are currently underserved by the legal system. To examine how consumers might use generative contracts in practice, the Article engages in a proof-of-concept case study of two hypothetical consumers who use GPT-4 to write and modify their own car sale contract. Drawing on this case study, the Article analyzes the implications of generative contracts for consumers, lawyers, and the practice of law. While generative AI holds great promise for consumers and access to justice, it threatens to disrupt the legal profession and poses numerous technological, privacy, and regulatory challenges. The Article maps the benefits and risks of generative contracts as the world approaches a future of automated contracting.
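For readers curious what this workflow looks like in code, here is a minimal sketch of prompting GPT-4 for a first-draft car sale contract through OpenAI's Python SDK. The prompt wording, system message, and temperature are illustrative assumptions, not details taken from the Article.

```python
# Minimal sketch of a "generative contract" request, assuming the OpenAI
# Python SDK and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a careful transactional drafting assistant."},
        {"role": "user",
         "content": "Write me a contract to sell my used car. Include the "
                    "price, an as-is condition clause, and signature blocks."},
    ],
    temperature=0.2,  # lower temperature for more conservative drafting
)

print(response.choices[0].message.content)
```

The Article's hypothetical consumers would then iterate, feeding the draft back with follow-up prompts ("add a deposit clause") rather than engaging a lawyer for each revision.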
What’s in your dataset? Would corrections change your ‘reality’?
https://arxiv.org/abs/2310.15848
On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms

Artificial Intelligence (AI) has made its way into various scientific fields, providing astonishing improvements over existing algorithms for a wide variety of tasks. In recent years, there have been severe concerns over the trustworthiness of AI technologies, and the scientific community has focused on developing trustworthy AI algorithms. However, the machine and deep learning algorithms popular in the AI community today depend heavily on the data used during their development. These learning algorithms identify patterns in the data, learning the behavioral objective, so any flaws in the data have the potential to translate directly into the algorithms. In this study, we discuss the importance of Responsible Machine Learning Datasets and propose a framework to evaluate datasets through a responsible rubric. While existing work focuses on the post-hoc evaluation of algorithms for their trustworthiness, we provide a framework that considers the data component separately to understand its role in the algorithm. We discuss responsible datasets through the lens of fairness, privacy, and regulatory compliance and provide recommendations for constructing future datasets. After surveying over 100 datasets, we use 60 datasets for analysis and demonstrate that none of them is immune to issues of fairness, privacy preservation, and regulatory compliance. We provide modifications to the “datasheets for datasets” with important additions for improved dataset documentation. With governments around the world enacting data protection laws, the way datasets are created in the scientific community requires revision. We believe this study is timely and relevant in today's era of AI.
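To give a flavor of what rubric-style dataset evaluation can look like, here is a toy audit in the spirit the abstract describes. The field names and checks below are assumptions for illustration; the paper's actual rubric is far more extensive.

```python
# Toy rubric audit: flag datasheet fields that are undocumented or failing.
# The REQUIRED_DATASHEET_FIELDS list is an illustrative assumption, not the
# paper's rubric.
REQUIRED_DATASHEET_FIELDS = [
    "collection_method",
    "consent_obtained",
    "license",
    "demographic_coverage",
    "pii_removed",
]

def audit_datasheet(datasheet: dict) -> list[str]:
    """Return the rubric items the datasheet leaves undocumented or failing."""
    return [field for field in REQUIRED_DATASHEET_FIELDS
            if not datasheet.get(field)]

example = {
    "collection_method": "web scrape",
    "license": "CC-BY-4.0",
    "pii_removed": False,  # documented, but fails the privacy check
}

print(audit_datasheet(example))
# -> ['consent_obtained', 'demographic_coverage', 'pii_removed']
```

The point of auditing the datasheet rather than the trained model is exactly the paper's: data flaws are cheaper to catch before they translate into algorithms.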
Perspective.
https://link.springer.com/chapter/10.1007/978-981-99-6327-0_8
Issues that May Arise from Usage of AI Technologies in Criminal Justice and Law Enforcement
Due to constant and swift technological advancements, artificial intelligence technologies have become an integral part of our daily lives and, as a result, have started to impact various areas of our society. Legal systems proved to be no exception, as many countries took steps to implement AI technologies in their legal systems in order to improve law enforcement and criminal justice, making changes in various processes including, but not limited to, preventing crimes, locating perpetrators, accelerating judicial processes, and improving the accuracy of judicial decisions. While the usage of AI technologies has improved criminal justice and law enforcement processes in various respects, concerning instances have demonstrated that AI technologies may reach biased, discriminatory, or simply inaccurate conclusions that can cause harm to people. This realization becomes even more alarming considering that criminal justice and law enforcement consist of extremely critical and fragile processes where a wrong decision may cost someone their freedom or, in some cases, their life. In addition to discrimination and bias, automated decision-making processes also raise a number of other issues, such as a lack of transparency and accountability, the jeopardizing of the presumption of innocence, and concerns regarding personal data protection, cyber-attacks, and technical challenges. Implementing AI technologies in legal processes should be encouraged, since criminal justice and law enforcement could benefit from recent advancements in technology, and it is possible that more accurate, more just, and faster judicial processes can be created. However, it should be carefully considered that implementing AI systems that are still in their infancy into legal processes that can lead to severe consequences may cause serious and, in some cases, irrevocable damage. This study aims to address current and possible issues in the usage of AI technologies in criminal justice and law enforcement, offering solutions where possible.