Saturday, August 20, 2022

I’m sure there is no possible scenario that justifies paying the ransom…

https://www.cpomagazine.com/cyber-security/patchwork-of-us-state-regulations-becomes-more-complex-as-florida-north-carolina-ban-ransomware-payments/

Patchwork of US State Regulations Becomes More Complex as Florida, North Carolina Ban Ransomware Payments

The issue of banning ransomware payments has been contentious and hotly debated in governments throughout the world in the last few years, particularly as the problem seemed to grow out of control during the Covid-19 pandemic. In the US, the federal government has come down on the side of allowing payments but adding increasingly stringent incident reporting requirements to get law enforcement involved as fast as possible.

As with the issue of data privacy regulations, some states have decided to take their own approach. Pennsylvania was the first in January of this year, with the state Senate passing a ban that prohibits agencies or organizations that receive taxpayer funds from making ransomware payments (the bill remains before the state House awaiting a vote). North Carolina added a comprehensive ban on local and state agency ransomware payments in May, followed by a similar measure in Florida in July. New York, Texas, Arizona and New Jersey have also had bills of this nature recently come up for consideration.

Thus far the states are not attempting to compel private organizations to reject ransomware payments; the focus is on government agencies in the states that have passed such laws. However, at least one of the bills under consideration (in New York) would extend these rules to non-government entities.





Consider how easy it is to get your email address.

https://www.makeuseof.com/ways-scammers-use-email-address/

5 Ways Scammers Can Use Your Email Address Against You





Who will be the first AI ethicist?

https://thenextweb.com/news/critical-review-eus-ethics-guidelines-for-trustworthy-ai

A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’

Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world. Compared to the heavy-handed government oversight in China or the Wild West-style anything goes approach in the US, the EU’s strategy is designed to stoke academic and corporate innovation while also protecting private citizens from harm and overreach. But that doesn’t mean it’s perfect.





Would the same argument apply to any AI trained on Copyrighted or Patented data?

https://petapixel.com/2022/08/19/ai-image-generators-could-be-the-next-frontier-of-photo-copyright-theft/

AI Image Generators Could Be the Next Frontier of Photo Copyright Theft

Artificial intelligence (AI) powered image generators have exploded in popularity, and apps like DALL-E, Midjourney, and, more recently, Stable Diffusion are exciting and tantalizing technology enthusiasts.

To train these systems, each AI tool is fed millions of images. DALL-E 2, for example, was trained on approximately 650 million image-text pairs that its creator, OpenAI, scraped from the internet.

Now, the companies behind these technologies haven’t said as much, but to train these machines it seems likely that millions of copyrighted images were used to inform the AI’s learning.

It seems very doubtful that companies like OpenAI have only scraped public domain and creative commons images into the algorithm. More likely, the process involves image-text pairing from Google searches. That means photographers’ images have presumably been used in a way that the owners never intended or consented to.



Friday, August 19, 2022

I guess they weren’t as sure as they seemed.

https://www.cpomagazine.com/data-privacy/facial-recognition-bans-begin-to-fall-around-the-us-as-re-funding-of-law-enforcement-becomes-politically-popular/

Facial Recognition Bans Begin To Fall Around the US as Re-Funding of Law Enforcement Becomes Politically Popular

Some cities and states that were early to ban law enforcement from using facial recognition software appear to be having second thoughts, which privacy advocates with the Electronic Frontier Foundation (EFF) and other organizations largely attribute to an uptick in certain types of urban crime. Facial recognition bans in New Orleans and Virginia have seen at least a tentative reversal of course, with law enforcement now allowed to use the technology in some limited situations. And an attempt by the California Senate to make permanent a temporary facial recognition ban failed to pass, leaving the law set to expire at the end of 2022.





Everyone should understand how these work.

https://www.trendmicro.com/en_us/ciso/22/h/business-email-compromise-bec-attack-tactics.html

Business Email Compromise Attack Tactics

BEC, also known as email account compromise (EAC), is a type of email cybercrime targeting companies, with the typical objective of having company funds wired into the attacker’s bank account. The five types of BEC include: bogus invoices, CEO fraud (impersonating a C-level employee to ask coworkers for money), account compromise, attorney impersonation, and data theft.





May reveal capabilities that any police organization might have.

https://www.pogowasright.org/parliaments-top-security-committee-to-probe-rcmps-use-of-spyware/

Parliament’s top security committee to probe RCMP’s use of spyware

Nick Taylor-Vaisey and Maura Forrest report:

A top-secret committee of Canadian parliamentarians has launched an investigation into the national police force’s use of spyware to conduct covert surveillance.
The all-party National Security and Intelligence Committee of Parliamentarians (NSICOP), chaired by Liberal MP David McGuinty, will conduct a “framework review” of the “lawful interception of communications by security and intelligence organizations.”

Read more at Politico.

[From the article:

The issue came to light after POLITICO’s revelation in June that the RCMP had admitted to using spyware to hack mobile devices. The police force has the ability to intercept text messages, emails, photos, videos, financial records and other information from cellphones and laptops, and to remotely turn on a device’s camera and microphone.




Thursday, August 18, 2022

Imagine creating your own version of police body-cam video. Identifying fakes is going to be big business!

https://futurism.com/the-byte/google-deepmind-video-single-frame

Google Scientists Create AI That Can Generate Videos From One Frame

Google's DeepMind neural network has demonstrated that it can dream up short videos from a single image frame, and it's really cool to see how it works.

As DeepMind noted on Twitter, the artificial intelligence model, named "Transframer" — that's a riff on a "transformer," a common type of AI tool that whips up text based on partial prompts — "excels in video prediction and view synthesis," and is able to "generate 30 [second] videos from a single image."





Which morality are they advocating?

https://www.entrepreneur.com/article/432958

Can We Teach Morality to Artificial Intelligence?

Artificial Intelligence has already changed the way we live our everyday lives. It may have the potential to go even further.





A learning opportunity. I wonder how many ‘students’ we have on the ground?

https://news.yahoo.com/ukraine-testing-ground-shaping-us-134858244.html

Ukraine ‘testing ground’ shaping US network, electronic warfare effort

Fierce battles being waged in Ukraine are showcasing cyber and electronic warfare and their consequences for connectivity and communications, according to the deputy commanding general at U.S. Army Training and Doctrine Command.

“If we’re looking to see how a modern battlefield is impacted by EW and cyber warfare, we need to look no further than what is going on right now” in Eastern Europe, Lt. Gen. Maria Gervais said Aug. 16 at the AFCEA TechNet Augusta conference. “Everything that we are seeing in Ukraine has implications for a unified network, and almost certainly represents the type of threats we will see.”





Not in my normal reading zone, but this one might help me understand some things so I had my library fetch it for me.

https://politicalscience.nd.edu/news-and-events/news/eileen-hunts-book-em-artificial-life-after-frankenstein-em-wins-award-for-broadening-horizons-of-contemporary-political-science/

Eileen Hunt’s book Artificial Life After Frankenstein wins award for broadening horizons of contemporary political science

Eileen Hunt, a professor in the Department of Political Science, has won the David Easton Award for her 2021 book, Artificial Life After Frankenstein.

The annual award from the American Political Science Association’s Foundations of Political Theory section recognizes a book that “broadens the horizons of contemporary political science by engaging issues of philosophical significance in political life through … approaches in the social sciences and humanities.”

In Artificial Life After Frankenstein, Hunt builds on her prior work applying political theory to interpret Mary Shelley’s classic 1818 novel Frankenstein. She develops a theoretical framework for how to bring technology-based ethical issues — like making artificial intelligence, robots, genetically engineered children and other artificially-shaped life forms — into debates on human rights, international law, theories of justice, and philosophies of education and parent-child ethics.

[Also see the Amazon description: https://www.amazon.com/Artificial-After-Frankenstein-Eileen-Botting/dp/0812252748 ]





Tools & Techniques.

https://www.makeuseof.com/windows-portable-sysadmin-toolkit/

8 Portable Windows Apps for Your System Administration Toolkit



Wednesday, August 17, 2022

Is this the future? School surveillance 24/7/365? No doubt they will want the best technology, like facial recognition, etc.

https://www.ksl.com/article/50459109/davis-school-district-installs-districtwide-surveillance-system

Davis School District installs districtwide surveillance system

The Davis School District now has an around-the-clock monitoring center, where it can keep eyes on cameras and conditions across 120 buildings.

Someone will be in the center 24 hours a day, every day of the year.

Monitors can shut down systems from here, even lock down buildings. It's something Mott said that will continue to improve just in case one of those worst-case scenarios ever happens here.

"We're working to coordinate and work with schools on drills, whether they be fire drills or lockdown drills, those kind of things, we're always preparing and practicing," he said.





Two kinds of fraud.

https://www.theregister.com/2022/08/16/social_engineering_cyber_crime_insurance/

PC store told it can't claim full cyber-crime insurance after social-engineering attack

A Minnesota computer store suing its crime insurance provider has had its case dismissed, with the courts saying it was a clear instance of social engineering, a crime for which the insurer was only liable to cover a fraction of total losses.

SJ Computers alleged in a November lawsuit [PDF] that Travelers Casualty and Surety Co. owed it far more than paid on a claim for nearly $600,000 in losses due to a successful business email compromise (BEC) attack.

Travelers, which filed a motion to dismiss, said SJ's policy clearly delineated between computer fraud and social engineering fraud. The motion was granted [PDF] with prejudice last Friday.





Who gets to define ‘reasonable?’

https://www.databreaches.net/us-regulator-urges-mfa-and-puts-banks-on-notice-not-reasonably-protecting-data-is-illegal/

US regulator urges MFA and puts banks on notice – not reasonably protecting data is illegal

Jim Nash reports:

A U.S. consumer finance regulator has published a circular warning that insufficient security for consumer biometric and other personal data is illegal under federal law. Multi-factor authentication is singled out as a method of making data security sufficient.
Anyone reading that who still thinks it will never happen to them is invited to read on to find out about the tech company who just fell victim.
The Consumer Financial Protection Bureau says that not protecting the data can be found to be an unfair practice under 12 U.S.C. 5536 for financial institutions. Officials cite preventative practices that can minimize risk.

Read more at Biometric Update.



(Related) Should similar rules apply?

https://www.theverge.com/2022/8/17/23306570/period-tracking-apps-privacy?scrolla=5eb6d68b7fedc32c19ef33b4

Period and pregnancy tracking apps have bad privacy protections, report finds

Most popular period and pregnancy tracking apps don’t have strong privacy protections, according to a new analysis from researchers at Mozilla. Leaky privacy policies in health apps are always a problem, but issues that fall into this particular category are especially concerning now that abortion is illegal in many places in the United States.

Period and pregnancy tracking apps collect data that could theoretically be used to prosecute people getting abortions in places where it’s illegal. Data from period tracking apps isn’t the biggest thing used to tie people to abortions right now — most often, the digital data used in those cases comes from texts, Google searches, or Facebook messages. But they’re still potential risks.





Do we have an acceptable answer?

https://venturebeat.com/ai/who-owns-dall-e-images-legal-ai-experts-weigh-in/

Who owns DALL-E images? Legal AI experts weigh in

When OpenAI announced expanded beta access to DALL-E in July, the company offered paid subscription users full usage rights to reprint, sell and merchandise the images they create with the powerful text-to-image generator.

A week later, creative professionals across industries were already buzzing with questions. Topping the list: Who owns images put out by DALL-E, or for that matter, other AI-powered text-to-image generators, such as Google’s Imagen? The owner of the AI that trains the model? Or the human that prompts the AI with words like “red panda wearing a black leather jacket and riding a motorcycle, in watercolor-style?”





What if we decide that AI is not responsible for its actions?

https://www.economist.com/podcasts/2022/08/16/will-ai-achieve-consciousness

Will AI achieve consciousness?

Our podcast on science and technology. This week, we explore whether artificial intelligence could become sentient—and the legal and ethical implications if it did

A DEBATE has been raging in technology circles, after an engineer at Google claimed in June that the company’s chatbot was sentient. Host Kenneth Cukier explores how to define “sentience” and whether it could be attained by AI. If machines can exhibit consciousness, it presents myriad ethical and legal considerations. Is society equipped to deal with the implications of conscious AI?

Runtime: 44 min





Would any ‘civilian’ vendor be able to stop military applications?

https://www.theregister.com/2022/08/17/russia_weaponizes_chinese_drones_robots/

Russian military uses Chinese drones and bots in combat, over manufacturers' protests

Russia's military has praised civilian grade Chinese-made drones and robots for having performed well on the battlefield, leading their manufacturers to point out the equipment is not intended or sold for military purposes.

When a video of a robot camera dog showed up with a grenade launcher on Russian state-sponsored media RIA Novosti this week, many immediately recognized it as Chinese Unitree Robotics' $2,700 Go1 robotic dog – albeit dressed in a sort of canine ninja suit.





Always good to know the players. Also points to some resources.

https://www.techspot.com/article/2515-surveillance-intelligence-alliances/

A Surveillance Primer: 5 Eyes, 9 Eyes, 14 Eyes

In 2021, the US Federal Trade Commission published a 74-page report documenting how internet service providers are collecting vast amounts of private data from their customers and then selling the data to third parties. We examined this report, the implications, and some solutions in our article on internet service providers logging browsing activity.

These practices are well-documented in the PRISM surveillance documents and also the infamous Room 641a example with AT&T and the NSA. Fortunately, there are some simple solutions to keep your data safe that we'll cover below. In this guide, we'll explain all the different "X" eyes surveillance alliances and why this topic is important when choosing privacy tools.



Tuesday, August 16, 2022

Interesting. Should make it easier for IT to talk to lawyers.

https://www.huntonprivacyblog.com/2022/08/15/new-york-becomes-first-state-to-require-cle-in-cybersecurity-privacy-and-data-protection/

New York Becomes First State to Require CLE in Cybersecurity, Privacy and Data Protection

On June 10, 2022, New York became the first state to require attorneys to complete at least one credit of cybersecurity, privacy and data protection training as part of their continuing legal education (“CLE”) requirements. The new requirement will take effect July 1, 2023.

The New York State Bar Association’s (“NYSBA”) Committee on Technology and the Legal Profession initially recommended the new requirement in a 2020 report. In a joint order, the judicial departments of the Appellate Division of the New York State Supreme Court formally adopted the recommendation.

The required one hour of cybersecurity, data privacy and data protection training may be related to attorneys’ ethical obligations with respect to data protection and count toward their ethics and professionalism CLE requirements. Alternatively, the credit may be related to general cybersecurity, data privacy and data protection issues and count toward attorneys’ general CLE requirements.





Lawyers like free stuff? Who knew!

https://www.bespacific.com/free-law-project-makes-it-even-easier-to-add-pacer-documents-to-its-free-database/

Free Law Project Makes It Even Easier to Add PACER Documents to Its Free Database

LawSites: “One way to avoid the cost of downloading documents from the federal courts’ PACER database is by getting them instead from the RECAP Archive, a database of millions of PACER documents and dockets maintained by the Free Law Project. But before you can get a document out of RECAP, the document had to have been added there in the first place. To make that happen, RECAP has relied on its free browser extensions by which every PDF a user purchases on PACER is automatically added to the RECAP archive. Last year, it extended that to iPhones, iPads and Macs. But now there is a new way to add PACER documents to RECAP — one that, if enough legal professionals use it, should dramatically increase the size of RECAP’s archives…”





Amazon wants to know their customers inside and out. Podcasting

https://shows.acast.com/knowledge-at-wharton/episodes/why-is-amazon-purchasing-a-health-care-provider

Why Is Amazon Purchasing a Health Care Provider?





A current trend research tool?

https://techcrunch.com/2022/08/16/how-snipd-is-using-ai-to-unlock-knowledge-in-podcasts/

How Snipd is using AI to ‘unlock knowledge’ in podcasts

Podcasting has emerged as a major billion-dollar industry, with ad revenue in the U.S. alone expected to hit $2 billion this year — a figure that’s set to double by 2024. Against that backdrop, major players in the field are bolstering their podcasting armory, with Spotify recently doling out around $85 million for two companies specializing in podcast measurement and analytics, while Acast recently snapped up Podchaser — an “IMDb for podcasts” that gives advertiser deeper data insights — in a $27 million deal.

But as the big platforms lock horns in the hunt for podcasting riches, smaller players continue to arrive on the scene with their own ideas on how they can advance the podcast medium for creators and consumers alike.

One of these is Snipd, a Swiss startup building a podcast app that uses AI to transcribe content and synchronize with note-taking apps; automatically generate book-style “chapters”; and, as of this week, deliver podcast highlights in a TikTok-style personalized feed.



Monday, August 15, 2022

Any really large amount of communication could provide the same ‘benefits.’ Gathering data used to be the problem; today, extracting information from the data is the challenge.

https://www.csoonline.com/article/3670110/3-ways-chinas-access-to-tiktok-data-is-a-security-risk.html#tk.rss_all

3 ways China's access to TikTok data is a security risk

"Politics and business in China are inseparable," said Joseph Williams, partner, cybersecurity, at Infosys Consulting. He argues that "the Chinese government could focus on specific users, specific keywords, or specific video sequences to identify whatever they might find interesting."

Theoretically, TikTok could collect all kinds of data, including text, images, videos, location, metadata, draft messages, fingerprints, or browsing history. The platform, which has grown rapidly in the past few years, exceeds 1 billion monthly active users globally, 100 million of whom are based in the U.S. According to a Pew Research Center survey, 67% of American teens have installed the app, more than have installed Instagram, Snapchat, Facebook or Twitter.





Will we see “rebuttal” emails? More importantly, what can I charge for a “block ‘em all” App?

https://www.pogowasright.org/us-approves-google-plan-to-let-political-emails-bypass-gmail-spam-filter/

US approves Google plan to let political emails bypass Gmail spam filter

Jon Brodkin reports:

The US Federal Election Commission approved a Google plan on Thursday to let campaign emails bypass Gmail spam filters. The FEC’s advisory opinion adopted in a 4-1 vote said Gmail’s pilot program is permissible under the Federal Election Campaign Act and FEC regulations “and would not result in the making of a prohibited in-kind contribution.”
The FEC said Google’s approved plan is for “a pilot program to test new Gmail design features at no cost on a nonpartisan basis to authorized candidate committees, political party committees, and leadership PACs.”

Read more at Ars Technica.





While you are in our custody, please pose like a crook for our AI. (Make the evidence fit!)

https://www.biometricupdate.com/202208/move-a-little-to-your-left-turn-your-head-perfect-recreating-crimes-with-face-biometrics

Move a little to your left. Turn your head. Perfect! Recreating crimes with face biometrics

What happens if facial recognition surveillance cameras only capture a crime suspect from a too-oblique angle to make a confident identification?

In India, the police can have the subject recreate the pose of the person they are looking for and compare the two pieces of evidence using face biometric systems.

Early reports, including an article published by The Indian Express, leave a number of questions unanswered – not least of which is, were suspects compelled to pose? But what is known indicates a novel chapter in the evolution of biometric identification and surveillance may have arrived.

India’s law enforcement officials have grown more accepting of facial recognition as an investigative tool, with help from legislators. The Criminal Procedure Act, passed this year, gives officers the authority to collect fingerprints, palm prints and footprints; iris, retina, behavioral and DNA biometrics; and analysis of other physical features, signature and handwriting for purposes of criminal investigation.



(Related) Capabilities?

https://www.marktechpost.com/2022/08/10/researchers-propose-a-deep-learning-based-face-recognition-technology-with-an-accuracy-of-99-95-for-facial-recognition-even-for-a-person-wearing-a-niqab/

Researchers Propose a Deep Learning-Based Face Recognition Technology with an Accuracy of 99.95% for Facial Recognition Even for a Person Wearing a Niqab

Face-recognition technology is quickly developing and used in various fields, including marketing, education, criminal investigation, security, and biometrics. Now, in addition to being able to identify the individual, it can also determine their facial expression. The limits of facial recognition software when a person’s face is partially hidden, as can happen when wearing a veil or protective face mask, are the subject of research published in the International Journal of Biometrics.

Full-face biometric identification has been the subject of a substantial amount of research. However, employing faces that are only partially visible, such as those of veiled people, is difficult. In this study, a deep convolutional neural network (CNN) is used to extract characteristics from photographs of veiled people’s faces.

The researchers claim that their deep-learning technique for facial recognition is 99.95% correct, even when a person is wearing a niqab, which mostly hides the face except for the eyes. Age estimation and gender recognition by the algorithms are both 99.9% correct. Examining the eyes can identify whether a person wearing a veil or a COVID mask is happy or frowning with an accuracy of 80.9%.





Perspective.

https://www.escapistmagazine.com/robot-ai-inventor-us-court-patent-law-machine-uprising/

Oops, We Just Took Our First Real Step Toward the Machine Uprising





Sunday, August 14, 2022

I’m fascinated by this argument. So is my AI.

https://www.taylorfrancis.com/chapters/edit/10.4324/9780429356797-10/law-martin-clancy

You Can Call Me Hal: AI and Music IP

This chapter outlines how legal arguments can be constructed whereby a nonhuman legal person – an AI – could be capable of corporate immortality and how such radical positions challenge the human-centred design of intellectual property (IP). The chapter begins by noting the global movement to harmonise IP law and establishes the centrality of music copyright to the music industry’s economy. The historical development of fundamental legal theories is presented so that essential concepts in music copyright such as creativity and originality can be accessed concerning AI. To comprehend the legal perplexities of music generated by unsupervised machine learning, DeepMind’s WaveNet AI is considered. Supporting legal challenges, including the potential of AI being granted legal personhood, are noted, and the chapter concludes that the protection offered by music copyright law is fragile when its case law has its roots in the precomputerised age. In a chapter addendum, David Hughes, Chief Technology Officer at the Recording Industry Association of America (RIAA) 2006–2021, provides high-level music industry reflection on chapter themes.





Continuing the study.

https://link.springer.com/article/10.1007/s10506-022-09327-6

Thirty years of artificial intelligence and law: the third decade

The first issue of Artificial Intelligence and Law journal was published in 1992. This paper offers some commentaries on papers drawn from the Journal’s third decade. They indicate a major shift within Artificial Intelligence, both generally and in AI and Law: away from symbolic techniques to those based on Machine Learning approaches, especially those based on Natural Language texts rather than feature sets. Eight papers are discussed: two concern the management and use of documents available on the World Wide Web, and six apply machine learning techniques to a variety of legal applications.





Business exists to take risk. Knowing what all those risks are is a good thing.

https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/artificial-intelligence-autonomous-drones-and-legal-uncertainties/BDB89BEC2266D1ABF17316A53AA93480

Artificial Intelligence, Autonomous Drones and Legal Uncertainties

Drones represent a rapidly developing industry. Devices initially designed for military purposes have evolved into a new area with a plethora of commercial applications. One of the biggest hindrances in the commercial development of drones is legal uncertainty concerning the legal regimes applicable to the multitude of issues that arise with this new technology. This is especially prevalent in situations concerning autonomous drones (ie drones operating without a pilot). This article provides an overview of some of these uncertainties. A scenario based on the fictitious but plausible event of an autonomous drone falling from the sky and injuring people on the ground is analysed from the perspectives of both German and English private law. This working scenario is used to illustrate the problem of legal uncertainty facing developers, and the article provides valuable knowledge by mapping real uncertainties that impede the development of autonomous drone technology, alongside providing multidisciplinary insights from law as well as software, electronic and computer engineering.





God, politics and AI. Some early thinking about AI…

https://link.springer.com/article/10.1007/s43545-022-00458-w

Spinoza, legal theory, and artificial intelligence: a conceptual analysis of law and technology

This paper sets out to show the relevance of Benedict Spinoza’s (1632–1677) views on law to the contemporary legal discourse on law and technology. I will do this by using some of the reactions toward the use of Artificial Intelligence (AI) in legal practices as illustrative examples of the continued relevance of the debate on law’s nature with which Spinoza was concerned in the fourth chapter of the Theological Political Treatise. I will argue that the problem of how to make laws efficient is being manifested in legal debates on how to regulate social and scientific practices that involve the use of certain—especially advanced—AI. As such, these debates are based on the idea that AI technology complicates the valid application of law in so far as it challenges the legal idea of the individual who corresponds with the unlimited legal subject. This complication is manifested, for instance, when we consider the rule of law criteria (predictability and transparency) for valid law-making and application in light of the fact that self-learning machines and autonomous AI hold an intentionality that lies beyond the scope of the lawmaker’s cognition. My discussion will lead to the suggestion that Spinoza’s legal theory may help us make sense of the problems perceived by legal discourses on AI and law as illustrations of a conceptual paradox embedded within the concept of law, rather than problems caused by the technological development of new forms of intentionalities.





When your client is an AI… (Okay, not really)

https://scholarship.law.ufl.edu/cgi/viewcontent.cgi?article=2106&context=facultypub

Assuming the Risks of Artificial Intelligence

Tort law has long served as a remedy for those injured by products—and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense may prove vital to shaping the likelihood of success for these prospective plaintiffs injured by AI, first-adopters who are often eager to “voluntarily” use the new technology but simultaneously often lacking in “knowledge” about AI’s risks.

To remedy this oversight in the scholarship, this Article tackles assumption of risk head-on, demonstrating why this defense may have much greater influence on the course of the burgeoning new field of “AI torts” than originally believed. It analyzes the historic application of assumption of risk to emerging technologies, extrapolating its potential use in the context of damages caused by robotic, autonomous, and facial recognition technologies. This Article then analyzes assumption of risk’s relationship to informed consent, another key doctrine that revolves around appreciation of risks, demonstrating how an extension of informed consent principles to assumption of risk can establish a more nuanced approach for a future that is sure to involve an increasing number of AI-human interactions—and AI torts. In addition to these AI-human interactions, this Article’s reevaluation also can help in other assumption of risk analyses and tort law generally to better address the evolving innovation-risk-consent trilemma.