Saturday, April 23, 2022

An opportunity for all nations to agree on what constitutes illegal speech? I doubt it.

https://www.cnbc.com/2022/04/22/digital-services-act-eu-agrees-new-rules-for-tackling-illegal-content.html

EU agrees on landmark law aimed at forcing Big Tech firms to tackle illegal content

The European Parliament and EU member states reached a deal on the Digital Services Act, a landmark piece of legislation that aims to address illegal and harmful content by getting platforms to rapidly take it down.

Margrethe Vestager, the EU competition chief and a key architect of the bloc’s digital reforms, said the deal is “better than the proposal that we tabled” back in 2020.

A key part of the legislation would limit how digital giants target users with online ads. The DSA would effectively stop platforms from using algorithms that target users based on data about their gender, race or religion. Targeting children with ads will also be prohibited.

So-called dark patterns — deceptive tactics designed to push people toward certain products and services — will be banned as well.

E-commerce marketplaces like Amazon must also prevent sales of illegal goods under the new rules.

Another provision would require very large online platforms and search engines to take certain measures in the event of a crisis, like Russia’s invasion of Ukraine.

Failure to comply with the rules may result in fines of up to 6% of companies’ global annual revenues.





Self-driving starts off-road?

https://thenextweb.com/news/john-deere-slowly-becoming-one-worlds-most-important-ai-companies

John Deere is slowly becoming one of the world’s most important AI companies

What most outsiders don’t know is that John Deere’s not so much a farming vehicle manufacturer these days as it is an agricultural technology company. And, judging by how things are going in 2022, we’re predicting it’ll be a full-on artificial intelligence company within the next 15 years.

John Deere’s been heavily invested in robotics and autonomy for decades. Back in the late 1990s, the company acquired GPS startup NavCom in hopes of building satellite-directed guidance systems for tractors.

Within a few years, JD was able to develop a system that was accurate to a few centimeters — previous GPS systems could be off by as much as several meters.

The company then partnered with none other than NASA to create the world’s first internet-based GPS tracking system.

In other words: the path to modern autonomous vehicles was sowed and tilled by John Deere tractors and NASA decades ago.





Free is good.

https://www.makeuseof.com/world-book-day-amazon-giving-free-kindle-books/

Amazon Is Giving Away 10 Free Kindle Books for World Book Day

Until April 27, 2022, you can get 10 free Kindle books from Amazon. Here's how to claim them.





How AI will establish contact.

https://dilbert.com/strip/2022-04-23



Friday, April 22, 2022

Privacy in Colorado…

https://www.insideprivacy.com/united-states/state-legislatures/colorado-attorney-general-remarks-on-cpa-rulemaking/

Colorado Attorney General Remarks on CPA Rulemaking

On April 12, at the International Association of Privacy Professionals’ global privacy conference, Colorado Attorney General Phil Weiser gave remarks on his office’s approach to the rulemaking and enforcement of the Colorado Privacy Act.

Attorney General Weiser observed that his office’s approach will be “principle-based” and not prescriptive. He shared that promulgating too many specific rules could be counterproductive: not only would such rules fail to serve every context, he stated, they could also create interoperability challenges if other states adopt similarly prescriptive rules.





Prolific prognosticator of privacy.

https://www.bespacific.com/the-limitations-of-privacy-rights/

The Limitations of Privacy Rights

Solove, Daniel J., The Limitations of Privacy Rights (February 1, 2022). 98 Notre Dame Law Review — (Forthcoming 2023), Available at SSRN: https://ssrn.com/abstract=4024790 or http://dx.doi.org/10.2139/ssrn.4024790

“Individual privacy rights are often at the heart of information privacy and data protection laws. The most comprehensive set of rights, from the European Union’s General Data Protection Regulation (GDPR), includes the right to access, right to rectification (correction), right to erasure, right to restriction, right to data portability, right to object, and right to not be subject to automated decisions. Privacy laws around the world include many of these rights in various forms. In this article, I contend that although rights are an important component of privacy regulation, rights are often asked to do far more work than they are capable of doing. Rights can only give individuals a small amount of power. Ultimately, rights are at most capable of being a supporting actor, a small component of a much larger architecture.

I advance three reasons why rights cannot serve as the bulwark of privacy protection. First, rights put too much onus on individuals when many privacy problems are systemic. Second, individuals lack the time and expertise to make difficult decisions about privacy, and rights cannot practically be exercised at scale with the number of organizations that process people’s data. Third, privacy cannot be protected by focusing solely on the atomistic individual. The personal data of many people is interrelated, and people’s decisions about their own data have implications for the privacy of other people.

The main goal of privacy rights is to provide individuals with control over their personal data. However, effective privacy protection involves not just facilitating individual control, but also bringing the collection, processing, and transfer of personal data under control. Privacy rights are not designed to achieve the latter goal; and they fail at the former goal.
After discussing these overarching reasons why rights are insufficient for the oversized role they currently play in privacy regulation, I discuss the common privacy rights and why each falls short of providing significant privacy protection. For each right, I propose broader structural measures that can achieve its underlying goals in a more systematic, rigorous, and less haphazard way.”





Future direction. And a phone sized “Terminator?”

https://www.protocol.com/enterprise/ai-transformers-edge-nvidia

Researchers push to make bulky AI work in your phone and personal assistant

Transformer networks, colloquially known to deep-learning practitioners and computer engineers as “transformers,” are all the rage in AI. Over the last few years, these models — known for their massive size, huge data inputs, vast parameter counts and, by extension, high carbon footprint and cost — have grown in favor over other types of neural network architectures.

Some transformers, particularly some open-source, large natural-language-processing transformer models, even have names that are recognizable to people outside AI, such as GPT-3 and BERT. They’re used across audio-, video- and computer-vision-related tasks, drug discovery and more.

Now chipmakers and researchers want to make them speedier and more nimble.





Keeping up.

https://finance.yahoo.com/news/ai-software-market-legal-industry-110300717.html

AI Software Market in Legal Industry - Growth, Trends, COVID-19 Impact, and Forecasts (2022 - 2027)

Reportlinker.com announces the release of the report "AI Software Market in Legal Industry - Growth, Trends, COVID-19 Impact, and Forecasts (2022 - 2027)" - https://www.reportlinker.com/p06271885/?utm_source=GNW

Law firms have always been at the forefront of adopting emerging technologies for productivity and efficiency gains, and artificial intelligence (AI) plays an integral role in supporting such initiatives. AI is becoming the next big technology for law firms. The legal sector is witnessing increased utility in its application owing to developments and computing-capacity improvements in NLP, neural networks, and chips.





We’re almost there.

https://www.bbc.com/news/technology-61155735

Highway Code: Watching TV in self-driving cars to be allowed

People using self-driving cars will be allowed to watch television on built-in screens under proposed updates to the Highway Code.

The changes will say drivers must be ready to take back control of vehicles when prompted, the government said.

The first use of self-driving technology is likely to be when travelling at slow speeds on motorways, such as in congested traffic.

However, using mobile phones while driving will remain illegal.





Is there something you wanted to grab?

https://www.bespacific.com/wapo-free-access-to-our-entire-site-through-april-22/

WaPo – free access to our entire site through April 22

Washington Post – “In honor of #EarthDay, enjoy free access to our entire site through April 22. Just sign up with your email address when prompted.”

Washington Post – “Seeds of hope: How nature inspires scientists to confront climate change. Sarah Kaplan, one of The Post’s climate reporters, introduces a series of short essays from climate scientists and conservationists about where their hope comes from. She begins with her own inspiration.”



Thursday, April 21, 2022

I’m glad I’m not the only one.

https://www.dailykos.com/stories/2022/4/20/2093083/-Ukraine-update-Experts-keep-waiting-for-Russia-to-show-competence-but-it-still-isn-t-happening

Ukraine update: Experts keep waiting for Russia to show competence, but it still isn't happening

With the ubiquity of smartphones, the on-the-ground details of Russia's war against Ukraine have been more closely documented than those of any war before it. The footage is omnipresent, even allowing independent observers to make detailed catalogs of destroyed equipment. The movement of Russian troops is being tracked by satellite, as well as by pinging electronic devices they have stolen and taken with them. We can hear individual conversations between Russian soldiers and their families. We have a great deal of information.

But we still don't have the slightest soggy clue as to what the Russian "strategy" actually is. Even after weeks of war, the military experts who interpret these things can't wrap their heads around just what we've been seeing.

… If you're confused, it means you're paying attention; military analysts with decades of experience watching Russia and writing up documents on what the Russian military can and can't do are even more confused than you are.





Gosh, you look familiar!

https://www.bespacific.com/europe-is-building-a-huge-international-facial-recognition-system/

Europe Is Building a Huge International Facial Recognition System

Wired: “For the past 15 years, police forces searching for criminals in Europe have been able to share fingerprints, DNA data, and details of vehicle owners with each other. If officials in France suspect someone they are looking for is in Spain, they can ask Spanish authorities to check fingerprints against their database. Now European lawmakers are set to include millions of photos of people’s faces in this system—and allow facial recognition to be used on an unprecedented scale. The expansion of facial recognition across Europe is included in wider plans to “modernize” policing across the continent, and it comes under the Prüm II data-sharing proposals. The details were first announced in December, but criticism from European data regulators has gotten louder in recent weeks, as the full impact of the plans has been understood… Prüm II plans to significantly expand the amount of information that can be shared, potentially including photos and information from driving licenses. The proposals from the European Commission also say police will have greater “automated” access to information that’s shared. Lawmakers say this means police across Europe will be able to cooperate closely, and the European law enforcement agency Europol will have a “stronger role.” … The inclusion of facial images and the ability to run facial recognition algorithms against them are among the biggest planned changes in Prüm II. Facial recognition technology has faced significant pushback in recent years as police forces have increasingly adopted it, and it has misidentified people and derailed lives. Dozens of cities in the US have gone as far as banning police forces from using the technology. The EU is debating a ban on the police use of facial recognition in public places as part of its AI Act …”


Wednesday, April 20, 2022

Sounds like a ‘great idea’ came up a bit short when it hit the real world.

https://www.schneier.com/blog/archives/2022/04/clever-cryptocurrency-theft.html

Clever Cryptocurrency Theft

Beanstalk Farms is a decentralized finance project that has a majority stake governance system: basically people have proportional votes based on the amount of currency they own. A clever hacker used a “flash loan” feature of another decentralized finance project to borrow enough of the currency to give himself a controlling stake, and then approved a $182 million transfer to his own wallet.

It is insane to me that cryptocurrencies are still a thing.





A variety of surveillance the US will never see? Somehow I doubt it.

https://www.technologyreview.com/2022/04/19/1049996/south-africa-ai-surveillance-digital-apartheid/

South Africa’s private surveillance machine is fueling a digital apartheid

As firms have dumped their AI technologies into the country, it’s created a blueprint for how to surveil citizens and serves as a warning to the world.





Automating lawyers is too easy.

https://www.bespacific.com/a-human-being-wrote-this-law-review-article-gpt-3-and-the-practice-of-law/

A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law

Cyphert, Amy, A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law (November 1, 2021). UC Davis Law Review, Volume 55, Issue 1, WVU College of Law Research Paper No. 2022-02, Available at SSRN: https://ssrn.com/abstract=3973961

“Artificial intelligence tools can now “write” in such a sophisticated manner that they fool people into believing that a human wrote the text. None are better at writing than GPT-3, released in 2020 for beta testing and coming to commercial markets in 2021. GPT-3 was trained on a massive dataset that included scrapes of language from sources ranging from the NYTimes to Reddit boards. And so, it comes as no surprise that researchers have already documented incidences of bias where GPT-3 spews toxic language. But because GPT-3 is so good at “writing,” and can be easily trained to write in a specific voice — from classic Shakespeare to Taylor Swift — it is poised for wide adoption in the field of law. This Article explores the ethical considerations that will follow from GPT-3’s introduction into lawyers’ practices. GPT-3 is new, but the use of AI in the field of law is not. AI has already thoroughly suffused the practice of law. GPT-3 is likely to take hold as well, generating some early excitement that it and other AI tools could help close the access to justice gap. That excitement should nevertheless be tempered with a realistic assessment of GPT-3’s tendency to produce biased outputs. As amended, the Model Rules of Professional Conduct acknowledge the impact of technology on the profession and provide some guard rails for its use by lawyers. This Article is the first to apply the current guidance to GPT-3, concluding that it is inadequate. I examine three specific Model Rules — Rule 1.1 (Competence), Rule 5.3 (Supervision of Nonlawyer Assistance), and Rule 8.4(g) (Bias) — and propose amendments that focus lawyers on their duties and require them to regularly educate themselves about pros and cons of using AI to ensure the ethical use of this emerging technology.”





Not supportive enough? Putin reminds them who made them rich?

https://finance.yahoo.com/news/putin-signs-decree-remove-russian-135326392.html

Putin signs decree to remove Russian stocks from overseas exchanges in huge blow to the nation's billionaires





Perspective.

https://www.politico.com/news/magazine/2022/04/16/history-shows-trump-personality-cult-end-00024941

The One Way History Shows Trump’s Personality Cult Will End

… She had seen enough of Donald Trump’s behavior over the preceding five years to know how neatly he lined up with other strongmen she had studied and how his autocratic tendencies would influence his behavior whether he won or lost.

“I just predicted that he wouldn’t leave in a quiet manner,” Ben-Ghiat, a professor of history and Italian studies at New York University, told me recently. “He’s an authoritarian, and they can’t leave office. They don’t have good endings and they don’t leave properly.”



Tuesday, April 19, 2022

Does this mean Clearview was right to claim it could ‘scrape’ faces from social media?

https://www.databreaches.net/web-scraping-is-legal-us-appeals-court-reaffirms/

Web scraping is legal, US appeals court reaffirms

Zack Whittaker reports:

Good news for archivists, academics, researchers and journalists: Scraping publicly accessible data is legal, according to a U.S. appeals court ruling.
The landmark ruling by the U.S. Ninth Circuit Court of Appeals is the latest in a long-running legal battle brought by LinkedIn aimed at stopping a rival company from web scraping personal information from users’ public profiles. The case reached the U.S. Supreme Court last year but was sent back to the Ninth Circuit for the original appeals court to re-review the case.

Read more at TechCrunch.

[From the article:

The Ninth Circuit, in referencing the Supreme Court’s “gate-up, gate-down” analogy, ruled that “the concept of ‘without authorization’ does not apply to public websites.”
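Since the ruling turns on pages being publicly accessible (no login, no gate), the kind of scraping at issue is mechanically simple. A minimal sketch using only Python's standard library, with a hypothetical page layout — the `profile-name` markup and sample HTML are invented for illustration, not LinkedIn's actual structure:

```python
# Sketch: extracting names from a publicly accessible profile page.
# The hiQ v. LinkedIn ruling concerned pages requiring no login; the
# court held the CFAA's "without authorization" concept does not apply
# to them. The markup below is a hypothetical example.
from html.parser import HTMLParser


class ProfileScraper(HTMLParser):
    """Collects the text of every <h2 class="profile-name"> element."""

    def __init__(self):
        super().__init__()
        self._in_name = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == "h2" and ("class", "profile-name") in attrs:
            self._in_name = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_name = False

    def handle_data(self, data):
        if self._in_name:
            self.names.append(data.strip())


def extract_names(html: str) -> list:
    parser = ProfileScraper()
    parser.feed(html)
    return parser.names


sample = '<div><h2 class="profile-name">Ada Lovelace</h2></div>'
print(extract_names(sample))  # ['Ada Lovelace']
```

In a real run the HTML would come from an HTTP fetch of a public URL; the legal point is that "publicly accessible" is what keeps such a fetch outside the CFAA's reach, while terms-of-service and privacy questions remain separate issues.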



(Completely unrelated)

https://www.bespacific.com/how-democracies-spy-on-their-citizens/

How Democracies Spy on Their Citizens

The New Yorker: “The inside story of the world’s most notorious commercial spyware and the big tech companies waging war against it.” By Ronan Farrow April 18, 2022. “…Commercial spyware has grown into an industry estimated to be worth twelve billion dollars. It is largely unregulated and increasingly controversial. In recent years, investigations by the Citizen Lab and Amnesty International have revealed the presence of Pegasus on the phones of politicians, activists, and dissidents under repressive regimes. An analysis by Forensic Architecture, a research group at the University of London, has linked Pegasus to three hundred acts of physical violence. It has been used to target members of Rwanda’s opposition party and journalists exposing corruption in El Salvador. In Mexico, it appeared on the phones of several people close to the reporter Javier Valdez Cárdenas, who was murdered after investigating drug cartels. Around the time that Prince Mohammed bin Salman of Saudi Arabia approved the murder of the journalist Jamal Khashoggi, a longtime critic, Pegasus was allegedly used to monitor phones belonging to Khashoggi’s associates, possibly facilitating the killing, in 2018. (Bin Salman has denied involvement, and NSO said, in a statement, “Our technology was not associated in any way with the heinous murder.”) Further reporting through a collaboration of news outlets known as the Pegasus Project has reinforced the links between NSO Group and anti-democratic states. But there is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments…”





Not everyone will agree, but it’s a start.

https://www.csoonline.com/article/3656700/cybersecurity-litigation-risks-on-the-rise-what-cisos-should-worry-about-the-most.html#tk.rss_all

Cybersecurity litigation risks: 4 top concerns for CISOs

According to Norton Rose Fulbright’s latest Annual Litigation Trends Survey of more than 250 general counsel and in-house litigation practitioners, cybersecurity and data protection will be among the top drivers of new legal disputes for the next several years. Two-thirds of survey respondents said they felt more exposed to these types of disputes in 2021, up from less than half in 2020, while more sophisticated attacks, less oversight of employees/contractors in remote environments, and concerns about the amount of client data were all cited as contributing factors.

Clearly, the risks of litigation are very real for CISOs and their organizations, but what are the greatest areas of concern and what can they do about it?





Should you “adjust” results to reflect what you think they should be?

https://www.vox.com/future-perfect/22916602/ai-bias-fairness-tradeoffs-artificial-intelligence

Why it’s so damn hard to make AI fair and unbiased

Let’s play a little game. Imagine that you’re a computer scientist. Your company wants you to design a search engine that will show users a bunch of pictures corresponding to their keywords — something akin to Google Images.

On a technical level, that’s a piece of cake. You’re a great computer scientist, and this is basic stuff! But say you live in a world where 90 percent of CEOs are male. (Sort of like our world.) Should you design your search engine so that it accurately mirrors that reality, yielding images of man after man after man when a user types in “CEO”? Or, since that risks reinforcing gender stereotypes that help keep women out of the C-suite, should you create a search engine that deliberately shows a more balanced mix, even if it’s not a mix that reflects reality as it is today?

This is the type of quandary that bedevils the artificial intelligence community, and increasingly the rest of us — and tackling it will be a lot tougher than just designing a better search engine.
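The "mirror reality or rebalance it" choice above can be made concrete. Here is a toy sketch of the second option: greedily re-ranking relevance-ordered results toward a target group share. The data, the two-group labels, and the 50/50 target are illustrative assumptions, not how any real search engine works:

```python
# Hypothetical re-ranking sketch: interleave relevance-ordered results
# so the share of group "f" in every prefix tracks target_share, rather
# than the skewed share the raw ranking would produce.
def rerank_balanced(results, target_share=0.5):
    """results: relevance-ordered list of (doc_id, group) pairs,
    where group is "f" or "m". Returns a re-ordered list."""
    pools = {"f": [], "m": []}
    for r in results:
        pools[r[1]].append(r)  # preserve relevance order within each group
    out = []
    while pools["f"] or pools["m"]:
        n_f = sum(1 for _, g in out if g == "f")
        share = n_f / len(out) if out else 0.0
        prefer = "f" if share < target_share else "m"
        if not pools[prefer]:  # fall back when one group runs out
            prefer = "m" if prefer == "f" else "f"
        out.append(pools[prefer].pop(0))
    return out


raw = [("a", "m"), ("b", "m"), ("c", "f"), ("d", "m"), ("e", "f")]
print([g for _, g in rerank_balanced(raw)])  # ['f', 'm', 'm', 'f', 'm']
```

Note what the sketch does not settle: it changes only presentation order, and it bakes in a chosen target share — which is exactly the value judgment the article says engineers cannot avoid making one way or the other.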





Is this really a better legal system?

https://www.jpost.com/business-and-innovation/banking-and-finance/article-704511

Israeli start-up Darrow aims to fix justice system using data

Every day, corporations violate basic rights with environmental pollutants, unfair wages, privacy breaches, misuse of consumer information and more. While publications and advocacy groups report on many such cases, the violations themselves often go undetected and unchallenged, according to a study from the European Union Agency for Fundamental Rights. Users typically don’t notice when their rights are being infringed upon, and sometimes the corporations themselves aren’t aware they are in the wrong.

In cases where a user does recognize a violation, they likely won’t know how to proceed. Staggering volumes of data, unintelligible legal jargon and legal teams leave victims feeling powerless in the face of large corporations.

Darrow implements machine-learning algorithms and natural language processing to expose harmful legal violations that would otherwise go undetected.





Eight percent false negatives, what percent false positives? “You look sick, cough into my phone!”

https://www.dailymail.co.uk/sciencetech/article-10731113/Scientists-develop-app-detect-Covid-19-92-cent-accuracy.html

Worried you have Covid? Cough at your PHONE! Scientists develop an app that can detect whether you've been infected with 92 per cent accuracy

Scientists have developed a smartphone app that can detect whether you've been infected with Covid-19.

The app, called ResApp, uses machine learning to analyse the sounds of your cough.

During testing, the app was found to correctly detect Covid-19 in 92 per cent of people with the infection.



Monday, April 18, 2022

Where is the liability here? Will this impact grades? Could a student be assigned extra work because they “don’t understand?”

https://www.protocol.com/enterprise/emotion-ai-school-intel-edutech

Intel calls its AI that detects student emotions a teaching tool. Others call it 'morally reprehensible.'

Intel and Classroom Technologies, which sells virtual school software called Class, think there might be a better way. The companies have partnered to integrate an AI-based technology developed by Intel with Class, which runs on top of Zoom. Intel claims its system can detect whether students are bored, distracted or confused by assessing their facial expressions and how they’re interacting with educational content.

But critics argue that it is not possible to accurately determine whether someone is feeling bored, confused, happy or sad based on their facial expressions or other external signals.

Some researchers have found that because people express themselves through tens or hundreds of subtle and complex facial expressions, bodily gestures or physiological signals, categorizing their state with a single label is an ill-suited approach. Other research indicates that people communicate emotions such as anger, fear and surprise in ways that vary across cultures and situations, and how they express emotion can fluctuate on an individual level.



Sunday, April 17, 2022

We can, therefore we must. It may suggest guilt, can it also “prove” innocence?

https://www.virginiamercury.com/2022/04/06/virginia-police-routinely-use-secret-gps-pings-to-track-peoples-cell-phones/

Virginia police routinely use secret GPS pings to track people’s cell phones

Police never described Durvin as a suspect in the search warrant application they submitted seeking permission to track him and court records show he has not been charged with any crimes in Virginia since police took out the warrant.

Instead, officers wrote that they had found voicemails from Durvin on the overdose victim’s phone and thought tracking his location might help them figure out who supplied the deadly dose of heroin, noting that Durvin had been with the man during what appeared to be a prior overdose in Richmond.

The warrants are limited by law to 30 days but can be — and often are — renewed monthly by a judge.





The privacy hurdle...

https://www.pogowasright.org/announce-privacy-harms-final-published-version-solove-citron/

Announce: Privacy Harms – Final Published Version (Solove & Citron)

Two resources of note this week.

First, as seen on Teach Privacy, Daniel Solove’s site:

I’m delighted to announce that the final published version of my article, Privacy Harms, is now out in print!
Privacy Harms, 101 B.U. L. Rev. 793 (2022) (with Danielle Keats Citron)
Abstract:
The requirement of harm has significantly impeded the enforcement of privacy law. In most tort and contract cases, plaintiffs must establish that they have suffered harm. Even when legislation does not require it, courts have taken it upon themselves to add a harm element. Harm is also a requirement to establish standing in federal court. In Spokeo v. Robins and TransUnion v. Ramirez, the U.S. Supreme Court ruled that courts can override congressional judgment about cognizable harm and dismiss privacy claims.
Caselaw is an inconsistent, incoherent jumble, with no guiding principles. Countless privacy violations are not remedied or addressed on the grounds that there has been no cognizable harm.
Courts struggle with privacy harms because they often involve future uses of personal data that vary widely. When privacy violations result in negative consequences, the effects are often small – frustration, aggravation, anxiety, inconvenience – and dispersed among a large number of people. When these minor harms are suffered at a vast scale, they produce significant harm to individuals, groups, and society. But these harms do not fit well with existing cramped judicial understandings of harm.
This article makes two central contributions. The first is the construction of a typology for courts to understand harm so that privacy violations can be tackled and remedied in a meaningful way. Privacy harms consist of various different types, which to date have been recognized by courts in inconsistent ways. Our typology of privacy harms elucidates why certain types of privacy harms should be recognized as cognizable.
The second contribution is providing an approach to when privacy harm should be required. In many cases, harm should not be required because it is irrelevant to the purpose of the lawsuit. Currently, much privacy litigation suffers from a misalignment of enforcement goals and remedies. We contend that the law should be guided by the essential question: When and how should privacy regulation be enforced? We offer an approach that aligns enforcement goals with appropriate remedies.

You can download the article, for free, at https://ssrn.com/abstract=3782222. Once again, this site is grateful for all the free resources Dan Solove has made available to privacy law scholars, professionals in the privacy space, and interested members of the public.



(Related)

https://www.pogowasright.org/announce-fight-for-privacy-protecting-dignity-identity-and-love-in-the-digital-age-citron/

Announce: Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age (Citron)

Law professor Danielle Keats Citron’s new book is out.

Danielle has done such important work in the privacy space, tackling thorny issues like privacy harms (in collaboration with Daniel Solove), as well as her own work on issues such as stalking and harassment in cyberspace, and the role of state attorneys general in promoting and enforcing privacy-protective legislation. I look forward to reading this newest book by her.

One of the thorniest issues her work raises is what some call “content moderation” and others call “censorship” on social media platforms. If you’ve read Jeff Kosseff’s work on anonymous speech and on Section 230, you will recognize where Kosseff and Citron disagree, but both are well worth reading and considering if you are interested in privacy and protecting it.





We’ve been doing it all wrong?

https://arxiv.org/abs/2204.05151

Metaethical Perspectives on 'Benchmarking' AI Ethics

Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research and have been developed for a variety of tasks ranging from question answering to facial recognition. An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system. In this paper, drawing upon research in moral philosophy and metaethics, we argue that it is impossible to develop such a benchmark. As such, alternative mechanisms are necessary for evaluating whether an AI system is 'ethical'. This is especially pressing in light of the prevalence of applied, industrial AI research. We argue that it makes more sense to talk about 'values' (and 'value alignment') rather than 'ethics' when considering the possible actions of present and future AI systems. We further highlight that, because values are unambiguously relative, focusing on values forces us to consider explicitly what the values are and whose values they are. Shifting the emphasis from ethics to values therefore gives rise to several new ways of understanding how researchers might advance research programmes for robustly safe or beneficial AI. We conclude by highlighting a number of possible ways forward for the field as a whole, and we advocate for different approaches towards more value-aligned AI research.





A book worth reading?

https://www.degruyter.com/document/isbn/9781479812547/html?lang=en

Virtual Searches

A host of technologies—among them digital cameras, drones, facial recognition devices, night-vision binoculars, automated license plate readers, GPS, geofencing, DNA matching, datamining, and artificial intelligence—have enabled police to carry out much of their work without leaving the office or squad car, in ways that do not easily fit the traditional physical search and seizure model envisioned by the framers of the Constitution. Virtual Searches develops a useful typology for sorting through this bewildering array of old, new, and soon-to-arrive policing techniques. It then lays out a framework for regulating their use that expands the Fourth Amendment’s privacy protections without blindly imposing its warrant requirement, and that prioritizes democratic over judicial policymaking.

The coherent regulatory regime developed in Virtual Searches ensures that police are held accountable for their use of technology without denying them the increased efficiency it provides in their efforts to protect the public. Whether policing agencies are pursuing an identified suspect, constructing profiles of likely perpetrators, trying to find matches with crime scene evidence, collecting data to help with these tasks, or using private companies to do so, Virtual Searches provides a template for ensuring their actions are constitutionally legitimate and responsive to the polity.





For my students?

https://ieeexplore.ieee.org/abstract/document/9755237

Algorithm Auditing: Managing the Legal, Ethical, and Technological Risks of Artificial Intelligence, Machine Learning, and Associated Algorithms

Algorithms are becoming ubiquitous. However, companies are increasingly alarmed about their algorithms causing major financial or reputational damage. A new industry is envisaged: auditing and assurance of algorithms with the remit to validate artificial intelligence, machine learning, and associated algorithms.