Saturday, April 08, 2023

 

Sturm College of Law, University of Denver

Feminine Technology and Reproductive Privacy after Dobbs

April 21, 2023 - 10:00 AM to 1:00 PM

Please register here: Spring 2023 Privacy Seminar





A most convoluted topic…

https://www.pogowasright.org/texas-judge-invalidates-f-d-a-approval-of-the-abortion-pill-mifepristone/

Texas Judge Invalidates F.D.A. Approval of the Abortion Pill Mifepristone

Pam Belluck reports:

A federal judge in Texas issued a preliminary ruling invalidating the Food and Drug Administration’s 23-year-old approval of the abortion pill mifepristone, an unprecedented order that — if it stands through court challenges — could make it harder for patients to get abortions in states where abortion is legal, not just in those trying to restrict it.
The drug will continue to be available at least in the short term, since the Texas judge, Matthew J. Kacsmaryk, stayed his own order for seven days to give the F.D.A. time to ask an appeals court to intervene.
Less than an hour after Judge Kacsmaryk’s ruling, a judge in Washington state issued a ruling that directly contradicted the Texas decision, ordering the F.D.A. to make no changes to the availability of mifepristone.

Read more at The New York Times.





Their programming does not allow an “I don’t know” response. I would consider that a major failure.

https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

Why ChatGPT and Bing Chat are so good at making things up

Over the past few months, AI chatbots like ChatGPT have captured the world's attention due to their ability to converse in a human-like way on just about any subject. But they come with a serious drawback: They can present convincing false information easily, making them unreliable sources of factual information and potential sources of defamation.

Why do AI chatbots make things up, and will we ever be able to fully trust their output? We asked several experts and dug into how these AI models work to find the answers.
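The mechanics behind the "no I don't know" complaint above are simple enough to sketch. A language model has no notion of knowing or not knowing; it assigns a probability to every token in its vocabulary and samples the next one, with no abstain option. A toy illustration in Python (the vocabulary, scores, and temperature are invented for illustration, not taken from any real model):

    import math, random

    def softmax(logits, temperature=1.0):
        # Scale logits by temperature, then normalize into probabilities.
        exps = [math.exp(x / temperature) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Toy next-token scores after a prompt like "The capital of Australia is".
    vocab  = ["Canberra", "Sydney", "Melbourne", "Vienna"]
    logits = [2.0, 1.6, 0.8, -1.0]  # invented numbers for illustration

    probs = softmax(logits, temperature=0.9)
    token = random.choices(vocab, weights=probs, k=1)[0]
    print(token)  # Sometimes "Sydney": fluent, confident, and wrong.

    # Note there is no branch that returns "I don't know": sampling always
    # yields some token, which is why the chatbot always answers something.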



Friday, April 07, 2023

I worry that eventually robots will make all the rules.

https://www.brookings.edu/research/robotic-rulemaking/

Robotic rulemaking

Rulemaking by federal agencies is a very text-intensive process, both in terms of writing the rules themselves, which express not only the law but also the agencies’ rationales for their regulatory choices, and in terms of the public comments, which arrive almost exclusively as text. How might generative AI intersect with rulemaking? In this essay, we work through some use cases for generative AI in the rulemaking process, for better and for worse, both for the public and federal agencies.



(Related) Even if the AI is better at war than the admirals?

https://gcaptain.com/us-navy-admiral-says-ai-warships-must-obey/

US Navy Admiral Says AI Warships ‘Must Obey’

This week marked significant AI-related announcements for the US Navy at the annual Sea Air Space conference. The Navy’s top admiral, Chief of Naval Operations Mike Gilday, announced increased investments in artificial intelligence software and autonomous warships. Meanwhile, Marine Corps General Karsten Heckl said the Corps’ Warfighting Lab is exploring the integration of AI or autonomy “everywhere”.

Military jargon such as “force multiplier” and “game-changing technology” was abundant, but Vice Admiral Scott Conn’s insistence that AI “must obey” stood out as the most powerful statement.

During a session moderated by Defense News journalist Megan Eckstein, Vice Admiral Conn, Deputy Chief of Naval Operations, explained how the US Navy is using technology to simultaneously engage multiple fleets and achieve various objectives. He highlighted that AI is transforming not only warfighting but also addressing long-standing, mundane issues faced by commanders.







Interesting. Chasing down a surveillance satellite in order to surveil it.

https://techcrunch.com/2023/04/06/true-anomaly-wants-to-train-space-warfighters-with-spy-satellites/

True Anomaly wants to train space warfighters with spy satellites

Colorado-based True Anomaly was founded last year by a quartet of ex-Space Force members. The company has set out to supply the Pentagon with defensive tech to protect American assets in space, and to conduct recon on enemy spacecraft. The startup has developed a technology stack that includes training software and “autonomous orbital pursuit vehicles” that will be able to collect video and other data on objects in space.





Interesting despite the Forrest Gump title.

https://www.pogowasright.org/article-data-is-what-data-does-regulating-use-harm-and-risk-instead-of-sensitive-data/

Article: Data Is What Data Does: Regulating Use, Harm, and Risk Instead of Sensitive Data

Daniel J. Solove has posted a draft of a new article and welcomes feedback.

Abstract:
Heightened protection for sensitive data is becoming quite trendy in privacy laws around the world. Originating in European Union (EU) data protection law and included in the EU’s General Data Protection Regulation (GDPR), sensitive data singles out certain categories of personal data for extra protection. Commonly recognized special categories of sensitive data include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, sexual orientation and sex life, biometric data, and genetic data.
Although heightened protection for sensitive data appropriately recognizes that not all situations involving personal data should be protected uniformly, the sensitive data approach is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. The borderlines of many categories are so blurry that they are useless. Moreover, it is easy to use non-sensitive data as a proxy for certain types of sensitive data.
Personal data is akin to a grand tapestry, with different types of data interwoven to a degree that makes it impossible to separate out the strands. With Big Data and powerful machine learning algorithms, most non-sensitive data can give rise to inferences about sensitive data. In many privacy laws, data that can give rise to inferences about sensitive data is also protected as sensitive data. Arguably, then, nearly all personal data can be sensitive, and the sensitive data categories can swallow up everything. As a result, most organizations are currently processing a vast amount of data in violation of the laws.
This Article argues that the problems with the sensitive data approach make it unworkable and counterproductive — as well as expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake — they embrace the idea that the nature of personal data is a sufficiently useful focal point for the law. But nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does. Personal data is harmful when its use causes harm or creates a risk of harm. It is not harmful if it is not used in a way to cause harm or risk of harm.
To be effective, privacy law must focus on use, harm, and risk rather than on the nature of personal data. The implications of this point extend far beyond sensitive data provisions. In many elements of privacy laws, protections should be based on the use of personal data and proportionate to the harm and risk involved with those uses.

Solove, Daniel J., Data Is What Data Does: Regulating Use, Harm, and Risk Instead of Sensitive Data (January 11, 2023). 118 Northwestern University Law Review (Forthcoming), Available at SSRN: https://ssrn.com/abstract=4322198 or http://dx.doi.org/10.2139/ssrn.4322198

You can download the article for free at the SSRN link above.





What did I consent to?

https://www.pogowasright.org/article-murky-consent-an-approach-to-the-fictions-of-consent-in-privacy-law/

Article: Murky Consent: An Approach to the Fictions of Consent in Privacy Law

On SSRN, this article by Daniel J. Solove:

Abstract
Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. Regarding privacy, consent authorizes and legitimizes a wide range of data collection and processing.
There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates, where organizations post a notice of their privacy practices and then people are deemed to have consented if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.
Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to make decisions about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.
In this Article, I contend that in most circumstances, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.
Abandoning consent entirely in most situations involving privacy would involve the government making most decisions regarding personal data. But this approach would be problematic, as it would involve extensive government control and micromanaging, and it would curtail people’s autonomy. The law should allow space for people’s autonomy over their decisions, even when those decisions are deeply flawed. The law should thus strive to reach a middle ground, providing a sandbox for free play but with strong guardrails to protect against harms.
Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Instead of providing extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. This would allow for a degree of individual autonomy but with powerful guardrails to limit exploitative and harmful behavior by the organizations collecting and using personal data. In the Article, I propose some key guardrails to use with murky consent.

Solove, Daniel J., Murky Consent: An Approach to the Fictions of Consent in Privacy Law (January 22, 2023). 104 Boston University Law Review (Forthcoming), Available at SSRN: https://ssrn.com/abstract=4333743 or http://dx.doi.org/10.2139/ssrn.4333743

Download the article for free at the SSRN link above.





...Because it can.

https://www.euronews.com/next/2023/04/07/why-does-chatgpt-make-things-up-australian-mayor-prepares-first-defamation-lawsuit-over-it

Why does ChatGPT make things up? Australian mayor prepares first defamation lawsuit over its content

ChatGPT has caught the world's attention with its ability to instantly generate human-sounding text, jokes and poems, and even pass university exams.

Another of the artificial intelligence (AI) chatbot's characteristics, however, is its tendency to make things up entirely - and it could get OpenAI, the company behind it, in legal trouble.

A regional Australian mayor said this week he may sue OpenAI if it does not correct ChatGPT's false claims that he served time in prison for bribery. If he follows through, it would likely be the first defamation lawsuit against the service, which was launched in November last year.





Does that include access to the hardware required to use it?

https://www.bespacific.com/the-socio-economic-argument-for-the-human-right-to-internet-access/

The Socio-Economic Argument for the Human Right to Internet Access

The Socio-Economic Argument for the Human Right to Internet Access, Politics, Philosophy & Economics (2023). DOI: 10.1177/1470594X231167597

Phys.org: “People around the globe are so dependent on the internet to exercise socioeconomic human rights such as education, health care, work, and housing that online access must now be considered a basic human right, a new study reveals. Particularly in developing countries, internet access can make the difference between people receiving an education, staying healthy, finding a home, and securing employment—or not. Even if people have offline opportunities, such as accessing social security schemes or finding housing, they are at a comparative disadvantage to those with Internet access. Publishing his findings today in Politics, Philosophy & Economics, Dr. Merten Reglitz, Lecturer in Global Ethics at the University of Birmingham, calls for a standalone human right to internet access—based on it being a practical necessity for a range of socioeconomic human rights.”



Thursday, April 06, 2023

New things to argue about. Let’s create and register AI images of celebrities, and then we can license them back.

https://www.prnewswire.com/news-releases/metaphysic-ceo-tom-graham-becomes-first-person-to-file-for-copyright-registration-of-ai-likeness-creating-new-digital-property-rights-301790247.html

Metaphysic CEO Tom Graham Becomes First Person to File for Copyright Registration of AI Likeness Creating New Digital Property Rights

Tom Graham, CEO of generative AI pioneer Metaphysic, made history today as the first person to submit his AI likeness for copyright registration with the U.S. Copyright Office. As the industry leader in creating hyperreal content powered by generative AI, Metaphysic champions individuals’ ownership and control of their AI likenesses and biometric data. By leveraging legal institutions and existing law and regulation, Graham, through this submission, demonstrates the increasingly fine line between reality and computer-generated media as he and Metaphysic seek to create, for the first time, a new bundle of intellectual property rights that must be available to any individual in the future.





What field of study has not yet attracted a ‘small’ LLM? Let’s get there first!

https://www.marktechpost.com/2023/04/05/meet-bloomberggpt-a-large-language-model-with-50-billion-parameters-that-has-been-trained-on-a-variety-of-financial-data/

Meet BloombergGPT: A Large Language Model With 50 Billion Parameters That Has Been Trained on a Variety of Financial Data

... Recently, models trained using solely domain-specific data outperformed general-purpose LLMs on tasks inside particular disciplines, such as science and medicine, despite being substantially smaller. These results encourage the further creation of domain-specific models. NLP technologies play an increasingly significant role in the vast and expanding field of financial technology. Sentiment analysis, named entity identification, news categorization, and question-answering are a few of the financial NLP tasks. A domain-specific system is necessary because of the complexity and specialized language of the financial domain, even if the range of tasks is similar to those found in standard NLP benchmarks. It would be beneficial to have an LLM focused on the financial domain for all the reasons generative LLMs are appealing in general: few-shot learning, text generation, conversational systems, etc.
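The “few-shot learning” the excerpt mentions simply means packing labeled examples into the prompt itself. A toy sketch of a few-shot prompt for one of the financial NLP tasks listed, sentiment analysis (the headlines, labels, and format are invented for illustration; nothing here comes from BloombergGPT itself):

    # Build a few-shot prompt for financial sentiment classification.
    # The example headlines and labels are invented for illustration.
    EXAMPLES = [
        ("Acme Corp beats Q1 earnings estimates, raises guidance", "positive"),
        ("Regulator opens probe into Acme Corp accounting", "negative"),
        ("Acme Corp to hold annual shareholder meeting in June", "neutral"),
    ]

    def build_prompt(headline):
        parts = ["Classify the sentiment of each financial news headline."]
        for text, label in EXAMPLES:
            parts.append(f"Headline: {text}\nSentiment: {label}")
        parts.append(f"Headline: {headline}\nSentiment:")
        return "\n\n".join(parts)

    print(build_prompt("Acme Corp shares slide on weak revenue outlook"))
    # The assembled prompt goes to the model, which continues the pattern
    # by emitting a label such as "negative".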





Perspective.

https://www.bespacific.com/the-ethics-of-chatgpt-a-legal-writing-and-ethics-professors-perspective/

The Ethics of ChatGPT: A Legal Writing and Ethics Professor’s Perspective

Romig, Jennifer Murphy, The Ethics of ChatGPT: A Legal Writing and Ethics Professor’s Perspective (February 18, 2023). Emory Legal Studies Research Paper, Available at SSRN: https://ssrn.com/abstract=4373550 or http://dx.doi.org/10.2139/ssrn.4373550

“Teaching law students requires meeting them where they are, envisioning what they can become, and offering appropriate education and advice for their developmental journey. ChatGPT is not a person, but this essay will to some extent personify ChatGPT in discussing how it might be developed before it is used for client work. ChatGPT and A.I. text generators don’t “learn”, but they can be programmed, prompted, and trained. Everyone realizes their potential to become pervasive and ubiquitous in the legal field and, indeed, in every field that relies on the production and exchange of words. But pervasive and ubiquitous does not necessarily mean ethical or effective. So far in using ChatGPT, I have seen a broad range of embarrassingly wrong output and really promising output. My comments here are offered in the spirit of what I’d like to see from ChatGPT.”





Resource.

https://www.bespacific.com/nist-trustworthy-responsible-artificial-intelligence-resource-center/

NIST Trustworthy & Responsible Artificial Intelligence Resource Center

“Welcome to the NIST Trustworthy & Responsible Artificial Intelligence Resource Center (AIRC). The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. AIRC supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and accompanying Playbook and will grow with enhancements to enable an interactive, role-based experience providing access to a wide range of relevant AI resources.”



Wednesday, April 05, 2023

Will this law cut out the obfuscation we see so often?

https://www.makeuseof.com/what-is-circia-and-how-does-cybersecurity-law-impact-you/

What Is CIRCIA and How Does This Cybersecurity Law Impact You?

The Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) is a federal law mandating “covered entities” that deal with critical infrastructure to report cyber incidents to the Cybersecurity and Infrastructure Security Agency (CISA).

If you encounter a cyberattack, you might want to share your experience with your security team or anyone else who can help prevent a recurrence. Until recently, sharing such information with a government agency was optional. CIRCIA now requires organizations and chief information security officers (CISOs) to report cyber incidents to CISA for a more secure cyber environment.

Signed into law by President Joe Biden in 2022, CIRCIA stipulates that you must report all cyber incidents not more than 72 hours after you become privy to them. Should you pay a ransom to attackers, you must report it within 24 hours.





Will Canada ban ChatGPT?

https://telecom.economictimes.indiatimes.com/news/internet/canada-opens-investigation-into-ai-firm-behind-chatgpt/99261028

Canada opens investigation into AI firm behind ChatGPT

The investigation by the Office of the Privacy Commissioner into OpenAI was opened in response to a "complaint alleging the collection, use and disclosure of personal information without consent," the agency said.





Perhaps I could sell a course on “talking to ChatGPT.”

https://www.zdnet.com/article/do-you-like-asking-chatgpt-questions-you-could-get-paid-a-lot-for-it/

Do you like asking ChatGPT questions? You could get paid (a lot) for it

… If you have ever asked ChatGPT to help you with a task, you have written a ChatGPT prompt. Lucky for you, many companies are looking to hire people with that skill to optimize their company's AI usage and results. Most importantly, they are offering generous pay.
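The “prompt” being paid for is just the text sent to the model, and the job is largely iterating on it. A minimal sketch using the OpenAI Python package as it existed in early 2023 (the ChatCompletion interface; the system message, question, and temperature are illustrative choices, and the API key is a placeholder):

    import openai

    openai.api_key = "sk-..."  # replace with your own key

    # A prompt engineer mostly iterates on these messages and parameters
    # until the model's output is reliably useful.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a careful research assistant. Answer in "
                        "plain English and say 'I don't know' when unsure."},
            {"role": "user",
             "content": "Summarize the GDPR's consent rules in two sentences."},
        ],
        temperature=0.2,  # lower temperature favors consistent answers
    )
    print(response.choices[0].message.content)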



(Related) The course might even pay for itself!

https://www.theregister.com/2023/04/04/chatgpt_exfiltration_tool/

Can ChatGPT bash together some data-stealing code? With the right prompts, sure

A Forcepoint staffer has blogged about how he used ChatGPT to craft some code that exfiltrates data from an infected machine. At first, it sounds bad, but in reality, it's nothing an intermediate or keen beginner programmer couldn't whack together themselves anyway.

His experiment does, to some extent, highlight how the code-suggesting unreliable chatbot, built by OpenAI and pushed by Microsoft, could be used to cut some corners in malware development or automate the process.

It also shows how someone, potentially one without any coding experience, can make the bot jump its guardrails, which are supposed to prevent it from outputting potentially dangerous code, and have the AI service put together an undesirable program.



 

Tuesday, April 04, 2023

I’m sure someone has already asked ChatGPT what it thinks of all this…

https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/

Italy's ChatGPT ban attracts EU privacy regulators

Italy's move to temporarily ban ChatGPT has inspired other European countries to study if harsher measures are needed to rein in the wildly popular chatbots and whether to coordinate such actions.

"The points they raise are fundamental and show that GDPR does offer tools for the regulators to be involved and engaged into shaping the future of AI," said Dessislava Savova, partner at law firm Clifford Chance.

Privacy regulators in France and Ireland have reached out to counterparts in Italy to find out more about the basis of the ban. Germany could follow in Italy's footsteps by blocking ChatGPT over data security concerns, the German commissioner for data protection told the Handelsblatt newspaper.





Eventually this will show up here as prep for war, or translated into hacker tools.

https://www.csoonline.com/article/3692534/a-report-from-ukraine-s-cybersecurity-service-reveals-insight-into-what-the-country-has-been-facing.html#tk.rss_all

Views of a hot cyberwar — the Ukrainian perspective on Russia’s online assault

In a recent report issued by the State Service of Special Communications and Information Protection of Ukraine (SSSCIP) titled “Russia’s Cyber Tactics: Lessons Learned in 2022 — SSSCIP analytical report on the year of Russia’s full-scale cyberwar against Ukraine,” readers obtained a 10,000-foot overview of what a hot cyberwar entails from the Ukrainian perspective.

The SSSCIP report highlights the major targets, the coordination between government advanced persistent threat (APT) groups and “hacktivists,” espionage operations and influence operations, and the Ukrainian analysis and discoveries.





I don’t think this is right. Perhaps they (the lawyers) are asking the wrong questions.

https://www.bespacific.com/what-are-the-top-5-areas-in-legal-work-that-cannot-be-replaced-by-ai/

What are the top 5 areas in legal work that cannot be replaced by AI?

LinkedIn: “As a follow-up to our previous article, which asked ChatGPT the same question, here is Google Bard’s response: Here are the top 5 areas in legal work that cannot be replaced by AI [condensed answers]

    1. Legal research.

    2. Legal analysis.

    3. Legal writing.

    4. Legal negotiation.

    5. Legal advocacy.

In conclusion, AI can be a valuable tool for lawyers, but it cannot replace the human judgment and skills that are necessary to provide sound legal advice and representation.”





Perspective.

https://www.sfchronicle.com/tech/article/ai-artificial-intelligence-report-stanford-17869558.php

AI has already changed the world. This report shows how

Artificial intelligence software may not control every aspect of life (yet), but the powerful technology can generate seemingly organic text and photos while also controlling fusion reactors and designing the chips to power its own silicon brain.

A report from the Stanford Institute for Human-Centered Artificial Intelligence shows just how much the technology is in our lives, from its at-times significant carbon footprint to the vast amount of investment flooding into the industry, and even how different kinds of people feel about the powerful technology.

Called the AI Index, the report is assembled by an independent cross-discipline group of AI experts from academia and industry. Here are some of the key findings:

https://hai.stanford.edu/news/2023-state-ai-14-charts





Research tool. Mr. Zillman collects everything you can imagine. (Now I can find out what happened to Aunt Edith after she got out of prison!)

https://www.bespacific.com/2023-finding-people-miniguide/

2023 Finding People MiniGuide

Via LLRX, 2023 Finding People MiniGuide. This guide by Marcus P. Zillman is a selected list of free and fee-based (some require subscriptions) people-finding resources from a range of providers. A significant number of free sources on this subject are drawn from public records obtained by a group of companies who initially offer free information to establish your interest, at which point a more extensive report requires a fee. It is important to note there can be many errors in these data, including the inability to correctly de-duplicate individuals with the same common names. Also note that each service targets a different mix of identifying data, such as: name, address, date of birth, phone numbers, email addresses, relatives, education, employment, criminal records, social media accounts, income. As we conduct research throughout the day, it is useful to employ both impromptu and planned searches about the individuals that are referenced.



Monday, April 03, 2023

Interesting.

https://www.trendmicro.com/en_us/research/23/d/unpacking-the-structure-of-modern-cybercrime-organizations--.html

Unpacking the Structure of Modern Cybercrime Organizations

Our research paper titled “Inside the Halls of a Cybercrime Business” closely examines small, medium, and large criminal groups based on cases from law enforcement arrests and insider information. We also juxtapose each of these to traditional businesses of comparable size to obtain relevant insights about these criminal organizations.





ChatGPT or die?

https://www.cnbc.com/2023/04/02/why-college-professors-are-adopting-chatgpt-ai-as-quickly-as-students.html

Why some college professors are adopting ChatGPT AI as quickly as students

It’s no longer news that one of the first professional sectors threatened by the rapid adoption of ChatGPT and generative AI is education – universities and colleges around the country convened emergency meetings to discuss what to do about the risk of students using AI to cheat on their work. There’s another side to that evolving AI story. Recent research from professors at the University of Pennsylvania’s Wharton School, New York University and Princeton suggests that educators should be just as worried about their own jobs.

In an analysis of professions “most exposed” to the latest advances in large language models like ChatGPT, eight of the top 10 are teaching positions.

“When we ran our analysis, I was surprised to find that educational occupations come out close to the top in many cases,” said Robert Seamans, co-author of the new research study and professor at NYU.

Post-secondary teachers in English language and literature, foreign language, and history topped the list among educators.





Tools & Techniques.

https://www.bespacific.com/how-to-use-ai-to-do-practical-stuff-a-new-guide/

How to use AI to do practical stuff: A new guide

People often ask me how to use AI. Here’s an overview with lots of links. – Ethan Mollick: “We live in an era of practical AI, but many people haven’t yet experienced it, or, if they have, they might have wondered what the big deal is. Thus, this guide. It is a modified version of one I put out for my students earlier in the year, but a lot has changed. It is an overview of ways to get AI to do practical things…”





Tools & Techniques. Will these change much when we start using Bard?

https://www.makeuseof.com/tag/best-google-search-tips-pdf/

40+ Google Search Operators and Widgets
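Several of the operators in the guide can be combined in a single query. A quick sketch in Python (the search terms are made up; site:, filetype:, quoted phrases, and minus-sign exclusion are standard Google syntax):

    from urllib.parse import quote_plus

    # Combine several standard Google search operators in one query:
    #   "..."      exact-phrase match
    #   site:      restrict results to one domain
    #   filetype:  restrict results to a file format
    #   -term      exclude a term
    query = '"facial recognition" site:pogowasright.org filetype:pdf -draft'
    url = "https://www.google.com/search?q=" + quote_plus(query)
    print(url)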



Sunday, April 02, 2023

I robot! You human?

https://journals.rudn.ru/law/article/view/34061

Theoretical aspects of identifying legal personality of artificial intelligence: cross-national analysis of the laws of foreign countries

The research analyzes the issues involved in determining the legal status of artificial intelligence. As artificial intelligence (AI) systems become more sophisticated and play an increasingly important role in society, arguments that they should have some form of legal personality are becoming increasingly relevant. The research argues that most legal systems could create a new category of legal persons. It also examines innovative trends in law-enforcement practice, along with the question of establishing general provisions on liability for criminal acts committed due to technical failures of artificial intelligence without any human participation or intervention. The article presents a philosophical-legal and ontological analysis of artificial intelligence, both in its current state and in its prospective future forms. It outlines a comparative analysis of the laws regulating artificial intelligence in a number of foreign countries, along with a retrospective analysis of the historical stages in the development of that regulation.





A new Right?

https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1746820&dswid=4124

Chapter 2. To be a face in the crowd: Surveillance, facial recognition, and a right to obscurity

This chapter examines how facial recognition technology reshapes the philosophical debate over the ethics of video surveillance. When video surveillance is augmented with facial recognition, the data collected is no longer anonymous, and the data can be aggregated to produce detailed psychological profiles. I argue that – as this non-anonymous data of people’s mundane activities is collected – unjust risks of harm are imposed upon individuals. In addition, this technology can be used to catalogue all who publicly participate in political, religious, and socially stigmatised activities, and I argue that this would undermine central interests of liberal democracies. I examine the degree to which the interests of individuals and the societal interests of liberal democracies to maintain people’s obscurity while in public coincide with privacy interests, as popularly understood, and conclude that there is a practical need to articulate a novel right to obscurity to protect the interests of liberal democratic societies.





Good questions should lead to good answers.

https://link.springer.com/article/10.1007/s11948-023-00433-5

Machine Ethics: Do Androids Dream of Being Good People?

Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set of rules, either a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics, but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or more generally, what is a worthy being); what is human intentional acting; and how are intentional actions and their consequences morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and what it means to lead a free and responsible ethical life, that is, being good people beyond merely “following a moral code”. In the end we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step to recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.





What would Napoleon do?

https://verfassungsblog.de/big-brother-is-watching-the-olympic-games-and-everything-else-in-public-spaces/

Big Brother is Watching the Olympic Games – and Everything Else in Public Spaces

The French National Assembly is currently debating the law on the 2024 Olympic and Paralympic Games. Despite its name, the law has more to do with security than sports. In particular, Article 7 of the law creates a legal basis for algorithmic video surveillance (AVS), that is, video surveillance that relies on artificial intelligence to process the images and audio of video surveillance cameras in order to identify human beings, objects, or specific situations. In other words, video surveillance cameras in France’s public spaces would now be able to identify you and detect whether your behaviour is suspicious. Admittedly, this is already the case in several French cities (for instance in Toulouse since 2016) and in some railway services, but without any legal basis.

France is infamous for its attachment to surveillance, with the highest administrative court even deciding to ignore a CJEU ruling concerning its mass surveillance measures on the ground that the protection of national security is part of the “national identity” of the country. However, Article 7 represents a major step in the direction of general biometric mass surveillance and should be of concern to everyone. In fact, the risks posed by AVS are so high that the current discussions on the European Regulation on Artificial Intelligence envision a formal ban.

The legal basis for AVS provided by Article 7 of the new French law is especially worrisome from two perspectives. First, it would legitimise a practice that is in violation of France’s human rights obligations. Second, adopting this law would make France the first EU member state to grant a legal basis to algorithmic video surveillance, thus creating a worrisome precedent and normalising biometric mass surveillance.





Perspective.

https://bridges.monash.edu/articles/journal_contribution/Taming_the_Electronic_Genie_Can_Law_Regulate_the_Use_of_Public_and_Private_Surveillance_/22337425

Taming the Electronic Genie: Can Law Regulate the Use of Public and Private Surveillance?

The fear that our social and legal institutions are being subtly but inexorably eroded by the growth in surveillance is as common in academic literature as it is in the popular imagination. While large corporations harness the powers of Big Data for the wholesale harvesting of personal data, the government utilises its coercive powers to conduct increasingly intrusive surveillance of members of the public. The article considers the major issues arising from private surveillance, particularly the breaches of privacy inherent in the collection or harvesting of personal information. It then analyses selected issues arising from public surveillance, including data retention and sharing by government, the use of surveillance techniques such as facial recognition technology in criminal investigation, and the evocation of national security concerns to justify invasions of privacy. It considers what legal regime is best suited to regulate mass public and private surveillance, including the tort of privacy, the adoption of international regimes, such as the General Data Protection Regulation, and the expansion of fiduciary principles. We argue that the concept of ‘information fiduciary’ should be added to the current range of measures designed to ensure the accountability of both public and private data collectors.