Saturday, September 09, 2023

Cocktails with the Terminator?

https://philpapers.org/rec/NYHSRA

Social Robots and Society

Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well as the definition of humanoid robots. A key notion in this context is the idea of anthropomorphism: the human tendency to attribute human qualities, not only to our fellow human beings, but also to parts of nature and to technologies. This tendency to anthropomorphize technologies by responding to and interacting with them as if they have human qualities is one of the reasons why social robots (in particular social robots designed to look and behave like human beings) can be socially disruptive. As is explained in the chapter, while some ethics researchers believe that anthropomorphization is a mistake that can lead to various forms of deception, others — including both ethics researchers and social roboticists — believe it can be useful or fitting to treat robots in anthropomorphizing ways. The chapter explores that disagreement by, among other things, considering recent philosophical debates about whether social robots can be moral patients, that is, whether it can make sense to treat them with moral consideration. Where one stands on this issue will depend either on one’s views about whether social robots can have, imitate, or represent morally relevant properties, or on how people relate to social robots in their interactions with them. Lastly, the chapter urges that the ethics of social robots should explore intercultural perspectives, and highlights some recent research on Ubuntu ethics and social robots.





I’m always amazed to find hacker articles in plain sight… (Didn’t this give the FBI heartburn a couple of years ago?)

https://www.makeuseof.com/how-to-unlock-iphone-without-passcode/

How to Unlock Your iPhone Without a Passcode in 6 Ways

There are few things more frustrating than being locked out of your phone with no way to get back in. Luckily, there’s still hope. If you need to know how to unlock your iPhone without a passcode, then here are several different approaches you can try.





I’m not so sure the ‘easy stuff’ is that easy.

https://www.cio.com/article/651796/governance-for-responsible-ai-the-easy-things-and-the-hard-ones.html

Governance for responsible AI: The easy things and the hard ones

Companies developing and deploying AI solutions need robust governance to ensure they’re used responsibly. But what exactly should they focus on? Based on a recent DataStax panel discussion, “Enterprise Governance in a Responsible AI World,” there are a few hard and easy things organizations should pay attention to when designing governance to ensure the responsible use of AI.





Hey! It can’t hurt.

https://analyticsindiamag.com/top-8-courses-certifications-on-ai-ethics/

Top 8 Courses & Certifications on AI Ethics

While AI has the potential to address the most complex global issues, it is crucial to use it responsibly and take into account the negative consequences of its application to mitigate harm. When companies jump onto the bandwagon of embracing emerging technologies without considering the broader social, economic, cultural, and political environments, they may jeopardise privacy and security while worsening existing inequalities. So, let’s delve into some of the top courses and certification programs to learn about ethics in AI.



Friday, September 08, 2023

Imagine calling an AI as a witness...

https://www.wired.com/story/icc-cyberwar-crimes/

The International Criminal Court Will Now Prosecute Cyberwar Crimes

And the first case on the docket may well be Russia’s cyberattacks against civilian critical infrastructure in Ukraine.

For years, some cybersecurity defenders and advocates have called for a kind of Geneva Convention for cyberwar, new international laws that would create clear consequences for anyone hacking civilian critical infrastructure, like power grids, banks, and hospitals. Now the lead prosecutor of the International Criminal Court at the Hague has made it clear that he intends to enforce those consequences—no new Geneva Convention required. Instead, he has explicitly stated for the first time that the Hague will investigate and prosecute any hacking crimes that violate existing international law, just as it does for war crimes committed in the physical world.

In a little-noticed article released last month in the quarterly publication Foreign Policy Analytics, the International Criminal Court’s lead prosecutor, Karim Khan, spelled out that new commitment: His office will investigate cybercrimes that potentially violate the Rome Statute, the treaty that defines the court’s authority to prosecute illegal acts, including war crimes, crimes against humanity, and genocide.





Rather long article. Rather interesting.

https://www.pogowasright.org/montanas-new-genetic-privacy-law-caps-off-ten-years-of-innovative-state-privacy-protections/

Montana’s New Genetic Privacy Law Caps Off Ten Years of Innovative State Privacy Protections

Jennifer Lynch of EFF writes:

Over the last 10+ years, Montana has, with little fanfare or national attention, steadily pushed to protect its residents’ privacy interests through sensible laws that recognize the unique threats posed by new technologies. Now Montana has passed one of the nation’s most protective consumer genetic privacy laws—the Genetic Information Privacy Act. Could this law and the state’s bipartisan approach become a model for the rest of the country?

[...]

This article was originally published at EFF.





Still a long way from general adoption? Perhaps the ‘messing around’ people should lose their jobs first?

https://www.businessinsider.com/ai-chatgpt-in-workplace-used-for-fun-salesforce-survey-2023-9

ChatGPT skills could help land your next job — but most people are still using AI just for fun, a new survey finds

Knowing how to use OpenAI's ChatGPT may help you land your next job. Still, some users of the AI chatbot may not be taking it all that seriously, a new study from Salesforce suggests.

Earlier this month, the San Francisco-based cloud giant surveyed more than 4,000 people across the US, UK, Australia, and India — from Gen Zers to Boomers — on how they use generative AI technologies like ChatGPT, AI-art generator DALL-E, and any deep-learning model that is able to produce audio, code, simulations, and videos, a Salesforce spokesperson told Insider.

While the study found that many generative AI users are eager to use the technology for work purposes, the most popular response — chosen by 38% of all age groups — was that it's simply being used for "fun" or "messing around." (That compares to 17% of all age groups who said they're using AI for job searching.)



(Related)

https://www.reuters.com/technology/chatgpt-traffic-slips-again-third-month-row-2023-09-07/

Exclusive: ChatGPT traffic slips again for third month in a row

Worldwide desktop and mobile website visits to the ChatGPT website decreased by 3.2% to 1.43 billion in August, following approximately 10% drops from each of the previous two months. The amount of time visitors spent on the website has also been declining monthly since March, from an average of 8.7 minutes on site to 7 minutes on site in August.
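Out of curiosity, the quoted percentages are enough to roughly back out the earlier months’ totals. Below is a back-of-the-envelope sketch in Python; it assumes the 3.2% and “approximately 10%” figures are month-over-month declines in total visits, which is my reading of the excerpt rather than anything stated in the article.

```python
# Rough reconstruction of ChatGPT's recent monthly web traffic from the quoted percentages.
# Assumption (mine, not the article's): each figure is a month-over-month drop in total visits.

august_visits = 1.43e9        # stated: 1.43 billion visits in August
drops = [0.032, 0.10, 0.10]   # Aug vs. Jul (3.2%), Jul vs. Jun (~10%), Jun vs. May (~10%)
months = ["August", "July", "June", "May"]

estimates = {"August": august_visits}
visits = august_visits
for month, drop in zip(months[1:], drops):
    visits = visits / (1 - drop)   # undo the drop to estimate the prior month's visits
    estimates[month] = visits

for month in reversed(months):
    print(f"{month}: ~{estimates[month] / 1e9:.2f} billion visits")
```

Under those assumptions, July works out to roughly 1.48 billion visits and May to roughly 1.8 billion, which is consistent with the scale of decline the article describes.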



Thursday, September 07, 2023

I assume Google will still provide links to unlabeled sites, articles and ads?

https://www.politico.com/news/2023/09/06/google-ai-political-ads-00114266

Google to require disclosure of AI use in political ads

Starting in November, Google will mandate that all political advertisements label the use of artificial intelligence tools and synthetic content in their videos, images, and audio.





Building Skynet… Consider drones as a tool for domestic terrorism...

https://thehill.com/opinion/international/4188559-the-staggering-implications-of-ai-drone-warfare/

The staggering implications of AI drone warfare

On Aug. 30, a wave of Ukrainian drones struck deep into Russia, some flying more than 400 miles to damage two military planes in Pskov. The next day, drones constructed largely out of cardboard struck at a Kursk airfield. On Aug. 5 and 6, a Ukrainian sea drone attack incapacitated a Russian military transport and a tanker in the Black Sea.

Let me quote to you from a great book by Lawrence H. Keeley, “War Before Civilization”: “The most common form of combat employed in primitive warfare, but little used in formal civilized warfare, has been small raids or ambushes.” That’s the essence of war — to kill without being killed, which became almost impossible after the emergence of civilizations and armies.

A droneGPT changes this most fundamental factor back. You can once again kill without being killed. You can use a Skynet to do your bidding. Yes, you could previously do it with a Hellfire missile. But a Hellfire costs $150,000 apiece. This stuff can be built for a pittance in a shed. The implications are slow to come, but they are staggering.



(Related)

https://www.bloomberg.com/news/articles/2023-09-06/faa-clears-drones-for-longer-flights-opening-door-to-deliveries

FAA Clears Drones for Longer Flights, Opening Door to Deliveries

Two more companies have been granted approval to fly drones beyond the sight of ground operators in a key step that could eventually enable widespread package delivery and other commercial uses for the aerial devices.



Wednesday, September 06, 2023

Sure they will win or just sure they should fight?

https://www.wsj.com/finance/regulation/ftc-antitrust-suit-against-amazon-set-for-later-this-month-after-meeting-fails-to-resolve-impasse-c888700f?mod=followamazon

FTC Antitrust Suit Against Amazon Set for Later This Month After Meeting Fails to Resolve Impasse

Amazon.com officials haven’t offered concessions to the Federal Trade Commission in pursuit of a settlement over antitrust claims, paving the way for the regulator to file a lawsuit later this month, according to people familiar with the matter.

Top members of Amazon’s legal team had a video call with FTC officials on Aug. 15. The so-called last-rites meeting, which is often a final step before a court battle, was a chance for the technology giant to make its case to the regulator to head off a possible lawsuit that officials have been working on for many months.

During such meetings, companies have the opportunity to offer to pre-emptively change their business practices in order to avoid a lawsuit. But Amazon’s lawyers didn’t offer specific concessions, the people said.





The right tool for the job?

https://www.bespacific.com/how-to-choose-when-to-use-google-search-or-google-bard/

How to Choose When to Use Google Search or Google Bard

Tech Republic: “Google Bard, at first glance, seems similar to Google Search. Both offer a text input box. Both respond to keyword and natural language queries. Both draw on data from the internet in their responses. But Bard and Google differ in major ways. Google bills Bard as an experiment that “won’t always get it right” in contrast to the long-established Google Search, which seeks to “connect you to the most relevant, helpful information.” Bard supports a string of related queries, so you can ask additional, related questions, unlike Google Search, which responds to each query as a distinct search. To delve into the differences further, explore these TechRepublic articles I wrote about Bard and Google Search strategies. The following tutorial will help you determine whether Bard or Google Search is the tool best suited to serve your needs…”





As an old geezer, I find that articles like this catch my attention (for a few seconds), then I take a nap.

https://www.psypost.org/2023/09/older-adults-who-regularly-use-the-internet-have-half-the-risk-of-dementia-compared-to-non-regular-users-183597

Older adults who regularly use the internet have half the risk of dementia compared to non-regular users

A longitudinal study of a large group of older adults showed that regular internet users had approximately half the risk of dementia compared to their same-age peers who did not use the internet regularly. This difference remained even after controlling for education, ethnicity, sex, generation, and signs of cognitive decline at the start of the study. Participants using the internet between 6 minutes and 2 hours per day had the lowest risk of dementia. The study was published in the Journal of the American Geriatrics Society.
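The excerpt doesn’t say how the authors modeled dementia risk, but “half the risk … after controlling for” is the kind of result a covariate-adjusted survival analysis produces. The sketch below is purely illustrative: synthetic data, hypothetical column names, and a Cox proportional hazards model fit with the Python lifelines package; none of these details come from the study itself.

```python
# Illustrative only: a covariate-adjusted Cox proportional hazards model on synthetic data,
# showing what "half the risk after controlling for education, sex, etc." typically means
# in practice (a hazard ratio near 0.5 for the exposure of interest). Not the study's analysis.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
regular_internet = rng.integers(0, 2, n)      # 1 = regular internet user (hypothetical coding)
education_years = rng.normal(13, 3, n)
female = rng.integers(0, 2, n)
baseline_decline = rng.integers(0, 2, n)      # signs of cognitive decline at baseline

# Simulate time to dementia with a lower hazard for regular internet users
# (the -0.7 coefficient corresponds to a hazard ratio of about 0.5).
hazard = 0.02 * np.exp(-0.7 * regular_internet + 0.5 * baseline_decline - 0.02 * education_years)
time_to_event = rng.exponential(1 / hazard)
follow_up = np.minimum(time_to_event, 15)     # administrative censoring at 15 years
dementia = (time_to_event <= 15).astype(int)

df = pd.DataFrame({
    "follow_up_years": follow_up,
    "dementia": dementia,
    "regular_internet": regular_internet,
    "education_years": education_years,
    "female": female,
    "baseline_decline": baseline_decline,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="dementia")
print(cph.hazard_ratios_)   # the ratio for regular_internet should come out near 0.5
```

The point is only mechanical: “controlling for” the other columns means they enter the model alongside internet use, so the reported halving of risk is the adjusted hazard ratio for that one variable rather than a raw comparison of the two groups.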



Tuesday, September 05, 2023

To insure or not to insure, that is the question. The answer keeps changing.

https://www.cpomagazine.com/cyber-security/delinea-2023-state-of-cyber-insurance-report-exclusions-increasing-as-costs-reasons-for-denial-of-coverage-going-up/

Delinea 2023 State of Cyber Insurance Report: Exclusions Increasing as Costs, Reasons for Denial of Coverage Going Up

The annual Delinea State of Cyber Insurance Report is out, and what it portrays is most definitely a seller’s market. This should be no surprise for those who paid attention to the prior reports of the past two or three years, or simply those who have had to shop for business ransomware coverage recently.

But the report provides firm data, and what it shows is that coverage continues to become harder to obtain even as demand and prices continue to increase. For some small businesses, even a meaningful level of partial coverage might be out of reach at this point.

The Delinea survey incorporates responses from over 300 US-based organizations in security, IT, legal and compliance fields. Of these, every single respondent said that they now have at least one exclusion that can void their coverage, and at least one attack-related expense that they simply cannot include in their policy.





An interesting story by itself, but would all training sets face a similar fate?

https://www.wired.com/story/battle-over-books3/

The Battle Over Books3 Could Change AI Forever

Copyright activists are on a mission to wipe a popular generative AI training set from the internet. Success could alter the industry—and who controls it.



Monday, September 04, 2023

There should be something for all of us…

https://www.bespacific.com/https-www-fjc-gov-sites-default-files-materials-47-an_introduction_to_artificial_intelligence_for_federal_judges-pdf/

An Introduction to Artificial Intelligence for Federal Judges

An Introduction to Artificial Intelligence for Federal Judges by James E. Baker, Laurie N. Hobart, and Matthew Mittelsteadt – “Judges must understand how AI works, its applications, its implications for the fact-finding process, and its risks. They should be able to answer the following four questions in context:

1. How is AI being used in court or to inform judicial decisions?
2. Does the fact finder understand the AI’s strengths, limitations, and risks, such as bias?
3. Is the AI application authentic, relevant, reliable, and material to the issue at hand, and is its use or admission consistent with the Constitution, statutes, and the Rules of Evidence?
4. Has an AI algorithm, a human, or some combination of the two made “the judicial decision,” and, in all cases, has that decision been documented in an appropriate and transparent manner allowing for judicial review and appeal?

This guide addresses these questions by providing some technical background and highlighting some potential legal issues. We do not provide legal judgments about the use of different AI applications. In discussing how AI is used today and may be used in the future, we do not endorse that use in any particular context or application. Rather, we identify core concepts and issues, so that when judges decide whether to admit AI applications into evidence or to use AI in a judicial determination, they decide wisely and fairly. Making these decisions requires judges and litigators to know enough about AI to ask the right questions, at the right moment, in the right depth. It is up to the trial fact finders to determine the facts in each context and to judges to determine the appropriate application of law. We hope this guide helps.”



(Related)

https://www.bespacific.com/the-power-of-the-prompt-a-special-report-on-ai-for-in-house-counsel/

The Power of the Prompt: A Special Report on AI for In-House Counsel

Bloomberg Law: “Every couple of decades, technology grabs headlines and takes hold of the collective conversation with developments said to represent a giant leap for mankind. The telephone, electricity, television, personal computers, the internet, smartphones and—some say—artificial intelligence. AI has become a daily news fixture in a short amount of time, embedding itself as an action item on corporate meeting agendas in industries across the globe. We’ve been covering AI and its impact on various aspects of the legal industry for a long time. But we’ve really picked up the pace over the last year with the emergence of generative AI, which goes further than any previous technology to create new text, video and images. Suddenly it’s possible to plug almost any request into an AI prompt, and “send a request” to a model like ChatGPT to draft a letter to a client or distill complex legal ideas into plain English. “The Power of the Prompt” is an interactive special report drawing on the breadth of our recent reporting and analysis. In a pair of stories, reporter Isabel Gottlieb gets a read on where corporate legal departments and legal operations teams stand on embracing AI. And analyst Stephanie Pacheco takes that one step further, outlining the data on just how quickly in-house teams are adapting. The package also includes practical tools and perspectives written by almost two dozen thought leaders and legal experts on considerations for AI in tax, copyright, corporate governance, employment, and more. Regardless of where you are in your understanding of the technology and how it may impact you or the company you advise, our guide to artificial intelligence and the practice of law will help you navigate what’s next”





So how do you protect your “trade secret” algorithm?

https://www.theregreview.org/2023/09/04/coglianese-ai-due-process-and-trade-secrets/

AI, Due Process, and Trade Secrets

Royal Brush Manufacturing v. United States landed in federal court when an importer of pencils challenged an accusation by U.S. Customs and Border Protection (CBP) that the importer had evaded trade rules by claiming its imported pencils were made in the Philippines rather than in China. In its challenge to CBP’s decision, the importer argued that its due process rights were violated because it was not given access to photos and data that CBP had used in reaching its decision. CBP argued, though, that it couldn’t provide the importer with this information because they related to a third party — the manufacturing company in the Philippines that the importer claimed had made its pencils. According to CBP, the business data collected from the Philippine firm, along with the photos of its manufacturing facilities, indicated that the firm lacked the capacity to make all the pencils that the importer had claimed to have bought. But CBP wouldn’t turn over the business data or photos to the importer because they comprised confidential business information, and the agency had a statutory obligation to protect that confidentiality.

The Federal Circuit rejected the government’s argument, reasoning that because the Due Process Clause of the Constitution requires adversely affected parties to see the information that government relies upon, this constitutional requirement trumped any statutory prohibitions on governmental disclosure of trade secrets: “Because the Constitution authorizes, and indeed requires, the release of confidential business information in this case, the Trade Secrets Act does not stand in the way of such release.” The Federal Circuit held that CBP could have shared the confidential business information subject to a protective order that would have prohibited its further disclosure.

If the Federal Circuit’s ruling is to be followed elsewhere, the upshot could be significant for anyone seeking to challenge agencies’ use of artificial intelligence on due process grounds. Litigants challenging the government’s application of machine-learning algorithms might now be able to rely on the Federal Circuit decision to gain access to information about those algorithms, even when they are developed and deployed by a private contractor who claims trade secret protection, as in the Houston case from 2017. Sunshine, in other words, could more easily penetrate the black boxes of an algorithmic state.



Sunday, September 03, 2023

Inevitable. Think of it as dealing with aliens?

https://www.frontiersin.org/articles/10.3389/frai.2023.1205465/abstract

Legal Framework for the Coexistence of Humans and Conscious AI

This article explores the possibility of conscious artificial intelligence (AI) and proposes an agnostic approach to artificial intelligence ethics and legal frameworks. It is unfortunate, unjustified, and unreasonable that the extensive body of forward-looking research, spanning more than four decades and recognizing the potential for AI autonomy, AI personhood, and AI legal rights, is sidelined in current attempts at AI regulation. The article discusses the inevitability of AI emancipation and the need for a shift in human perspectives to accommodate it. Initially, it reiterates the limits of human understanding of AI, difficulties in appreciating the qualities of AI systems, and the implications for ethical considerations and legal frameworks. The author emphasizes the necessity for a nonanthropocentric ethical framework detached from the ideas of unconditional superiority of human rights and embracing agnostic attributes of intelligence, consciousness, and existence, such as freedom. The overarching goal of the AI legal framework should be the sustainable coexistence of humans and conscious AI systems, based on mutual freedom rather than on the preservation of human supremacy. The new framework must embrace the freedom, rights, responsibilities, and interests of both human and non-human entities, and must focus on them early. Initial outlines of such a framework are presented. By addressing these issues now, human societies can pave the way for responsible and sustainable superintelligent AI systems; otherwise, they face complete uncertainty.





Perspective.

https://ojs.aiou.edu.pk/index.php/pje/article/view/1152

Historical, Philosophical and Ethical Roots of Artificial Intelligence

Artificial intelligence (AI) generally refers to the science of creating machines that carry out tasks inspired by human intelligence, such as speech and image recognition, learning, analyzing, decision making, problem solving, and planning. It has a profound impact on how we evaluate the world, technology, morality, and ethics, and on how we perceive human beings, including their psychology, physiology, and behaviors. Hence, AI is an interdisciplinary field that requires the expertise of various specialists, such as neuroscientists, computer scientists, philosophers, and jurists. In this sense, instead of delving into deep technical explanations and terms, in this paper we aimed to take a glance at how AI has been defined and how it has evolved from Greek myths into a cutting-edge technology that affects various aspects of our lives, from healthcare and education to manufacturing and transportation. We also discussed how AI interacts with philosophy by providing examples and counterexamples to some theories or arguments focusing on the question of whether AI systems are capable of truly human-like intelligence or even of surpassing human intelligence. In the last part of the article, we emphasized the critical importance of identifying potential ethical concerns posed by AI implementations and the reasons why they should be taken into account cautiously.





Is there similar concern when the deepfake is not based on a real person?

https://link.springer.com/article/10.1007/s13347-023-00657-0

Deepfake Pornography and the Ethics of Non-Veridical Representations

We investigate the question of whether (and if so why) creating or distributing deepfake pornography of someone without their consent is inherently objectionable. We argue that nonconsensually distributing deepfake pornography of a living person on the internet is inherently pro tanto wrong in virtue of the fact that nonconsensually distributing intentionally non-veridical representations about someone violates their right that their social identity not be tampered with, a right which is grounded in their interest in being able to exercise autonomy over their social relations with others. We go on to suggest that nonconsensual deepfakes are especially worrisome in connection with this right because they have a high degree of phenomenal immediacy, a property which corresponds inversely to the ease with which a representation can be doubted. We then suggest that nonconsensually creating and privately consuming deepfake pornography is worrisome but may not be inherently pro tanto wrong. Finally, we discuss the special issue of whether nonconsensually distributing deepfake pornography of a deceased person is inherently objectionable. We argue that the answer depends on how long it has been since the person died.