Sunday, July 25, 2021

War by random cyber bombing?

https://www.databreaches.net/cyberattack-shuts-down-services-in-greeces-second-largest-city/

Cyberattack Shuts Down Services in Greece’s Second-Largest City

The National Herald reports:

As hackers – many sponsored by Russia and China and authoritarian governments around the world – have stepped up cyber attacks on municipal services in a number of countries, Thessaloniki‘s agencies were shut down over an electronic intrusion.
That happened July 23, with Deputy Mayor of Business Planning, e-Government and Migration Policy Giorgos Avarlis saying the city – Greece’s second-largest – closed its services and web applications, “so that proper investigations can be carried out and we do not risk being attacked again,” with no report of what kind of defenses it has.

Read more on The National Herald.





My AI says, “Yes, if we can tell it what is ethical in every circumstance.”

https://www.tandfonline.com/doi/full/10.1080/0731129X.2021.1951459

Can AI Weapons Make Ethical Decisions?

The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized or of the implications that their “autonomous” nature has for them as ethical agents. It will be argued that autonomous weapons are not full ethical agents due to the restrictions of their coding. However, their highly complex machine-learning nature gives the impression that they are making their own decisions and creates the illusion that their human operators are shielded from responsibility for the harm they cause. It is therefore important to distinguish between autonomous AI weapons and an AI with autonomy, a distinction that creates two different ethical problems for their use. For autonomous weapons, their limited agency combined with machine learning means their human counterparts are still responsible for their actions while having no ability to control or intervene in the actual decisions made. If, on the other hand, an AI could reach the point of autonomy, its level of critical reflection would make its decisions unpredictable and dangerous in a weapon.





Border searches should catch anyone stupid enough to carry data into the country that they could easily download from the Internet after they arrive.

https://www.sciencedirect.com/science/article/pii/S2666281721001256

On the need for AI to triage encrypted data containers in U.S. law enforcement applications

This paper takes an analogical approach to define the parameters by which artificial intelligence (AI) can be utilized to facilitate warrantless searches at U.S. ports of entry. The authors tailor their discussion to the prevention of child pornography (also referred to as child abuse or exploitation materials in the academic literature), and the traffic thereof. By making the legal case to utilize AI, particularly eXplainable AI (XAI), to search encrypted devices for attributes indicative of child pornography, the authors hope to encourage research in this field and develop better technology to help catch criminals without relinquishing privacy rights.





Something for lawyers to consider.

https://link.springer.com/article/10.1007/s10506-021-09294-4

Preserving the rule of law in the era of artificial intelligence (AI)

The study of law and information technology comes with an inherent contradiction in that while technology develops rapidly and embraces notions such as internationalization and globalization, traditional law, for the most part, can be slow to react to technological developments and is also predominantly confined to national borders. However, the notion of the rule of law defies the phenomenon of law being bound to national borders and enjoys global recognition. Yet a serious threat to the rule of law is looming in the form of an assault by technological developments within artificial intelligence (AI). As large strides are made in the academic discipline of AI, this technology is starting to make its way into digital decision-making systems and is in effect replacing human decision-makers. A prime example of this development is the use of AI to assist judges in making judicial decisions. However, in many circumstances this technology is a ‘black box’ due mainly to its complexity but also because it is protected by law. This lack of transparency, and the diminished ability to understand the operation of systems increasingly used by the structures of governance, is challenging traditional notions underpinning the rule of law. This is especially so in relation to concepts closely associated with the rule of law, such as transparency, fairness and explainability. This article examines the technology of AI in relation to the rule of law, highlighting the rule of law as a mechanism for human flourishing. It investigates the extent to which the rule of law is being diminished as AI becomes entrenched within society and questions the extent to which it can survive in a technocratic society.





After that, Skynet? This reminds me of Paul David’s “The Dynamo and the Computer,” which I frequently quote and wish he had followed up on…

https://venturebeat.com/2021/07/24/deadline-2024-why-you-only-have-3-years-left-to-adopt-ai/

Deadline 2024: Why you only have 3 years left to adopt AI

If your company has yet to embrace AI, you’re in a race against the clock. And by my calculations, you have just three years left.

How did I arrive at 2024 as the deadline for AI adoption? My prediction — formulated with KUNGFU.AI advisor Paco Nathan — is rooted in our observation that many futurists’ J curves show innovations typically have a 12-to-15-year window of opportunity, a period between when a technology emerges and when it reaches the point of widespread adoption.
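
For readers who want the arithmetic spelled out, here is a minimal sketch of the window calculation in Python. Only the 12-to-15-year window comes from the excerpt above; the candidate emergence years (roughly 2009-2012, around the rise of modern deep learning) are an assumption added for illustration, not a figure from the article.

# A minimal sketch of the adoption-window arithmetic described above.
# Assumption (not from the article): the "emergence" year for the current
# wave of AI is taken as roughly 2009-2012; the 12-to-15-year window is
# the figure quoted in the excerpt.

def adoption_window(emergence_year: int, min_years: int = 12, max_years: int = 15) -> tuple[int, int]:
    """Return the earliest and latest years in which the adoption window closes."""
    return emergence_year + min_years, emergence_year + max_years

if __name__ == "__main__":
    for emergence in (2009, 2012):
        earliest, latest = adoption_window(emergence)
        print(f"Emergence {emergence}: window closes between {earliest} and {latest}")
    # Emergence 2009 -> closes 2021-2024; emergence 2012 -> closes 2024-2027.
    # Under either assumption, 2024 sits inside the window, which is one way
    # to read the article's deadline.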





Fearless prediction: This argument will continue until an AI provides us with the answer.

https://www.digitallawjournal.org/jour/article/viewFile/56/48

Intellectual Property Law: In the Hands of Artificial Creator

Every year, digitalization reaches more areas of social life, algorithms expand the horizons of human capability, and mechanization accelerates interaction among the subjects of social relations. A growing number of innovations is appearing in the circulation of property, and it is here that the consequences of the digital revolution most acutely affect the wide range of persons participating in it. Should the conservative civil law regulation of property and personal non-property relations change under the pressure of digital technologies? Should we dismantle foundations and institutions tested by many years of experience in social communication, or can existing civil law norms withstand the change, requiring only a little adaptation to new circumstances?

All of these issues are even more relevant in the field of intellectual activity and the protection of intellectual property. One of the challenges relates to the development and deployment of artificial intelligence. Significant advances in the creation of algorithmic software raise the question of whether the results of its activity can be legally protected. Credit for the first comprehensive and multifaceted study of this problem belongs to the authors of the recently published Oxford University Press monograph “Artificial Intelligence and Intellectual Property,” which is reviewed in this article.



(Related)

https://www.digitallawjournal.org/jour/article/view/53

Deconstruction of the legal personhood of artificial intelligence

Calls to rethink the content of “legal personhood” are increasingly being heard: to recognize animals, artificial intelligence, and the like as legal subjects. There are several explanations for this: first, a change in ideas about the person and their position in society, and second, attempts to rethink the traditional categories of law. Throughout long periods of history, the definition of legal personhood depended on the definition of subjective right, and subjective right was associated with the legally significant will of the person. Consequently, a change in views on the will theory of subjective right inevitably leads to a revision of the content of personhood. The main purpose of this article is to determine the essence of legal personhood. To do this, the evolution of ideas about legal personhood is traced using the historical method. It is argued that Hohfeld’s approach to understanding subjective-legal structures made it possible to look differently at the content of the category of legal personhood: it became possible to recognize animals or artificial intelligence as holders of various subjective-legal categories. Nevertheless, the logic of modern commentators, as well as of supporters of such a flexible approach to the definition of legal personhood, is not free from shortcomings. Using the method of analytical jurisprudence, the author demonstrates the problems that emerge.





If this is true, and the people who don’t think they need a vaccine (or a mask) will get Covid, then will ‘believers’ vote for Trump twice?

https://www.psypost.org/2021/07/large-study-finds-covid-19-is-linked-to-a-substantial-drop-in-intelligence-61577

Large study finds COVID-19 is linked to a substantial drop in intelligence

People who have recovered from COVID-19 tend to score significantly lower on an intelligence test compared to those who have not contracted the virus, according to new research published in the Lancet journal EClinicalMedicine. The findings suggest that the SARS-CoV-2 virus that causes COVID-19 can produce substantial reductions in cognitive ability, especially among those with more severe illness.





For my students...

https://www.makeuseof.com/linkedin-scams-to-watch-out-for/

5 LinkedIn Scams to Watch Out For

LinkedIn is a safe platform, but you can nonetheless find scammers on the site. Here's what to look out for.


