Sunday, September 26, 2021

This time, they didn’t think it through?

https://lirias.kuleuven.be/3548798?limo=0

The GDPR and the Artificial Intelligence Regulation – it takes two to tango?

The recently adopted proposal for an AI Regulation has already been the topic of widespread discussion, and so has the GDPR. This contribution discusses how the proposed AI Regulation inadequately addresses the risks that AI systems present to privacy and data protection, and fails to integrate the comprehensive framework of the GDPR into the most buzzed-about regulatory proposal in a long time.



Keep thinking.

https://link.springer.com/article/10.1007/s44163-021-00002-4

Ethical and legal responsibility for Artificial Intelligence

“In civilized life, law floats in a sea of ethics,” a quote by the former Chief Justice of the United States, Earl Warren. In a democratic society, the constitution defines the country’s values, and the laws define the preferred, or at least still tolerated, behavior, making deviations sanctionable. As society is in continuous flux, not least because of scientific and technical developments, law always lags behind. Until regulations can catch up, ethics has to lead society. In less democratic societies, the water is polluted or even poisoned, and ethical behavior may run against the ruling law. As for the latter, Robin Hood was a thief, but to large parts of the population, a hero. The less transparent the water, the more difficult it is to adapt law to new developments. This includes direct corruption, but also unfaithful lobbying. This article discusses the “nature” of Artificial Intelligence, including the risks it poses, and who is responsible for systematic errors, from a moral as well as a legal point of view.



Drawing a line AI can choose to cross?

https://repository.uchastings.edu/hastings_science_technology_law_journal/vol12/iss2/4/

Towards Optimal Liability for Artificial Intelligence: Lessons from the European Union’s Proposals of 2020

Are the E.U.’s proposals on artificial intelligence (AI) a major breakthrough or just a mere token of an initial liability regime? Several initiatives were released in 2020 to take Europe’s digital future to the next level, whereas the U.S. leadership program is hesitant to regulate AI. However, the recent E.U. proposals, by introducing strict liability or implementing a certification procedure, are a first approximation of what is needed rather than an adoptable bill. Based on the lessons learned from the E.U., a scheme of liability is outlined which strengthens the trajectory of AI’s development in the long term only when it is socially desirable. AI is characterized by self-learning, opacity, and autonomy, and its increasing ubiquity will put greater strain on the liability system. Therefore, this contribution considers the impacts of AI on the major U.S. liability regimes, analyzes the effects of their application, and develops a flexible system for risky AI systems. Overall, a fundamental challenge that AI raises for tort law is examined: whether the applicable U.S. tort law doctrines are capable of setting proper incentives for society’s use of AI. The influence of AI on liability rules will be felt along two margins. First, to avoid application difficulties, adjustments must be made to existing rules; otherwise, legal uncertainty will grow. Second, no single existing liability regime is capable of governing AI in a socially optimal manner. This contribution indicates that, on account of their liability rules or new proposals, the U.S. and E.U. neglect important opportunities to reduce the risks of AI and enhance AI-driven innovation. However, the U.S. has already noted that the global AI race is underway. A first roadmap toward a leading position is therefore outlined.



Anticipating where AI will trample copyright?

https://digitalcommons.wcl.american.edu/pijip-righttoresearch-testimony/4/

Submission to Canadian Government Consultation on a Modern Copyright Framework for AI and the Internet of Things

We are grateful for the opportunity to participate in the Canadian Government’s consultation on a modern copyright framework for AI and the Internet of Things. Below, we present some of our research findings relating to the importance of flexibility in copyright law to permit text and data mining (“TDM”). As the consultation paper recognizes, TDM is a critical element of artificial intelligence. Our research supports the adoption of a specific exception for uses of works in TDM to supplement Canada’s existing general fair dealing exception.

Empirical research shows that more publication of citable research takes place in countries with “open” research exceptions -- that is, research exceptions that are open to all uses (e.g. reproduction and communication), to all works, and to all users. Empirical research also shows that text and data mining research is promoted through exceptions that more specifically authorize text and data mining research. While these studies are preliminary and we are still improving on them, they provide evidence that supports the approach of combining a general research exception with a more specific data mining exception.



May we assume the AI followed its Rules of Engagement?

https://www.dailystar.co.uk/news/world-news/ai-battleships-killer-drones-terminator-25056743

AI battleships and killer drones: 'Terminator future' of war feared by conflict experts

Major advancements in AI weapons and defence technology have sparked fears mankind is heading towards a "Terminator future" of war.

And as this ultra-tech advances, there are increasing concerns among anti-war campaigners that rogue states or terrorist groups could get their hands on advanced weaponry.

It comes after USV Ranger, an AI-guided warship that is part of the US Navy’s “Ghost Fleet” of uncrewed vessels, test-launched a one-and-a-half-ton surface-to-air missile in a dramatic escalation of America’s autonomous weapons programme.

It’s not the first drone warship to launch a missile but, at 3,300lbs, the two-stage SM-6 missile is roughly 100 times larger than the Rafael SPIKE missile launched from an Israeli autonomous vessel in 2017.

The development of the Ghost Fleet Overlord project is a partnership between the US Navy and the US Department of Defense’s Strategic Capabilities Office.

Officially, the autonomous vessels are intended for use as support craft for conventional forces but this latest test demonstrates that the ships can be armed.



For my students.

https://www.marktechpost.com/mathematics-for-machine-learning-course-free/

Mathematics For Machine Learning Course (FREE)
