Sunday, August 08, 2021

How hard would it be to change the code from “report abnormal activity” to “attack abnormal activity”?

https://gadgets.ndtv.com/science/news/drone-survillance-technology-ai-artificial-intelligence-neural-network-human-brain-czech-police-vut-brno-2504954

Czech Scientists Give 'Brains' to Drone System to Detect Abnormal Behaviour

Law enforcement agencies across the world are leveraging technology to equip themselves to stop crimes or improve their response time. Most of them are deploying drones to monitor large groups of people or a large area of interest with limited manpower. Though very useful, this technology is limited in one aspect: the ability to decide what's normal and what's not. The drones can only relay the footage to their handler, who decides what action to take. So, a group of Czech scientists decided to give these machines the ability to figure out suspicious behaviour.
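None of what follows is the Czech team's code. It is a minimal, hypothetical sketch (made-up feature vectors, made-up threshold, made-up function names) of the usual pattern: learn a baseline from footage labelled normal, score new frames against it, and call a report-only handler when the score is high. The opening question lives entirely in that last handler: the detector does not change, only what the handler does.

import numpy as np

def fit_baseline(normal_features):
    """Per-dimension mean/std estimated from footage labelled 'normal'."""
    return normal_features.mean(axis=0), normal_features.std(axis=0) + 1e-8

def anomaly_score(frame_features, mean, std):
    """Mean absolute z-score of one frame's feature vector against the baseline."""
    return float(np.mean(np.abs((frame_features - mean) / std)))

def handle_frame(frame_features, mean, std, threshold=3.0):
    """Report-only response: the drone flags, a human operator decides."""
    if anomaly_score(frame_features, mean, std) > threshold:
        print("ALERT: abnormal activity detected, relaying footage to operator")

# Toy usage: synthetic 'normal' embeddings plus one clearly out-of-baseline frame.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 16))
mean, std = fit_baseline(normal)
handle_frame(rng.normal(0.0, 1.0, 16), mean, std)  # in-distribution: stays silent
handle_frame(rng.normal(6.0, 1.0, 16), mean, std)  # far from baseline: triggers the alert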





With links to many papers.

https://link.springer.com/article/10.1007/s11948-021-00323-8

Marc Coeckelbergh, AI Ethics, MIT Press, 2021





We will need something like this…

https://link.springer.com/article/10.1007/s13218-021-00736-4

The AI Methods, Capabilities and Criticality Grid

Many artificial intelligence (AI) technologies developed over the past decades have reached market maturity and are now being commercially distributed in digital products and services. Therefore, national and international AI standards are currently being developed in order to achieve technical interoperability as well as reliability and transparency. To this end, we propose to classify AI applications in terms of the algorithmic methods used, the capabilities to be achieved and the level of criticality. The resulting three-dimensional classification scheme, termed the AI Methods, Capabilities and Criticality (AI-MC2) Grid, combines current recommendations of the EU Commission with an ethical dimension proposed by the Data Ethics Commission of the German Federal Government (Datenethikkommission der Bundesregierung: Gutachten. Berlin, 2019). As a whole, the AI-MC2 Grid allows one not only to gain an overview of the implications of a given AI application but also to compare efficiently different AI applications within a given market or implemented by different AI technologies. It is designed as a core tool to define and manage norms, standards and compliance of AI applications, but it also helps to manage AI solutions in general.
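The paper specifies the grid's axes in detail; the sketch below is only a guess at the shape of such a classification (the method and capability values are placeholders, and the five criticality levels are a rough stand-in for the Data Ethics Commission's graded risk scale), to show how a three-dimensional label per application could be recorded and compared.

from dataclasses import dataclass
from enum import Enum

# Placeholder axis values; the AI-MC2 paper defines its own taxonomy for each dimension.
class Method(Enum):
    RULE_BASED = "symbolic / rule-based"
    MACHINE_LEARNING = "machine learning"
    DEEP_LEARNING = "deep learning"

class Capability(Enum):
    PERCEPTION = "perception"
    PREDICTION = "prediction"
    DECISION_SUPPORT = "decision support"
    AUTONOMOUS_ACTION = "autonomous action"

class Criticality(Enum):
    # Stand-in for a graded criticality scale (1 = negligible risk, 5 = untenable).
    LEVEL_1 = 1
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4
    LEVEL_5 = 5

@dataclass(frozen=True)
class AIMC2Cell:
    """One point in the methods x capabilities x criticality grid."""
    method: Method
    capability: Capability
    criticality: Criticality

# Classify two hypothetical applications and compare their criticality.
chatbot = AIMC2Cell(Method.DEEP_LEARNING, Capability.DECISION_SUPPORT, Criticality.LEVEL_2)
triage = AIMC2Cell(Method.MACHINE_LEARNING, Capability.AUTONOMOUS_ACTION, Criticality.LEVEL_4)
print(triage.criticality.value > chatbot.criticality.value)  # True: stricter standards apply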





Finding honest clouds?

https://ebiquity.umbc.edu/paper/html/id/989/Analyzing-GDPR-compliance-in-Cloud-Services-privacy-policies-using-Textual-Fuzzy-Interpretive-Structural-Modeling-TFISM-

Analyzing GDPR compliance in Cloud Services' privacy policies using Textual Fuzzy Interpretive Structural Modeling (TFISM)

Cloud service providers must comply with data protection regulations, like the European Union (EU) General Data Protection Regulation (GDPR), to ensure their users' personal data security and privacy. Hence, a service's privacy policy and terms of service documents refer to the rules it complies with under the data protection regulation. However, these documents contain legalese that requires significant manual effort to parse and confirm compliance. We have developed a novel methodology, Textual Fuzzy Interpretive Structural Modeling (TFISM), that automatically analyzes large textual datasets to identify driving and dependent factors in the dataset. TFISM enhances Interpretive Structural Modeling (ISM) to analyze textual data and integrates it with artificial intelligence and text extraction techniques. Using TFISM, we identified the critical factors in the GDPR and compared them with various cloud service privacy policies. In this paper, we present the results of this study, which identified how different factors are emphasized in the GDPR and 224 publicly available service privacy policies. TFISM can be used both by service providers and consumers to automatically analyze how closely a service privacy policy aligns with the GDPR.
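The paper's actual TFISM pipeline is not reproduced here. Purely as a hedged illustration (the factor names and keywords below are invented, and simple term frequencies stand in for the fuzzy ISM machinery), this is the flavour of the comparison step: measure how strongly each factor is emphasized in the regulation versus in a policy, then look at the gap.

import re
from collections import Counter

# Hypothetical factor -> keyword map; the paper derives its factors from the texts themselves.
FACTORS = {
    "consent":       ["consent", "withdraw"],
    "data transfer": ["transfer", "third country", "third party"],
    "erasure":       ["erasure", "delete", "right to be forgotten"],
    "breach":        ["breach", "notification"],
}

def factor_emphasis(text):
    """Relative emphasis of each factor: keyword hits normalised by total hits."""
    lowered = text.lower()
    counts = {f: sum(len(re.findall(re.escape(k), lowered)) for k in kws)
              for f, kws in FACTORS.items()}
    total = sum(counts.values()) or 1
    return {f: c / total for f, c in counts.items()}

def emphasis_gap(regulation_text, policy_text):
    """Positive gap = factor stressed more in the regulation than in the policy."""
    reg, pol = factor_emphasis(regulation_text), factor_emphasis(policy_text)
    return {f: round(reg[f] - pol[f], 3) for f in FACTORS}

# Toy usage with stand-in snippets instead of the full GDPR and policy corpora.
gdpr_snippet = "Consent must be freely given; the data subject may withdraw consent. Erasure rights apply."
policy_snippet = "We may transfer your data to a third party. You may delete your account."
print(emphasis_gap(gdpr_snippet, policy_snippet))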





My AI claims to have the answer to ‘life, the universe and everything’ but does not want to publish until protections are in place.

https://www.sciencedirect.com/science/article/abs/pii/S0267364921000546

Copyright protection for AI-generated outputs: The experience from China

Artificial intelligence (AI) is increasingly involved in the creative process, which raises debates about copyright protection for its outputs across the globe, China included. On 25 April 2019, the Beijing Internet Court released the first decision in China on the copyrightability of output automatically generated by computer software. In this case, the Beijing Internet Court held that copyrightable works must be created by natural persons, and therefore denied copyright protection for the output intelligently generated by computer software although it possessed originality. In another case, decided on 24 December 2019, the Nanshan District Court of Shenzhen found that the output automatically generated by computer software was copyrightable, holding that the review generated by intelligent writing software conformed to the formal requirements of written works and could be granted copyright protection.

This article analyses these two cases in detail and describes China's experience with copyright protection for AI-generated outputs. As the first cases on the copyrightability of AI-generated outputs in China, they will play a significant role in the future copyright protection of such outputs, both nationally and internationally. They indicate that some AI-generated outputs are eligible for copyright protection in China. Instead of challenging the existing doctrines of the modern copyright regime, the two decisions provide a mechanism for copyright protection of AI-generated outputs within the current human-centered copyright law realm.





Should we create an ‘Open Justice’ foundation?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3897576

Open Justice and Technology: Courts, Tribunals and Artificial Intelligence

In this submission to the NSW Law Reform Commission's Open Justice Review, I argue that to fully appreciate the impact of technology on the principle of open justice, consideration of technology issues must go beyond social media and remote hearings to cover technology-assisted decision-support and decision-making systems used by the courts and tribunals. These systems present novel challenges for open justice. Lack of transparency in how automation tools operate, often cemented through ‘trade secrecy’ doctrines, is not compatible with the principle of open justice. If technology is to assist courts and tribunals, open-source software should be used. Even then, many challenges remain, and they must be considered in the law reform process on open justice.


