The next billion-dollar opportunity?
https://www.consumerreports.org/privacy/why-its-tough-to-get-help-opting-out-of-data-sharing/
Why It's Tough to Get Help Opting Out of Data Sharing
A CR study reveals progress, along with problems, when Calif. consumers use "authorized agents" to stop their data from being sold
A new Consumer Reports study found that there are big barriers to overcome before new services can start helping California residents opt out of data sharing under the California Consumer Privacy Act, a landmark law that went into effect Jan. 1, 2020.
The CCPA gives Californians several new rights over the information that private companies collect and store. Under the state law, consumers can tell companies to stop selling their personal information, to supply the consumer with a copy of the information, or to delete it altogether. The law also says that residents can ask a third party, or “authorized agent,” to help them exercise those rights by contacting data-holding companies on their behalf.
That’s the aspect of the CCPA that CR’s new study explores. The authorized agent provision is supposed to address a hurdle consumers face if they want to flex their rights to limit the way personal information is collected and used: Hundreds of companies may hold data about you, and it would be almost impossible for an individual to find and contact every company one by one.
The ability to tell scads of companies how to handle your data with a single click would be a privacy superpower, consumer advocates say. But so far, no one has built a foolproof authorized agent. “Consumers should be able to protect their privacy in a single step—it’s not workable to contact thousands of companies one by one,” says CR Policy Analyst Maureen Mahoney, who helped conduct CR’s new research. “Companies are making it too difficult right now, which is holding consumers back from effectively controlling their personal data.”
I like it!
https://arxiv.org/abs/2101.12701
Time for AI (Ethics) Maturity Model Is Now
There appears to be broad agreement that ethical concerns are of high importance when it comes to systems equipped with some form of Artificial Intelligence (AI). Demands for ethical AI are being made from all directions. In response, public bodies, governments, and universities have in recent years rushed to provide sets of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not turn easily into actionable advice for practitioners. Hence, companies are also publishing their own ethical guidelines to steer their AI development. This paper argues that AI software is still software and needs to be approached from a software development perspective. The software engineering paradigm has introduced maturity model thinking, which gives companies a roadmap for improving their performance along selected viewpoints known as key capabilities. We voice a call to action for the development of a maturity model for AI software, and we discuss whether its focus should be on AI ethics or, more broadly, on the quality of an AI system, that is, a maturity model for the development of AI systems.
Yup. This is going to be a fun area of the law to watch. At least, that’s what my AI says…
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7851658/
In support of “no-fault” civil liability rules for artificial intelligence
Civil liability is traditionally understood as indirect market regulation, since the risk of incurring liability for damages creates incentives to invest in safety. Such an approach, however, is inappropriate for the markets in artificial intelligence devices. Under the current paradigm of civil liability, compensation is available only to the extent that "someone" is identified as the debtor. In many cases, though, it would not be useful to impose the obligation to pay compensation on producers and programmers: algorithms can "behave" quite independently of the instructions initially provided by their programmers, and so can err even when there is no flaw in design or implementation. Applying "traditional" civil liability to AI may therefore act as a disincentive to new technologies based on artificial intelligence. This is why I think that, on this matter, the law should evolve from a question of civil liability into one of financial management of losses. No-fault redress schemes could be an interesting and worthwhile regulatory strategy to enable this evolution. Of course, such schemes should apply only where there is no evidence that producers and programmers acted negligently, imprudently, or without due skill, and where their activity adequately complies with scientifically validated standards.
A more general view of AI & Law?
http://ojs.ecsdev.org/index.php/ejsd/article/view/1170
Legal Regulation of the Use of Artificial Intelligence: Problems and Development Prospects
The article considers the advantages and disadvantages of using artificial intelligence (AI) in various areas of human activity, with particular attention to the use of AI in the legal field, where prospects for its application are identified. The relevance of research on the legal regulation of AI use is demonstrated. The use of AI raises an important problem of compliance with general principles of ensuring human rights. Emphasis is placed on the need to develop and apply a code of ethics for artificial intelligence, together with legislation that would prevent its misapplication and minimize possible harmful consequences.
Perspective. AI from a musical viewpoint.
Reflections on the Financial and Ethical Implications of Music Generated by Artificial Intelligence
My research analyses the financial and, subsequently, ethical implications of music generated by modern technological systems, commonly known as Artificial Intelligence (AI). Of the many implications of AI, I identify the principal concern as the increasing replacement of industry professionals in the music ecosystem by autonomous and intelligent systems, issues driven by technological challenges to key tenets of intellectual property (IP). To assess the situation, I look first at the activities of contemporary AI music actors, then explore the economic consequences of their technologies for the music ecosystem, before considering a necessary ethical response to that emerging dynamic.