Sunday, January 30, 2022

We can, therefore we must. Would AirTags automatically fall into the surveillance/stalking category? If so, why didn’t Apple recognize it?

https://www.pogowasright.org/apple-creates-personal-safety-guide-as-airtag-concerns-mount-not-good-enough-says-privacy-group/

Apple creates personal safety guide as AirTag concerns mount — not good enough, says privacy group

Victoria Song reports:

On Tuesday, Apple quietly launched a Personal Safety User Guide to help “anyone who is concerned about or experiencing technology-enabled abuse, stalking or harassment.” The guide is a resource hub to help people figure out what their options are if they wish to remove someone’s access to shared information, as well as the personal safety features available across the Apple ecosystem. Most notably, it includes a “Stay safe with AirTag and other Find My accessories” page at a time when an increasing number of people have come forward about being stalked with the devices.

Read more at The Verge.

On Wednesday, S.T.O.P. issued a press release demanding Apple stop selling AirTags altogether:

(New York, NY, 1/26/2022) – Today, the Surveillance Technology Oversight Project (S.T.O.P.), a New York-based privacy group, demands Apple, Inc. halt the sale of AirTags, wireless tracking devices first sold by the tech giant last April. The small, lightweight trackers have been discovered by everyone from survivors of intimate partner violence (who are targeted by their abusers) to celebrities targeted by fans. The civil rights group noted that Apple’s safeguards had failed to prevent precisely the sort of stalking it warned about last year.

Read more at StopSpying.org



A duty to use AI?

https://link.springer.com/article/10.1007/s43681-022-00139-7

Ethical issues deriving from the delayed adoption of artificial intelligence in medical imaging

Medical imaging (MI) has assumed a central role in medicine. Artificial intelligence (AI) has revolutionized computer vision and is also poised to have a deep impact on MI. Fundamental ethical questions have been raised, and teams of experts around the world are working to define ethical boundaries for AI in MI. However, reading the extremely detailed proposals, it is clear that the ethical arguments they treat have been completely redefined and specifically structured for AI in MI, when many of them could instead be inherited from other technologies already in use in MI. The complete redefinition of ethical principles could produce contradictions and delays in the adoption of AI in MI, which itself raises important ethical concerns. In this paper, potential ethical issues related to delaying AI are presented; the objective is to reuse some concepts from other technologies in order to streamline the arguments and avoid these concerns.



AI as a “legal Terminator?”

https://repository.usta.edu.co/handle/11634/42429

Assessment of the insertion of artificial intelligence in the legal field: an analysis of the prosecuting entity in the investigation stage

This article addresses the relationship that may exist between artificial intelligence (AI) and the accusatorial criminal system, specifically AI as an assistant to the Prosecutor’s Office: the facts and the evidentiary material collected by the judicial police would be analyzed by software that can determine the criminal conduct from the information obtained. The study is qualitative and applies a theoretical-deductive method consisting of a review of databases, bibliography, and statistics. It begins by presenting the function of the Prosecutor’s Office in the investigation phase, then continues with a statistical analysis, a discussion of legal informatics and where artificial intelligence sits among its branches, and a comparative-law study of Mexico, Argentina, the United States, Canada, and Australia, in order to respond to the legal problem identified: the inadequate classification of punishable conduct, which in turn violates procedural guarantees. It concludes by offering recommendations for implementing software that manages and analyzes the factual and legal circumstances in order to determine the conduct more easily.



Will AI ever be a person?

https://repository.usta.edu.co/handle/11634/42427

Ethics in Artificial Intelligence from the perspective of Law

This article is the product of a study that aims to demonstrate the relevance of ethics in the development of artificial intelligence. It explores the positions of experts and of different international organizations, including the European Union, which coincide in establishing similar paradigms on the potential scope of the autonomy of AI as a being endowed with cognitive capacity or intelligence, but not with conscience or reason, so that the right to freedom plays a transcendental role in defining the category of AI as an individual. To bring the reader closer to the object of study, the article presents the objectives and categories of the conceptualization of the purposes of AI and the confrontations between autonomy and automation and between intelligence and reasoning, so that readers can make a critical analysis that allows them to form their own position on the legal responsibilities of AI as a legal person and of its programmers. It also addresses issues such as transhumanism, understood as a mechanism to improve human life and performance, and bioethics, which should be based on respect for human dignity and the protection of fundamental rights, and finally specifies the positions of experts and of the European Commission on adapting the Charter of Fundamental Rights to achieve a reliable and ethical AI.



Does asking this question a lot suggest a direction in legal thought?

https://repository.uchastings.edu/cgi/viewcontent.cgi?article=1110&context=hastings_science_technology_law_journal

Artificial Intelligence and Inventorship – Does the Patent Inventor Have to be Human?

Artificial intelligence (“AI”) is ubiquitous; our smartphones help us navigate around town, virtual digital assistants such as Alexa and Siri respond to our questions, and social media channels such as Facebook, Instagram, and Twitter help us remain connected. Furthermore, financial institutions, pharmaceutical companies, and insurance companies all utilize AI to their advantage and to obtain leverage over their competitors. In particular, during the COVID-19 pandemic, the role of AI has never been so crucial. AI continues to be at the forefront of technological development. Just as they coped with the changes brought by the Industrial Revolution, legislatures need to embrace AI and be mindful of the challenges and effects that AI has on different laws, in particular patent laws.

Artificial intelligence is directly related to innovation, the protection of which is in turn partly governed by patent laws. This innovation leads to questions regarding the ramifications for patent inventorship in the AI arena. One key question is whether an AI system or device can be considered an “inventor” on a patent application. The United States Patent and Trademark Office (“USPTO”) has provided its answer by clearly rejecting AI as an “inventor,” since AI cannot meet certain statutory definitions for an inventor or the relevant tests for determining inventorship. Importantly, the Patent Act does not expressly limit inventorship rights to humans, but it does suggest that each inventor must have a name and be an “individual.”



Here’s where I need help. The definition of antitrust is changing. If we force “Big Tech” to benefit sellers, will we automatically harm consumers?

https://www.wsj.com/articles/how-the-ftc-is-reshaping-the-antitrust-argument-against-tech-giants-11643432448?mod=djemalertNEWS

How the FTC Is Reshaping the Antitrust Argument Against Tech Giants

Federal Trade Commission chief Lina Khan has developed an innovative way to frame the issue. Whether she has the tools to see it through remains to be seen.

For years, activists, lawmakers, lobbying groups, think tanks and most Americans have agreed something should be done [With what strategic goal? Bob] about giant tech companies’ power. With minor exceptions, no one has figured out how to do it.

Now, U.S. competition regulators at the Federal Trade Commission are getting creative. They’re zeroing in on an issue that has been less prominent in the past: how Big Tech dominance harms not consumers, but the businesses that sell goods and services on those tech platforms.

Since the mid-1980s Reagan-era reforms of antitrust law, the test for whether a company is a monopolist has been whether its dominance harms consumers, usually through higher prices or shoddy goods. It has been hard to make that charge stick against companies that offer many of their services free, like Google and Meta Platforms (née Facebook), or at (usually) competitive prices, like Amazon; or that take a cut of what seems to be a big, competitive market, like Apple does with apps.

To understand, we have to look at an unusual word the FTC has used of late: “monopsony.” If a monopoly is a market with one dominant seller, a monopsony is its inverse, a market where one buyer is pre-eminent. Monopolists can gouge consumers. A monopsonist has the same power over sellers.


