Sunday, May 15, 2022

Yeah, but they often work!

https://www.pogowasright.org/geofence-warrants-and-reverse-keyword-warrants-are-so-invasive-even-big-tech-wants-to-ban-them/

Geofence Warrants and Reverse Keyword Warrants are So Invasive, Even Big Tech Wants to Ban Them

Geofence and reverse keyword warrants are some of the most dangerous, civil-liberties-infringing and reviled tools in law enforcement agencies’ digital toolbox. It turns out that these warrants are so invasive of user privacy that big tech companies like Google, Microsoft, and Yahoo are willing to support banning them. The three tech giants have issued a public statement through a trade organization, “Reform Government Surveillance,” saying that they will support a bill before the New York State legislature. That bill, the Reverse Location Search Prohibition Act, A. 84 / S. 296, which EFF also supports, would prohibit government use of geofence warrants and reverse keyword warrants. Their support is welcome, especially since we’ve been calling on companies like Google, which have a lot of resources and a lot of lawyers, to do more to resist these kinds of government requests.

Under the Fourth Amendment, if police can demonstrate probable cause that searching a particular person or place will reveal evidence of a crime, they can obtain a warrant from a court authorizing a limited search for this evidence. In cases involving digital evidence stored with a tech company, this typically involves sending the warrant to the company and demanding they turn over the suspect’s digital data.

Geofence and reverse keyword warrants completely circumvent the limits set by the Fourth Amendment. When police are investigating a crime (anything from vandalism to arson), they can instead submit requests that identify no single suspect or particular user account. With geofence warrants, they draw a box on a map and compel the company to identify every digital device within that boundary during a given time period. Similarly, with a “keyword” warrant, police compel the company to hand over the identities of anyone who may have searched for a specific term, such as a victim’s name or a particular address where a crime has occurred.
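To make the mechanics concrete, here is a minimal sketch of what a geofence query amounts to, in Python. The record schema, field names, and function are hypothetical illustrations (nothing here reflects any provider’s actual systems); the point is that the query is parameterized by a place and a time window, never by a suspect.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class LocationRecord:
        """One stored location ping. Hypothetical schema for
        illustration; no provider's actual data model is implied."""
        device_id: str
        lat: float
        lon: float
        timestamp: datetime

    def geofence_query(records, lat_min, lat_max, lon_min, lon_max,
                       start, end):
        """Return the ID of every device seen inside the box during
        the window. Note what is absent from the parameters: any
        suspect. The query is keyed to a place and a time, so it
        sweeps in everyone who happened to be there."""
        return {
            r.device_id
            for r in records
            if lat_min <= r.lat <= lat_max
            and lon_min <= r.lon <= lon_max
            and start <= r.timestamp <= end
        }

Every device ID in the returned set belongs to a person, and under a geofence warrant the company is compelled to hand over the whole set.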

These reverse warrants have serious implications for civil liberties. Their increasingly common use means that anyone whose commute takes them past the scene of a crime might suddenly become vulnerable to suspicion, surveillance, and harassment by police. It means that an idle Google search for an address that corresponds to the scene of a robbery could make you a suspect. It also means that with one document, companies would be compelled to turn over identifying information on every phone that appeared in the vicinity of a protest, as happened in Kenosha, Wisconsin during a protest against police violence. And, as EFF has argued in amicus briefs, these warrants violate the Fourth Amendment because they result in overbroad fishing expeditions against unspecified targets, the majority of whom have no connection to any crime.

In their statement, the companies write, “This bill, if passed into law, would be the first of its kind to address the increasing use of law enforcement requests that, instead of relying on individual suspicion, request data pertaining to individuals who may have been in a specific vicinity or used a certain search term.” This is an undoubtedly positive step for companies that have a checkered history of being cavalier with users’ data and enabling large-scale government surveillance. But they can do even more than support legislation in one state. Companies can still resist complying with geofence warrants across the country, be much more transparent about the geofence warrants they receive, provide all affected users with notice, and give users meaningful choice and control over their private data.

This article originally appeared at EFF.





Over-prioritization of privacy? Are there situations that require the removal of privacy protections? Could this be trusted to independent third parties?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4104823

Using Sensitive Data to Prevent Discrimination by AI: Does the GDPR Need a New Exception?

Organisations can use artificial intelligence to make decisions about people for a variety of reasons, for instance, to select the best candidates from many job applications. However, AI systems can have discriminatory effects when used for decision-making. To illustrate, an AI system could reject applications from people of a certain ethnicity, even though the organisation did not intend such discrimination. But in Europe, an organisation runs into a problem when it wants to assess whether its AI system accidentally leads to ethnicity discrimination: the organisation may not know the applicants’ ethnicity. In principle, the GDPR bans the use of certain ‘special categories of data’ (sometimes called ‘sensitive data’), which include data on ethnicity, religion, and sexual preference. The European Commission’s proposal for an AI Act includes a provision that would enable organisations to use special categories of data for auditing their AI systems. This paper asks whether the GDPR’s rules on special categories of personal data hinder the prevention of AI-driven discrimination. We argue that the GDPR does prohibit such use of special category data in many circumstances. We also map out the arguments for and against creating an exception to the GDPR’s ban on using special categories of personal data, to enable preventing discrimination by AI systems. The paper discusses European law, but it can be relevant outside Europe too, as many policymakers in the world grapple with the tension between privacy and non-discrimination policy.
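The paper’s core tension can be stated in a few lines of code: even the simplest fairness audit needs the protected attribute as input. Below is a minimal sketch of a selection-rate comparison; the group labels and the 0.8 “four-fifths” threshold are illustrative conventions (the latter borrowed from US employment-discrimination practice), not anything the GDPR or the paper prescribes.

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, accepted) pairs, where group
        is the special-category attribute (e.g. ethnicity) whose
        processing the GDPR restricts."""
        totals, accepted = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            if ok:
                accepted[group] += 1
        return {g: accepted[g] / totals[g] for g in totals}

    def disparate_impact(decisions, protected, reference):
        """Ratio of the protected group's selection rate to the
        reference group's. Values well below 1.0 (e.g. under the
        informal 0.8 'four-fifths' threshold) flag possible indirect
        discrimination."""
        rates = selection_rates(decisions)
        return rates[protected] / rates[reference]

    # Example: the audit cannot be computed without the group labels.
    audit = [("A", True), ("A", True), ("B", False), ("B", True)]
    print(disparate_impact(audit, protected="B", reference="A"))  # 0.5

Strip out the group labels, as a strict reading of the GDPR’s ban would require, and neither function can run at all.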





Feel free to express yourself.

https://www.techdirt.com/2022/05/12/just-how-incredibly-fucked-up-is-texas-social-media-content-moderation-law/

Just How Incredibly Fucked Up Is Texas’ Social Media Content Moderation Law?

So, I already had a quick post on the bizarre decision by the 5th Circuit to reinstate Texas’ social media content moderation law just two days after a bizarrely stupid hearing on it. However, I don’t think most people actually understand just how truly fucked up and obviously unconstitutional the law is. Indeed, there are so many obvious problems with it, I’m not even sure I can do them adequate justice in a single post. I’ve seen some people say that it’s easy to comply with, but that’s wrong. There is no possible way to comply with this bill. You can read the full law here, but let’s go through the details.

The law declares social media platforms to be “common carriers,” and this was a big part of Monday’s hearing, even though it’s not at all clear what that actually means or whether a state can just magically declare a website a common carrier (as we’ve explained, that’s not how any of this works). But it’s mainly weird because the designation doesn’t really seem to mean anything under Texas law. The law could have been written entirely without declaring them “common carriers,” and I’m not sure anything would change.





We are going to do it, so we should probably get it right. (What if we had to convince an AI that our war was just?)

https://philpapers.org/rec/UMBDFD

Designed for Death: Controlling Killer Robots

Autonomous weapons systems, often referred to as ‘killer robots’, have been a hallmark of popular imagination for decades. However, with the inexorable advance of artificial intelligence (AI) and robotics, killer robots are quickly becoming a reality. These lethal technologies can learn, adapt, and potentially make life and death decisions on the battlefield with little-to-no human involvement. This naturally leads to not only legal but also ethical concerns as to whether we can meaningfully control such machines, and if so, how. Such concerns are made even more poignant by the ever-present fear that something may go wrong and the machine may carry out some action(s) violating the ethics or laws of war. Researchers, policymakers, and designers are caught in the quagmire of how to approach these highly controversial systems and of figuring out what, if anything, meaningful human control over them means. In Designed for Death, Dr Steven Umbrello aims to produce a realistic yet optimistic guide for how, with human values in mind, we can begin to design killer robots. Drawing on the value sensitive design (VSD) approach to technology innovation, Umbrello argues that context is king and that a middle path for designing killer robots is possible if we treat ethics and design as fundamentally linked. Umbrello moves beyond the binary debate over whether to prohibit killer robots and instead offers a more nuanced perspective on which types of killer robots may be both legally and ethically acceptable, when they would be acceptable, and how to design for them.





We need more of this…

http://shura.shu.ac.uk/30207/

Citizen Perspectives on Necessary Safeguards to the Use of AI by Law Enforcement Agencies

In the light of modern technological advances, Artificial Intelligence (AI) is relied upon to enhance performance, increase efficiency, and maximize gains. For Law Enforcement Agencies (LEAs), it can prove valuable in optimizing evidence analysis and establishing proactive prevention measures. Nevertheless, citizens raise legitimate concerns about privacy invasions, biases, inequalities, and inaccurate decisions. Through interviews, this study explores the views of 111 citizens across eight countries on police use of AI, and it integrates their societal concerns with proposed safeguards against the negative effects of AI use by LEAs in the context of cybercrime and terrorism.





Interesting conjecture. Stop thinking of self-driving cars and start thinking of on-demand transportation?

https://venturebeat.com/2022/05/14/the-problem-with-self-driving-cars/

The problem with self-driving cars

… We have achieved things in computer vision, natural language processing and speech recognition that would have been unthinkable just a few years ago. By all accounts, the accuracy of our AI systems exceeds the wildest imaginations of yesteryear.

And yet, it’s not enough.

We were wrong about the future. Every prediction about self-driving cars has been wrong. We are not living in a future of autonomous cyborgs, and something else has come into focus.



(Related)

https://link.springer.com/chapter/10.1007/978-3-658-34293-7_7

Summary and Discussion

The automotive industry is in the midst of a fundamental change, as its current products no longer meet mobility requirements, especially in urban areas. As a result, many predict disruptive change, and new innovative developments are emerging in response. One answer is automated driving systems, which offer great potential for increasing safety, comfort, and efficiency in road traffic while reducing environmental pollution.


