See? Cops have all the fun software!
Feds: Ex-Louisville Police Officer Used Law Enforcement Tech To Help Hack Sexually Explicit Photos From Women
Josh Wood reports:
A former Louisville Metro Police Department officer used law enforcement technology as part of a scheme that involved hacking the Snapchat accounts of young women and using sexually explicit photos and videos they had taken to extort them, federal prosecutors said in court documents filed on Tuesday.
According to a sentencing memorandum, Bryan Wilson used his law enforcement access to Accurint, a powerful data-combing software used by police departments to assist in investigations, to obtain information about potential victims. He would then share that information with a hacker, who would hack into private Snapchat accounts to obtain sexually explicit photos and videos.
Read more at LEO Weekly.
Will this lead others to rebel?
“Don’t spy on a privacy lab” (and other career advice for university provosts)
Cory Doctorow writes:
This is a wild and hopeful story: grad students at Northeastern successfully pushed back against invasive digital surveillance in their workplace, through solidarity, fearlessness, and the bright light of publicity. It’s a tale of hand-to-hand, victorious combat with the “shitty technology adoption curve.”
What’s the “shitty tech adoption curve?” It’s the process by which oppressive technologies are normalized and spread. If you want to do something awful with tech – say, spy on people with a camera 24/7 – you need to start with the people who have the least social capital, the people whose objections are easily silenced or overridden.
Read more at Pluralistic.net.
(Related) Pay for the privilege of being spied on?
https://www.pogowasright.org/tour-amazons-dream-home-where-every-appliance-is-also-a-spy/
Tour Amazon’s dream home, where every appliance is also a spy
Geoffrey A. Fowler reports:
You may not realize all the ways Amazon is watching you.
No other Big Tech company reaches deeper into domestic life. Two-thirds of Americans who shop on Amazon own at least one of its smart gadgets, according to Consumer Intelligence Research Partners. Amazon now makes (or has acquired) more than two dozen types of domestic devices and services, from the garage to the bathroom.
Read more at The Washington Post.
You look guilty!
https://link.springer.com/chapter/10.1007/978-3-031-13952-9_4
Facial Recognition for Preventive Purposes: The Human Rights Implications of Detecting Emotions in Public Spaces
Police departments are increasingly relying on surveillance technologies to tackle public security issues in smart cities. Automated facial recognition is deployed in public spaces for real-time identification of suspects and individuals with outstanding warrants. In some cases, law enforcement goes even further by also exploiting emotion recognition technologies. In preventive operations, emotion facial recognition (EFR) is used to infer individuals’ inner affective states from traits such as facial muscle movements. In this way, law enforcement aims to obtain insightful hints about unknown persons acting suspiciously in public or strategic venues (e.g., train stations and airports). While the deployment of such tools may still seem relegated to dystopian scenarios, it is already a reality in some parts of the world. Hence the need to explore their compatibility with the European human rights framework. The chapter undertakes this task and examines whether and how EFR can be considered compliant with the rights to privacy and data protection, the freedom of thought, and the presumption of innocence.
All faces are famous?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4243000
The Right of Publicity: A New Framework for Regulating Facial Recognition
In this article, I develop a novel theory for how right of publicity (ROP) claims could apply to facial recognition (FR) systems and detail how their history and development, both statutory and common law, demonstrate their power to impose liability on entities that conduct mass image and identity appropriation, especially through innovative visual technologies. This provides a robust framework for FR regulation while balancing issues of informed consent and various public interest concerns, such as compatibility with copyright law and First Amendment-protected news reporting.
What happens when they turn the “ethics dial” down? (No quarter)
https://apps.dtic.mil/sti/citations/AD1181102
Ethics: The Key to Operationalizing AI-Enabled Autonomous Weapons
The so-called killer robots have arrived, and artificial intelligence-enabled autonomous weapons stand to be a prominent feature of future war. Against a backdrop of international competitor development of these systems, overlaid against international and multinational corporate concern, the National Security Commission on AI’s Final Report judges that these types of unmanned weapons can and should be used in ways consistent with international humanitarian law by applying the conditions of human-authorized use and proper design and testing. AI-enabled autonomy and its military applications carry with them the foundational risks inherent in these technologies, and their use in unmanned weapons further challenges militaries seeking legal use within the frameworks of international humanitarian law and Just War Theory. Ethics therefore provides the superior conceptual vehicle to appoint and empower human authorizers and users and to qualitatively establish what constitutes proper design and testing. Each of the seven AI worker archetypes established by the DoD’s Campaign for an AI Ready Force should apply role-relevant, AI-related ethics to fully realize the conditions established in the Final Report and to retain and support the humanity necessary to control the monopoly on violence. The need for ethics education, individually and collectively, permeates each of the archetypes, and the DoD must recognize the value of public-private partnerships to fully account for these conditions.
Age makes little difference?
https://arxiv.org/abs/2210.01369
Understanding Older Adults' Perceptions and Challenges in Using AI-enabled Everyday Technologies
Artificial intelligence (AI)-enabled everyday technologies could help address age-related challenges like physical impairments and cognitive decline. While recent research has studied older adults’ experiences with specific AI-enabled products (e.g., conversational agents and assistive robots), it remains unknown how older adults perceive and experience current AI-enabled everyday technologies in general, which could affect their adoption of future AI-enabled products. We conducted a survey study (N=41) and semi-structured interviews (N=15) with older adults to understand their experiences and perceptions of AI. We found that older adults were enthusiastic about learning and using AI-enabled products, but they lacked learning avenues. Additionally, they worried when AI-enabled products outwitted their expectations, intruded on their privacy, or impacted their decision-making skills. They therefore held mixed views of AI-enabled products, seeing AI as either an aid or an adversary. We conclude with design recommendations that make older adults feel included, secure, and in control of their interactions with AI-enabled products.
A start?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4238951
A Proposal for a Definition of General Purpose Artificial Intelligence Systems
The European Union (EU) is in the middle of comprehensively regulating artificial intelligence (AI) through an effort known as the AI Act. Within the vast spectrum of issues under the Act’s aegis, the treatment of technologies classified as general purpose AI systems (GPAIS) merits special consideration. In particular, existing proposals to define GPAIS do not provide sufficient guidance to distinguish these systems from those designed to perform specific tasks, termed fixed-purpose systems. Thus, our working paper has three objectives: first, to highlight the variance and ambiguity in the interpretation of GPAIS in the literature; second, to examine the dimensions of generality of purpose available to define GPAIS; and lastly, to propose a functional definition of the term that facilitates its governance within the EU. Our intention with this piece is to spark a discussion that improves the hard and soft law efforts to mitigate these systems’ risks and protect the well-being and future of constituencies in the EU and globally.
So sue the AI!
http://jlr.sdil.ac.ir/article_153656_a4cb53f744bd047adeb18b7d671f1346.pdf?lang=en
Civil Liability of the User in Using the Artificial Intelligence System in the Car
Although self-driving vehicles can be considered a revolution in the transportation industry, this new technology, which is based on artificial intelligence, creates challenges for the current system of civil liability in addition to offering high efficiency. Since accidents will always be an integral part of vehicle use, it is important to create a new liability scheme that outlines the legal obligations of potential litigants. The present study, using a descriptive-analytical approach and a comparative view of the civil liability of self-driving car users, concludes that, unlike conventional vehicles, where the driver is traditionally held immediately responsible for road accidents, AI-based self-driving cars can move even in the complete absence of a human operator. Even when a user is present, he or she is, except in special cases, considered a passenger of the car. In practice, then, fault is ruled out, because the user does not participate in controlling a vehicle whose behavior the user cannot change. The current view is therefore inconsistent with the structure of these vehicles, and the traditional rules must be redefined; until then, the governing principle should be that the manufacturer of the car compensates for the damage, with the user’s liability treated as exceptional.