One public record is a point of contact. Millions of records look like input to machine learning.
https://cybernews.com/security/clubhouse-data-leak-1-3-million-user-records-leaked-for-free-online/
Clubhouse data leak: 1.3 million scraped user records leaked online for free
Days after scraped data from more than a billion Facebook and LinkedIn profiles, combined, was put up for sale online, it looks like now it’s Clubhouse’s turn. The upstart platform seems to have experienced the same fate, with an SQL database containing 1.3 million scraped Clubhouse user records leaked for free on a popular hacker forum.
Local use is okay if a local ordinance is in place. But is there anything to prevent local cops from asking state cops to ID a face?
https://www.pogowasright.org/virginia-to-ban-local-police-from-using-facial-recognition/
Virginia to Ban Local Police from Using Facial Recognition
From EPIC.org:
A bill passed in Virginia will ban local law-enforcement agencies from using facial recognition technology without prior legislative approval starting July 1, 2021. The bill further requires any local police agency eventually authorized to have “exclusive control” over the facial recognition system, preventing the use of Clearview AI and other commercial FR products. However, Virginia State Police and other state law enforcement agencies may continue to use facial recognition. EPIC and a coalition recently urged New York City Council to enact a comprehensive ban on facial recognition. EPIC leads a campaign to Ban Face Surveillance and through the Public Voice Coalition gathered support from over 100 organizations and experts from more than 30 countries.
We should be ethical and legal?
https://link.springer.com/article/10.1365/s43439-021-00022-x
The global governance on automated facial recognition (AFR): ethical and legal opportunities and privacy challenges
The digital revolution transforms people’s view about values and priorities. Automated facial recognition (AFR) comes with many concerns as well as benefits. The technology raises significant legal and ethical challenges, which risk perpetuating systemic injustice unless countervailing measures are put in place. The way facial images are obtained and used, potentially without consent or opportunities to opt out, can have a negative impact on people’s privacy. Laws on privacy vary across jurisdictions, which has an enormous effect on measures that could be taken to safeguard AFR-related ethical concerns. In an era of digitalisation, the existing laws are ill-equipped to address evolving needs against threats to individual privacy. Integrating the principles of proportionality and necessity, it is of the utmost importance to ensure the proper use of AFR in a socially responsible way. It is imperative to build an AFR infrastructure that incorporates society’s legal and ethical commitments, and further address the challenges of governing the technology.
Too much law?
https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2824&context=ilj
WHAT’S YOUR PRIVACY WORTH ON THE GLOBAL TECH MARKET?
WEIGHING THE COST OF PROTECTING CONSUMER DATA AGAINST THE RISK THAT NEW LEGISLATION MAY STIFLE COMPETITION AND INNOVATION DURING THIS GLOBAL, TECHNOLOGICAL REVOLUTION.
The world is currently in an artificial intelligence (“AI”) arms race, whereby the first nation to develop AI will become the global super nation. That country will set the precedent for generations of future economic, technological, medical, and societal growth. While companies like Facebook, Google, and Amazon have propelled the United States to the front of this race for AI dominance, corporations have overstepped ethical norms of data gathering and processing: methods necessary for technological development. Numerous data privacy breaches have left some consumers unlikely to ever share their data willingly without some assurances of protection. Noting these corporate scandals and data’s potential for abuse, many countries have implemented data privacy laws to protect consumers. Statutes enacted for this purpose include the European Union’s ratification of the General Data Protection Regulation (“GDPR”), the United States’ various local statutes, and China’s cybersecurity law (“CSL”) and its Personal Information Security Specification (“2018 Specification”). This Note argues that enacting widespread legislation as a means of protecting consumer data will cause more problems than it solves. Over-legislating technology will threaten innovation as tight-leashed constraints on development hinder growth. The consequences to a nation’s global stance in this race to innovate weigh as heavily as individuals’ privacy interests. The real battle will be treading the line between protecting citizens’ privacy while facilitating technological growth. After examining the flaws with the GDPR, the CSL, and the 2018 Specification, this Note urges the United States to enact a federally binding data privacy statute, incorporating some principles found within various pieces of legislation, that strikes a balance between protecting consumer data privacy and enabling technological innovation.
Artificial law?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3791648
Artificial Intelligence and its Applicability to Court Decisions
The relevance of the topic artificial intelligence (AI) in the current reality is notorious, arising from the exponential advance of technical-scientific knowledge responsible for the development of both computers (hardware) and the programs (software) installed in them. The expansion of technical knowledge, responsible for almost daily technological innovations, makes it indispensable the corresponding expansion of the study and critical analysis of these innovations from a normative (ethical/moral and legal) perspective, with the evaluation of the impact and consequences of the use of AI programs in human life, in its individual and social dimensions, in order to provide the adequacy of the technical production to the normative parameters socially understood as due. This article aims to make a critical analysis of the application of AI to Law, by examining the use of AI programs by the judiciary. Starting from the concept of AI, its present adoption by Brazilian courts is presented, with an exposition of the activities performed by the AI programs used. This is followed by a debate on the appropriateness of the type of tasks to be assigned to such programs within the judiciary. The pertinence of assigning the execution of repetitive tasks to machines is emphasized, but the decision-making activity (the ultimate purpose of the judiciary) is based on exclusive human competence. Finally, it critically examines, under the prism of principles inherent to the democratic rule of law and the principles of fundamental rights, the reality of the United States, where the judiciary in most states uses AI programs - risk assessment software - to assist the judge in making pre-trial and procedural decisions.
Applicable in other areas?
https://europepmc.org/article/med/33821471
Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care.
The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity-because the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify implementation of these AI systems in health care, or is it merely one of the necessary criteria? I argue that accountability, which holds an important role in preserving the patient-physician trust that allows the institution of medicine to function, contributes further to an account of AI system justification. Hence, I endorse the vanishing accountability principle: accountability in medicine, in addition to statistical validation, must be preserved. AI systems that introduce problematic gaps in accountability should not be implemented.
Protect data to control AI? I don’t think so...
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3817472
Automated decision-making as a data protection issue
Artificial intelligence techniques have been used to automate various procedures in modern life, ranging from ludic applications to substantial decisions about the lives of individuals and groups. Given the variety of automated decision-making applications and the different forms in which decisions may harm humans, the law has struggled to provide adequate responses to automation. This paper examines the role of a specific branch of law — data protection law — in the regulation of artificial intelligence. Data protection law is applicable to automation scenarios which rely on data about natural persons, and it seeks to address risks to these persons through three approaches: allowing persons to exercise rights against specific automated decisions, disclosing information about the decision-making systems and imposing design requirements for those systems. By exploring the potentials and limits of such approaches, this paper presents a portrait of the relevance of data protection law for regulating AI.
Why does the boss seem so ignorant? In some areas he is!
Training upward: your executives may not fully understand digital transformation
There has been plenty of discussion about the need to provide training to workers to acquaint them with digital skills. Some high-profile companies, such as Amazon, have committed almost a billion dollars to bring their workforces up to speed with digital and artificial intelligence skills. Worker training is a necessity these days, but for many technology managers and professionals, there's just as pressing a need to train upward in the ranks.
… Digital or technology savvy is not a top requirement for executive or board-level jobs, a recent study published in Harvard Business Review finds. A perusal of executive search listings finds that while high-focus roles such as CIO and CTO mention "technology" or "digital" skills as part of their criteria, this drops to 60% for CEO listings, 40% for COO, CFO and board listings, and 30% for HR leaders.
… The top skills needed in the C-suite include design thinking, artificial intelligence, data science, machine learning techniques, cybersecurity, and DevOps, a 2019 study from Gartner found.
With the rise of the digital economy, "the demand for digital savviness in the upper echelons of leadership has grown far more quickly than the supply," another recent study published in MIT Sloan Management Review confirms. The analysis of almost 2,000 large companies finds that only seven percent have digitally savvy executive teams.
Perspective.
https://siliconangle.com/2021/04/10/new-era-innovation-moores-law-not-dead-ai-ready-explode/
A new era of innovation: Moore’s Law is not dead and AI is ready to explode
Moore’s Law is dead, right? Think again.
Although the historical annual improvement of about 40% in central processing unit performance is slowing, the combination of CPUs packaged with alternative processors is improving at a rate of more than 100% per annum. These unprecedented and massive improvements in processing power combined with data and artificial intelligence will completely change the way we think about designing hardware, writing software and applying technology to businesses.
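To see why the rate difference matters, here is a back-of-the-envelope sketch (not from the article) compounding the two cited annual improvement rates over a decade:

```python
def cumulative_gain(annual_rate: float, years: int) -> float:
    """Compound an annual improvement rate over a number of years."""
    return (1 + annual_rate) ** years

# Historical ~40% per-year CPU improvement vs. the >100% per-year
# rate cited for CPUs packaged with alternative processors.
cpu_only = cumulative_gain(0.40, 10)   # roughly 29x over ten years
combined = cumulative_gain(1.00, 10)   # 1024x over ten years
print(f"CPU-only decade gain: {cpu_only:.0f}x")
print(f"Combined decade gain: {combined:.0f}x")
```

Under these assumed rates, the combined-processor trajectory delivers on the order of 35 times more cumulative improvement in a decade than CPUs alone, which is the scale of change the article argues will reshape hardware design and software.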
Every industry will be disrupted. You hear that all the time. Well, it’s absolutely true and we’re going to explain why and what it all means.
In this Breaking Analysis, we’re going to unveil some data that suggests we’re entering a new era of innovation where inexpensive processing capabilities will power an explosion of machine intelligence applications. We’ll also tell you what new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade.