Thursday, March 31, 2022

Perhaps surveillance IS the disease?

https://www.the74million.org/article/from-face-mask-detection-to-temperature-checks-districts-bought-ai-surveillance-cameras-to-fight-covid-why-critics-call-them-smoke-and-mirrors/

From Face Mask Detection to Temperature Checks, Districts Bought AI-Surveillance Cameras to Fight COVID. Why Critics Call Them ‘Smoke and Mirrors’

In Rockland, Maine’s Regional School Unit 13, officials used federal pandemic relief money to procure a network of cameras with “Face Match” technology for contact tracing. Through advanced surveillance, the cameras, made by California-based security company Verkada, allow the 1,600-student district to identify students who came into close contact with classmates who tested positive for COVID-19. In its marketing materials, Verkada explains how districts could use federal funds tied to the public health crisis to buy its cameras for contact tracing and crowd control.

At a district in suburban Houston, officials spent nearly $75,000 on AI-enabled cameras from Hikvision, a surveillance company owned in part by the Chinese government, and deployed thermal imaging and facial detection to identify students with elevated temperatures and those without masks.

Security hardware for the sake of public perception, the industry expert said, is simply “smoke and mirrors.”

“It’s creating a façade,” he said. “Parents think that all the bells and whistles are going to keep their kids safer, and that’s not necessarily the case. With cameras, in the vast majority of schools, nobody is monitoring them.”





A new diligence?

https://www.law.com/corpcounsel/2022/03/30/dont-blindly-rely-on-the-algorithms-how-firms-can-limit-liability-amid-ai-explosion/

'Don't Blindly Rely on the Algorithms': How Firms Can Limit Liability Amid AI Explosion

The explosion in popularity of artificial intelligence tools is opening up companies and their legal departments to a range of new legal risks, experts say, some of which are not yet clearly understood.

The problems stem from the reality that companies employing artificial intelligence often don’t fully understand how they work, or whether their decision-making processes might be discriminatory or expose the company to other risks.

“The minute you hear the phrase ‘AI’ or ‘algorithms,’ as in-house counsel, that is a signal to you that that is a product or tool that requires further scrutiny,” said Kristin Madigan, a former Federal Trade Commission attorney who’s now a partner in the San Francisco office of Crowell & Moring.

It’s not always clear, however, whether companies inadvertently misusing AI tools will be the only ones subject to enforcement actions or lawsuits, lawyers say. After all, didn’t another company create that tool in the first place? Couldn’t the maker of the AI tool also be on the hook? The answer is not always straightforward.





We can’t get there from here?

https://www.brookings.edu/research/six-steps-to-responsible-ai-in-the-federal-government/

Six Steps to Responsible AI in the Federal Government

Editor's Note: This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

… In the criminal justice area, for example, Richard Berk and colleagues argue that there are many kinds of fairness and it is “impossible to maximize accuracy and fairness at the same time, and impossible simultaneously to satisfy all kinds of fairness.”[4] While sobering, that assessment likely is on the mark and therefore must be part of our thinking on ways to resolve these tensions.
