Saturday, January 14, 2023

I wonder what they match to? How much of a face covered by a hijab is required for a match?

https://news.yahoo.com/chinese-facial-recognition-technology-helping-152536986.html

Chinese facial recognition technology helping Iran to identify women breaking strict dress code: Report

Shajarizadeh is one of several observers of Iran who fear that the country's Islamist regime has begun to weaponize facial recognition technology to find and punish women who flout laws about their dress and appearance in public, a setback for activists amid months of protesting for women's rights and regime change.

The fears that Iran could be using the technology come a year after such a system was proposed by Iranian lawmakers. Their calls were heard by the head of the government agency responsible for enforcing morality laws, who in a September interview said facial recognition would be used "to identify inappropriate and unusual movements," and a "failure to observe hijab laws."



Friday, January 13, 2023

It isn’t the end of the world.

https://www.schneier.com/blog/archives/2023/01/threats-of-machine-generated-text.html

Threats of Machine-Generated Text

With the release of ChatGPT, I’ve read many random articles about this or that threat from the technology. This paper is a good survey of the field: what the threats are, how we might detect machine-generated text, directions for future research. It’s a solid grounding amongst all of the hype.

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods
Abstract: Advances in natural language generation (NLG) have resulted in machine generated text that is increasingly difficult to distinguish from human authored text. Powerful open-source models are freely available, and user-friendly tools democratizing access to generative models are proliferating. The great potential of state-of-the-art NLG systems is tempered by the multitude of avenues for abuse. Detection of machine generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems. We provide a survey that includes both 1) an extensive analysis of threat models posed by contemporary NLG systems, and 2) the most complete review of machine generated text detection methods to date. This survey places machine generated text within its cybersecurity and social context, and provides strong guidance for future work addressing the most critical threat models, and ensuring detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability.
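Most detection methods in surveys like this score text with a language model: machine-generated prose tends to be more statistically predictable than human writing. A toy sketch of that idea, assuming a simple unigram frequency model stands in for a real language model (actual detectors use LLM perplexity or trained classifiers):

```python
import math
from collections import Counter

def unigram_model(corpus):
    """Build a Laplace-smoothed unigram probability function from reference text."""
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts)
    return lambda tok: (counts[tok] + 1) / (total + vocab + 1)

def predictability_score(text, model):
    """Average negative log-probability: lower means more predictable text."""
    tokens = text.lower().split()
    return sum(-math.log(model(t)) for t in tokens) / len(tokens)

# Hypothetical reference corpus; a real detector scores against a large LM.
model = unigram_model("the cat sat on the mat the dog sat on the rug")
common = predictability_score("the cat sat on the mat", model)
rare = predictability_score("zygote quasar bellows", model)
```

Text built from high-frequency tokens scores lower (more predictable) than text of unseen tokens; real detectors threshold a far more sophisticated version of this score.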



(Related) Imagine the volume of AI generated text growing exponentially. AI will then “learn” by reading that text.

https://www.bespacific.com/abstracts-written-by-chatgpt-fool-scientists/

Abstracts written by ChatGPT fool scientists

Nature: “An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December. Researchers are divided over the implications for science. “I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds. The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use. Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint and an editorial written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them…”





Mad at Comcast? It does suggest what the impact of a more general attack would be.

https://www.fox21news.com/top-stories/comcast-widespread-power-outage-deliberately-caused/

Comcast: Widespread internet outage deliberately caused

A widespread internet outage affecting hundreds of thousands of people, from Chipita Park to Fountain, was due to an overnight act of vandalism, according to Comcast workers.

Security footage from a local business, Status Symbol Auto Body, shows a person severing fiber network cables behind the shop around 3:00 a.m. Wednesday.

The person, who was driving a burgundy Dodge truck, drove past the shop a couple of times before getting out. It took them about ten seconds to cut the cables, using a tool that is believed to be a Sawzall.

Crews on site started repairing the cables around 5:00 a.m. Wednesday. Bendele had said they were hoping to get service back up and running by 3:00 p.m., but even with 12 people on hand, the meticulous work took four hours longer than he estimated.

Students in the Manitou Springs School District had the day off due to the outage.

“It shows how reliant we are on the internet,” said Jensen.

The Colorado Springs Police Department said they are investigating the incident but called it a “low priority call.”



Thursday, January 12, 2023

New law happens…

https://www.databreaches.net/sec-sues-covington-law-firm-for-names-of-300-clients-caught-up-in-hack/

SEC sues Covington law firm for names of 300 clients caught up in hack

Andrew Goudsward reports:

The U.S. Securities and Exchange Commission has sued law firm Covington & Burling for details about nearly 300 of the firm’s clients whose information was accessed or stolen by hackers in a previously undisclosed cyberattack, court documents show.

Hackers associated with the Hafnium cyber-espionage group, which has alleged ties to the Chinese government, gained access to Covington’s computer networks around November 2020, accessing private information about the firm’s clients, including 298 publicly traded companies, according to a lawsuit filed Tuesday by the SEC.

Read more at Reuters.

For its part, Covington says it was a state-sponsored attack that affected a limited number of clients and that the firm had notified those clients and worked with the FBI.

But does attorney-client privilege trump the SEC’s authority to investigate? This is an interesting one to watch.





Are we closer to replacing lawyers with AI? (If not, why not?)

https://www.bespacific.com/the-implications-of-openais-assistant-for-legal-services-and-society/

The Implications of OpenAI’s Assistant for Legal Services and Society

ChatGPT, Open AI’s Assistant and Perlman, Andrew, The Implications of OpenAI’s Assistant for Legal Services and Society (December 5, 2022). Available at SSRN: https://ssrn.com/abstract=4294197 or http://dx.doi.org/10.2139/ssrn.4294197

On November 30, 2022, OpenAI released a chatbot called ChatGPT. To demonstrate the chatbot’s remarkable sophistication and its potential implications, both for legal services and society more generally, a human author generated this paper in about an hour through prompts within ChatGPT. Only this abstract, the outline headers, and the prompts were written by a person. ChatGPT generated the rest of the text with no human editing. To be clear, the responses generated by ChatGPT were imperfect and at times problematic, and the use of an AI tool for law-related services raises a host of regulatory and ethical issues. At the same time, ChatGPT highlights the promise of artificial intelligence, including its ability to affect our lives in both modest and more profound ways. ChatGPT suggests an imminent reimagination of how we access and create information, obtain legal and other services, and prepare people for their careers. We also will soon face new questions about the role of knowledge workers in society, the attribution of work (e.g., determining when people’s written work is their own), and the potential misuse of and excessive reliance on the information produced by these kinds of tools. The disruptions from AI’s rapid development are no longer in the distant future. They have arrived, and this document offers a small taste of what lies ahead.”



Wednesday, January 11, 2023

Imagine what a copy of this data could reveal about you! Self-surveillance just keeps getting easier.

https://www.lajollalight.com/news/story/2023-01-10/extensions-of-our-mind-personalized-artificial-intelligence-is-on-the-way-courtesy-of-local-company

‘Extensions of our mind’: Personalized artificial intelligence is on the way, courtesy of local company

A local company is taking the next steps toward creating individualized artificial intelligence with a digital vault of one’s mind.

Thus, while the human brain keeps only a percentage of the information it takes in, the AI model will be able to retain everything for quick access.

… “It’s kind of like a little assistant on your shoulder, and when you’re struggling to remember something, it whispers in your ear,” said Personal.ai head of finance Jonathan Bikoff. “The model gets larger and larger as you feed it data, and it’s your assistant — no one else gets to use it. [??? Bob] And unlike me, my model will never forget. So if I ever want to retrieve that data, instead of scattering through my internet history, emails or text messages to find whatever it was, I just ask my model.”

Personal.ai’s model can receive information from across multiple platforms such as text messages, emails or chat apps, and rather than rely on keywords to locate the information, the model can learn the user’s way of describing the information that is needed and locate it that way.

“It will remember everything — it will contextualize it, understand it and help you recall it,” Bikoff said.
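Personal.ai hasn’t published how its retrieval works; systems like this typically rank a personal archive by semantic similarity to the query rather than exact keyword match, usually with learned embeddings. A minimal sketch of the ranking idea, using TF-IDF vectors as a stand-in for embeddings (all data here is invented for illustration):

```python
import math
from collections import Counter

def build_index(docs):
    """Compute TF-IDF vectors for a small personal archive (toy example)."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter(t for toks in tokenized for t in set(toks))
    def vectorize(tokens):
        tf = Counter(tokens)
        return {t: c * math.log((n + 1) / (df.get(t, 0) + 1)) for t, c in tf.items()}
    return [vectorize(toks) for toks in tokenized], vectorize

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Return the index of the document most similar to the query."""
    vectors, vectorize = build_index(docs)
    q = vectorize(query.lower().split())
    return max(range(len(docs)), key=lambda i: cosine(q, vectors[i]))

docs = [
    "dinner reservation confirmed for friday at seven",
    "your flight to denver departs gate b12",
    "quarterly budget spreadsheet attached",
]
best = search("when does my flight leave", docs)
```

A real personal model would replace the TF-IDF vectors with embeddings learned from the user’s own writing, which is how it can match a description instead of a keyword.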





It’s a free service. Did they get what they paid for?

https://arstechnica.com/information-technology/2023/01/contoversy-erupts-over-non-consensual-ai-mental-health-experiment/

Controversy erupts over non-consensual AI mental health experiment [Updated]

On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counseling for 4,000 people without informing them first, Vice reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counseling.

Koko is a nonprofit mental health platform that connects teens and adults who need mental health help to volunteers through messaging apps like Telegram and Discord.

During the AI experiment—which applied to about 30,000 messages, according to Morris—volunteers providing assistance to others had the option to use a response automatically generated by OpenAI's GPT-3 large language model instead of writing one themselves (GPT-3 is the technology behind the recently popular ChatGPT chatbot).

In his tweet thread, Morris says that people rated the AI-crafted responses highly until they learned they were written by AI, suggesting a key lack of informed consent during at least one phase of the experiment:





I too would like a list of vulnerable technology, so I know how to prioritize my hacks.

https://www.insideprivacy.com/policy-and-legislation/president-biden-signs-quantum-computing-cybersecurity-preparedness-act/

President Biden Signs Quantum Computing Cybersecurity Preparedness Act

In a new post on the Inside Tech Media blog, our colleagues discuss the “Quantum Computing Cybersecurity Preparedness Act,” which President Biden signed into law in the final days of 2022. The Act recognizes that current encryption protocols used by the federal government might one day be vulnerable to compromise as a result of quantum computing, which could allow adversaries of the United States to steal sensitive encrypted data. To address these concerns, the Act will require an inventory and prioritization of vulnerable information technology in use by federal agencies; a plan to migrate existing information technology systems; and reports to Congress on the progress of the migration and funding required.
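The inventory-and-prioritize step the Act mandates amounts to classifying deployed cryptography by quantum risk: public-key schemes such as RSA and the elliptic-curve algorithms are broken outright by Shor’s algorithm, while symmetric primitives only lose security margin to Grover’s search. A minimal sketch of such a triage (the algorithm lists and priority tiers here are illustrative, not the Act’s):

```python
# Public-key algorithms broken by Shor's algorithm: migrate these first.
SHOR_BROKEN = {"RSA", "DSA", "DH", "ECDSA", "ECDH"}
# Primitives only weakened by Grover's quadratic speedup: lower urgency.
GROVER_WEAKENED = {"AES-128", "3DES", "SHA-256"}

def migration_priority(algorithm):
    """Smaller number = migrate sooner."""
    if algorithm in SHOR_BROKEN:
        return 0
    if algorithm in GROVER_WEAKENED:
        return 1
    return 2  # e.g. AES-256 or already post-quantum

def prioritize(inventory):
    """inventory: list of (system_name, algorithm) pairs."""
    return sorted(inventory, key=lambda item: migration_priority(item[1]))

systems = [
    ("backup-archive", "AES-256"),
    ("vpn-gateway", "RSA"),
    ("file-integrity", "SHA-256"),
]
ordered = prioritize(systems)
```

An agency’s real inventory would also weigh data sensitivity and how long the data must stay secret ("harvest now, decrypt later" risk), not just the algorithm.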



(Related) Vulnerability is everywhere…

https://www.bespacific.com/government-watchdog-spent-15000-to-crack-a-federal-agencys-passwords-in-minutes/

Government watchdog spent $15,000 to crack a federal agency’s passwords in minutes

TechCrunch: “A government watchdog has published a scathing rebuke of the Department of the Interior’s cybersecurity posture, finding it was able to crack thousands of employee user accounts because the department’s security policies allow easily guessable passwords like 'Password1234'. The report by the Office of the Inspector General for the Department of the Interior, tasked with oversight of the U.S. executive agency that manages the country’s federal land, national parks and a budget of billions of dollars, said that the department’s reliance on passwords as the sole way of protecting some of its most important systems and employees’ user accounts has bucked nearly two decades of the government’s own cybersecurity guidance of mandating stronger two-factor authentication. It concludes that poor password policies put the department at risk of a breach that could lead to a “high probability” of massive disruption to its operations.”
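The watchdog reportedly used commercial cracking rigs against the department's password hashes; the core of a dictionary attack is simple enough to sketch in a few lines (unsalted MD5 is used here purely for illustration; the hashes in the actual report were Windows account hashes):

```python
import hashlib

def dictionary_attack(target_hashes, wordlist):
    """Hash each candidate password and check it against the stolen hash set."""
    cracked = {}
    for candidate in wordlist:
        digest = hashlib.md5(candidate.encode()).hexdigest()
        if digest in target_hashes:
            cracked[digest] = candidate
    return cracked

# Simulated stolen hash database containing one guessable password.
stolen = {hashlib.md5(b"Password1234").hexdigest()}
found = dictionary_attack(stolen, ["letmein", "Password1234", "hunter2"])
```

Because each guess is a single cheap hash, a rig testing billions of candidates per second makes any password appearing in a wordlist fall in seconds, which is why the report pushes two-factor authentication rather than password rules alone.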





Something to read? Doctorow is a great ‘explainer.’ [Podcast]

https://www.theverge.com/23547877/decoder-chokepoint-capitalism-cory-doctorow-rebecca-giblin-spotify-ticketmaster-antitrust

What is chokepoint capitalism, with authors Cory Doctorow and Rebecca Giblin

Last year, I spoke with Cory Doctorow and Rebecca Giblin about their new book, Chokepoint Capitalism. It’s a book about artists and technology and platforms and how different kinds of distribution and creation tools create chokepoints for different companies to capture value that might otherwise go to artists and creators. In other words, it’s a lot of Decoder stuff.

As we were prepping this episode, the Decoder team realized it previews a lot of things we’re going to talk about in 2023: antitrust law; Ticketmaster; Spotify and the future of the music industry; Amazon and the book industry; and of course, being a creator trying to make a living on all of these platforms.

The best part of the book is that Rebecca and Cory have some good ideas about how to actually solve some of the problems they talk about. As you’ll hear Cory say, the book isn’t just expounding on all the problems — half the book is about solutions.

The following transcript has been lightly edited for clarity.





Tools & Techniques.

https://www.bespacific.com/google-docs-voice-to-text-feature-is-getting-major-upgrades-heres-how-to-use-it/

Google Docs’ voice-to-text feature is getting major upgrades. Here’s how to use it

ZDNET: “Your voice is a powerful tool, and Google’s dictation tool can help you harness it, convert it, and even present it. With the Google suite’s voice-to-text capabilities, transferring thoughts from speech to digital copy is quick and simple. On top of that, Google just announced that its upcoming batch of voice-to-text improvements, coming in February, will make for a more reliable and accurate transcription process. The list of changes includes:

  • Reduction of transcription errors

  • Minimizing lost audio during transcription

  • Expanded support for major browsers

  • Captions in Slides will have auto-generated punctuation

While you will have to wait a few more weeks before you can fully benefit from the performance gains, allow me to first show you how to use Google’s voice dictation feature to type and edit on Google Docs and even transcribe speaker notes on Google Slides…”



Tuesday, January 10, 2023

A surveillance vacuum? What could other appliances reveal?

https://www.pogowasright.org/roomba-testers-feel-misled-after-intimate-images-ended-up-on-facebook/

Roomba testers feel misled after intimate images ended up on Facebook

Eileen Guo reports:

When Greg unboxed a new Roomba robot vacuum cleaner in December 2019, he thought he knew what he was getting into.
[…]
But what Greg didn’t know—and does not believe he consented to—was that iRobot would share test users’ data in a sprawling, global data supply chain, where everything (and every person) captured by the devices’ front-facing cameras could be seen, and perhaps annotated, by low-paid contractors outside the United States who could screenshot and share images at their will.

Read more at MIT Technology Review.





Why we need AI lawyers!

https://www.computerworld.com/article/3684734/this-lawsuit-against-microsoft-could-change-the-future-of-ai.html

This lawsuit against Microsoft could change the future of AI

Artificial intelligence (AI) is suddenly the darling of the tech world, thanks to ChatGPT, an AI chatbot which can do things such as carry on conversations and write essays and articles with what some people believe is human-like skill. In its first five days, more than a million people signed up to try it. The New York Times hails its “brilliance and weirdness” and says it inspires both awe and fear.

For all the glitz and hype surrounding ChatGPT, what it’s doing now are essentially stunts — a way to get as much attention as possible. The future of AI isn’t in writing articles about Beyonce in the style of Charles Dickens, or any of the other oddball things people use ChatGPT for. Instead, AI will be primarily a business tool, reaping billions of dollars for companies that use it for tasks like improving Internet searches, writing software code, discovering and fixing inefficiencies in a company’s business, and extracting useful, actionable information from massive amounts of data.

But there's a dirty little secret at the core of AI — intellectual property theft. To do its work, AI needs to constantly ingest data, lots of it. Think of it as the monster plant Audrey II in Little Shop of Horrors, constantly crying out “Feed me!” Detractors say AI is violating intellectual property laws by hoovering up information without getting the rights to it, and that things will only get worse from here.

An intellectual property lawsuit against Microsoft may determine the future of AI. It charges that Microsoft, the Microsoft code repository GitHub, and OpenAI, the parent of ChatGPT, have illegally used code created by others in order to build and train the Copilot service that uses AI to write software. (Microsoft has invested $1 billion in OpenAI.)

The future of AI may well hinge on the suit’s outcome.





“We don’t sell ads, Senator.”

https://www.makeuseof.com/how-does-mastodon-make-money/

How Does Mastodon Make Money?

Mastodon, the open-source social media platform that is quickly becoming a major player in the world of online communication, may have you wondering how it’s actually making money to keep the lights on and the servers running. After all, it’s free to use, and unlike other popular platforms it has no ads, fees, or VC funding.



Monday, January 09, 2023

I would be shocked if the cyberattack part of the war crime did not also occur outside of war zones.

https://www.politico.eu/article/victor-zhora-ukraine-russia-cyberattack-infrastructure-war-crime/

Kyiv argues Russian cyberattacks could be war crimes

One of Ukraine's top cyber officials said some cyberattacks on Ukrainian critical and civilian infrastructure could amount to war crimes.

Victor Zhora, chief digital transformation officer at the State Service of Special Communication and Information Protection (SSSCIP) of Ukraine, said Russia has launched cyberattacks in coordination with kinetic military attacks as part of its invasion of Ukraine, arguing the digital warfare is part of what Kyiv considers war crimes committed against its citizens.

“When we observe the situation in cyberspace we notice some coordination between kinetic strikes and cyberattacks, and since the majority of kinetic attacks are organized against civilians — being a direct act of war crime — supportive actions in cyber can be considered as war crimes,” Zhora told POLITICO in an interview.

Classifying Russia's digital attacks on Ukrainian infrastructure as part of war crimes would be a first. Academics and researchers have been making the case for it since earlier this year, asking the Office of the Prosecutor at the ICC to add cyberattacks to their investigations into the war in Ukraine.





When writing insurance is too risky (too expensive), sell some of that risk to others.

https://www.ft.com/content/a945d290-a7f1-427c-84a6-b0b0574f7376

Insurer Beazley launches first catastrophe bond for cyber threats

Lloyd’s of London insurer Beazley has launched the first cyber catastrophe bond, opening up one of the fastest-growing areas of the underwriting industry to investors as companies and governments seek to shield themselves from ransomware strikes.

The $45mn private bond will pay out to Beazley if total claims from a cyber attack on its clients exceed $300mn — a structure intended to give some protection to the insurer’s balance sheet from “remote probability catastrophe and systemic events”.

Adrian Cox, Beazley’s chief executive, told the Financial Times that the new instrument gave the insurer access to a much larger source of capital.

“What that taps into is a pool that is trillions [of dollars] rather than hundreds of billions, and is a pathway for us to be able to hedge and grow,” Cox said. Beazley hoped, he added, to scale this “new tool” to eventually provide billions of dollars worth of reinsurance cover.
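The article’s numbers describe a standard excess-of-loss structure: the investors’ $45mn principal absorbs losses only once aggregate claims pierce the $300mn attachment point. A sketch of that payoff (the real bond’s trigger mechanics are surely more involved; this just shows the generic shape):

```python
def bond_payout(total_claims, attachment=300_000_000, principal=45_000_000):
    """Investor principal consumed for a given level of aggregate cyber claims.

    Nothing is paid below the attachment point; above it, losses erode
    principal dollar-for-dollar until the bond is exhausted.
    """
    if total_claims <= attachment:
        return 0
    return min(total_claims - attachment, principal)
```

So a $320mn claims year would consume $20mn of investor principal, and anything beyond $345mn exhausts the bond, capping investors’ downside at the $45mn they put in.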





This likely will happen to most law firms. Plan for it now.

https://abovethelaw.com/2023/01/cyberattack-forces-biglaw-firm-to-take-document-management-system-down-for-weeks/

Cyberattack Forces Biglaw Firm To Take Document Management System Down For Weeks

“We are confident that our process has been professional and appropriate. In fact, I am proud to say that we have received overwhelming praise from our clients for our transparency and the professionalism of our response to this attack.”

Pat Quinn, managing partner of Cadwalader, in a statement given to the American Lawyer, concerning the firm’s response to a mid-November cyberattack that forced the Am Law 100 mainstay to wipe hard drives and take many of its systems offline, some of them for weeks (like its internal document management system). Quinn went on to note that the firm quickly informed clients about the issue and hired “renowned external cybersecurity experts and legal counsel.” Cybersecurity experts told Am Law that Cadwalader’s response was in-line with industry best practices.





Cell phone surveillance is complicated.

https://www.schneier.com/blog/archives/2023/01/identifying-people-using-cell-phone-location-data.html

Identifying People Using Cell Phone Location Data

The two people who shut down four Washington power stations in December were arrested. This is the interesting part:

Investigators identified Greenwood and Crahan almost immediately after the attacks took place by using cell phone data that allegedly showed both men in the vicinity of all four substations, according to court documents.

Nowadays, it seems like an obvious thing to do—although the search is probably unconstitutional. But way back in 2012, the Canadian CSEC—that’s their NSA—did some top-secret work on this kind of thing. The document is part of the Snowden archive, and I wrote about it:

The second application suggested is to identify a particular person whom you know visited a particular geographical area on a series of dates/times. The example in the presentation is a kidnapper. He is based in a rural area, so he can’t risk making his ransom calls from that area. Instead, he drives to an urban area to make those calls. He either uses a burner phone or a pay phone, so he can’t be identified that way. But if you assume that he has some sort of smart phone in his pocket that identifies itself over the Internet, you might be able to find him in that dataset. That is, he might be the only ID that appears in that geographical location around the same time as the ransom calls and at no other times.

There’s a whole lot of surveillance you can do if you can follow everyone, everywhere, all the time. I don’t even think turning your cell phone off would help in this instance. How many people in the Washington area turned their phones off during exactly the times of the Washington power station attacks? Probably a small enough number to investigate them all.
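The investigative technique Schneier describes is essentially a set intersection over location records: keep only the device IDs observed near every incident around its time. A toy sketch, assuming carrier records have already been reduced to (device, timestamp, site) tuples (all names and values here are invented):

```python
def common_devices(sightings, incidents, window_secs=3600):
    """Return device ids present near every incident site within the time window."""
    suspects = None
    for inc_time, inc_site in incidents:
        nearby = {dev for dev, t, site in sightings
                  if site == inc_site and abs(t - inc_time) <= window_secs}
        # Intersect across incidents: survivors were near all of them.
        suspects = nearby if suspects is None else suspects & nearby
    return suspects if suspects is not None else set()

incidents = [(1000, "sub1"), (5000, "sub2"), (9000, "sub3"), (13000, "sub4")]
sightings = [
    ("phoneA", 900, "sub1"), ("phoneA", 5100, "sub2"),
    ("phoneA", 8800, "sub3"), ("phoneA", 13200, "sub4"),
    ("phoneB", 950, "sub1"), ("phoneB", 5050, "sub2"),  # near only two of four
]
suspects = common_devices(sightings, incidents)
```

Each additional incident shrinks the intersection, which is why a series of four substation attacks narrows "everyone in the vicinity" down to a handful of candidates; a conspicuous absence from the data (a phone turned off at exactly those times) is itself a small set to investigate.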





Reasonable suspicion overrules any Rights I might have?

https://www.bespacific.com/the-right-to-equal-protection-and-fourth-amendment-rights-are-distinct-rights/

The Right to Equal Protection and Fourth Amendment Rights Are Distinct Rights

Courts Must Protect Both: “On behalf of MACDL, Attorney Wood and a team of attorneys from Wilmer Cutler Pickering Hale and Dorr recently filed an amicus brief urging the Supreme Judicial Court to fully enforce people’s rights not to be targeted for stops based on their race, regardless of whether the police have reasonable suspicion. The Commonwealth [of Massachusetts] has repeatedly argued that if the police have reasonable suspicion, then it does not matter whether someone has been targeted because of their race. This argument is pernicious, essentially reading the equal protection clause out of the constitution. The SJC must reject such arguments.”



Sunday, January 08, 2023

I didn’t know Aunt Minnie knew Karate.

https://lawresearchmagazine.sbu.ac.ir/article_103035.html?lang=en

Legal Aspects of Deepfake

Deepfake means creating animated content (video) by mimicking, simulating or replacing a pre-existing face. The viewer perceives the produced video as real, but it is not. Deep learning technology, which is based on artificial intelligence, has made it possible to produce such content. This article focuses on the legal aspects of deepfakes and, in particular, the possible abuses of deepfake content: fake content may be produced and/or distributed with the intent to defraud, abuse, deceive the public, or defame a celebrity. The main question is what legal challenges deepfakes entail, and whether the existing Criminal Acts and civil liability regulations are responsive to the new situation that has arisen with the spread of deepfakes. In order to answer such questions, this article examines the characteristics and practical challenges of deepfakes, the use of the provisions of intellectual property agreements to manage deepfakes, and legal solutions to reduce and manage deepfakes.





All our students are crazy and it’s your fault!

https://www.geekwire.com/2023/seattle-public-schools-sues-tiktok-youtube-instagram-and-others-seeking-compensation-for-youth-mental-health-crisis/

Seattle Public Schools sues TikTok, YouTube, Instagram and others, seeking compensation for youth mental health crisis

A new lawsuit filed by Seattle Public Schools against TikTok, YouTube, Facebook, Snap, Instagram, and their parent companies alleges that the social media giants have “successfully exploited the vulnerable brains of youth” for their own profit, using psychological tactics that have led to a mental health crisis in schools.

The suit, filed Friday in U.S. District Court in Seattle, seeks “the maximum statutory and civil penalties permitted by law,” making the case that the companies have violated Washington state’s public nuisance law.

Hundreds of families are pursuing similar cases against the companies, following revelations about the tactics used by Facebook, Instagram, and others to boost engagement among children and teenagers.

However, Seattle Public Schools appears to be the first school district in the country to file such a suit against the companies.

The district alleges that it has suffered widespread financial and operational harm from social media usage and addiction among students. The lawsuit cites factors including the resources required to provide counseling services to students in crisis, and to investigate and respond to threats made against schools and students over social media.

… Read the complaint Seattle School District No. 1 v. Meta Platforms, Snap Inc., TikTok, Alphabet, et al.





The worms are out of the can. Deal with it.

https://www.theguardian.com/commentisfree/2023/jan/07/chatgpt-bot-excel-ai-chatbot-tech

The ChatGPT bot is causing panic now – but it’ll soon be as mundane a tool as Excel

So the ChatGPT language processing model burst upon an astonished world and the air was rent by squeals of delight and cries of outrage or lamentation. The delighted ones were those transfixed by discovering that a machine could apparently carry out a written commission competently. The outrage was triggered by fears of redundancy on the part of people whose employment requires the ability to write workmanlike prose. And the lamentations came from earnest folks (many of them teachers at various levels) whose day jobs involve grading essays hitherto written by students.

So far, so predictable. If we know anything from history, it is that we generally overestimate the short-term impact of new communication technologies, while grossly underestimating their long-term implications. So it was with print, movies, broadcast radio and television and the internet. And I suspect we have just jumped on to the same cognitive merry-go-round.



(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4312358

Exploring the Role of Artificial Intelligence in Enhancing Academic Performance: A Case Study of ChatGPT

This study aims to explore the potential of artificial intelligence, particularly natural language processing, in enhancing academic performance using economics and finance as an illustrative example. The study employs a case study approach, using ChatGPT as a specific example of an NLP tool that has the potential to advance research. Our analysis of ChatGPT's applications, capabilities, and limitations revealed that it has the potential to significantly enhance academic research in general and economics and finance in particular. ChatGPT and other AI tools can assist researchers in data analysis and interpretation, scenario generation, and communication of findings. However, there are several limitations to consider when using chatbots or similar tools in research, including generalizability, dependence on data quality and diversity, lack of domain expertise, limited ability to understand context, ethical considerations, and limited ability to generate original insights. It is therefore important to carefully consider these limitations when using ChatGPT and to use it in conjunction with human analysis and interpretation.





Search everywhere, seize anyone?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4311941

Fourth Amendment Constraints on Automated Surveillance Technology in the Public to Safeguard the Right of an Individual to be 'Secure in Their Person'

Law-enforcement throughout the United States is adopting automated surveillance technology, like facial recognition, at breakneck speeds. The use of such technology is often not approved by a legislative body. Yet, people are subject to this technology with many being incorrectly identified and arrested as a result of such misidentification. As automated surveillance technology proliferates it comes in direct conflict with constitutional traditions. In particular, the Fourth Amendment protection against search and seizure would limit the use of such technology. Although courts have not addressed the growing specter of automated surveillance technology in depth, its impact will likely result in this outcome, especially its use in the public where traditionally privacy expectations have been lower. The Fourth Amendment requirement for “particularity" places an acute limitation on broad dragnet style automated surveillance systems, which requires that law-enforcement particularly identify the place or person to be searched or seized. This article addresses the need to develop jurisprudence that tackles the problem of automated surveillance technology and provides recommendations on how courts can address the use of this technology, as well as suggest remedies that can limit injury caused by unlawful use of this technology.





Some Apps I should have.

https://www.makeuseof.com/unique-iphone-scanner-apps/

7 iPhone Scanner Apps to Track Receipts, Solve Math Problems, and More

Although your iPhone has a built-in scanner in the Notes and Files apps, these third-party scanner apps can do more than just scan documents.