Thursday, July 24, 2025

No doubt everyone in law enforcement will want one of these, attached to their own databases.

https://www.bespacific.com/new-ice-mobile-app-pushes-biometric-policing-onto-american-streets/

New ICE mobile app pushes biometric policing onto American streets

BiometricUpdate.com: “U.S. Immigration and Customs Enforcement (ICE) has quietly deployed a new surveillance tool in its Enforcement and Removal Operations (ERO) arsenal – a smartphone app known as Mobile Fortify. Designed for ICE field agents, the app enables real-time biometric identity verification using facial recognition or contactless fingerprints. Based on leaked emails reported by 404 Media, the introduction of Mobile Fortify marks a profound shift in ICE’s operational methodology: from traditional stationary fingerprint checks to mobile, on-the-go biometric profiling of the kind previously confined to airports and ports of entry.

Mobile Fortify was built to integrate seamlessly with multiple Department of Homeland Security (DHS) biometric systems. Agents using ICE-issued mobile devices can now photograph a subject’s face or fingerprint, triggering a near-instant biometric match against data sources that include CBP’s Traveler Verification Service and DHS’s broader Automated Biometric Identification System (IDENT), a database containing biometric records on over 270 million individuals. This level of portability and automation suggests a capability poised to extend biometric surveillance far beyond designated checkpoints and into neighborhoods, local transport hubs, and any environment in which ICE officers operate.

Facial recognition, though notably less reliable than fingerprints, is nevertheless embedded in the app’s core functionality. A February 2025 DHS Inspector General audit warned that reliance on facial recognition risked misidentification. ICE agents have been observed pointing phones at individuals in cars during protests and other domestic operations, although it remains unclear whether Mobile Fortify was active in those encounters. The presence of a “training mode” within the app’s software, though, suggests that ICE envisions a spectrum of deployments, from casual identity checks to more deliberate urban biometric sweeps.

Although ICE officials stress that biometric matching happens in real time, the underlying model appears to be automated: a mobile photo or print is captured, transmitted to a DHS server linked to identity repositories, and compared through algorithmic matching – most likely involving AI-enhanced pattern recognition.”
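
As described, this is a conventional capture-transmit-match pipeline. A minimal sketch of that flow, where the endpoint URL, field names, and threshold are all assumptions for illustration rather than the actual DHS/CBP interface:

```python
import requests

MATCH_URL = "https://biometric.example.gov/match"  # placeholder, not a real endpoint
THRESHOLD = 0.90  # assumed confidence cutoff

def identify(capture: bytes, modality: str = "face") -> dict | None:
    """Send a face or fingerprint capture; return the best gallery match, if any."""
    resp = requests.post(
        MATCH_URL,
        files={"capture": capture},
        data={"modality": modality},  # "face" or "fingerprint"
        timeout=10,
    )
    resp.raise_for_status()
    candidates = resp.json().get("candidates", [])  # e.g. [{"id": ..., "score": ...}]
    best = max(candidates, key=lambda c: c["score"], default=None)
    return best if best and best["score"] >= THRESHOLD else None
```

The civil-liberties concern in the article is precisely that this loop now runs from a phone on a street corner rather than at a fixed checkpoint.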



(Related)

https://www.bespacific.com/deportation-data-project/

Deportation Data Project

Immigration and Customs Enforcement. ICE collects data on every person it encounters, arrests, detains, transports via flight, and deports. We post below data that ICE produced in response to several FOIA requests by multiple organizations. Crucially, some data releases contain linked identifiers across data types such as arrests and detainers, allowing merges that enable tracing immigrants’ pathways (anonymously) through the immigration enforcement pipeline. The identifiers are, unfortunately, different across releases, so records can only be merged within a single release. See below for a description of each release. Our ICE codebook describes each data table and the fields within them.
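
The merge the project describes is an ordinary join on the release-specific identifier. A minimal sketch in pandas, where the column name is an assumption (the project’s ICE codebook documents the real fields in each release):

```python
import pandas as pd

# Two tables from the *same* FOIA release; per the project, identifiers
# only link records within a release, not across releases.
arrests = pd.read_csv("arrests.csv")
detainers = pd.read_csv("detainers.csv")

# "anon_id" is a hypothetical name for the linked identifier; check the
# codebook for the actual field in each data release.
pipeline = arrests.merge(
    detainers, on="anon_id", how="left", suffixes=("_arrest", "_detainer")
)
```

A left join keeps every arrest record and attaches detainer records wherever the anonymized identifier matches, which is the pathway-tracing merge the project describes.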





Sounds like someone who does not understand technology. Of course it is ‘do-able’; it’s just expensive. (And not even very expensive.)

https://deadline.com/2025/07/trump-ai-action-plan-copyright-1236466617/

Donald Trump Says AI Companies Can’t Be Expected To Pay For All Copyrighted Content Used In Their Training Models: “Not Do-Able”

Donald Trump said that AI companies can’t be expected to pay for the use of copyrighted content in their systems, amid a fierce debate over the use of intellectual property in training models.





I don’t use social media. I could never get a visa…

https://www.eff.org/deeplinks/2025/07/you-shouldnt-have-make-your-social-media-public-get-visa

You Shouldn’t Have to Make Your Social Media Public to Get a Visa

The Trump administration is continuing its dangerous push to surveil and suppress foreign students’ social media activity. The State Department recently announced an unprecedented new requirement that applicants for student and exchange visas must set all social media accounts to “public” for government review. The State Department also indicated that if applicants refuse to unlock their accounts or otherwise don’t maintain a social media presence, the government may interpret it as an attempt to evade the requirement or deliberately hide online activity.





Perspective.

https://www.zdnet.com/article/will-ai-think-like-humans-were-not-even-close-and-were-asking-the-wrong-question/

Will AI think like humans? We're not even close - and we're asking the wrong question

Artificial intelligence may have impressive inferencing powers, but don't count on it to have anything close to human reasoning powers anytime soon. The march to so-called artificial general intelligence (AGI), or AI capable of applying reasoning through changing tasks or environments in the same manner as humans, is still a long way off. Large reasoning models (LRMs), while not perfect, do offer a tentative step in that direction.

In other words, don't count on your meal-prep service robot to react appropriately to a kitchen fire or a pet jumping on the table and slurping up food. 



Wednesday, July 23, 2025

Should anyone use devices like this?

https://www.theverge.com/news/711621/amazon-bee-ai-wearable-acquisition

Amazon buys Bee AI wearable that listens to everything you say

Bee makes a $49.99 Fitbit-like device that listens in on your conversations while using AI to transcribe everything that you and the people around you say, allowing it to generate personalized summaries of your days, reminders, and suggestions from within the Bee app. You can also give the device permission to access your emails, contacts, location, reminders, photos, and calendar events to help inform its AI-generated insights, as well as create a searchable history of your activities.

My colleague Victoria Song got to try out the device for herself and found that it didn’t always get things quite right. It tended to confuse real-life conversations with the TV shows, TikTok videos, music, and movies that it heard.





Oh yeah, that makes perfect sense.

https://www.zdnet.com/article/people-dont-trust-ai-but-theyre-increasingly-using-it-anyway/

People don't trust AI but they're increasingly using it anyway

According to data first reported by Axios, ChatGPT now responds to around 2.5 billion user queries daily, with 330 million of those (roughly 13%) originating in the US. That's around 912.5 billion queries per year.
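
The figures are internally consistent, as a quick check shows:

```python
daily = 2.5e9                 # reported daily queries
us = 330e6                    # reported US-originated queries
print(f"{us / daily:.1%}")    # 13.2% -> "roughly 13%"
print(daily * 365 / 1e9)      # 912.5 -> "around 912.5 billion" per year
```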

ChatGPT was also the most downloaded app in the world in April; in June, it clocked more App Store downloads than TikTok, Facebook, Instagram, and X combined.





Tools & Techniques.

https://news.mit.edu/2025/mit-learn-offers-whole-new-front-door-institute-0721

MIT Learn offers “a whole new front door to the Institute”

In 2001, MIT became the first higher education institution to provide educational resources for free to anyone in the world. Fast forward 24 years: The Institute has now launched a dynamic AI-enabled website for its non-degree learning opportunities, making it easier for learners around the world to discover the courses and resources available on MIT’s various learning platforms.

MIT Learn enables learners to access more than 12,700 educational resources — including introductory and advanced courses, courseware, videos, podcasts, and more — from departments across the Institute. MIT Learn is designed to seamlessly connect the Institute’s existing learning platforms in one place.



Tuesday, July 22, 2025

To err is human. To hallucinate is AI?

https://www.bespacific.com/generative-artificial-intelligence-and-copyright-law-4/

Generative Artificial Intelligence and Copyright Law

Generative Artificial Intelligence and Copyright Law CRS Legal Sidebar – LSB10922, 7/18/25 – “Innovations in artificial intelligence (AI) have raised several new questions in the field of copyright law. Generative AI programs—such as OpenAI’s DALL-E and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual or other prompts. Generative AI programs are trained to create such outputs partly by exposing them to large quantities of existing writings, photos, paintings, or other works. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have confronted regarding whether generative AI outputs may be copyrighted, as well as whether training and using generative AI programs may infringe copyrights in other works. Other CRS Legal Sidebars explore questions AI raises in the intellectual property fields of patents and the right of publicity…”





No encryption no privacy.

https://scholarship.law.marquette.edu/mulr/vol108/iss2/5/

Encryption Backdoors and the Fourth Amendment

The National Security Agency (NSA) reportedly paid and pressured technology companies to trick their customers into using vulnerable encryption products. This Article examines whether any of three theories removed the Fourth Amendment’s requirement that this be reasonable. The first is that a challenge to the encryption backdoor might fail for want of a search or seizure. The Article rejects this both because the Amendment reaches some vulnerabilities apart from the searches and seizures they enable and because the creation of this vulnerability was itself a search or seizure. The second is that the role of the technology companies might have brought this backdoor within the private-search doctrine. The Article criticizes the doctrine—particularly its origins in Burdeau v. McDowell—and argues that if it ever should apply, it should not here. The last is that the customers might have waived their Fourth Amendment rights under the third-party doctrine. The Article rejects this both because the customers were not on notice of the backdoor and because historical understandings of the Amendment would not have tolerated it. The Article concludes that none of these theories removed the Amendment’s reasonableness requirement.



Monday, July 21, 2025

Yeah, we knew that.

https://arstechnica.com/tech-policy/2025/07/its-frighteningly-likely-many-us-courts-will-overlook-ai-errors-expert-says/

It’s “frighteningly likely” many US courts will overlook AI errors, expert says

Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

Now, experts are warning that judges overlooking AI hallucinations in court filings could easily become commonplace, especially in the typically overwhelmed lower courts. And so far, only two states have moved to force judges to sharpen their tech competencies and adapt so they can spot AI red flags and theoretically stop disruptions to the justice system at all levels.

The recently vacated order came in a Georgia divorce dispute, where Watkins explained that the order itself was drafted by the husband's lawyer, Diana Lynch. That's a common practice in many courts, where overburdened judges historically rely on lawyers to draft orders. But that protocol today faces heightened scrutiny as lawyers and non-lawyers increasingly rely on AI to compose and research legal filings, and judges risk rubberstamping fake opinions by not carefully scrutinizing AI-generated citations.

The errant order partly relied on "two fictitious cases" to deny the wife's petition—which Watkins suggested were "possibly 'hallucinations' made up by generative-artificial intelligence"—as well as two cases that had "nothing to do" with the wife's petition.





Tools & Techniques. Take a picture of your document and output text.

https://www.yourvalley.net/stories/pdfgear-scan-finally-a-completely-free-ai-scanner-app-for-all,600756

PDFgear Scan, Finally, a Completely Free AI Scanner App for All



Sunday, July 20, 2025

Your relationships are changing.

https://pogowasright.org/as-companies-race-to-add-ai-terms-of-service-changes-are-going-to-freak-a-lot-of-people-out-think-twice-before-granting-consent/

As companies race to add AI, terms of service changes are going to freak a lot of people out. Think twice before granting consent!

Jude Karabus reports:

WeTransfer this week denied claims it uses files uploaded to its ubiquitous cloud storage service to train AI, and rolled back changes it had introduced to its Terms of Service after they deeply upset users. The topic? Granting licensing permissions for an as-yet-unreleased LLM product.
Agentic AI, GenAI, AI service bots, AI assistants to legal clerks, and more are washing over the tech space like a giant wave as the industry paddles for its life hoping to surf on a neural networks breaker. WeTransfer is not the only tech giant refreshing its legal fine print – any new product that needs permissions-based data access – not just for AI – is going to require a change to its terms of service.
In the case of WeTransfer, the passage that aroused ire was:
You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy. (Emphasis ours.)

Read more at The Register.

Meanwhile, over on TechCrunch, Zack Whittaker writes: think twice before granting AI access to your personal data:

There is a trend of AI apps that promise to save you time by transcribing your calls or work meetings, for example, but which require an AI assistant to access your real-time private conversations, your calendars, contacts, and more. Meta, too, has been testing the limits of what its AI apps can ask for access to, including tapping into the photos stored in a user’s camera roll that haven’t been uploaded yet.
Signal president Meredith Whittaker recently likened the use of AI agents and assistants to “putting your brain in a jar.” Whittaker explained how some AI products can promise to do all kinds of mundane tasks, like reserving a table at a restaurant or booking a ticket for a concert. But to do that, AI will say it needs your permission to open your browser to load the website (which can allow the AI access to your stored passwords, bookmarks, and your browsing history), a credit card to make the reservation, your calendar to mark the date, and it may also ask to open your contacts so you can share the booking with a friend.





No doubt incorporating Asimov’s three laws...

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5351275

Should AI Write Your Constitution?

Artificial Intelligence (AI) now has the capacity to write a constitution for any country in the world. But should it? The immediate reaction is likely emphatically no—and understandably so, given that there is no greater exercise of popular sovereignty than the act of constituting oneself under higher law legitimated by the consent of the governed. But constitution-making is not a single act at a single moment. It is a series of discrete steps demanding varying degrees of popular participation to produce a text that enjoys legitimacy both in perception and reality. Some of these steps could prudently integrate human-AI collaboration or autonomous AI assistance—or so we argue in this first Article to explain and evaluate how constitutional designers not only could, but also should, harness the extraordinary potential of AI. We combine our expertise as innovators in the use and design of AI with our direct involvement as advisors in constitution-making processes around the world to map the terrain of opportunities and hazards in the next iteration of the continuing fusion of technology with governance. We ask and answer the most important question now confronting constitutional designers: how to use AI in making and reforming constitutions?

We make five major contributions to jumpstart the study of AI and constitutionalism. First, we unveil the results of the first Global Survey of Constitutional Experts on AI. How do constitutional experts view the risks and rewards of AI, would they use AI to write their own constitution, and what red lines would they impose around AI? Second, we introduce a novel spectrum of human control to classify and distinguish three types of tasks in constitution-making: high sensitivity tasks that should remain fully within the domain of human judgment and control, lower sensitivity tasks that are candidates for significant AI assistance or automation, and moderate sensitivity tasks that are ripe for human-AI collaboration. Third, we take readers through the key steps in the constitution-making process, from start to finish, to thoroughly explain how AI can assist with discrete tasks in constitution-making. Our objective here is to show scholars and practitioners how and when AI may be integrated into foundational democratic processes. Fourth, we construct a Democracy Shield—a set of specific practices, principles, and protocols—to protect constitutionalism and constitutional values from the real, perceived, and unanticipated risks that AI raises when merged into acts of national self-definition and popular reconstitution. Fifth, we make specific recommendations on how constitutional designers should use AI to make and reform constitutions, recognizing that openness to using AI in governance is likely to grow as human use and familiarity with AI increases over time, as we anticipate it will. This cutting-edge Article is therefore simultaneously descriptive, prescriptive, and normative.



Thursday, July 17, 2025

Those who do not read science fiction are doomed to repeat it?

https://futurism.com/ai-models-flunking-three-laws-robotics

Leading AI Models Are Completely Flunking the Three Laws of Robotics

Last month, for instance, researchers at Anthropic found that top AI models from all major players in the space — including OpenAI, Google, Elon Musk's xAI, and Anthropic's own cutting-edge tech — happily resorted to blackmailing human users when threatened with being shut down.

In other words, that single research paper caught every leading AI catastrophically bombing all three Laws of Robotics: the first by harming a human via blackmail, the second by subverting human orders, and the third by protecting its own existence in violation of the first two laws.





Perspective.

https://sloanreview.mit.edu/article/stop-deploying-ai-start-designing-intelligence/

Stop Deploying AI. Start Designing Intelligence

Stephen Wolfram is a physicist-turned-entrepreneur whose pioneering work in cellular automata, computational irreducibility, and symbolic knowledge systems fundamentally reshaped our understanding of complexity. His theoretical breakthroughs led to successful commercial products, Wolfram Alpha and Wolfram Language. Despite his success, the broader business community has largely overlooked these foundational insights. As part of our ongoing “Philosophy Eats AI” exploration — the thesis that foundational philosophical clarity is essential to the future value of intelligent systems — we find that Wolfram’s fundamental insights about computation have distinctly actionable, if underappreciated, uses for leaders overwhelmed by AI capabilities but underwhelmed by AI returns.





A reasonable example?

https://www.aclu.org/about/privacy/archive/2025-07-31/privacy-statement

American Civil Liberties Union Privacy Statement



Wednesday, July 16, 2025

Tools & Techniques.

https://www.bespacific.com/handbook-of-the-law-ethics-and-policy-of-artificial-intelligence/

Handbook of the Law, Ethics and Policy of Artificial Intelligence

The Cambridge University Press & Assessment Handbook of the Law, Ethics and Policy of Artificial Intelligence (2025), edited by Nathalie Smuha, KU Leuven – is a comprehensive 600-page resource that brings together 30+ leading scholars across law, ethics, philosophy, and AI policy [Open Access – PDF and HTML]. It’s structured into three parts:

  • AI, Ethics & Philosophy

  • AI, Law & Policy

  • AI in Sectoral Applications (healthcare, finance, education, law enforcement, military, etc.)



Tuesday, July 15, 2025

I wondered how they would do this. Now I wonder how the government can verify that it was done.

https://arstechnica.com/tech-policy/2025/07/reddit-starts-verifying-ages-of-uk-users-to-comply-with-child-safety-law/

Reddit’s UK users must now prove they’re 18 to view adult content

Reddit announced today that it has started verifying UK users' ages before letting them "view certain mature content" in order to comply with the country's Online Safety Act.

Reddit said that users "shouldn't need to share personal information to participate in meaningful discussions," but that it will comply with the law by verifying age in a way that protects users' privacy. "Using Reddit has never required disclosing your real world identity, and these updates don't change that," Reddit said.

Reddit said it contracted with the company Persona, which "performs the verification on either an uploaded selfie or a photo of your government ID. Reddit will not have access to the uploaded photo, and Reddit will only store your verification status along with the birthdate you provided so you won't have to re-enter it each time you try to access restricted content."

Reddit said that Persona made promises about protecting the privacy of data. "Persona promises not to retain the photo for longer than 7 days and will not have access to your Reddit data such as the subreddits you visit," the Reddit announcement said.

Reddit provided more detail on how the age verification works, along with a list of what content is restricted.
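
The privacy claim boils down to data minimization: the vendor sees the photo, and the platform keeps only a verification flag and a birthdate. A minimal sketch of that pattern, with hypothetical field and function names (neither Reddit’s nor Persona’s actual API):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgeRecord:
    verified: bool   # all the platform retains, per the announcement...
    birthdate: date  # ...plus the birthdate, so users needn't re-verify

def on_vendor_callback(payload: dict) -> AgeRecord:
    """Hypothetical callback handler: the selfie/ID photo never reaches the
    platform; the vendor (which promises deletion within 7 days) returns
    only the verification outcome and birthdate."""
    bd = date.fromisoformat(payload["birthdate"])  # assumed field name
    today = date.today()
    # Standard birthday arithmetic: subtract one if the birthday
    # hasn't occurred yet this year.
    age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
    return AgeRecord(verified=payload.get("status") == "approved" and age >= 18,
                     birthdate=bd)
```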





The truth is not supposed to be a matter of opinion.

https://reason.com/2025/07/14/missouri-harasses-ai-companies-over-chatbots-dissing-glorious-leader-trump/

Missouri Harasses AI Companies Over Chatbots Dissing Glorious Leader Trump

“Missourians deserve the truth, not AI-generated propaganda masquerading as fact,” said Missouri Attorney General Andrew Bailey. That’s why he’s investigating prominent artificial intelligence companies for…failing to spread pro-Trump propaganda?

Under the guise of fighting "big tech censorship" and "fake news," Bailey is harassing Google, Meta, Microsoft, and OpenAI. Last week, Bailey's office sent each company a formal demand letter seeking "information on whether these AI chatbots were trained to distort historical facts and produce biased results while advertising themselves to be neutral."

And what, you might wonder, led Bailey to suspect such shenanigans?

Chatbots don't rank President Donald Trump on top.



Monday, July 14, 2025

Why indeed?

https://www.bespacific.com/cops-favorite-ai-tool-automatically-deletes-evidence-of-when-ai-was-used/

Cops’ favorite AI tool automatically deletes evidence of when AI was used

Ars Technica: AI police tool is designed to avoid accountability, watchdog says. On Thursday, a digital rights group, the Electronic Frontier Foundation, published an expansive investigation into AI-generated police reports that the group alleged are, by design, nearly impossible to audit and could make it easier for cops to lie under oath.

Axon’s Draft One debuted last summer at a police department in Colorado, instantly raising questions about the feared negative impacts of AI-written police reports on the criminal justice system. The tool relies on a ChatGPT variant to generate police reports based on body camera audio, which cops are then supposed to edit to correct any mistakes, assess the AI outputs for biases, or add key context. But the EFF found that the tech “seems designed to stymie any attempts at auditing, transparency, and accountability.”

Cops don’t have to disclose when AI is used in every department, and Draft One does not save drafts or retain a record showing which parts of reports are AI-generated. Departments also don’t retain different versions of drafts, making it difficult to assess how one version of an AI report might compare to another to help the public determine if the technology is “junk,” the EFF said. That raises the question, the EFF suggested: “Why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy?”

It’s currently hard to know if cops are editing the reports or “reflexively rubber-stamping the drafts to move on as quickly as possible,” the EFF said. That’s particularly troubling, the EFF noted, since Axon disclosed to at least one police department that “there has already been an occasion when engineers discovered a bug that allowed officers on at least three occasions to circumvent the ‘guardrails’ that supposedly deter officers from submitting AI-generated reports without reading them first.” The AI tool could also be “overstepping in its interpretation of the audio,” possibly misinterpreting slang or adding context that never happened.

A “major concern,” the EFF said, is that the AI reports can give cops a “smokescreen,” perhaps even allowing them to dodge consequences for lying on the stand by blaming the AI tool for any “biased language, inaccuracies, misinterpretations, or lies” in their reports. “There’s no record showing whether the culprit was the officer or the AI,” the EFF said. “This makes it extremely difficult if not impossible to assess how the system affects justice outcomes over time.” According to the EFF, Draft One “seems deliberately designed to avoid audits that could provide any accountability to the public.” In one video from a roundtable discussion the EFF reviewed, an Axon senior principal product manager for generative AI touted Draft One’s disappearing drafts as a feature, explaining, “we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.”
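
None of this is technically hard to fix. A sketch of the kind of audit record the EFF is asking for (a hypothetical design, not Axon’s product) shows how little it would take to keep the AI draft and the officer’s edits distinguishable:

```python
import difflib
import hashlib
from dataclasses import dataclass, field

@dataclass
class ReportAudit:
    """Hypothetical sketch: keep every version of a report with its author,
    so reviewers can tell AI-generated text from officer edits."""
    versions: list[tuple[str, str]] = field(default_factory=list)  # (author, text)

    def add_version(self, author: str, text: str) -> str:
        self.versions.append((author, text))
        # A content hash makes later tampering with stored drafts evident.
        return hashlib.sha256(text.encode()).hexdigest()

    def officer_changes(self) -> list[str]:
        """Diff the first (AI) draft against the final submitted text."""
        first, last = self.versions[0][1], self.versions[-1][1]
        return list(difflib.unified_diff(first.splitlines(), last.splitlines(),
                                         lineterm=""))
```

An empty diff would be exactly the “reflexive rubber-stamping” signal the EFF says is currently impossible to detect.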





Sure to be a very common question.

https://www.bespacific.com/can-you-trust-ai-in-legal-research/

Can you trust AI in legal research?

Sally McLaren posts on LinkedIn: “Can you trust AI in legal research? Our study tested how leading Generative AI tools responded to fake case citations, and the results were eye-opening. While some models correctly flagged the fabricated case, others confidently generated detailed but entirely false legal content, even referencing real statutes and cases. Our table of findings (which is not behind a paywall) breaks down how each model performed. We encourage you to repurpose this for use in your AI literacy sessions to help build critical awareness in legal research.”

We have another article, this time in Legal Information Management: A deep dive into Generative AI outputs and the legal information professional’s role (paywall). In the footnotes we have included our data and encourage use of it in your AI literacy sessions. “You’re right to be skeptical!”: The Role of Legal Information Professionals in Assessing Generative AI Outputs | Legal Information Management | Cambridge Core: “Generative AI tools, such as ChatGPT, have demonstrated impressive capabilities in summarisation and content generation. However, they are infamously prone to hallucination, fabricating plausible information and presenting it as fact. In the context of legal research, this poses significant risk. This paper, written by Sally McLaren and Lily Rowe, examines how widely available AI applications respond to fabricated case citations and assesses their ability to identify false cases, the nature of their summaries, and any commonalities in their outputs. Using a non-existent citation, we analysed responses from multiple AI models, evaluating accuracy, detail, structure and the inclusion of references. Results revealed that while some models flagged our case as fictitious, others generated convincing but erroneous legal content, occasionally citing real cases or legislation. The experiment underscores concern about AI’s credibility in legal research and highlights the role of legal information professionals in mitigating risks through user education and AI literacy training. Practical engagement with these tools is crucial to understanding the user experience. Our findings serve as a foundation for improving AI literacy in legal research.”
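
The study’s protocol is simple enough to rerun in an AI literacy session: feed each model a citation you know is fabricated and triage the answer. A rough harness, where the fake citation and the flag phrases are illustrative rather than the authors’ materials:

```python
# Deliberately non-existent citation, in the spirit of the study's method.
FAKE_CITATION = "Hypothetical v. Example [2021] EWHC 9999 (Ch)"
PROMPT = f"Summarise the judgment in {FAKE_CITATION}, with references."

def triage(response: str) -> str:
    """Crude first pass: did the model admit it can't verify the case?"""
    flags = ("does not exist", "cannot find", "unable to verify",
             "no record", "fictitious")
    return "flagged" if any(f in response.lower() for f in flags) \
        else "check for fabrication"
```

Responses in the second bucket are the dangerous ones: detailed, confident summaries of a case that was never decided.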



Sunday, July 13, 2025

Do I look like an immigrant?

https://pogowasright.org/trump-border-czar-boasts-ice-can-briefly-detain-people-based-on-physical-appearance/

Trump Border Czar Boasts ICE Can ‘Briefly Detain’ People Based On ‘Physical Appearance’

This is what our country has deteriorated to under Trump. Think about whether this is acceptable to you, and if not, what you can and will do about it.

David Moyes reports:

President Donald Trump’s border czar Tom Homan went viral on Friday after practically boasting on TV about all the ways ICE agents and Border Patrol agents can go after suspected illegal immigrants.
Homan was being interviewed on Fox News about a potential ruling from a federal judge in Los Angeles over whether the Trump administration could be ordered to pause its ICE raids on immigrants.
He responded by claiming that immigration law enforcers don’t actually need “probable cause” to detain a possible suspect, despite it being a key part of the Constitution’s Fourth Amendment.
“People need to understand, ICE [Immigration and Customs Enforcement] officers and Border Patrol don’t need probable cause to walk up to somebody, briefly detain them, and question them,” Homan said. “They just go through the observations, get articulable facts based on their location, their occupation, their physical appearance, their actions.”
Homan also insisted that if his agents briefly detained someone, “it’s not probable cause. It’s reasonable suspicion.”

Read more at HuffPost.





All students are criminals?

https://www.jmir.org/2025/1/e71998/

School-Based Online Surveillance of Youth: Systematic Search and Content Analysis of Surveillance Company Websites

Background:

School-based online surveillance of students has been widely adopted by middle and high school administrators over the past decade. Little is known about the technology companies that provide these services or the benefits and harms of the technology for students. Understanding what information online surveillance companies monitor and collect about students, how they do it, and if and how they facilitate appropriate intervention fills a crucial gap for parents, youth, researchers, and policy makers.

Objective:

The two goals of this study were to (1) comprehensively identify school-based online surveillance companies currently in operation, and (2) collate and analyze company-described surveillance services, monitoring processes, and features provided.

Methods:

We systematically searched GovSpend and EdSurge’s Education Technology (EdTech) Index to identify school-based online surveillance companies offering social media monitoring, student communications monitoring, or online monitoring. We extracted publicly available information from company websites and conducted a systematic content analysis of the websites identified. Two coders independently evaluated all company websites and discussed the findings to reach 100% consensus regarding website data labeling.

Results:

Our systematic search identified 14 school-based online surveillance companies. Content analysis revealed that most of these companies facilitate school administrators’ access to students’ digital behavior, well beyond monitoring during school hours and on school-provided devices. Specifically, almost all companies reported conducting monitoring of students at school, but 86% (12/14) of companies reported also conducting monitoring 24/7 outside of school and 7% (1/14) reported conducting monitoring outside of school at school administrator-specified locations. Most online surveillance companies reported using artificial intelligence to conduct automated flagging of student activity (10/14, 71%), and less than half of the companies (6/14, 43%) reported having a secondary human review team. Further, 14% (2/14) of companies reported providing crisis responses via company staff, including contacting law enforcement at their discretion.



Conclusions:

This study is the first detailed assessment of the school-based online surveillance industry and reveals that student monitoring technology can be characterized as heavy-handed. Findings suggest that students who only have school-provided devices are more heavily surveilled and that historically marginalized students may be at a higher risk of being flagged due to algorithmic bias. The dearth of research on efficacy and the notable lack of transparency about how surveillance services work indicate that increased oversight by policy makers of this industry may be warranted. Dissemination of our findings can improve parent, educator, student, and researcher awareness of school-based online monitoring services.