Saturday, March 11, 2023

As a kid, I would have hated anything that restricted my social interactions and would have found at least a few ways around the blocks.

https://www.pogowasright.org/utah-legislature-passes-bills-restricting-social-media-accounts-for-minors/

Utah Legislature Passes Bills Restricting Social Media Accounts for Minors

Hunton Andrews Kurth writes:

On March 1-3, 2023, the Utah legislature passed a series of bills, SB 152 and HB 311, regarding social media usage for minors. For social media companies with more than five million users worldwide, SB 152 would require parental permission for social media accounts for users under age 18, while HB 311 would hold social media companies liable for harm minors experience on the platforms. Both bills have been sent to the governor’s desk for signature.
    • SB 152: Beginning in March 2024, SB 152 would require social media companies to verify the age of a Utah resident seeking to maintain or open an account, and would require the consent of a parent or guardian before a minor under age 18 could maintain or open an account…
    • HB 311: Also effective March 2024, HB 311 would prohibit social media companies from designing their platforms in a way that “causes a minor to have an addiction to the company’s social media platform.”….[Like making it interesting? Bob]

Read more at Privacy & Information Security Law Blog.





Not too techie…

https://www.zdnet.com/article/how-does-chatgpt-work/

How does ChatGPT work?

Google, Wolfram Alpha, and ChatGPT all interact with users via a single line text entry field and provide text results. Google returns search results, a list of web pages and articles that will (hopefully) provide information related to the search queries. Wolfram Alpha generally provides mathematically and data analysis-related answers.

Fundamentally, Google's power is the ability to do enormous database lookups and provide a series of matches. Wolfram Alpha's power is the ability to parse data-related questions and perform calculations based on those questions. ChatGPT's power is the ability to parse queries and produce fully-fleshed out answers and results based on most of the world's digitally-accessible text-based information -- at least information that existed as of its time of training prior to 2021.

In this article, we'll look at how ChatGPT can produce those fully-fleshed out answers. We'll start by looking at the main phases of ChatGPT operation, then cover some of the core AI architecture components that make it all work.

In addition to the sources cited in this article (many of which are the original research papers behind each of the technologies), I used ChatGPT itself to help me create this backgrounder. I asked it a lot of questions. Some answers are paraphrased within the overall context of this discussion.



(Related)

https://www.makeuseof.com/ways-kids-can-use-chatgpt-safely/

5 Ways Kids Can Use ChatGPT Safely

ChatGPT is open to everyone, including children, but you should take steps to help them use it safely. Here's how.





Perspective.

https://www.uschamber.com/technology/artificial-intelligence-commission-report

Artificial Intelligence Commission Report

  • The development of AI and the introduction of AI-based systems are growing exponentially. Over the next 10 to 20 years, virtually every business and government agency will use AI. This will have a profound impact on society, the economy, and national security.

  • Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment.

  • A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.

  • The United States, through its technological advantages, well-developed system of individual rights, advanced legal system, and interlocking alliances with democracies, is uniquely situated to lead this effort.

  • The United States needs to act to ensure future economic growth, provide for a competitive workforce, maintain a competitive position in a global economy, and provide for our future national security needs.

  • Policies to promote responsible AI must be a top priority for this and future administrations and Congresses.



Friday, March 10, 2023

Not sure this is winning…

https://www.newstatesman.com/quickfire/2023/03/philosophy-beat-ai

Only philosophy can beat AI

The problem in asking whether a robot will ever be as smart as a human is in how we frame the question. For starters there are many forms of intelligence – and philosophy attempts to answer the questions of what we mean when we talk about intelligence and self-awareness. Responding to whether humans are fully in control of their own thinking, Descartes famously asserted “I think therefore I am”. Although a machine could recognise its own existence in a mirror, it cannot feel that existential dread that death is an inevitability. And while machines could converse with one another, they could never feel the emotional frustration that led Jean-Paul Sartre to conclude that hell is other people.





Vigilante surveillance?

https://www.theregister.com/2023/03/10/catholic_clergy_surveillance/

Catholic clergy surveillance org 'outs gay priests'

A Catholic clergy conformance organization has reportedly been buying mobile app tracking data to identify gay priests, and providing that information to bishops around the US.

The group, Catholic Laity and Clergy for Renewal (CLCR), was formed in Colorado in 2019 and relocated its principal office to Casper, Wyoming in April, 2020, according to Colorado State business records [PDF].





Tools & Techniques.

https://www.lawnext.com/2023/03/zuva-launches-free-ai-powered-contracts-review-tool.html

Zuva Launches Free AI-Powered Contracts Review Tool

Zuva, the company that spun off from Kira Systems after Kira was acquired by Litera in 2021, is offering a completely free version of its AI-powered contract review technology, which can be used by anyone just by uploading contracts to Zuva’s website.

Contracts AI is typically pretty expensive,” Noah Waisberg, Zuva’s CEO and cofounder and the original cofounder of Kira, told me. “We’ve had people pay us literally millions of dollars to use a different version of the same tech. This is free.”

There are some limitations over the paid version of the same technology. Document sizes are limited to PDF documents of under 150 pages or 5 MB, and users will not be able to export or copy and paste the results.



Thursday, March 09, 2023

It seems that you surrender control (ownership?) of the data your camera records. Interesting to imagine a lawyer asked for video on a neighbor who was a client.

https://www.politico.com/news/2023/03/07/privacy-loophole-ring-doorbell-00084979

The privacy loophole in your doorbell

The week of last Thanksgiving, Michael Larkin, a business owner in Hamilton, Ohio, picked up his phone and answered a call. It was the local police, and they wanted footage from Larkin’s front door camera.

Larkin had a Ring video doorbell, one of the more than 10 million Americans with the Amazon-owned product installed at their front doors. His doorbell was among 21 Ring cameras in and around his home and business, picking up footage of Larkin, neighbors, customers and anyone else near his house.

The police said they were conducting a drug-related investigation on a neighbor, and they wanted videos of “suspicious activity” between 5 and 7 p.m. one night in October. Larkin cooperated, and sent clips of a car that drove by his Ring camera more than 12 times in that time frame.

He thought that was all the police would need. Instead, it was just the beginning.

They asked for more footage, now from the entire day’s worth of records. And a week later, Larkin received a notice from Ring itself: The company had received a warrant, signed by a local judge. The notice informed him it was obligated to send footage from more than 20 cameras — whether or not Larkin was willing to share it himself.





A scary new word?

https://www.theatlantic.com/technology/archive/2023/03/ai-chatgpt-writing-language-models/673318/

Prepare for the Textpocalypse

What if, in the end, we are done in not by intercontinental ballistic missiles or climate change, not by microscopic pathogens or a mountain-size meteor, but by … text? Simple, plain, unadorned text, but in quantities so immense as to be all but unimaginable—a tsunami of text swept into a self-perpetuating cataract of content that makes it functionally impossible to reliably communicate in any digital setting?

Our relationship to the written word is fundamentally changing. So-called generative artificial intelligence has gone mainstream through programs like ChatGPT, which use large language models, or LLMs, to statistically predict the next letter or word in a sequence, yielding sentences and paragraphs that mimic the content of whatever documents they are trained on. They have brought something like autocomplete to the entirety of the internet. For now, people are still typing the actual prompts for these programs and, likewise, the models are still (mostly) trained on human prose instead of their own machine-made opuses.





Tools & Techniques.

https://9to5google.com/2023/03/09/grammarly-ai-generative/

Grammarly adding ChatGPT-like AI to create text in your writing style, outlines, and more

Like ChatGPT, GrammarlyGO is able to create text based on a short prompt, though Grammarly’s special trick is that the content generated copies your usual writing style – after all, Grammarly already analyzes everything you write for typos, so there’s plenty of data to work with. Use cases for this that Grammarly points out includes writing email replies based on one-click prompts such as “I’m not interested.”

GrammarlyGO will be available to all users, free or paid, starting with a beta program in April.



Wednesday, March 08, 2023

As always, output is based on the input. If you want answers relevant to your business you have to tell ChatGPT about your business.

https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears

Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears

Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service.

In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company.





Alternatives?

https://www.bespacific.com/the-privacy-loophole-in-your-doorbell/

The privacy loophole in your doorbell

Politico: “…As networked home surveillance cameras become more popular, Larkin’s case, which has not previously been reported, illustrates a growing collision between the law and people’s own expectation of privacy for the devices they own — a loophole that concerns privacy advocates and Democratic lawmakers, but which the legal system hasn’t fully grappled with. Questions of who owns private home security footage, and who can get access to it, have become a bigger issue in the national debate over digital privacy. And when law enforcement gets involved, even the slim existing legal protections evaporate. “It really takes the control out of the hands of the homeowners, and I think that’s hugely problematic,” said Jennifer Lynch, the surveillance litigation director of the Electronic Frontier Foundation, a digital rights advocacy group. In the debate over home surveillance, much of the concern has focused on Ring in particular, because of its popularity, as well as the company’s track record of cooperating closely with law enforcement agencies. The company offers a multitude of products such as indoor cameras or spotlight cameras for homes or businesses, recording videos based on motion activation, with the footage stored for up to 180 days on Ring’s servers. They amount to a large and unregulated web of eyes on American communities — which can provide law enforcement valuable information in the event of a crime, but also create a 24/7 recording operation that even the owners of the cameras aren’t fully aware they’ve helped to build. “They are part of an ever-expanding web of surveillance in communities across America,” Sen. Ed Markey (D-Mass.) said in a statement to POLITICO about Ring’s products. “I’ve been ringing alarms about this company’s threats to our privacy and civil liberties for years.”

Stored video footage is generally governed by data privacy laws, which are still new in the U.S. and largely limited to the state level. So far, all the U.S. state privacy laws, from the strictest regulations in California to the industry-backed law in Virginia, include exemptions if law enforcement comes asking. The most ambitious federal law so far proposed in Congress — the American Data Privacy and Protection Act, which died in committee last year — included the same loophole. As private surveillance grows, this loophole looks bigger and bigger to privacy advocates and security-minded homeowners like Larkin. When it comes to Ring in particular, the company hasn’t just been a passive actor in that growth, or in law enforcement’s interest. As its doorbell cams grew more popular, Ring developed a symbiotic relationship with police, who realized that the privately owned cameras were generating valuable surveillance footage that they could leverage for investigations. Local police departments would often give away Ring doorbells, which the company provided for free in some cases. Ring has an app called Neighbors, where users can upload and post clips, like a virtual neighborhood watch. In 2018, it started partnering with local police departments, with features specifically for officers on the app, allowing them to send public safety alerts and requests for video footage to users in a specific area. By 2023, Ring had nearly 2,350 police departments on its Neighbors network…”





Confusing. This article seems to suggest that the FBI was programming the system. Were commercial packages too slow to market?

https://gizmodo.com/fbi-facial-recognition-janus-horus-1850198100

The FBI Tested Facial Recognition Software on Americans for Years, New Documents Show

New documents revealed by the ACLU and shared with Gizmodo show the lengths FBI and Pentagon officials went to develop “truly unconstrained” facial recognition capable of being deployed in public street cameras, mobile drones, and cops’ body cameras.

The goal of the project, code-named “Janus” after the Roman god with two opposing faces, was to develop highly advanced facial scanning tech capable of scanning people’s faces across a vast swath of public places, from subway cars and street corners to hospitals and schools. In some cases, researchers believed the advanced tech could detect targets from up to 1,000 meters away.





Tools & Techniques.

https://www.bespacific.com/duckduckgo-releases-its-own-chatgpt-powered-search-engine-duckassist/

DuckDuckGo Releases Its Own ChatGPT-Powered Search Engine, DuckAssist

Gizmodo: “DuckDuckGo launched a beta version of an AI search tool powered by ChatGPT Wednesday called DuckAssist. The addition to the company’s privacy-focused search engine uses ChatGPT’s language parsing capability to generate answers scraped from Wikipedia and related sources like the Encyclopedia Britannica. The tool is free and available on the DuckDuckGo web browsing apps for phones and computers as well as the company’s browser extension starting today. “DuckAssist is a new type of Instant Answer in our search results, just like News, Maps, Weather, and many others we already have,” said Gabriel Weinberg, CEO of DuckDuckGo, in a blog post. “We designed DuckAssist to be fully integrated into DuckDuckGo Private Search, mirroring the look and feel of our traditional search results, so while the AI-generated content is new, we hope using DuckAssist feels second nature. Unlike Microsoft’s bungled AI projects with Bing (RIP Sydney ), DuckAssist isn’t a chatbot. Instead, DuckAssist will suggest an automatic answer when it recognizes a search term it can answer. It’s not being forced on anyone. When an AI-powered response is available, you’ll see a magic wand icon with an “ask me” button in your search results. The company says DuckAssist is still in beta, so it may not pop up that often yet…”



 

Tuesday, March 07, 2023

In short, you can talk (prompt) the LLM into including or excluding certain data, changing the output.

https://www.schneier.com/blog/archives/2023/03/prompt-injection-attacks-on-large-language-models.html

Prompt Injection Attacks on Large Language Models

This is a good survey on prompt injection attacks on large language models (like ChatGPT).

Abstract: We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following. So far, these attacks assumed that the adversary is directly prompting the LLM.
In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viability of our attacks, we implemented specific demonstrations of the proposed attacks within synthetic applications. In summary, our work calls for an urgent evaluation of current mitigation techniques and an investigation of whether new techniques are needed to defend LLMs against these threats.



(Related)

https://sloanreview.mit.edu/article/the-no-1-question-to-ask-when-evaluating-ai-tools/

The No. 1 Question to Ask When Evaluating AI Tools

In the fast-moving and highly competitive artificial intelligence sector, developers’ claims that their AI tools can make critical predictions with a high degree of accuracy are key to selling prospective customers on their value. Because it can be daunting for people who are not AI experts to evaluate these tools, leaders may be tempted to rely on the high-level performance metrics published in sales materials. But doing so often leads to disappointing or even risky implementations.

Over the course of an 11-month investigation, we observed managers in a leading health care organization as they conducted internal pilot studies of five AI tools. Impressive performance results had been promised for each, but several of the tools did extremely poorly in their pilots. Analyzing the evaluation process, we found that an effective way to determine an AI tool’s quality is understanding and examining its ground truth.1 In this article, we’ll explain what that is and how managers can dig into it to better assess whether a particular AI tool may enhance or diminish decision-making in their organization.





Tools & Techniques.

https://beebom.com/how-build-own-ai-chatbot-with-chatgpt-api/

How to Build Your Own AI Chatbot With ChatGPT API: A Step-by-Step Tutorial

In a breakthrough announcement, OpenAI recently introduced the ChatGPT API to developers and the public. Particularly, the new “gpt-3.5-turbo” model, which powers ChatGPT Plus has been released at a 10x cheaper price, and it’s extremely responsive as well. Basically, OpenAI has opened the door for endless possibilities and even a non-coder can implement the new ChatGPT API and create their own AI chatbot. So in this article, we bring you a tutorial on how to build your own AI chatbot using the ChatGPT API. We have also implemented a Gradio interface so you can easily demo the AI model and share it with your friends and family. On that note, let’s go ahead and learn how to create a personalized AI with ChatGPT API.



Monday, March 06, 2023

Do I need to be worried?

https://www.businessinsider.com/florida-blogger-registration-bill-violates-first-amendment-aclu-2023-3

A proposed law that would require people blogging about Ron DeSantis to register with the state is a 'clear violation of the First Amendment:' ACLU

A Florida lawmaker has introduced a bill that would require all bloggers who write about Gov. Ron DeSantis to register with the state or be fined. Organizations like the ACLU tell Insider that the proposed law violates the right to free speech.

A representative for the American Civil Liberties Union's Florida chapter told Insider the bill, S.B. 1316, is "un-American to its core."

"This is a clear violation of the First Amendment because it strongly discourages bloggers from speaking on politics – one of the most critical types of speech for maintaining a democracy," the ACLU representative told Insider.





Better than I feared, but I’d like some examples.

https://techpolicy.press/on-facebook-visual-misinfo-widespread-highly-asymmetric-across-party-lines/

On Facebook, Visual Misinfo Widespread, Highly Asymmetric Across Party Lines

In “Visual misinformation on Facebook,” published this week in the Journal of Communication, scholars from Texas A&M University’s Department of Communication & Journalism, Columbia University’s Tow Center for Digital Journalism, and the George Washington University’s Institute for Data, Democracy & Politics collected and analyzed nearly 14 million posts from more than 14,000 pages and 11,000 public groups from August through October 2020.

From this corpus, the researchers arrived at a representative data set of political images, and another of images that specifically depicted political figures. An analysis found that 23% of political images in a sample contained misinformation, while 20% of those that depicted a political figure were misleading.





Surveillance is not just a people thing.

https://timesofindia.indiatimes.com/world/china/trojan-horse-why-us-officials-have-raised-alarm-over-giant-chinese-cargo-cranes/articleshow/98449335.cms

'Trojan horse': Why US officials have raised alarm over giant Chinese cargo cranes

Ever since US shot down a giant Chinese surveillance balloon over its airspace last month, there has been heightened global scrutiny on the scale of Beijing's spying operations.

Now, American officials have raised concerns over the possibility of a "new tool" of spying hiding in plain sight: giant Chinese-made cranes operating at US ports.

According to a report in the Wall Street Journal (WSJ), Pentagon officials suspect that cranes made by Chinese manufacturer ZPMC operating at ports across America are being used to register sensitive information.

They said that these cranes contain sophisticated sensors which can register and track the provenance and destination of containers, enabling China to capture information about materiel being shipped in or out of US to support its military operations around the world, the report said.





Perspective.

https://www.computerweekly.com/news/365531613/AI-interview-Michael-Osborne-professor-of-machine-learning

AI interview: Michael Osborne, professor of machine learning

A key theme of his overall research has been the societal and economic impacts of new technologies, particularly with regards to automation of the workplace.

Speaking with Computer Weekly, Osborne notes the pressing need to wrest control of AI’s development and deployment away from corporations in the private sector, so that it can be used to its full potential in promoting human flourishing.

It’s fair to say that the private sector has been a bit too powerful when it comes to AI…if we want to see the benefits of these technologies, states need to step in to unlock and to drain some of the moats that are protecting big tech,” he says.

To achieve this, he argues for tighter guardrails on what the private sector can do with AI and for a rebalancing of the scales in favour of public research institutions. Osborne also says there should be serious thought given to how political processes can be used to change the direction of travel with AI.



Sunday, March 05, 2023

But the computer said it was okay!

https://link.springer.com/article/10.1007/s10506-023-09347-w

Going beyond the “common suspects”: to be presumed innocent in the era of algorithms, big data and artificial intelligence

This article explores the trend of increasing automation in law enforcement and criminal justice settings through three use cases: predictive policing, machine evidence and recidivism algorithms. The focus lies on artificial-intelligence-driven tools and technologies employed, whether at pre-investigation stages or within criminal proceedings, in order to decode human behaviour and facilitate decision-making as to whom to investigate, arrest, prosecute, and eventually punish. In this context, this article first underlines the existence of a persistent dilemma between the goal of increasing the operational efficiency of police and judicial authorities and that of safeguarding fundamental rights of the affected individuals. Subsequently, it shifts the focus onto key principles of criminal procedure and the presumption of innocence in particular. Using Article 6 ECHR and the Directive (EU) 2016/343 as a starting point, it discusses challenges relating to the protective scope of presumption of innocence, the burden of proof rule and the in dubio pro reo principle as core elements of it. Given the transformations law enforcement and criminal proceedings go through in the era of algorithms, big data and artificial intelligence, this article advocates the adoption of specific procedural safeguards that will uphold rule of law requirements, and particularly transparency, fairness and explainability. In doing so, it also takes into account EU legislative initiatives, including the reform of the EU data protection acquis, the E-evidence Proposal, and the Proposal for an EU AI Act. Additionally, it argues in favour of revisiting the protective scope of key fundamental rights, considering, inter alia, the new dimensions suspicion has acquired.





Which ethic?

https://ui.adsabs.harvard.edu/abs/2023arXiv230212149K/abstract

Beyond Bias and Compliance: Towards Individual Agency and Plurality of Ethics in AI

AI ethics is an emerging field with multiple, competing narratives about how to best solve the problem of building human values into machines. Two major approaches are focused on bias and compliance, respectively. But neither of these ideas fully encompasses ethics: using moral principles to decide how to act in a particular situation. Our method posits that the way data is labeled plays an essential role in the way AI behaves, and therefore in the ethics of machines themselves. The argument combines a fundamental insight from ethics (i.e. that ethics is about values) with our practical experience building and scaling machine learning systems. We want to build AI that is actually ethical by first addressing foundational concerns: how to build good systems, how to define what is good in relation to system architecture, and who should provide that definition. Building ethical AI creates a foundation of trust between a company and the users of that platform. But this trust is unjustified unless users experience the direct value of ethical AI. Until users have real control over how algorithms behave, something is missing in current AI solutions. This causes massive distrust in AI, and apathy towards AI ethics solutions. The scope of this paper is to propose an alternative path that allows for the plurality of values and the freedom of individual expression. Both are essential for realizing true moral character.





Yes, this again. Cutting the AI out of the benefits?

https://digitalcommons.law.scu.edu/chtlj/vol39/iss2/2/

RECONCEPTUALIZING CONCEPTION: MAKING ROOM FOR ARTIFICIAL INTELLIGENCE INVENTIONS

Artificial intelligence (AI) enables the creation of inventions that no natural person conceived, at least as conception is traditionally understood in patent law. These can be termed “AI inventions,” i.e., inventions for which an AI system has contributed to the conception in a manner that, if the AI system were a person, would lead to that person being named as an inventor. Deeming such inventions unpatentable would undermine the incentives at the core of the patent system, denying society access to the full benefits of the extraordinary potential of AI systems with respect to innovation. But naming AI systems as inventors and allowing patentability on that basis is also problematic, as it involves granting property rights to computer programs. This Article proposes a different approach: AI inventions should be patentable, with inventorship attributed to the natural persons behind the AI under a broadened view of conception. More specifically, conception should encompass ideas formed through collaboration between a person and tools that act as extensions of their mind. The “formation” of those ideas should be attributed to the person, including when the ideas underlying the invention were first expressed by a tool used to enhance their creative capacity and subsequently conveyed to them.