Saturday, June 24, 2023

Perspective.

https://www.cnbc.com/2023/06/23/the-ai-spending-boom-is-spreading-far-beyond-big-tech-companies.html

A.I. is now the biggest spend for nearly 50% of top tech executives across the economy: CNBC survey

In a signal of just how quickly and widely the artificial intelligence boom is spreading, nearly half of the companies (47%) surveyed by CNBC say that AI is their top priority for tech spending over the next year, and AI budgets are more than double the second-biggest spending area in tech, cloud computing, at 21%.





Perspective.

https://hbr.org/2023/06/companies-that-replace-people-with-ai-will-get-left-behind

Companies That Replace People with AI Will Get Left Behind

After much discussion, the debate over job displacement from artificial intelligence is settling into a consensus. Historically, we’ve never experienced macro-level unemployment from new technologies, so AI is unlikely to make many people jobless in the long term — especially since most advanced countries are now seeing their working-age populations decline. However, because companies are adopting ChatGPT and other generative AI remarkably fast, we may see substantial job displacement in the short term.



Friday, June 23, 2023

Illiterate in a whole new genre? Don’t worry, the nice AI will do it for you.

https://www.bespacific.com/how-non-ai-experts-try-and-fail-to-design-llm-prompts/

Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts

J.D. Zamfirescu-Pereira, Richmond Y. Wong, Bjoern Hartmann, Qian Yang

“Pre-trained large language models (“LLMs”) like GPT-3 can engage in fluent, multi-turn instruction-taking out-of-the-box, making them attractive materials for designing natural language interactions. Using natural language to steer LLM outputs (“prompting”) has emerged as an important design technique potentially accessible to non-AI-experts. Crafting effective prompts can be challenging, however, and prompt-based interactions are brittle. Here, we explore whether non-AI-experts can successfully engage in “end-user prompt engineering” using a design probe—a prototype LLM-based chatbot design tool supporting development and systematic evaluation of prompting strategies. Ultimately, our probe participants explored prompt designs opportunistically, not systematically, and struggled in ways echoing end-user programming systems and interactive machine learning systems. Expectations stemming from human-to-human instructional experiences, and a tendency to overgeneralize, were barriers to effective prompt design. These findings have implications for non-AI-expert-facing LLM-based tool design and for improving LLM-and-prompt literacy among programmers and the public, and present opportunities for further research.”
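The failure mode the authors describe is easy to make concrete. Below is a minimal sketch (mine, not the paper's) of what systematic prompt evaluation looks like: each candidate prompt is scored against the same small test set, rather than tweaked after a single bad output. The generate() function is a placeholder, not a real API; swap in whatever LLM client you actually use.

```python
# Sketch of systematic (vs. opportunistic) prompt evaluation.
# generate() is a stand-in for a real LLM call, returning a canned
# reply so the script runs as-is.

def generate(prompt: str, user_input: str) -> str:
    """Placeholder LLM call; replace with your own client."""
    return "Sorry, that is off-topic for a cooking assistant."

CANDIDATE_PROMPTS = [
    "You are a cooking assistant. Answer briefly.",
    "You are a cooking assistant. If a question is not about cooking, reply 'off-topic'.",
]

# Each test case pairs an input with a pass/fail check on the response.
TEST_CASES = [
    ("How long should I boil an egg?", lambda r: "off-topic" not in r.lower()),
    ("What is the capital of France?", lambda r: "off-topic" in r.lower()),
]

for prompt in CANDIDATE_PROMPTS:
    failures = [q for q, ok in TEST_CASES if not ok(generate(prompt, q))]
    print(f"{len(failures)}/{len(TEST_CASES)} failures for prompt: {prompt!r}")
```

The study's participants did the opposite: they revised prompts after single observations, which is exactly the overgeneralization the paper flags.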





Tools & Techniques. Something for the forensics team?

https://www.bespacific.com/bellingcats-online-investigation-toolkit/

Bellingcat’s Online Investigation Toolkit

Bellingcat’s Online Investigation Toolkit, a spreadsheet that includes “satellite and mapping services, tools for verifying photos and videos, websites to archive web pages, and much more”. Use the navigational icons on the bottom of the spreadsheet to access all the content. Each entry includes: type, name, description, free or fee, guide, country-specific.



Thursday, June 22, 2023

I thought we had agreed that this approach had some problems?

https://www.pogowasright.org/lexisnexis-is-selling-your-personal-data-to-ice-so-it-can-try-to-predict-crimes/

LexisNexis Is Selling Your Personal Data to ICE So It Can Try to Predict Crimes

Sam Biddle reports:

The legal research and public records data broker LexisNexis is providing U.S. Immigration and Customs Enforcement with tools to target people who may potentially commit a crime — before any actual crime takes place, according to a contract document obtained by The Intercept. LexisNexis then allows ICE to track the purported pre-criminals’ movements.
The unredacted contract overview provides a rare look at the controversial $16.8 million agreement between LexisNexis and ICE, a federal law enforcement agency whose surveillance of and raids against migrant communities are widely criticized as brutal, unconstitutional, and inhumane.

Read more at The Intercept.





Perspective.

https://www.pwc.com/gx/en/issues/workforce/hopes-and-fears.html

PwC’s Global Workforce Hopes and Fears Survey 2023

Business leaders everywhere are prioritising transformation, but what if your most skilled people are more reinvention ready than your company culture is? And what if your employees say they are even more likely to quit now than they were last year—back when everyone thought the “great resignation” was at its peak?





Is this how all classes will be (should be) structured?

https://www.thecrimson.com/article/2023/6/21/cs50-artificial-intelligence/

CS50 Will Integrate Artificial Intelligence Into Course Instruction

This year, students who enroll in Computer Science 50: Introduction to Computer Science, Harvard’s flagship coding course, will have a new learning tool at their disposal: artificial intelligence.

Starting in the fall, students will be able to use AI to help them find bugs in their code, give feedback on the design of student programs, explain unfamiliar lines of code or error messages, and answer individual questions, CS50 professor David J. Malan ’99 wrote in an emailed statement.

AI use has exploded in recent months: As large language models like ChatGPT become widely accessible for free, companies are laying off workers, experts are sounding the alarm about the proliferation of disinformation, and academics are grappling with its impact on teaching and research.



Wednesday, June 21, 2023

Interesting, but why in the Justice Dept? Have they decided these are not warlike acts?

https://www.databreaches.net/justice-department-announces-new-national-security-cyber-section-within-the-national-security-division/

Justice Department Announces New National Security Cyber Section Within the National Security Division

The Justice Department today announced the creation of the new National Security Cyber Section – known as NatSec Cyber – within its National Security Division. The newly established litigating section has secured congressional approval and comes in response to the core findings in Deputy Attorney General Lisa O. Monaco’s Comprehensive Cyber Review in July of 2022.

“NatSec Cyber will give us the horsepower and organizational structure we need to carry out key roles of the Department in this arena,” said Assistant Attorney General Matthew G. Olsen of the Justice Department’s National Security Division. “This new section will allow NSD to increase the scale and speed of disruption campaigns and prosecutions of nation-state threat actors, state-sponsored cybercriminals, associated money launderers, and other cyber-enabled threats to national security.”

The National Security Cyber Section will increase the Justice Department’s capacity to disrupt and respond to malicious cyber activity, while promoting Department-wide and intragovernmental partnerships in tackling increasingly sophisticated and aggressive cyber threats by hostile nation-state adversaries. The Section will bolster collaboration between key partners, notably the Criminal Division’s Computer Crimes and Intellectual Property Section (CCIPS) and the FBI’s Cyber Division, and will serve as a valuable resource for prosecutors in the 94 U.S. Attorneys’ Offices and 56 FBI Field Offices across the country.

“Responding to highly technical cyber threats often requires significant time and resources,” said Assistant Attorney General Olsen. “NatSec Cyber will serve as an incubator, able to invest in the time-intensive and complex investigative work for early-stage cases.”

Today’s announcement builds upon recent successes in identifying, addressing and eliminating national security cyber threats, including the charging of an alleged cybercriminal with ransomware attacks against U.S. critical infrastructure and the disruption of the Russian government’s premier cyberespionage malware tool.





Might be interesting to law enforcement too.

https://www.bespacific.com/how-your-new-car-tracks-you/

How Your New Car Tracks You

Wired (free link) - “Vehicles from Toyota, Honda, Ford, and more can collect huge volumes of data. Here’s what the companies can access… In May, US-based automotive firm Privacy4Cars released a new tool, dubbed the Vehicle Privacy Report, that reveals how much information on your car can be hoovered up. Much like Apple and Google’s privacy labels for apps (which show how Facebook might use your camera, or how Uber might use your location data), the tool indicates what vehicle manufacturers can know. Using industry sales data, WIRED ran 10 of the most popular cars in the US through the privacy tool to see just how much information they can collect. Spoiler: It’s a lot. The analysis follows previous reporting on the amount of data modern cars can collect and share, with estimates saying cars can produce 25 gigabytes of data per hour…”





“On the Internet, nobody knows you’re a dog.” Not so in the office.

https://www.bespacific.com/gen-z-is-taking-courses-on-how-to-send-an-email-and-what-to-wear-in-the-office/

Gen-Z Is Taking Courses On How To Send An Email and What To Wear In the Office

Business Insider: “Recent graduates from Generation Z, who have primarily experienced virtual classes and remote internships during college, may need to improve their soft skills such as email writing, casual conversation, and appropriate work attire. According to a new report from the Wall Street Journal, companies like KPMG, Deloitte, and PwC are offering training programs to help these employees adapt to the office, focusing on in-person communication, eye contact, conversation pauses, and professional dress. Insider reports: KPMG is offering new hires introductory training that includes how to talk to people in person, with tips on the appropriate level of eye contact and pauses in a conversation, the company’s vice chair of talent and culture, Sandy Torchia, told the Journal. Deloitte and PwC also began offering similar trainings earlier this year, the Financial Times reported in May. Similarly, the consulting company Protiviti said it expanded its training for new hires during the pandemic to include a series of virtual meetings that focus on issues like how to make authentic conversation, according to the Journal. Scott Redfearn, Protiviti’s executive vice president of global human resources, told the Journal the company has had to remind new hires to avoid casual attire like blue jeans with holes in them. Some universities have also stepped in to bridge the gap. Michigan State University’s director of career management, Marla McGraw, told the Journal that companies need to be more direct when it comes to telling new hires what to wear and how to act in the office. The school now requires many of its business majors to take classes that foster soft skills like how to network in person. The Journal reported that one course breaks down a networking conversation by reminding students to pause after they introduce themselves in order to let the other person say their name, as well as respond to signs the other person might be looking to end the conversation. While it’s common for companies to host onboarding sessions that cover office dynamics like attire and rules for interpersonal relationships, some experts say younger employees need these reminders now more than ever.”





Perspective. (It’s not that complicated…)

https://www.kdnuggets.com/2023/06/data-scientist-essential-guide-exploratory-data-analysis.html

A Data Scientist’s Essential Guide to Exploratory Data Analysis

Best practices, techniques, and tools to fully understand your data.

Exploratory Data Analysis (EDA) is the single most important task to conduct at the beginning of every data science project.

In essence, it involves thoroughly examining and characterizing your data in order to find its underlying characteristics, possible anomalies, and hidden patterns and relationships.

This understanding of your data is what will ultimately guide you through the subsequent steps of your machine learning pipeline, from data preprocessing to model building and analysis of results.
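For readers who want the concrete starting point, a first pass at EDA in pandas is only a handful of calls. A minimal sketch (the file name is hypothetical), with each step mapping onto the article's "characteristics, anomalies, and relationships":

```python
# Minimal first-pass EDA with pandas; "data.csv" is a hypothetical dataset.
import pandas as pd

df = pd.read_csv("data.csv")

print(df.shape)                            # size: (rows, columns)
print(df.dtypes)                           # data type of each column
print(df.describe(include="all"))          # summary statistics per column
print(df.isna().sum())                     # anomalies: missing values per column
print(df.select_dtypes("number").corr())   # relationships: pairwise correlations
```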



Tuesday, June 20, 2023

As I predicted!

https://bdtechtalks.com/2023/06/19/chatgpt-model-collapse/

ChatGPT will make the web toxic for its successors

Generative artificial intelligence has empowered everyone to be more creative. Large language models (LLMs) like ChatGPT can generate essays and articles with impressive quality. Diffusion models such as Stable Diffusion and DALL-E create stunning images.

But what happens when the internet becomes flooded with AI-generated content? That content will eventually be collected and used to train the next iterations of generative models. According to a study by researchers at the University of Oxford, University of Cambridge, Imperial College London, and the University of Toronto, machine learning models trained on content generated by generative AI will suffer from irreversible defects that gradually exacerbate across generations.

The only way to maintain the quality and integrity of future models is to make sure they are trained on human-generated content. But with LLMs such as ChatGPT and GPT-4 enabling the creation of content at scale, access to human-created data might soon become a luxury that few can afford.
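The mechanism is easy to demonstrate at toy scale. The sketch below is my illustration, not the study's experiment: it fits a Gaussian to a finite sample, trains the "next generation" only on synthetic draws from that fit, and repeats. Because each finite-sample fit tends to underestimate the spread, the tails of the distribution erode across generations, which is the essence of model collapse.

```python
# Toy model-collapse simulation: each generation is fit only to the
# previous generation's synthetic output, and the distribution narrows.
import random
import statistics

SAMPLE_SIZE = 20
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLE_SIZE)]  # gen 0: "human" data

for gen in range(1, 11):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # MLE of spread; biased low on small samples
    # The next generation never sees human data, only the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLE_SIZE)]
    print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```

Run it a few times: the standard deviation drifts downward while the mean wanders, a small-scale version of the "irreversible defects" the paper describes.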





How dare you tell us we’re lying!

https://www.nytimes.com/2023/06/19/technology/gop-disinformation-researchers-2024-election.html

G.O.P. Targets Researchers Who Study Disinformation Ahead of 2024 Election

On Capitol Hill and in the courts, Republican lawmakers and activists are mounting a sweeping legal campaign against universities, think tanks and private companies that study the spread of disinformation, accusing them of colluding with the government to suppress conservative speech online.

The effort has encumbered its targets with expansive requests for information and, in some cases, subpoenas — demanding notes, emails and other information related to social media companies and the government dating back to 2015. Complying has consumed time and resources and already affected the groups’ ability to do research and raise money, according to several people involved.





Sounds like a rather serious hole in TSA “security.” (Yes, we just ignore that high tech ID.)

https://coloradosun.com/2023/06/19/denver-airport-colorado-license-tsa-security-dia/

Got a Colorado driver’s license? Expect to run into problems with TSA at the airport.

Dankers said TSA couldn’t provide any specific detail about why their system has issues with Colorado IDs or when the issue would be resolved.

If a traveler’s license is stopped by a TSA machine, however, they need only show their boarding pass to be allowed through, she said.





Did anyone listen?

https://foreignpolicy.com/2023/06/19/ai-artificial-intelligence-national-security-foreign-policy-threats-prediction/

AI Has Entered the Situation Room

At the start of 2022, seasoned Russia experts and national security hands in Washington watched in disbelief as Russian President Vladimir Putin massed his armies on the borders of Ukraine. Was it all a bluff to extract more concessions from Kyiv and the West, or was he about to unleash a full-scale land war to redraw Europe’s borders for the first time since World War II? The experts shook the snow globe of their vast professional expertise, yet the debate over Putin’s intentions never settled on a conclusion.

But in Silicon Valley, we had already concluded that Putin would invade—four months before the Russian attack. By the end of January, we had predicted the start of the war almost to the day.

How? Our team at Rhombus Power, made up largely of scientists, engineers, national security experts, and former national security practitioners, was looking at a completely different picture than the traditional foreign-policy community. Relying on artificial intelligence to sift through almost inconceivable amounts of online and satellite data, our machines were aggregating actions on the ground, counting inputs that included movements at missile sites and local business transactions, and building heat maps of Russian activity virtually in real time.

We got it right because we weren’t bound by the limitations of traditional foreign-policy analysis. We weren’t trying to divine Putin’s motivations, nor did we have to wrestle with our own biases and assumptions trying to interpret his words. Instead, we were watching what the Russians were actually doing by tracking often small but highly important pieces of data that, when aggregated effectively, became powerful predictors. All kinds of details caught our attention: Weapons systems moved to the border regions in 2021 for what the Kremlin claimed were military drills were still there, as if pre-positioned for future forward advances. Russian officers’ spending patterns at local businesses made it obvious they weren’t planning on returning to barracks, let alone home, anytime soon. By late October 2021, our machines were telling us that war was coming.
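The article gives away the general recipe even if the models themselves are proprietary: normalize many weak indicator streams and watch the aggregate. Here is a hedged sketch with invented numbers (nothing below reflects Rhombus Power's actual data or methods):

```python
# Composite-indicator sketch: z-score each signal stream, then average
# per time step; a sustained positive composite is the "something is
# happening" flag. All values are invented for illustration.
from statistics import fmean, pstdev

indicators = {
    "equipment_movements": [3, 4, 3, 5, 9, 12, 15],
    "local_spending":      [10, 11, 10, 12, 18, 22, 25],
    "site_activity":       [1, 1, 2, 1, 4, 6, 7],
}

def zscores(series):
    mu, sd = fmean(series), pstdev(series) or 1.0  # guard against zero spread
    return [(x - mu) / sd for x in series]

composite = [fmean(week) for week in zip(*map(zscores, indicators.values()))]
for week, score in enumerate(composite, 1):
    print(f"week {week}: {score:+.2f}" + ("  <-- elevated" if score > 1.0 else ""))
```

Averaging z-scores is the simplest possible fusion rule; the point is only that many individually weak signals, aggregated consistently over time, can move well before any single one is conclusive.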





Perspective.

https://www.technologyreview.com/2023/06/20/1075075/metas-ai-leaders-want-you-to-know-fears-over-ai-existential-risk-are-ridiculous/

Meta’s AI leaders want you to know fears over AI existential risk are “ridiculous”

Plus: Five big takeaways from Europe’s AI Act.

It’s a really weird time in AI. In just six months, the public discourse around the technology has gone from “Chatbots generate funny sea shanties” to “AI systems could cause human extinction.” Who else is feeling whiplash?

My colleague Will Douglas Heaven asked AI experts why exactly people are talking about existential risk, and why now. Meredith Whittaker, president of the Signal Foundation (which is behind the private messaging app Signal) and a former Google researcher, sums it up nicely: “Ghost stories are contagious. It’s really exciting and stimulating to be afraid.”



Monday, June 19, 2023

Perspective. If AI achieves personhood, could it run for president?

https://thehill.com/homenews/campaign/4054333-how-ai-is-changing-the-2024-election/

How AI is changing the 2024 election

Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, said the proliferation of the AI systems available to the public, awareness of how simple it is to use them and the “erosion of the sense that creating things like deepfakes is something that good, honest people would never do” will make 2024 a “significant turning point” for how AI is used in campaigns.

“I think now, increasingly, there’s an attitude that, ‘Well, it’s just the way it goes, you can’t tell what’s true anymore,’” Barrett said.





Perspective.

https://sloanreview.mit.edu/article/the-impact-of-generative-ai-on-hollywood-and-entertainment/

The Impact of Generative AI on Hollywood and Entertainment

It’s still early days for generative AI-created entertainment, but it’s clear that something big is happening. A recent Wall Street Journal article noted that widely available AI tools can suggest storylines, character arcs, and dialogue; it even includes an interactive module that lets readers see for themselves how easily ChatGPT can write a basic script when given a few prompts. The article also raises questions about image intellectual property: “If a user prompts an AI tool to build a new character influenced by say, SpongeBob, should the original creators have to grant permission? Who owns it? Can the new work itself be copyrighted?”



Sunday, June 18, 2023

Introducing bias by using familiar rather than accurate terms.

https://link.springer.com/article/10.1007/s11245-023-09934-1

The Ethics of Terminology: Can We Use Human Terms to Describe AI?

Despite facing significant criticism for assigning human-like characteristics to artificial intelligence, phrases like “trustworthy AI” are still commonly used in official documents and ethical guidelines. It is essential to consider why institutions continue to use these phrases, even though they are controversial. This article critically evaluates various reasons for using these terms, including ontological, legal, communicative, and psychological arguments. All these justifications share the common feature of trying to justify the official use of terms like “trustworthy AI” by appealing to the need to reflect pre-existing facts, be it the ontological status, ways of representing AI or legal categories. The article challenges the justifications for these linguistic practices observed in the field of AI ethics and AI science communication. In particular, it takes aim at two main arguments. The first is the notion that ethical discourse can move forward without the need for philosophical clarification, bypassing existing debates. The second justification argues that it’s acceptable to use anthropomorphic terms because they are consistent with the common concepts of AI held by non-experts—exaggerating this time the existing evidence and ignoring the possibility that folk beliefs about AI are not consistent and come closer to semi-propositional beliefs. The article sounds a strong warning against the use of human-centric language when discussing AI, both in terms of principle and the potential consequences. It argues that the use of such terminology risks shaping public opinion in ways that could have negative outcomes.



(Related)

https://arxiv.org/abs/2306.06499

Defining and Exploring the Intelligence Space

Intelligence is a difficult concept to define, despite many attempts at doing so. Rather than trying to settle on a single definition, this article introduces a broad perspective on what intelligence is, by laying out a cascade of definitions that induces both a nested hierarchy of three levels of intelligence and a wider-ranging space that is built around them and approximations to them. Within this intelligence space, regions are identified that correspond to both natural – most particularly, human – intelligence and artificial intelligence (AI), along with the crossover notion of humanlike intelligence. These definitions are then exploited in early explorations of four more advanced, and likely more controversial, topics: the singularity, generative AI, ethics, and intellectual property.





A duty to use?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4477704

Why Law Firms Must Responsibly Embrace Generative AI

In this article, we explore the compelling reasons why law firms should responsibly embrace generative artificial intelligence (GAI). We believe doing so is imperative for the future of the legal profession and provide several real-world examples of how organizations, including law firms, are already successfully leveraging GAI. We highlight the many advantages GAI can bring to the legal field, but also confront the common arguments against its adoption. Our analysis includes strategies for mitigating these risks, as well as supporting evidence for why a complete ban on GAI technology is not justified. Most importantly, we articulate how law firms can effectively manage these risks by following organizational best practices, adhering to legal and ethical standards, and using GAI responsibly.





A cautionary tale. No supervision?

https://reason.com/volokh/2023/06/16/lawyers-affidavit-in-the-colorado-ai-hallucinated-precedent-case/

Lawyer's Affidavit in the Colorado AI-Hallucinated Precedent Case

Thanks to the invaluable UCLA Law Library, I got a copy of the affidavit in which the lawyer apologizes and explains why he used ChatGPT to draft a motion:



(Related) This is usually on the other coast...

https://www.nj.com/politics/2023/06/nj-takes-stab-at-regulating-and-policing-ai-and-the-harm-caused-by-deepfakes.html

N.J. takes stab at regulating and policing AI and the harm caused by ‘deepfakes’

Lawmakers in New Jersey are attempting to regulate and police the use of Artificial Intelligence in certain cases with a collection of bills that have prompted some Republicans to raise concerns about the potential harm to free speech.

The state Assembly Judiciary Committee on Thursday voted along party lines to advance three Democratic-sponsored bills aimed at criminalizing the creation and distribution of deceptive media known as “deepfakes.”

One of the measures (A5511) would prohibit the creation or disclosure of deceptive audio or visual media in certain circumstances, while a second bill (A5510) would criminalize the distribution of intentionally deceptive media within 90 days of an election.





Toward inevitable personhood.

https://nigerianjournalsonline.com/index.php/IRLJ/article/view/3346

A RESPONSE TO SOME RANDOM THOUGHTS ON LEGAL PERSONALITY AND SUBJECTNESS OF ARTIFICIAL INTELLIGENCE ENTITIES

Artificial Intelligence (AI) is one of the most dynamically developing and promising digital technologies. The use of AI, for instance, makes it possible to move the industrial segment of the economy to a new technological level. It increases the economic efficiency of industrial enterprises and can radically transform existing social, economic, financial and industrial ecosystems. However, as the use of technologies based on AI becomes more widespread, the number of associated incidents grows as well, indicating that AI entities are not mere objects whose operation is controlled by others. Regardless of the exceptional operating principle of AI entities, no legal system has recognized AI as a subject of law. It is trite that AI entities are capable of learning from their own experience, leading to independent conclusions and autonomous decision-making. AI systems differ from regular computer algorithms in their capacity to learn and act independently of the will of their developers or programmers. Failure to manage this technology can therefore lead to major moral and ethical problems. This paper considered some random thoughts on whether AI entities can be called subjects of law and drew a response. It made use of the doctrinal method, analysing data gathered from primary sources such as case law, legislation and statutes, and from secondary sources such as books and journal articles. It was discovered that extant laws are not sufficient to capture the operations of AI entities, and that there is no accurate definition of the AI concept. The study recommended granting legal personality, even if fictionally, to AI entities based on their autonomy and independence, just as is the case with corporations.