Saturday, June 17, 2023

How does the AI determine what is real?

https://www.quantamagazine.org/neural-networks-need-data-to-learn-even-if-its-fake-20230616/

Neural Networks Need Data to Learn. Even If It’s Fake.

Real data can be hard to get, so researchers are turning to synthetic data to train their artificial intelligence systems.
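The idea is easy to sketch: if you can describe the process that generates your data, you can manufacture unlimited training material. Below is a minimal Python illustration; the data-generating function, the model choice, and the use of numpy and scikit-learn are all assumptions made for the example, not anything from the article.

    # Illustrative only: train a model entirely on synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def synthesize(n):
        """Draw (X, y) pairs from an assumed data-generating process."""
        X = rng.uniform(-3, 3, size=(n, 2))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, n)
        return X, y

    X_syn, y_syn = synthesize(5000)   # cheap, unlimited synthetic training data
    X_test, y_test = synthesize(500)  # stand-in for the scarce real data

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_syn, y_syn)
    print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")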





Perspective. (Those magic words…)

https://www.investors.com/news/technology/quantum-computing-after-artificial-intelligence-it-could-be-the-next-big-thing/

After Artificial Intelligence, Quantum Computing Could Be The Next Big Thing

Artificial intelligence is clearly the latest craze sweeping the technology industry, but an even bigger trend may be on the horizon in the form of quantum computing — provided it can solve troubling cybersecurity questions.



(Related)

https://timesofindia.indiatimes.com/education/upskill/quantum-computing-explained-with-simple-examples/articleshow/101050318.cms

Quantum Computing Explained with Simple Examples





Resource?

https://www.schneier.com/blog/archives/2023/06/security-and-human-behavior-shb-2023.html

Security and Human Behavior (SHB) 2023

I’m just back from the sixteenth Workshop on Security and Human Behavior, hosted by Alessandro Acquisti at Carnegie Mellon University in Pittsburgh.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is live blogging the talks. We are back 100% in person after two years of fully remote and one year of hybrid.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, thirteenth, fourteenth, and fifteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio/video recordings of the sessions. Ross also maintains a good webpage of psychology and security resources.



Friday, June 16, 2023

AI is trained on existing data. Since tools like ChatGPT began producing sometimes erroneous output that finds its way online, AI has been trained on that too. Now it seems that AI is generating data specifically to train AI. How likely is it that errors will compound, breeding ever more errors?

https://www.theregister.com/2023/06/16/crowd_workers_bots_ai_training/

AI is going to eat itself: Experiment shows people training bots are using bots

Workers hired via crowdsource services like Amazon Mechanical Turk are using large language models to complete their tasks – which could have negative knock-on effects on AI models in the future.

Data is critical to AI. Developers need clean, high-quality datasets to build machine learning systems that are accurate and reliable. Compiling valuable, top-notch data, however, can be tedious. Companies often turn to third-party platforms such as Amazon Mechanical Turk to instruct pools of cheap workers to perform repetitive tasks – such as labeling objects, describing situations, transcribing passages, and annotating text.

But an experiment conducted by researchers at the École polytechnique fédérale de Lausanne (EPFL) in Switzerland has concluded that these crowdsourced workers are using AI systems – such as OpenAI's chatbot ChatGPT – to perform odd jobs online.

Training a model on its own output is not recommended. We could see AI models being trained on data generated not by people, but by other AI models – perhaps even the same models. That could lead to disastrous output quality, more bias, and other unwanted effects.
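The compounding the commentary asks about can be made concrete with a toy simulation: each "generation" fits a trivial model to the previous generation's output, then generates the next round of training data from that fit. The Gaussian setup and numpy are assumptions for illustration; real training pipelines are vastly more complex.

    # Toy "model collapse" loop: every generation trains only on
    # data produced by the previous generation's model.
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(loc=0.0, scale=1.0, size=1000)  # generation 0: real data

    for gen in range(1, 11):
        mu, sigma = data.mean(), data.std()      # "train" a simple model
        data = rng.normal(mu, sigma, size=1000)  # next generation sees only model output
        print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")

Run it and the estimates drift: with no fresh human-generated data re-entering the loop, sampling error accumulates generation after generation, a small-scale version of the dynamic the researchers warn about.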





Tools & Techniques. I got one.

https://www.makeuseof.com/unlock-secrets-of-chatgpt-with-free-ebook-unlocking-the-potential-of-chatgpt/

Unlock the Secrets of ChatGPT with Our Free eBook: 'Unlocking the Potential of ChatGPT'

ChatGPT is all the rage and hardly requires an introduction. It's the phenomenal AI chatbot that's taking the world by storm. From writing resumes and cover letters to building entire websites, ChatGPT is helping people do things they could only imagine in science fiction.

Yet as powerful as this technology is, only a few people are currently unlocking the best of it. That's why at MakeUseOf, we've put together this practical, fascinating, free ebook that you can download right now.



Thursday, June 15, 2023

But I want to use it anyway…

https://www.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html

Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years

Many top business leaders are seriously worried that artificial intelligence could pose an existential threat to humanity in the not-too-distant future.

Forty-two percent of CEOs surveyed at the Yale CEO Summit this week say AI has the potential to destroy humanity five to ten years from now, according to survey results shared exclusively with CNN.

Sonnenfeld, the Yale management guru, told CNN that business leaders break down into five distinct camps when it comes to AI.

The first group, as described by Sonnenfeld, includes “curious creators”, the “naïve believers” who argue that everything you can do, you should do.

“They are like Robert Oppenheimer, before the bomb,” Sonnenfeld said, referring to the American physicist known as the “father of the atomic bomb.”

Then there are the “euphoric true believers” who only see the good in technology, Sonnenfeld said.

Noting the AI boom set off by the popularity of ChatGPT and other new tools, Sonnenfeld described “commercial profiteers” who are enthusiastically seeking to cash in on the new technology. “They don’t know what they’re doing, but they’re racing into it,” he said.

And then there are the two camps pushing for an AI crackdown of sorts: alarmist activists and global governance advocates.

“These five groups are all talking past each other, with righteous indignation,” Sonnenfeld said.





Are we there yet? This article says no.

https://www.ejiltalk.org/artificial-intelligence-and-international-criminal-law/

Artificial Intelligence and International Criminal Law

… International criminal litigation is largely guided by complex documents. The monumental investigations needed to determine the occurrence of international crimes often result in a massive influx of documentary materials. LLMs are designed to comprehend large volumes of data, but their effectiveness can be hampered by poor-quality scans and documents in languages not commonly used in their training.

Moreover, ICL jurisprudence is diverse. Each ICL institution represents a jurisdiction with its unique repositories for storing its jurisprudence. Despite the existence of centralised compilations like the ICC Legal Tools Database, no publicly available AI tool has been specifically trained on these compilations. In contrast, American lawyers can more easily integrate AI into their domestic practice through various avenues like Westlaw Edge, Lexis+, and Casetext.

ICL also faces significant data security risks. Due to the novelty of LLMs, their security implications remain largely undefined, posing potential threats. Any data breaches in the prosecution of war crimes or crimes against humanity could have catastrophic consequences, such as the identification and targeting of victims, witnesses, and others at risk.





Perspective. Are we stuck in 1726?

https://www.makeuseof.com/when-was-ai-first-discovered-history-of-ai/

When Was AI First Discovered? The History of AI

… AI has a rich, complex history. Here are some of the most notable breakthroughs that shape today's most sophisticated AI models.

  • 1726: Gulliver's Travels by Jonathan Swift introduces The Engine. It's a fictional device that generates logical word sets and permutations, enabling even "the most ignorant person" to write scholarly pieces on various subjects. Generative AI performs a strikingly similar function (a toy sketch follows).
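For fun, the Engine itself is easy to parody in a few lines of Python; the word lists below are invented for the demo (Swift's original used frames of wooden blocks, not code).

    # A toy "Engine" in the spirit of Swift's satire: crank the handle,
    # get random word combinations, some of which resemble scholarship.
    import random

    lexicon = {
        "subject": ["the scholar", "a machine", "the academy", "an ignorant person"],
        "verb": ["composes", "permutes", "derives", "imitates"],
        "object": ["a treatise", "learned phrases", "a book of philosophy", "new knowledge"],
    }

    random.seed(1726)
    for _ in range(5):
        print(" ".join(random.choice(lexicon[slot]) for slot in ("subject", "verb", "object")))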





Tools & Techniques.

https://www.bespacific.com/free-useful-artificial-intelligence-tools-for-the-classroom/

Free & Useful Artificial Intelligence Tools For The Classroom

Larry Ferlazzo teaches English, Social Studies and International Baccalaureate classes to English Language Learners and mainstream students at Luther Burbank High School in Sacramento, California. Recently on his blog he has been highlighting free sites and services (many require registration) that are useful to educators as well as librarians. This week among the apps he recommends: “PPTX lets you create AI-powered PowerPoint presentations for you. It really doesn’t produce much actual info content, but the slides look nice.”





Tools & Techniques.

https://www.bespacific.com/assigning-ai-seven-approaches-for-students-with-prompts/

Assigning AI: Seven Approaches for Students, with Prompts

Mollick, Ethan R. and Mollick, Lilach, Assigning AI: Seven Approaches for Students, with Prompts (June 12, 2023). Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4475995

This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI’s output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementarity of AI’s capabilities with the students’ unique insights. By challenging students to remain the “human in the loop”, the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.



Wednesday, June 14, 2023

Is this because AI isn’t human? Another area for the recognition of AI personhood?

https://www.axios.com/pro/tech-policy/2023/06/14/hawley-blumenthal-bill-section-230-ai

First look: Bipartisan bill denies Section 230 protection for AI

Sens. Josh Hawley and Richard Blumenthal want to clarify that the internet's bedrock liability law does not apply to generative AI, per a new bill introduced Wednesday that was shared exclusively with Axios.

Details: Hawley and Blumenthal's "No Section 230 Immunity for AI Act" would amend Section 230 "by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI," per a description of the bill from Hawley's office.





A common problem. Employees (especially techies) want to try the latest, flashy gizmos, even if the organization hasn’t approved the risk.

https://www.theverge.com/2023/6/13/23759101/stack-overflow-developers-survey-ai-coding-tools-moderators-strike

Stack Overflow survey finds developers are ready to use AI tools — even if they don’t fully trust them

A survey of developers by coding Q&A site Stack Overflow has found that AI tools are becoming commonplace in the industry even as coders remain skeptical about their accuracy. The survey comes at an interesting time for the site, which is trying to work out how to benefit from AI while dealing with a strike by moderators over AI-generated content.

The survey found that 77 percent of respondents felt favorably about using AI in their workflow and that 70 percent are already using or plan to use AI coding tools this year.

Respondents cited benefits like increased productivity (33 percent) and faster learning (25 percent) but said they were wary about the accuracy of these systems. Only 3 percent of respondents said they “highly trust” AI coding tools, with 39 percent saying they “somewhat trust” them. Another 31 percent were undecided, with the rest describing themselves as somewhat distrustful (22 percent) or highly distrustful (5 percent).





The automated lawyer?

https://www.bespacific.com/the-gptjudge-justice-in-a-generative-ai-world/

The GPTJudge: Justice in a Generative AI World

Grossman, Maura and Grimm, Paul and Brown, Dan and Xu, Molly, The GPTJudge: Justice in a Generative AI World (May 23, 2023). Duke Law & Technology Review, Vol. 23, No. 1, 2023, Available at SSRN: https://ssrn.com/abstract=4460184

Generative AI (“GenAI”) systems such as ChatGPT recently have developed to the point where they are capable of producing computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not AI-generated. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, the ability of juries to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases. This article discusses these issues, and offers a comprehensive, yet understandable, explanation of what GenAI is and how it functions. It explores evidentiary issues that must be addressed by the bench and bar to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials. Importantly, it offers practical, step-by-step recommendations for courts and attorneys to follow in meeting the evidentiary challenges posed by GenAI. Finally, it highlights additional impacts that GenAI evidence may have on the development of substantive IP law, and its potential impact on what the future may hold for litigating cases in a GenAI world.

See also e-Discovery Team – REAL OR FAKE? New Law Review Article Provides a Good Framework for Judges to Make the Call





So, whatch’a doin’?

https://www.bespacific.com/surveillance-and-digital-control-at-work/

Surveillance and Digital Control at Work

Cracked Labs: “A research project on the datafication of work with a focus on Europe – Data collection is becoming ubiquitous, including at work. Systems that constantly record data about activities and behaviors in the workplace can quickly turn into devices for extensive monitoring and control, deeply affecting the rights and freedoms of employees. Opportunities and risks are not distributed equally. While employers optimize their business processes, workers are being rated, ranked, pressured and disciplined. Companies use this recorded data to monitor behavior, assess performance and, increasingly, to direct tasks, manage workers and make automated decisions about them. The project examines and maps how companies use personal data on (and against) employees. Based on previous German-language research, it investigates and documents systems and technologies that process personal data in the workplace and identifies key developments and issues relevant to worker rights. The project, which is described in more detail here, results in a series of case studies and research reports, which are published online over the course of 2023 and 2024 below.

  • Surveillance and Algorithmic Control in the Call Center. A case study on contact and service center software, automated management and outsourced work (52 pages, May 2023) – This project follows up on previous research on surveillance and digital control at work that focused on German-speaking countries, which was carried out by Cracked Labs between 2019 and 2021 and which resulted in a comprehensive German-language report, a web publication and a report on research based on interviews with work councils in Austria.”





Tools & Techniques. (Some may actually be useful!)

https://www.makeuseof.com/best-ai-tools-boredhumans/

The 7 Best AI Tools on BoredHumans

… BoredHumans is a website that offers a wide variety of free AI tools for anyone feeling a bit bored. Some examples include virtual pets, tarot card readings, deepfake videos, and a quote generator. While most of the AI functions are meant for entertainment, some are truly impressive and can be quite useful.

Once you're done getting a good laugh from the meme generator and seeing what you'd look like when you're older with the age progression tool, you can check out some of the best tools on the site. This includes a fake person generator, a super resolution tool, an interior design tool, and many more.



Tuesday, June 13, 2023

I approve of the Mad Magazine approach.

https://sloanreview.mit.edu/article/what-me-worry/

What, Me Worry?

Call me shortsighted, but I am not losing sleep over the prospect of a supercharged AI gaining consciousness and waging war on humans.

What does keep me up at night is that humans are already wielding the power of artificial intelligence to control, exploit, discriminate against, misinform, and manipulate other humans. Tools that can help us solve complex and vexing problems can also be put to work by cybercriminals or give authoritarian governments unprecedented power to spy on and direct the lives of their citizens. We can build models that lead to the development of new, more sustainable materials or important new drugs — and we can build models that embed biased decision-making into systems and processes and then grind individuals up in their gears.

In other words, AI already gives us plenty to worry about. We shouldn’t be distracted by dystopian fever dreams that misdirect our attention from present-day risk.





Confusing. Would friends and coworkers be next?

https://www.wired.com/story/anti-porn-covenant-eyes-bond-revoked/

An Anti-Porn App Put Him in Jail and His Family Under Surveillance

A court used an app called Covenant Eyes to surveil the family of a man released on bond. Now he’s back in jail, and tech misuse may be to blame.

Hannah’s husband is now awaiting trial in jail, in part because of an anti-pornography app called Covenant Eyes. The company explicitly says the app is not meant for use in criminal proceedings, but the probation department in Indiana’s Monroe County has been using it for the past month to surveil not only Hannah’s husband but also the devices of everyone in their family. To protect their privacy, WIRED is not disclosing their surname or the names of individual family members. Hannah agreed to use her nickname.

Prosecutors in Monroe County this spring charged Hannah’s husband with possession of child sexual abuse material—a serious crime that she says he did not commit and to which he pleaded not guilty. Given the nature of the charges, the court ordered that he not have access to any electronic devices as a condition of his pretrial release from jail. To ensure he complied with those terms, the probation department installed Covenant Eyes on Hannah’s phone, as well as those of her two children and her mother-in-law.

In near real time, probation officers are being fed screenshots of everything Hannah’s family views on their devices. From images of YouTube videos watched by her 14-year-old daughter to online underwear purchases made by her 80-year-old mother-in-law, the family’s entire digital life is scrutinized by county authorities. “I’m afraid to even communicate with our lawyer,” Hannah says. “If I mention anything about our case, I’m worried they are going to see it and use it against us.”





Privacy? What’s that?

https://www.pogowasright.org/one-of-the-last-bastions-of-digital-privacy-is-under-threat/

One of the Last Bastions of Digital Privacy Is Under Threat

Julia Angwin has an OpEd on the NY Times. She writes, in part:

One of the last bastions of privacy are encrypted messaging programs such as Signal and WhatsApp. These apps, which employ a technology called end-to-end encryption, are designed so that even the app makers themselves cannot view their users’ messages. Texting on one of these apps — particularly if you use the “disappearing messages” feature — can be almost as private and ephemeral as most real-life conversations used to be.
However, governments are increasingly demanding that tech companies surveil encrypted messages in a new and dangerous way. For years, nations sought a master key to unlock encrypted content with a search warrant, but largely gave up because they couldn’t prove they could keep such a key safe from bad actors. Now they are seeking to force companies to monitor all their content, whether or not it is encrypted.

Read the full piece at The New York Times. Julia has gifted free access to this article.
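For readers wondering what "even the app makers themselves cannot view their users' messages" means mechanically: each party derives the same secret key from exchanged public keys, so only ciphertext ever crosses the relay. Below is a bare-bones sketch using Python's cryptography package; real messengers such as Signal add ratcheting, authentication, and much more, so treat this as the core idea only, not anyone's actual protocol.

    # End-to-end encryption in miniature: X25519 key agreement,
    # HKDF key derivation, and an AEAD cipher for the message.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_priv = X25519PrivateKey.generate()  # private keys never leave the device
    bob_priv = X25519PrivateKey.generate()

    # Each side combines its own private key with the other's public key;
    # a server relaying the public keys cannot compute this shared secret.
    shared_alice = alice_priv.exchange(bob_priv.public_key())
    shared_bob = bob_priv.exchange(alice_priv.public_key())
    assert shared_alice == shared_bob

    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"demo chat").derive(shared_alice)

    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"meet at noon", None)
    print(ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None))  # b'meet at noon'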





Ready or not, someone (many someones?) will push to use these tools.

https://www.wavy.com/news/local-news/williamsburg/testimony-by-hologram-instant-voice-to-text-trial-records-artificial-intelligence-reshaping-the-legal-system/

Testimony by hologram, instant voice-to-text trial records: Artificial intelligence reshaping the legal system

A recent demonstration at the William & Mary Law School shows how technology can transform the judicial system.

WAVY-TV watched two breakthrough technologies – one that’s already in place, another that will need to clear a constitutional hurdle.

Remote appearances in courtrooms have been going on for years, and usually it’s a defendant appearing by video from a nearby jail for an early-stage hearing. But new hologram technology from Los Angeles-based Proto takes it several steps further.

You might have seen it on NBC’s America’s Got Talent, or at a Brooklyn Nets or Dallas Cowboys game. In a courtroom context, it would enable a prosecution witness to testify from across the country or the other side of the world.

But enshrined in the Sixth Amendment is the confrontation clause, the notion that you get to confront your accuser – so is this encounter the same as face-to-face?

Another state-of-the-art technology for courtrooms displayed at William & Mary is called For the Record RealTime, the company's latest iteration of voice-to-text recording that would eliminate, or at least greatly reduce, the need for human court reporters to document proceedings.





Perspective. (And a bit of tech history I didn’t know)

https://www.brookings.edu/research/the-turing-transformation-artificial-intelligence-intelligence-augmentation-and-skill-premiums/

The Turing Transformation: Artificial intelligence, intelligence augmentation, and skill premiums

Almon Brown Strowger, an American undertaker from the 19th century, allegedly angry that a local switch operator (and wife of a competing undertaker) was redirecting his customer calls to her husband, sought to take all switch operators to their employment graves. He conceived of and, with family members, invented the Strowger switch that automated the placement of phone calls in a network. The switch spread worldwide and, as a consequence, a job that once employed over 200,000 Americans has almost disappeared.

It appears that Acemoglu and Brynjolfsson want to change the objectives and philosophy of the entire research field. The underlying hypothesis is that if the technical objectives of AI research are changed, then this will steer the economy away from potential loss of jobs, devaluation of skills, inequality, and social discord following from this. In this way, society can avoid what Brynjolfsson calls the “Turing Trap,” where AI-enabled automation leads to a concentration of wealth and power.

In this paper, we question this hypothesis. We ask whether it is really the case that the current technical objective of using human performance of tasks as a benchmark for AI performance will result in the negative outcomes described above. Instead, we argue that task automation, especially when driven by AI advances, can enhance job prospects and potentially widen the scope for employment of many workers. The neglected mechanism we highlight is the potential for changes in the skill premium where AI automation of tasks exogenously improves the value of the skills of many workers, expands the pool of available workers to perform other tasks, and, in the process, increases labor income and potentially reduces inequality. We label this possibility the “Turing Transformation.”





Monday, June 12, 2023

Is AI a greater risk?

https://www.bespacific.com/generative-artificial-intelligence-and-data-privacy-a-primer/

Generative Artificial Intelligence and Data Privacy: A Primer

Congressional Research Service (CRS) – Generative Artificial Intelligence and Data Privacy: A Primer, May 23, 2023: “Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—through learning patterns from pre-existing data. It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress.”

See also CRS In Focus, June 9, 2023: Generative Artificial Intelligence: Overview, Issues, and Questions for Congress





Clear and simple. A diagram worth keeping?

https://www.barrons.com/news/the-three-types-of-machine-learning-algorithms-224da9b6

The Three Types Of Machine Learning Algorithms





Perspective.

https://thebulletin.org/2023/06/artificial-intelligence-challenges-and-controversies-for-us-national-security/

Artificial intelligence: challenges and controversies for US national security

An autonomous AI technology that equaled or surpassed human cognition could redefine how we understand both technology and humanity, but there is no surety as to whether or when such a “superintelligence” might emerge. Amid the uncertainty, the United States and other countries must consider the possible impact of AI on their armed forces and their preparedness for war fighting or deterrence. Military theorists, strategic planners, scientists and political leaders will face at least seven different challenges in anticipating the directions in which the interface between human and machine will move in the next few decades.



Sunday, June 11, 2023

Calling ‘FOUL!’

https://www.mdpi.com/2673-5172/4/2/43

Artificial Intelligence in Automated Detection of Disinformation: A Thematic Analysis

The increasing prevalence of disinformation has led to a growing interest in leveraging artificial intelligence (AI) for detecting and combating this phenomenon. This article presents a thematic analysis of the potential benefits of automated disinformation detection from the perspective of information sciences. The analysis covers a range of approaches, including fact checking, linguistic analysis, sentiment analysis, and the utilization of human-in-the-loop systems. Furthermore, the article explores how the combination of blockchain and AI technologies can be used to automate the process of disinformation detection. Ultimately, the article aims to consider the integration of AI into journalism and emphasizes the importance of ongoing collaboration between these fields to effectively combat the spread of disinformation. The article also addresses ethical considerations related to the use of AI in journalism, including concerns about privacy, transparency, and accountability.
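For a concrete taste of the "linguistic analysis" approach the authors survey, here is a toy classifier over surface text features. The four training examples and their labels are invented for illustration, and scikit-learn is an assumed dependency; production systems learn from large annotated corpora and far richer features.

    # Minimal sketch: TF-IDF features plus logistic regression as a
    # stand-in for linguistic-analysis-based disinformation detection.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "SHOCKING miracle cure doctors don't want you to know!!!",
        "You won't BELIEVE what they are hiding from you",
        "The city council approved the budget in a 7-2 vote on Tuesday.",
        "Researchers reported the findings in a peer-reviewed journal.",
    ]
    train_labels = [1, 1, 0, 0]  # 1 = disinformation-like, 0 = credible-like

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(train_texts, train_labels)

    # Probability that a new headline looks like the disinformation class:
    print(clf.predict_proba(["Secret cure REVEALED, share before it's deleted!"])[0, 1])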





The true long-term threat of AI?

https://www.scmp.com/comment/opinion/article/3223308/once-ai-can-do-everything-us-what-do-we-do

Once AI can do everything for us, what do we do?

Crucially, human contributions are becoming unnecessary. As a result, humans will be relegated to performing increasingly simple tasks.

Humans will not be required to make any physical or cognitive contributions towards their survival any more, making the genetically embedded survival instinct outdated. In principle, this can be considered tremendously positive news. But this good news comes with a catch: the survival instinct has been the core force behind human activity, creativity and productivity, and it will need to be replaced.

If the survival instinct is not replaced by a new source of motivation soon, humans can be expected to adopt the genetically prescribed path of least effort and risk. This passive attitude could result in physical decline and mental stupor, and ultimately humans degenerating as a species.