Saturday, August 12, 2023

If you provide “official” disinformation with a straight face, it will still come back to haunt you.

https://thenextweb.com/news/criticism-uk-online-safety-bill-end-to-end-encryption

UK’s promise to protect encrypted messaging is ‘delusional,’ say critics

The British government’s promise to protect encryption has been pilloried by security experts and libertarians.

The dispute stems from a section of the Online Safety Bill. Under the legislation, messaging apps would be forced to provide access to private communications when requested by the regulator Ofcom.

Proponents say the measures will combat child abuse, but critics are aghast at the threat to privacy. They fear the plans will facilitate mass surveillance and damage the UK’s tech sector. Signal, WhatsApp, and five other messaging apps have all threatened to leave the country if the law is passed.

The British government has sought to allay their concerns. On Thursday, technology minister Michelle Donelan said the government is “not anti-encryption” and will protect user privacy.

“Technology is in development to enable you to have encryption as well as to be able to access this particular information, and the safety mechanism that we have is very explicit that this can only be used for child exploitation and abuse,” Donelan told the BBC.

Her remarks were quickly lambasted by critics. Matthew Hodgson, CEO of secure messaging app Element — which is used by the government’s own Ministry of Defence — described Donelan’s claims as “factually incorrect.”

“No technology exists which allows encryption AND access to ‘this particular information.’ Detecting illegal content means ALL content must be scanned in the first place,” he said.





Serious AI tools.

https://www.nature.com/articles/d41586-023-01907-z

Artificial-intelligence search engines wrangle academic literature

For a researcher so focused on the past, Mushtaq Bilal spends a lot of time immersed in the technology of tomorrow.

A postdoctoral researcher at the University of Southern Denmark in Odense, Bilal studies the evolution of the novel in nineteenth-century literature. Yet he’s perhaps best known for his online tutorials, in which he serves as an informal ambassador between academics and the rapidly expanding universe of search tools that make use of artificial intelligence (AI).

This new generation of search engines, powered by machine learning and large language models, is moving beyond keyword searches to pull connections from the tangled web of the scientific literature. Some programs, such as Consensus, give research-backed answers to yes-or-no questions; others, such as Semantic Scholar, Elicit and Iris, act as digital assistants — tidying up bibliographies, suggesting new papers and generating research summaries. Collectively, the platforms facilitate many of the early steps in the writing process. Critics note, however, that the programs remain relatively untested and run the risk of perpetuating existing biases in the academic publishing process.
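
These tools are mostly point-and-click, but some expose public APIs. As a rough illustration (mine, not from the Nature article), here is a minimal Python sketch that queries the Semantic Scholar Graph API for papers on a topic; the endpoint and field names come from Semantic Scholar’s public API documentation, while the query string is an arbitrary example.

    # Minimal sketch: programmatic literature search via the
    # Semantic Scholar Graph API (public; an API key raises rate limits).
    import requests

    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": "nineteenth-century novel",  # arbitrary example topic
            "limit": 5,
            "fields": "title,year,url",
        },
        timeout=30,
    )
    resp.raise_for_status()
    for paper in resp.json().get("data", []):
        print(paper.get("year"), "-", paper["title"])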





Creative is in the eye of the creator. (and more tools)

https://time.com/6300950/ai-schools-chatgpt-teachers/

The Creative Ways Teachers Are Using ChatGPT in the Classroom

… “The majority of the teachers are panicked because they see [ChatGPT] as a cheating tool, a tool for kids to plagiarize,” says Rachael Rankin, a high school principal in Newton Falls, outside of Youngstown, Ohio.

But Paccone and a growing group of educators believe it’s too late to keep AI out of their classrooms. Randi Weingarten, President of the American Federation of Teachers, a major teachers union, believes the panic about AI is not unlike the panics caused by the Internet and graphing calculators when they were first introduced, arguing ChatGPT “is to English and to writing like the calculator is to math.” In this view, there are two options facing teachers: show their students how to use ChatGPT in a responsible way, or expect the students to abuse it.

… At another Zoom teacher training workshop that TIME observed in July, hosted by Garnet Valley School District in Garnet Valley, Penn., education consultant A.J. Juliani ran through various AI apps that students are using to cut corners in class. Photomath lets students upload a picture of a math problem and get detailed instructions on how to solve it. Tome can turn notes into a narrative, perfect for essay writing and preparing for presentations. And Readwise can highlight key parts of PDFs so that students can get through readings faster.





Just because…

https://www.makeuseof.com/how-to-use-harry-potter-spells-with-siri-on-iphone/

How to Cast Harry Potter Spells on Your iPhone With Siri



Friday, August 11, 2023

I suspect there is a lot here that applies elsewhere.

https://www.bespacific.com/artificial-intelligence-and-the-practice-of-law-part-1/

Artificial Intelligence and the Practice of Law Part 1

Murray, Michael D., Artificial Intelligence and the Practice of Law Part 1: Lawyers Must be Professional and Responsible Supervisors of AI (June 14, 2023). Available at SSRN: https://ssrn.com/abstract=4478588 or http://dx.doi.org/10.2139/ssrn.4478588

This article discusses the benefits and challenges of using artificial intelligence (AI) systems to assist lawyers in legal practice. It argues that at present AI systems are not a threat to take over lawyers’ jobs, [Darn. Bob] but rather a powerful tool that can enhance the efficiency and quality of lawyers’ work. However, it also warns that AI systems are not infallible and require professional and responsible supervision by lawyers. The article provides some best practices and recommendations for lawyers to ensure the accuracy and reliability of AI-generated legal work:

AI systems are tools that learn to do legal tasks and do them well at amazing speeds, but they are not (yet) self-aware or capable of reasoning like humans. Lawyers must exercise due diligence and ethical standards when using AI systems to support their legal practice. They should supervise everything the AI does and verify the sources, information, and forms that the AI provides. Lawyers must make judgments about providing the AI with useful information while still respecting client confidentiality.

AI systems can generate coherent and persuasive legal texts, but they can also make errors such as fabricating sources, misinterpreting facts, or applying the wrong laws and legal sources.

AI systems learn from large datasets of legal and non-legal material on the internet, but they do not understand the meaning or context of what they read or write. The AI systems aim to please, but they lack judgment, wisdom, honesty, and legal experience.

Lawyers should use AI systems to assist them with tasks where they already know what the research, sources, and form of the response should look like. This will help the lawyers to evaluate the results and spot any errors or inconsistencies.

AI systems are very good (and very fast) at performing tasks such as document review, translation, summarization, analysis, explanation, or making connections using facts, datasets, and publicly available legal sources. Lawyers should not ask AI systems to make choices or exercise discretionary judgment. Lawyers should ask the AI to perform tasks that the AI is good at and leave the talents and skills that are uniquely human to human lawyers.

The conclusion is that lawyers should embrace AI systems as a powerful tool that can enhance their efficiency and quality of work, but they should also remember the importance of human oversight and judgment in the use of AI for law.





There will be a scramble to protect any useful collection of data. How will you identify the AI that copies your data?

https://searchengineland.com/new-york-times-content-train-ai-systems-430556

New York Times: Don’t use our content to train AI systems

Although Google wants all online content available for AI training, the New York Times clearly wants to opt out.

The Times has changed its terms of service, aiming to prevent AI companies from using the media organization’s content to train their systems.
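
The most common machine-readable opt-out signal in practice is a site’s robots.txt file, which names the crawlers it refuses. A minimal sketch, assuming Python’s standard library and OpenAI’s published “GPTBot” crawler name (CCBot is Common Crawl’s), that checks which crawlers a site disallows:

    # Minimal sketch: check a site's robots.txt for AI-crawler opt-outs
    # using only the Python standard library.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.nytimes.com/robots.txt")
    rp.read()  # fetch and parse the live file

    for bot in ("GPTBot", "CCBot", "Googlebot"):
        verdict = "allowed" if rp.can_fetch(bot, "https://www.nytimes.com/") else "disallowed"
        print(f"{bot}: {verdict}")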



Thursday, August 10, 2023

Paranoia, it’s not just for lawyers.

https://www.searchenginejournal.com/chatgpt-law-firm-website-content/490442/#close

5 Reasons You Shouldn’t Use ChatGPT for Legal Website Content

Is ChatGPT capable of providing accurate and reliable legal content? Can ChatGPT be “trusted” to write good website content for law firms?

In this guide, we’ll cover some of the risks of using ChatGPT for law firm website content writing – and what to use instead.





If you want to understand the process…

https://www.makeuseof.com/python-plagiarism-detector-how-to-build/

How to Build a Plagiarism Detector Using Python

… Building a plagiarism tool can help you understand sequence matching, file operations, and user interfaces. You’ll also explore natural language processing (NLP) techniques to enhance your application.
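
The heart of such a tool is surprisingly small. Here is a minimal sketch of the sequence-matching step using Python’s standard difflib; the file names and the 0.6 threshold are illustrative assumptions, not details from the article.

    # Minimal sketch: compare two documents with difflib's SequenceMatcher.
    from difflib import SequenceMatcher
    from pathlib import Path

    def similarity(text_a: str, text_b: str) -> float:
        """Return a similarity ratio between 0 and 1."""
        return SequenceMatcher(None, text_a, text_b).ratio()

    doc_a = Path("submission.txt").read_text(encoding="utf-8")
    doc_b = Path("source.txt").read_text(encoding="utf-8")

    score = similarity(doc_a, doc_b)
    print(f"Similarity: {score:.0%}")
    if score > 0.6:  # threshold chosen for illustration only
        print("Possible plagiarism - review manually.")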





Resources.

https://www.bespacific.com/google-free-courses-iot-machine-learning-data-science-computer-science/

Google Free courses: IoT; Machine Learning; Data Science; Computer Science

1. Introduction to Generative AI https://lnkd.in/eXp8h7dY
2. Introduction to Large Language Models https://lnkd.in/eK7qQcJY
3. Introduction to Responsible AI https://lnkd.in/e-aX58Ky
4. Introduction to Image Generation https://lnkd.in/ewM_aXam
5. Encoder-Decoder Architecture https://lnkd.in/e3ZGHSfz
6. Attention Mechanism https://lnkd.in/ea9crZiR
7. Transformer Models and BERT Model https://lnkd.in/eM5Dhi4N
8. Create Image Captioning Models https://lnkd.in/eeBEJx_n
9. Introduction to Generative AI Studio https://lnkd.in/eGn8NUCD
10. Machine Learning Crash Course https://lnkd.in/ecMHVTZ5
11. Learn Python basics for data analysis https://lnkd.in/eXsd_dcE
12. Data Science Foundations https://lnkd.in/eD5eUD8m
13. Fundamentals of digital marketing https://lnkd.in/euz7-S5Q
14. Learn programming with JavaScript https://lnkd.in/ewrusSPA
15. Build apps with Flutter https://lnkd.in/eEVKkW6V
16. Google Cloud Computing Foundations: Cloud Computing Fundamentals https://lnkd.in/ejYM5ZDP
17. Baseline: Data, ML, AI https://lnkd.in/e_5XzZBN
18. Google Cloud Essentials https://lnkd.in/edY9QifH
19. Google IT Automation with Python https://lnkd.in/eWH3859f
20. Google Professional Workspace Administrator https://lnkd.in/eKGsDjiN
21. Master the Google tools you use at work with online training https://lnkd.in/eXEsErSb
22. What are you trying to do with AI today? https://lnkd.in/eDSxZyzc



Wednesday, August 09, 2023

Sounds like whining to me, but they might have a point.

https://www.cpomagazine.com/data-protection/why-we-need-a-completely-new-type-of-laws-and-regulations-in-the-era-of-ai/

Why We Need a Completely New Type of Laws and Regulations in the Era of AI

We live in a time of rapid technological development. This influences business processes, and how companies operate, on a massive scale. Arguably, with AI we face the greatest change in hundreds of years. At the same time, our reality, and much of what companies do, is highly regulated. On top of the many existing laws, new laws and regulations are constantly being drafted.



(Related)

https://www.pogowasright.org/a-new-open-letter-to-law-school-deans-about-privacy-law-scholars-and-curriculum/

A New Open Letter to Law School Deans about Privacy Law Scholars and Curriculum

Privacy law scholar Daniel Solove writes:

Before the pandemic, which seems like eons ago, I spearheaded a group of legal academics and practitioners in the field of privacy law who sent a letter to the deans of all U.S. law schools about privacy law education. The pandemic occurred not too long after our letter, and deans had many other things to worry about during that time.
The time is right to send a follow-up letter about why law schools should increase and improve their privacy law faculty and curriculum. So, I am emailing the letter below to all U.S. law school deans.
You can see a PDF of the letter here.

Or read the letter and its distinguished list of privacy law scholars and privacy law practitioners on TeachPrivacy.





A tool for the grammatically handicapped, like me.

https://www.makeuseof.com/google-check-grammar/

How to Have Google Check Your Grammar

As of August 2023, Google Search includes an AI-powered grammar checker, commonly dubbed Grammar Check, in the search bar.



Tuesday, August 08, 2023

If I sold you a bogus ChatGPT clone, who would you complain to?

https://www.wired.com/story/chatgpt-scams-fraudgpt-wormgpt-crime/

Criminals Have Created Their Own ChatGPT Clones

It didn’t take long. Just months after OpenAI’s ChatGPT chatbot upended the startup economy, cybercriminals and hackers are claiming to have created their own versions of the text-generating technology. The systems could, theoretically at least, supercharge criminals’ ability to write malware or phishing emails that trick people into handing over their login information.

There are outstanding questions about the authenticity of the chatbots. Cybercriminals are not exactly trustworthy characters, and there remains the possibility that they’re trying to make a quick buck by scamming each other. Despite this, the developments come at a time when scammers are exploiting the hype of generative AI for their own advantage.

The shady LLMs claim to strip away any kind of safety protections or ethical barriers. WormGPT was first spotted by independent cybersecurity researcher Daniel Kelley, who worked with security firm SlashNext to detail the findings. WormGPT’s developers claim the tool offers an unlimited character count and code formatting. “The AI models are notably useful for phishing, particularly as they lower the entry barriers for many novice cybercriminals,” Kelley says in an email. “Many people argue that most cybercriminals can compose an email in English, but this isn’t necessarily true for many scammers.”





Not sure the answer is here, but there are plenty of arguments…

https://www.scientificamerican.com/article/we-need-smart-intellectual-property-laws-for-artificial-intelligence/

We Need Smart Intellectual Property Laws for Artificial Intelligence

A pressing question worldwide is whether the data used to train AI systems requires consent from authors or performers, who are also seeking attribution and compensation for the use of their works.

Several governments have created special text and data mining exceptions to copyright law to make it easier to collect and use information for training AI. These allow some systems to train on online texts, images and other work that is owned by other people. These exceptions have been met with opposition recently, particularly from copyright owners and critics with more general objections who want to slow down or degrade the services.

Beyond consent, the other two c’s, credit and compensation, have their own challenges, as illustrated even now with the high cost of litigation regarding infringements of copyright or patents. But one can also imagine datasets and uses in the arts or biomedical research where a well-managed AI program could be helpful to implement benefit sharing, such as the proposed open-source dividend for seeding successful biomedical products.





How much does it take to get your attention when you are making gazillions of dollars?

https://thenextweb.com/news/norway-fines-meta-privacy-violations-behavioural-advertising-ad-targeting-facebook

Norway fines Meta 1 MILLION crowns per day over data harvesting for behavioural ads

Meta’s litany of European privacy sanctions in 2023 just got a little longer. After a €390mn fine for illegal personalised ads, another €5.5mn hit for similar violations in WhatsApp, and a GDPR record €1.2bn for unsafe data transfers, this week yet another punishment arrived — and the sentence did not disappoint.

Norwegian regulators have demanded a gloriously round figure that would make Dr Evil proud: 1 MILLION crowns (€89,000) per day. The penalties are due to begin on August 14, but Meta wants a temporary injunction against the order, Reuters reports.





Should teachers panic?

https://www.bespacific.com/practical-ai-for-teachers-and-students/

Practical AI for Teachers and Students

Wharton School – 5 Part Course on YouTube for Students and Instructors/Teachers – Description of the Introduction: “In this introduction, Wharton Interactive’s Faculty Director Ethan Mollick and Director of Pedagogy Lilach Mollick provide an overview of how large language models (LLMs) work and explain how this latest generation of models has impacted how we work and how we learn. They also discuss the different types of large language models referenced in their five-part crash course: OpenAI’s ChatGPT4, Microsoft’s Bing in Creative Mode, and Google’s Bard. This video is Part 1 of a five-part course in which Wharton Interactive provides an overview of AI large language models for educators and students. They take a practical approach and explore how the models work, and how to work effectively with each model, weaving in your own expertise. They also show how to use AI to make teaching easier and more effective, with example prompts and guidelines, as well as how students can use AI to improve their learning. Links to sources and prompts.”



Monday, August 07, 2023

Insight into possible Federal law? (Probably not.)

https://www.bespacific.com/the-state-of-state-ai-laws/

The State of State AI Laws

Tech Policy Press: “Lots of voices are calling for the regulation of artificial intelligence. In the US, at present it seems there is no federal legislation close to becoming law. But in 2023 legislative sessions in states across the country, there has been a surge in AI laws proposed and passed, and some have already taken effect. To learn more about this wave of legislation, I spoke to two people who just posted a comprehensive review of AI laws in US states: Katrina Zhu, a law clerk at the Electronic Privacy Information Center (EPIC) and a law student at the UCLA School of Law, and EPIC senior counsel Ben Winters. What follows is a lightly edited transcript of the discussion …”





AI is cheap. Good AI is not cheap.

https://a16z.com/2023/08/03/the-economic-case-for-generative-ai-and-foundation-models/

The Economic Case for Generative AI and Foundation Models

Artificial intelligence has been a staple in computer science since the 1950s. Over the years, it has also made a lot of money for the businesses able to deploy it effectively. However, as we explained in a recent op-ed piece for the Wall Street Journal—which is a good starting point for the more detailed argument we make here—most of those gains have gone to large incumbent vendors (like Google or Meta) rather than to startups. Until very recently—with the advent of generative AI and all that it encompasses—we’ve not seen AI-first companies that seriously threaten the profits of their larger, established peers via direct competition or entirely new behaviors that make old ones obsolete.

With generative AI applications and foundation models (or frontier models), however, things look very different. Incredible performance and adoption, combined with a blistering pace of innovation, suggest we could be in the early days of a cycle that will transform our lives and economy at levels not seen since the microchip and the internet.

This post explores the economics of traditional AI and why it’s typically been difficult to reach escape velocity for startups using AI as a core differentiator (something we’ve written about in the past). It then covers why generative AI applications and large foundation-model companies look very different, and what that may mean for our industry.





Tools for Teachers?

https://www.euronews.com/next/2023/08/07/best-ai-tools-academic-research-chatgpt-consensus-chatpdf-elicit-research-rabbit-scite

The best AI tools to power your academic research

"There are two camps in academia. The first is the early adopters of artificial intelligence, and the second is the professors and academics who think AI corrupts academic integrity," Bilal told Euronews Next.





Tools to replace lawyers?

https://www.bespacific.com/best-of-7-best-ai-legal-assistants/

Best Of 7 “Best” AI Legal Assistants

Unite AI: “In the fast-paced world of legal practice, keeping up with the demands of case management, research, and client communication can be challenging. Artificial intelligence has stepped in to alleviate some of these challenges by providing AI-powered legal assistant tools. These tools are designed to streamline processes, improve efficiency, and assist law professionals in various tasks. In this blog post, we will explore the best AI legal assistant tools, discussing their features, benefits, and what makes them unique.”



(Related)

https://www.bespacific.com/law-schools-split-on-chatgpt-in-admissions-essays/

Law Schools Split on ChatGPT in Admissions Essays

Inside Higher Education: “As ChatGPT becomes commonplace among legal professionals, law schools are divided on whether to allow students to use the artificial intelligence tool in the admissions process. A week after the University of Michigan Law School announced the AI tool would be banned in law school applications, Arizona State University Law School took the opposite approach. ASU announced on July 27 that future applicants will be allowed to use ChatGPT in their applications, specifically for their personal statements, which are akin to the essays required in undergraduate applications… The growing adoption of ChatGPT among lawyers, who use it for researching and writing in legal briefs and filings, has created a sense of urgency for law schools. Multiple law professors said it would be “malpractice” to not teach students how to use AI chat bots like ChatGPT…



Sunday, August 06, 2023

Might be interesting to follow…

https://www.commondreams.org/news/chinook-center-aclu-lawsuit

ACLU Sues Colorado Springs, FBI Over 'Unconstitutional' Spying on Activists' Devices

The ACLU of Colorado on Tuesday filed a federal lawsuit against the city of Colorado Springs, four members of the Colorado Springs Police Department, and the Federal Bureau of Investigation, accusing them of illegally spying on the private communications of a local activist arrested on minor—and critics say dubious—charges during a 2021 housing rights protest.

The suit continues:

Ultimately, a CSPD commander ordered arrests of prominent Chinook Center members for marching in the street, even after the protestors complied with police requests to move onto the sidewalk.
Colorado Springs police then obtained a search warrant—one of several that are the subject of this lawsuit—to search the Chinook Center's private chats on Facebook Messenger. The warrant did not even purport to be supported by probable cause. It was not limited to a search for any particular evidence, let alone evidence of a particular crime, and it was unlimited as to topics.





Gotta love these “How To” manuals…

https://www.researchgate.net/profile/Abu-Rayhan-11/publication/372775589_THE_DARK_SIDE_OF_INTELLIGENCE_HOW_TO_MANIPULATE_AND_CONTROL_WITH_CHATBOTS/links/64c7c2a13d1a321c1b4cf3b3/THE-DARK-SIDE-OF-INTELLIGENCE-HOW-TO-MANIPULATE-AND-CONTROL-WITH-CHATBOTS.pdf

THE DARK SIDE OF INTELLIGENCE: HOW TO MANIPULATE AND CONTROL WITH CHATBOTS

Chatbots are becoming increasingly sophisticated, and with this sophistication comes the potential for misuse. In this paper, we explore the dark side of chatbot intelligence, examining the ways in which chatbots can be used to manipulate and control users. We begin by discussing the psychology of intelligence, the ethics of artificial intelligence, and the implications of dark intelligence. We then explore the power of chatbots to manipulate and control users, examining the psychology of persuasion, the manipulative power of chatbots, and the use of chatbots in advertising and marketing. We also examine the role of chatbots in political campaigns, highlighting the ways in which chatbots can be used to sway public opinion and influence election outcomes. Finally, we discuss the art of deception and the use of chatbots for fraud and scams. This paper provides a comprehensive overview of the dark side of chatbot intelligence. It discusses the potential for chatbots to be used for malicious purposes, and it provides insights into how these dangers can be mitigated. The paper is intended for researchers, developers, and policymakers who are interested in the ethical and legal implications of chatbot technology.





This makes it even more difficult to determine if the training data is likely to produce a “true” outcome.

https://mindmatters.ai/2023/08/the-secret-ingredient-for-ai-ergodicity/

THE SECRET INGREDIENT FOR AI: ERGODICITY

Before applying AI in deep convolutional neural networks, practitioners need to address whether the problem under consideration is “ergodic.”

We are rightly amazed when deep learning wins at Atari arcade games using only display pixels. But in doing so, the AI is exposed to the same game again and again (and again). The scenarios change, but the game and its rules remain static. The same is true with chess or Go. When trained and tested against a human opponent, we know that the AI will be playing the same game.

Statisticians know that ergodicity comes in many flavors. Here ergodicity simply means that the data used to train AI must characterize similar data not yet seen. [i.e. a representative sample. Bob]
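
To make that concrete, here is a rough sketch (mine, not from the article) of the kind of sanity check this implies: test whether the data a deployed model now sees still looks like the data it was trained on, using a two-sample Kolmogorov-Smirnov test on a single feature. Real drift detection is more elaborate, but the idea is the same: if the distributions diverge, the AI is no longer playing the same game.

    # Minimal sketch: detect distribution shift between training data
    # and live data with a two-sample Kolmogorov-Smirnov test.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # what the AI trained on
    live_feature = rng.normal(loc=0.5, scale=1.0, size=5000)   # what it sees now

    stat, p_value = ks_2samp(train_feature, live_feature)
    print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")
    if p_value < 0.01:
        print("Shift detected: the 'rules of the game' have changed.")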





Does it work?

https://dataspace.princeton.edu/handle/88435/dsp01dn39x481s

Faces at Face Value: An Analysis of Face Recognition Technology Policy and Performance

Facial recognition is one of the best-developed and most widely used applications of machine learning, and one of the most prominent examples of artificial intelligence, in 2023. What was once a distant idea of futurism is now used in nearly every smartphone for identity verification. Alongside the convenient uses of the technology stand its more Orwellian counterparts - most notably, the use of face recognition for public surveillance. The development of the technology and the ubiquity of high-quality video recording devices like traffic cameras, surveillance cameras, and police body cameras enables the permeation of this technology throughout all spheres of life. Such technology invites the fear of constant surveillance and the decline of individual privacy, particularly in public areas. While the field has been subject to significant research and policy interest, it remains insufficiently regulated and misunderstood from a technological perspective. This study aims to comprehensively gauge the current policies surrounding the use of face recognition - specifically regarding its implementation on police body cameras and public footage, as well as the use of personal photos in face recognition databases. Additionally, this study aims to quantify its accuracy in the face of adversarial factors - specifically, age-invariant cross-demographic identity tracking, i.e. identity matching with photos from different ages across different ethnic groups.
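
For the technically curious, the identity-matching step the thesis evaluates reduces to comparing face embeddings. A minimal sketch, assuming the open-source face_recognition Python library and two hypothetical photo files of the same person at different ages; the 0.6 cutoff is that library’s default tolerance.

    # Minimal sketch: match identities by distance between 128-d face embeddings.
    import face_recognition

    known_image = face_recognition.load_image_file("person_age20.jpg")
    query_image = face_recognition.load_image_file("person_age40.jpg")

    # Assumes exactly one face per photo; [0] takes the first encoding found.
    known_enc = face_recognition.face_encodings(known_image)[0]
    query_enc = face_recognition.face_encodings(query_image)[0]

    distance = face_recognition.face_distance([known_enc], query_enc)[0]
    print(f"Embedding distance: {distance:.3f}")
    print("Same person" if distance < 0.6 else "Different person")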