Saturday, July 15, 2023

Imagine a ‘sting’ operation that identifies potential scammers and pays for itself…

https://www.pcmag.com/news/wormgpt-is-a-chatgpt-alternative-with-no-ethical-boundaries-or-limitations

WormGPT Is a ChatGPT Alternative With 'No Ethical Boundaries or Limitations'

A hacker has created his own version of ChatGPT, but with a malicious bent: Meet WormGPT, a chatbot designed to assist cybercriminals.

WormGPT’s developer is selling access to the program in a popular hacking forum, according to email security provider SlashNext, which tried the chatbot. “We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes,” the company said in a blog post.





The best indication that the AI invasion is inevitable? The tool works!

https://www.technologyreview.com/2023/07/13/1076199/chatgpt-can-turn-bad-writers-into-better-ones/

ChatGPT can turn bad writers into better ones

People who use ChatGPT to help with writing tasks are more productive and produce higher-quality work than those who don’t, a study found.

People have been using ChatGPT to help them to do their jobs since it was released in November of last year, with enthusiastic adopters using it to help them write everything from marketing materials to emails to reports.

Now we have the first indication of its effect in the workplace. A new study by two MIT economics graduate students, published today in Science, suggests it could help reduce gaps in writing ability between employees. They found that it could enable less experienced workers who lack writing skills to produce work similar in quality to that of more skilled colleagues.

The writers who chose to use ChatGPT took 40% less time to complete their tasks, and produced work that the assessors scored 18% higher in quality than that of the participants who didn’t use it.





Nonsense. Not being able to predict an answer is not the same as not understanding the process.

https://www.vox.com/unexplainable/2023/7/15/23793840/chat-gpt-ai-science-mystery-unexplainable-podcast

Even the scientists who build AI can’t tell you how it works

… “If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second,” says AI scientist Sam Bowman. “And we just have no idea what any of it means.”





What good is a “Turing test” that most humans can’t pass?

https://www.technologyreview.com/2023/07/14/1076296/mustafa-suleyman-my-new-turing-test-would-see-if-ai-can-make-1-million/

Mustafa Suleyman: My new Turing test would see if AI can make $1 million





I’d be curious to see how other fields stack up.

https://www.edweek.org/technology/what-educators-know-about-artificial-intelligence-in-3-charts/2023/07

What Educators Know About Artificial Intelligence, in 3 Charts

The survey results paint a picture of a profession that is keenly aware of how artificial intelligence is swiftly changing what students need to learn and how educators will do their jobs, but one that may not be fully prepared to meet these new demands.



Friday, July 14, 2023

I’m not going to suggest that this hack might have other applications. (Where numbers of people have value.)

https://www.schneier.com/blog/archives/2023/07/buying-campaign-contributions-as-a-hack.html

Buying Campaign Contributions as a Hack

The first Republican primary debate has a popularity threshold to determine who gets to appear: 40,000 individual contributors. Now there are a lot of conventional ways a candidate can get that many contributors. Doug Burgum came up with a novel idea: buy them:

A long-shot contender at the bottom of recent polls, Mr. Burgum is offering $20 gift cards to the first 50,000 people who donate at least $1 to his campaign. And one lucky donor, as his campaign advertised on Facebook, will have the chance to win a Yeti Tundra 45 cooler that typically costs more than $300—just for donating at least $1.

It’s actually a pretty good idea. He could have spent the money on direct mail, or personalized social media ads, or television ads. Instead, he buys gift cards at maybe two-thirds of face value (sellers calculate the advertising value, the additional revenue that comes from using them to buy something more expensive, and breakage when they’re not redeemed at all), and resells them. Plus, many contributors probably give him more than $1, and he got a lot of publicity over this.

Probably the cheapest way to get the contributors he needs. A clever hack.
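
The arithmetic is easy to check. A minimal back-of-the-envelope sketch in Python: the two-thirds discount and the $1 minimum come from the post, while the assumption that all 50,000 donors give exactly $1 is mine.

cards = 50_000        # contributors needed to clear the debate threshold
face_value = 20.00    # advertised gift card
discount = 2 / 3      # assumed wholesale price as a fraction of face value
min_donation = 1.00   # minimum qualifying donation

outlay = cards * face_value * discount   # what the campaign pays for the cards
recouped = cards * min_donation          # minimum flowing back as donations
net = outlay - recouped
print(f"Net cost: ${net:,.0f}, about ${net / cards:.2f} per contributor")

At roughly $12 per guaranteed contributor, it plausibly beats direct mail or social ads, which charge per impression with no guarantee anyone donates.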





A long post that explains what Google thinks it can get away with. Worth reading.

https://www.pogowasright.org/can-google-really-just-use-all-your-posts-and-tweets-to-train-ai-models-seems-like-they-can/

Can Google really just use all your posts and tweets to train AI models? Seems like they can.

Seen recently on my favorite newsletter, Risky Biz News:

Google changes privacy policy: Google has changed its privacy policy to let its users know that any publicly-available information may be scanned and used to train its AI models. It’s funny that Google’s legal team thinks its privacy policy is stronger than copyright law. Hilarious!

That gave me pause, because I’m not sure about this at all. Could using publicly available info to train AI be considered “fair use?”



(Related)

https://www.bespacific.com/crawlers-search-engines-and-the-sleaze-of-generative-ai-companies/

Crawlers, search engines and the sleaze of generative AI companies

Search Engine Land: “…LLMs are not search engines. It should now be very clear that an LLM is a different beast from a search engine. A language model’s response does not directly point back to the website(s) whose content was used to train the model. There is no economic exchange like we see with search engines, and this is why many publishers (and authors) are upset. The lack of direct source citations is the fundamental difference between a search engine and an LLM, and it is the answer to the very common question of “why should Google and Bing be allowed to scrape content but not OpenAI?” (I’m using a more polite phrasing of this question.) Google and Bing are trying to show source links in their generative AI responses, but these sources, if shown at all, are not the complete set. This opens up a related question: Why should a website allow its content to be used to train a language model if it doesn’t get anything in return? That’s a very good question – and probably the most important one we should answer as a society. LLMs do have benefits despite the major shortcomings with the current generation of LLMs (such as hallucinations, lying to the human operators, and biases, to name a few), and these benefits will only increase over time while the shortcomings get worked out. But for this discussion, the important point is to realize that a fundamental pillar of how the open web functions right now is not suited for LLMs…”
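
The “pillar” here is presumably the crawler-and-robots.txt ecosystem named in the headline, which was built around the search-engine exchange. A minimal sketch of how a well-behaved crawler consults it, using only Python’s standard library (the site URL is illustrative; CCBot is Common Crawl’s crawler, whose corpus is widely used for model training):

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's crawler rules

for agent in ("Googlebot", "CCBot"):
    print(agent, "may crawl:", rp.can_fetch(agent, "https://example.com/article"))

A site can disallow CCBot while welcoming Googlebot, but nothing in the protocol forces an AI company’s crawler to identify itself or to obey.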



(Related)

https://www.axios.com/2023/07/13/ap-openai-news-sharing-tech-deal

Exclusive: AP strikes news-sharing and tech deal with OpenAI

The Associated Press on Thursday said it reached a two-year deal with OpenAI, the maker of ChatGPT, to share access to select news content and technology.

Why it matters: The deal marks one of the first official news-sharing agreements made between a major U.S. news company and an artificial intelligence firm.

The AP will get access to OpenAI’s technology and product expertise.



Thursday, July 13, 2023

If the FTC has questions, will ChatGPT provide the answers?

https://www.bespacific.com/ftc-is-investigating-whether-chatgpt-harms-consumers/

FTC is investigating whether ChatGPT harms consumers

Washington Post [read free]: “The Federal Trade Commission has opened an expansive investigation into OpenAI, probing whether the maker of the popular ChatGPT bot has run afoul of consumer protection laws by putting personal reputations and data at risk. The agency this week sent the San Francisco company a 20-page demand for records about how it addresses risks related to its AI models, according to a document reviewed by The Washington Post. The salvo represents the most potent regulatory threat to date to OpenAI’s business in the United States, as the company goes on a global charm offensive to shape the future of artificial intelligence policy. Analysts have called OpenAI’s ChatGPT the fastest-growing consumer app in history, and its early success set off an arms race among Silicon Valley companies to roll out competing chatbots. The company’s chief executive, Sam Altman, has emerged as an influential figure in the debate over AI regulation, testifying on Capitol Hill, dining with lawmakers and meeting with President Biden and Vice President Harris…”





Could be an exceptionally useful tool.

https://www.makeuseof.com/use-chatgpt-wolfram-plugin/

3 Ways to Use ChatGPT's Wolfram Plugin

1. Fact-Checking Information

With the plugin, you can run any claims in your ChatGPT-generated content against the more carefully curated Wolfram database to ensure accuracy. How can you do this?

Simply paste the text of the content you want to fact-check or provide a link to it and ask ChatGPT to invoke the Wolfram plugin to fact-check it.

2. Solve Complex STEM Problems

One of Wolfram's biggest selling points is its computational ability. The plugin can typically handle many of the math problems that ChatGPT struggles to work through on its own (see the sketch after this list).

3. Data Analysis

ChatGPT, especially when backed by the GPT-4 model, has some quite impressive data-analysis abilities. However, the Wolfram plugin can significantly improve those abilities.

Wolfram's ability to generate graphs of many kinds means you can analyze and summarize data and represent it in dozens of possible infographic formats, without the limits of ChatGPT itself.
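
The plugin itself runs only inside ChatGPT, but the computational engine behind points 2 and 3 can be queried directly. A rough sketch against Wolfram|Alpha’s public Short Answers API (the app ID is a placeholder you would obtain from developer.wolframalpha.com; the query is arbitrary):

import urllib.parse, urllib.request

APP_ID = "YOUR_APP_ID"  # placeholder; register for a free app ID
query = "integrate x^2 * sin(x) dx"
url = ("https://api.wolframalpha.com/v1/result?"
       + urllib.parse.urlencode({"appid": APP_ID, "i": query}))

with urllib.request.urlopen(url) as resp:
    print(resp.read().decode())  # the computed answer as plain text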





AI is coming so you should probably get ready. Any insights for the non-lawyers?

https://www.bespacific.com/an-ai-primer-for-legal-professionals/

An AI Primer for Legal Professionals

“How will new AI technologies shape the future of law? And how can legal professionals ensure that they adhere to legal ethics while they benefit from new technologies? In Part I of our new series, Filevine’s legal futurists Dr. Cain Elliott and Dr. Megan Ma, along with Senior Director of Product Alex McLaughlin, help lawyers answer these questions and prepare for the future of their practice. Don’t want to wait for Part IV? Watch the full AI, Ethics, and Legal: A Deep Dive Into the Future of Legal Tech webinar on YouTube.”





Not sure how useful this is, but it does show a lot of them…

https://www.bespacific.com/search-ai-models/

Search AI Models

AllModels.fyi: “Search [over 30,000 entries], filter, and sort AI models. Find the right one for your AI project. Subscribe for a monthly update of new models.”



Wednesday, July 12, 2023

I call this “real data”: data that has not been contaminated by AI-generated nonsense. (It has been contaminated by disinformation and other random noise.) Does this suggest that what exists today is as good as something like ChatGPT will ever get?

https://www.businessinsider.com/ai-could-run-out-text-train-chatbots-chatgpt-llm-2023-7

Generative AI tools are quickly 'running out of text' to train themselves on, UC Berkeley professor warns

ChatGPT and other AI-powered bots may soon be "running out of text in the universe" that trains them to know what to say, an artificial intelligence expert and professor at the University of California, Berkeley says.

Stuart Russell said that the technology that hoovers up mountains of text to train artificial intelligence bots like ChatGPT is "starting to hit a brick wall." In other words, there's only so much digital text for these bots to ingest, he told an interviewer last week from the International Telecommunication Union, a UN communications agency.

A study conducted last November by Epoch, a group of AI researchers, estimated that machine learning datasets will likely deplete all "high-quality language data" before 2026. Language data in "high-quality" sets comes from sources such as "books, news articles, scientific papers, Wikipedia, and filtered web content," according to the study.
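
The depletion estimate is, at bottom, exponential dataset growth meeting a roughly fixed stock of text. A sketch of the arithmetic with placeholder numbers (mine, chosen for illustration; they are not Epoch’s figures):

import math

stock = 9e12      # assumed stock of high-quality tokens
dataset = 1e12    # assumed size of today's largest training sets
growth = 1.5      # assumed ~50% growth in training data per year

years = math.log(stock / dataset) / math.log(growth)
print(f"Stock exhausted in roughly {years:.1f} years")  # ~5.4 with these inputs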



(Related)

https://www.bespacific.com/a-categorical-archive-of-chatgpt-failures/

A Categorical Archive of ChatGPT Failures

Ali Borji, Quintic AI, April 5, 2023: “Large language models have been demonstrated to be valuable in different fields. ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation by comprehending context and generating appropriate responses. It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries, with fluent and comprehensive answers surpassing prior public chatbots in both security and usefulness. However, a comprehensive analysis of ChatGPT’s failures is lacking, which is the focus of this study. Eleven categories of failures, including reasoning, factual errors, math, coding, and bias, are presented and discussed. The risks, limitations, and societal implications of ChatGPT are also highlighted. The goal of this study is to assist researchers and developers in enhancing future language models and chatbots. Please refer to here for the list of questions.”





I wonder if this is the only app the government deploys?

https://www.pogowasright.org/watch-government-spyware-on-your-phone-unfortunately-theres-an-app-for-that/

Watch: Government Spyware on Your Phone? Unfortunately, There’s an App for That

Washington, DC (July 9, 2023) – The New Civil Liberties Alliance is challenging the Massachusetts Department of Public Health (DPH) in federal court for coordinating with Google to automatically install spyware on the smartphones of more than one million Commonwealth residents, without their knowledge or consent, in a misguided effort to combat Covid-19. A newly-released video details how DPH’s actions have violated fundamental constitutional rights.

Thousands of people do not know DPH’s Covid-19 tracking app is on their phone, as it does not appear on their home screens like other apps. NCLA client Robert Wright, who commutes to Massachusetts for work, was appalled to learn that the government put an app on his phone without his knowledge, especially one that could constantly track his movements. NCLA’s lawsuit argues the DPH app’s automatic installation infringes on the Fourth Amendment right to privacy because it interferes with phone owners’ private property and collects information about them. By taking up storage space on phones against their owners’ will, such unwanted installations also constitute uncompensated taking of property in violation of the Fifth Amendment.





Humorous or truly scary?

https://thenextweb.com/news/uk-politician-andrew-gray-wants-be-first-ai-powered-member-of-parliament-polis

Budding politician ‘has no policies,’ will use AI to legislate

“If elected, I will vote in Parliamant [sic] in accordance with the consensus. Simple,” Gray wrote on LinkedIn.

The tool he’s using, Polis, collects and analyses public opinion in real time.
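
Polis’s published approach groups participants by their votes on short statements. A toy sketch of that idea, my simplification rather than the project’s actual pipeline, clustering vote vectors with scikit-learn:

import numpy as np
from sklearn.cluster import KMeans

# rows = participants; columns = statements; +1 agree, -1 disagree, 0 pass
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
    [ 1,  1,  0, -1],
])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)
print("opinion groups:", groups)

# statements most participants vote the same way on approximate "consensus"
print("consensus strength:", np.abs(votes.mean(axis=0)).round(2))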





I’m surprised it took so long…

https://cointelegraph.com/news/google-hit-with-lawsuit-over-ai-privacy-policy

Google hit with lawsuit over new AI data scraping privacy policy

A week after Google updated its privacy policy to allow data scraping for AI training purposes, the company faces a class-action lawsuit.





Tools & Techniques.

https://www.makeuseof.com/build-custom-chatgpt-with-your-own-data/

How to Build a Custom ChatGPT With Your Own Data

Looking to provide ChatGPT with your custom data? Here's a step-by-step on how to do just that!
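
The usual recipe behind guides like this is retrieval augmentation: embed your documents, find the chunk closest to the question, and prepend it to the prompt. A condensed sketch using the 2023-era (pre-1.0) openai package; the documents, model choices, and single-chunk retrieval are illustrative simplifications, not the article’s exact steps:

import numpy as np
import openai

openai.api_key = "sk-..."  # your API key
docs = ["Our refund window is 30 days.", "Support hours are 9-5 ET."]

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

doc_vecs = embed(docs)

def answer(question):
    q = embed([question])[0]
    # cosine similarity picks the most relevant chunk
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = docs[int(np.argmax(sims))]
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Answer using only this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat["choices"][0]["message"]["content"]

print(answer("How long do I have to return a product?"))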





Tools & Techniques.

https://www.cnbc.com/2023/07/11/anthropic-an-openai-rival-opens-claude-2-ai-chatbot-to-the-public.html

Anthropic — the $4.1 billion OpenAI rival — debuts new A.I. chatbot and opens it to public

There’s a new entrant in the budding AI arms race.

As Microsoft-backed OpenAI and Google race to develop the most advanced chatbots, powered by generative artificial intelligence, Anthropic is investing heavily to keep up. Just a few months after raising $750 million over two financing rounds, the startup is debuting a new AI chatbot: Claude 2.

Founded in 2021 by former OpenAI research executives and funded by companies including Google, Salesforce and Zoom, Anthropic is opening up its chatbot technology to consumers for the first time with Claude 2. For the past two months, the company’s AI models have been tested by businesses such as Slack, Notion and Quora, and Anthropic has accumulated a waitlist of more than 350,000 people requesting access to Claude’s application programming interface and its consumer offering.



Tuesday, July 11, 2023

Good news for students?

https://www.bespacific.com/ai-text-detection-tools-are-really-easy-to-fool/

AI-Text Detection Tools are Really Easy to Fool

MIT Technology Review [free link]: “Debora Weber-Wulff, a professor of media and computing at the University of Applied Sciences, HTW Berlin, worked with a group of researchers from a variety of universities to assess the ability of 14 tools, including Turnitin, GPT Zero, and Compilatio, to detect text written by OpenAI’s ChatGPT. Most of these tools work by looking for hallmarks of AI-generated text, including repetition, and then calculating the likelihood that the text was generated by AI. But the team found that all those tested struggled to pick up ChatGPT-generated text that had been slightly rearranged by humans and obfuscated by a paraphrasing tool, suggesting that all students need to do is slightly adapt the essays the AI generates to get past the detectors.

“These tools don’t work,” says Weber-Wulff. “They don’t do what they say they do. They’re not detectors of AI.” The researchers assessed the tools by writing short undergraduate-level essays on a variety of subjects, including civil engineering, computer science, economics, history, linguistics, and literature. They wrote the essays themselves to be certain the text wasn’t already online, which would have meant it might already have been used to train ChatGPT…”
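
The repetition hallmark the researchers mention is a surface statistic, which is exactly why light rewriting defeats it. A toy illustration (real detectors are considerably more sophisticated than this):

from collections import Counter

def repetition_score(text):
    """Fraction of word occurrences that repeat an earlier word."""
    words = text.lower().split()
    return sum(c - 1 for c in Counter(words).values()) / max(len(words), 1)

original = "the model is useful and the model is fast and the model is cheap"
paraphrased = "this system proves handy, quick, and inexpensive"

print(repetition_score(original))     # high: heavily repetitive phrasing
print(repetition_score(paraphrased))  # low: same idea, new surface form

A paraphrasing pass shifts these surface statistics while leaving the meaning intact, consistent with the finding that lightly adapted ChatGPT output slips past the detectors.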



(Related) For writing right?

https://www.bespacific.com/journaliststoolbox-ai/

JournalistsToolbox.ai

Created by Mike Reilley, Founder and Editor of Journalist’s Toolbox: “A few years ago, I was doing a digital tools training for a group of journalists in Phoenix. One of the attendees took me to task for saying that a surge of AI tools would be coming in the next few years. “Google uses AI all the time,” he said. “This is nothing new.” He was partly right: Google and other companies have used AI components for many years. But he was clueless about the gold rush of AI tools and resources that were to come. So when the gold rush hit in late 2022 and early 2023, I began to think about a stand-alone website dedicated only to AI tools, ethics and best practices. Journalists would need help navigating the complex, often troubled waters of artificial intelligence tools. Where could they turn? So in June, I built JournalistsToolbox.ai. The site includes links to hundreds of AI tools for writing, editing, image and video creation, data visualization tools, productivity, and, most importantly, ethics and best practices. I’ll be adding more resources – at least five a day – in the coming weeks and months. I also have been publishing this free, twice-monthly Substack newsletter. It features new tools, exercises and training videos that also appear on our free YouTube channel.”





Confusion. Who accessed what and how? Should complicate discovery a bit.

https://www.bespacific.com/shadow-libraries-at-heart-of-mounting-copyright-lawsuits-against-openai/

“Shadow libraries” at heart of mounting copyright lawsuits against OpenAI

Quartz: “…Shadow libraries are online databases that provide access to millions of books and articles that are out of print, hard to obtain, and paywalled. Many of these databases, which began appearing online around 2008, originated in Russia, which has a long tradition of sharing forbidden books, according to the magazine Reason. Soon enough, these libraries became popular with cash-strapped academics around the world thanks to the high cost of accessing scholarly journals—with some reportedly going for as much as $500 for an entirely open-access article. These shadow libraries are also called “pirate libraries” because they often infringe on copyrighted work and cut into the publishing industry’s profits. A 2017 Nielsen and Digimarc study (pdf) found that pirated books were “depressing legitimate book sales by as much as 14%.”…





Could be useful as I look for ‘things to do in retirement.’ (They don’t list skydiving yet.)

https://www.bespacific.com/the-hive-index/

The Hive Index

The Hive Index – A directory of online communities: “We believe that all who want to surround themselves with community should be able to do so. This website is a free resource for professionals, creatives, students, teachers, entrepreneurs, and those that are just looking for some likeminded souls to hang out with. With your help, this list of communities & topics can keep growing. If you know of a good community that’s not listed, submit it. If you’d like a new topic curated, let us know. Thanks, and welcome to the Hive Index.”





Perspective. Be afraid?

https://www.visualcapitalist.com/sentiment-towards-ai-in-workplace/

Charted: Changing Sentiments Towards AI in the Workplace

Amidst all this uncertainty, opinions on how we use AI in the workplace have evolved. Recent survey data from Boston Consulting Group (BCG) reveals how the labor force feels about AI in the workplace today, compared to how they felt five years ago.

The consultancy surveyed 13,000 people (C-suite leaders, managers, and frontline employees) in 18 different countries for the results, and divided their top two responses into five categories: Curiosity, Optimism, Concern, Confidence, and Indifference.



Monday, July 10, 2023

Perspective.

https://www.politico.eu/article/ukraine-has-set-the-standard-on-software-power-russia-war/

Ukraine has set the standard on software power

Europe would thus be wise to incorporate commercially available software into its defense planning and military operations, as Ukraine has so adroitly done, demonstrating that laying cutting-edge software over older generations of hardware can significantly improve performance.

The war in Ukraine has also illustrated the importance of data integration and interoperability. The fact that Ukraine isn’t just still standing but also conducting major counteroffensive operations in the face of Russia’s onslaught is due to, among other factors, its ability to embrace the digitization of the battlefield.

Ukraine has so far demonstrated a remarkable ability to collect data from various sources — such as intel, satellite imagery, as well as photos and videos sent in by citizens — then integrate them and deploy algorithms to identify patterns and inconsistencies. Across communications, intelligence, targeting, command and control, to name a few, Ukrainian commanders have leveraged commercial software and other AI-driven tech to update their understanding of the battlefield in real time and, as a result, make faster, better-informed decisions.



Sunday, July 09, 2023

Is AI the future of medicine? Will we find malpractice programmed in? Would one AI testify against another?

https://www.wsj.com/articles/in-battle-with-microsoft-google-bets-on-medical-ai-program-to-crack-healthcare-industry-bb7c2db8?mod=djemalertNEWS

In Battle With Microsoft, Google Bets on Medical AI Program to Crack Healthcare Industry

Google is testing an artificial-intelligence program trained to expertly answer medical questions, racing against rivals including Microsoft to translate recent AI advances into products that would be used widely by clinicians.

The November release of ChatGPT, a computer program that can fluently respond to a range of queries across subjects, has sparked early experiments at health systems across the U.S. to use the underlying technology in patient care.

Google is betting that its medical chatbot technology, which is called Med-PaLM 2, will be better at holding conversations on healthcare issues than more general-purpose algorithms because it has been fed questions and answers from medical licensing exams. The company began testing the system with customers including the research hospital Mayo Clinic in April, said people familiar with the matter.





Thinking about the next war…

https://www.ijlsi.com/wp-content/uploads/Means-and-Methods-of-Warfare-and-International-Humanitarian-Law-in-the-Age-of-Artificial-Intelligence-and-Machine-Learning.pdf

Means and Methods of Warfare and International Humanitarian Law in the Age of Artificial Intelligence and Machine Learning

The development of AI and ML technologies has significantly altered how war is fought, which puts the existing legal system of international humanitarian law in jeopardy. The implications of AI and ML in combat are examined in this abstract, which emphasises the necessity for a thorough knowledge of their potential effects on IHL principles. Artificial intelligence (AI) and machine learning (ML) are being incorporated into weapon systems, targeting procedures, and decision-making, which has ramifications for distinction, proportionality, and precautions in assault. In the creation, implementation, and application of AI and ML technologies, the abstract emphasises the significance of ensuring accountability, human control, and compliance with IHL. Additionally, it emphasises the necessity of increased communication between nations, international organisations, and specialists to address the moral and legal issues raised. The goal is to increase awareness of the critical concerns involving AI, ML, and IHL and to promote additional study and conversations to make sure that these developments in combat adhere to the IHL's guiding principles of humanity, distinction, and proportionality.