Saturday, May 11, 2024

Will a “this is AI” flag be an automatic negative?

https://www.reuters.com/technology/tiktok-label-ai-generated-images-video-openai-elsewhere-2024-05-09/

TikTok to label AI-generated content from OpenAI and elsewhere

TikTok plans to start labelling images and video uploaded to its video-sharing service that have been generated using artificial intelligence, it said on Thursday, using a digital watermark known as Content Credentials.

Researchers have expressed concern that AI-generated content could be used to interfere with U.S. elections this fall, and TikTok was already among a group of 20 tech companies that earlier this year signed an accord pledging to fight it.

The company already labels AI-generated content made with tools inside the app, but the latest move would apply a label to videos and images generated outside of the service.
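
Content Credentials is, at bottom, provenance metadata attached to a file. As a simplified illustration of the mechanism only (not the actual standard, C2PA, which uses cryptographically signed manifests), here is a minimal Pillow sketch that embeds and reads back a plain-text provenance tag; the tag value is invented.

# Simplified illustration of metadata-based provenance, the mechanism behind
# labels like Content Credentials. The real standard (C2PA) uses signed
# manifests; this Pillow sketch just embeds a plain-text tag in a PNG.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), "gray")  # stand-in for an AI-generated image
meta = PngInfo()
meta.add_text("provenance", "generated-by:example-ai-model")  # invented tag

img.save("tagged.png", pnginfo=meta)

# A platform-side check might read the tag back before deciding to label:
print(Image.open("tagged.png").text.get("provenance"))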





We need to think about AI control…

https://cointelegraph.com/magazine/sic-ais-on-each-other-artificial-intelligence-threat-alignment-david-brin-author/

How to stop the artificial intelligence apocalypse: David Brin, Uplift author

He says only one thing has ever worked in history to curb bad behavior by villains. It’s not asking them nicely, and it’s not creating ethical codes or safety boards.

It’s called reciprocal accountability, and he thinks it will work for AI as well.

“Empower individuals to hold each other accountable. We know how to do this fairly well. And if we can get AIs doing this, there may be a soft landing waiting for us,” he tells Magazine.
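
Brin’s “reciprocal accountability” has a rough software analogue that is already common practice: have one model audit another’s output. The sketch below shows that generic critic pattern, not Brin’s specific proposal; it assumes the OpenAI Python client, and the model name and prompts are placeholders.

# Illustrative "models holding each other accountable" loop: one model
# answers, a second model audits the answer and flags problems.
# Assumes the OpenAI Python client; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    # One round-trip to a chat model; "gpt-4o-mini" is a placeholder name.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = "Summarize the main causes of the 1929 stock market crash."
answer = ask(question)

# A second call acts as the accountability check on the first model's answer.
audit = ask(
    "You are auditing another AI's answer. Flag factual errors, "
    f"unsupported claims, or omissions.\n\nQuestion: {question}\n\n"
    f"Answer under review: {answer}"
)
print(audit)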



Friday, May 10, 2024

Why aren’t other states passing similar laws?

https://www.insideprivacy.com/data-privacy/employers-beware-new-wave-of-illinois-genetic-information-privacy-act-litigation/

Employers Beware: New Wave of Illinois Genetic Information Privacy Act Litigation

Likely spurred by plaintiffs’ recent successes in cases under Illinois’s Biometric Information Privacy Act (“BIPA”), a new wave of class actions is emerging under Illinois’s Genetic Information Privacy Act (“GIPA”). While BIPA regulates the collection, use, and disclosure of biometric data, GIPA regulates that of genetic testing information. Each has a private right of action and provides for significant statutory damages, potentially even where plaintiffs allege a violation of the statute without actual damages. From its 1998 enactment until last year, there were few GIPA cases, and they were largely focused on claims against genetic testing companies. More recently, plaintiffs have brought dozens of cases against employers, alleging GIPA violations based on employers requesting family medical history through pre-employment physical exams. This article explores GIPA’s background, the current landscape and key issues, and considerations for employers.





Explainer.

https://www.makeuseof.com/what-is-predictive-ai-and-how-does-it-work/

What Is Predictive AI, and How Does It Work?

Predictive AI uses information from things that have already happened to make predictions and projections for what might happen in the future. To be used successfully, it requires access to high-quality data and subject matter expertise from humans to correctly identify trends.
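
As a concrete illustration of “learning from what has already happened”: a minimal sketch, assuming scikit-learn and a made-up monthly sales series, that fits a model on past observations and projects the next period. Real predictive AI adds feature engineering, validation, and the human subject-matter review the article mentions.

# Minimal predictive-modeling sketch: fit on historical data, project forward.
# Assumes scikit-learn and NumPy; the sales figures are made-up toy data.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)          # past 12 months
sales = np.array([110, 115, 123, 130, 128, 140,
                  145, 150, 158, 160, 171, 175])  # observed outcomes

model = LinearRegression().fit(months, sales)     # learn the historical trend

next_month = np.array([[13]])
print(f"Projected month-13 sales: {model.predict(next_month)[0]:.0f}")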





Perspective.

https://www.scientificamerican.com/article/ai-doesnt-threaten-humanity-its-owners-do/

AI Doesn’t Threaten Humanity. Its Owners Do

We shouldn’t be afraid of AI taking over humanity; we should fear the fact that our humanity hasn’t kept up with our technology



Thursday, May 09, 2024

Is AI a peer?

https://www.bespacific.com/researchers-warned-against-using-ai-to-peer-review-academic-papers/

Researchers warned against using AI to peer review academic papers

Semafor [read free on first click only]: “Researchers should not be using tools like ChatGPT to automatically peer review papers, warned organizers of top AI conferences and academic publishers worried about maintaining intellectual integrity. With recent advances in large language models, researchers have been increasingly using them to write peer reviews — a time-honored academic tradition that examines new research and assesses its merits, showing a person’s work has been vetted by other experts in the field. That’s why asking ChatGPT to analyze manuscripts and critique the research, without having read the papers, would undermine the peer review process.

To tackle the problem, AI and machine learning conferences are now thinking about updating their policies, as some guidelines don’t explicitly ban the use of AI to process manuscripts, and the language can be fuzzy. The Conference and Workshop on Neural Information Processing Systems (NeurIPS) is considering setting up a committee to determine whether it should update its policies around using LLMs for peer review, a spokesperson told Semafor. NeurIPS guidelines state, for example, that researchers should not “share submissions with anyone without prior approval,” while the ethics code at the International Conference on Learning Representations (ICLR), whose annual confab kicked off Tuesday, states that “LLMs are not eligible for authorship.” Representatives from NeurIPS and ICLR said “anyone” includes AI, and that authorship covers both papers and peer review comments.

A spokesperson for Springer Nature, an academic publishing company best known for its top research journal Nature, said that experts are required to evaluate research and leaving it to AI is risky. “Peer reviewers are accountable for the accuracy and views expressed in their reports and their expert evaluations help ensure the integrity, reproducibility and quality of the scientific record,” they said. “Their in-depth knowledge and expertise is irreplaceable and despite rapid progress, generative AI tools can lack up-to-date knowledge and may produce nonsensical, biased or false information.”





What if the AI does not agree? Will it be able to provide feedback?

https://venturebeat.com/ai/openai-posts-model-spec-revealing-how-it-wants-ai-to-behave/

OpenAI posts Model Spec revealing how it wants AI to behave

Today, OpenAI unveiled “Model Spec,” a framework document designed to shape the behavior of AI models used within the OpenAI application programming interface (API) and ChatGPT. The company is soliciting public feedback on the document through a web form, open until May 22.





Perspective.

https://www.infoworld.com/article/3715422/how-generative-ai-is-redefining-data-analytics.html

How generative AI is redefining data analytics 

Generative AI not only makes analytics tools easier to use, but also substantially improves the quality of automation that can be applied across the data analytics life cycle.

Our survey found that generative AI is already impacting the achievement of organizational goals at 80% of organizations. Analytics led the way as the #2 and #3 use cases: the creation of new insights and the synthesis of existing insights for the organization. These use cases trailed only content generation in adoption.
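
A common pattern behind claims like these is using a language model as a natural-language front end to analytics, for example translating a business question into a SQL draft. A minimal sketch, assuming the OpenAI Python client; the model name, schema, and prompt are illustrative, and generated SQL should be reviewed before it touches production data.

# Sketch of an LLM-assisted analytics step: question in, SQL draft out.
# Assumes the OpenAI Python client; model name and schema are illustrative.
from openai import OpenAI

client = OpenAI()

schema = "orders(order_id, customer_id, order_date, total_usd)"
question = "What was average order value per month in 2023?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": f"Given the table {schema}, write a SQL query to answer: "
                   f"{question} Return only the SQL.",
    }],
)
draft_sql = response.choices[0].message.content
print(draft_sql)  # review before executing against real data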



Wednesday, May 08, 2024

This could get really complicated unless only a few options are offered. Out of a global library, how much value would be lost?

https://techcrunch.com/2024/05/07/openai-says-its-building-a-tool-to-let-content-creators-opt-out-of-ai-training/?guccounter=1

OpenAI says it’s building a tool to let content creators ‘opt out’ of AI training

OpenAI says that it’s developing a tool to let creators better control how their content’s used in training generative AI.

The tool, called Media Manager, will allow creators and content owners to identify their works to OpenAI and specify how they want those works to be included or excluded from AI research and training.

The goal is to have the tool in place by 2025, OpenAI says, as the company works with “creators, content owners and regulators” toward a standard — perhaps through the industry steering committee it recently joined.

“This will require cutting-edge machine learning research to build a first-ever tool of its kind to help us identify copyrighted text, images, audio and video across multiple sources and reflect creator preferences,” OpenAI wrote in a blog post. “Over time, we plan to introduce additional choices and features.”
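
OpenAI has not published what Media Manager records will look like, so any concrete format is pure speculation. As a thought experiment on why the problem is hard (works must be fingerprinted across sources and preferences made machine-readable), a hypothetical preference record might resemble the sketch below; every field name is invented.

# Hypothetical opt-out preference record, for illustration only.
# OpenAI has not published a Media Manager format; all fields are invented.
import json
import hashlib

work = b"...bytes of the creator's image or text..."  # placeholder content

record = {
    "owner": "example-creator",                          # hypothetical ID
    "content_sha256": hashlib.sha256(work).hexdigest(),  # fingerprint to match copies
    "media_type": "image",
    "training_preference": "exclude",                    # e.g. include / exclude
    "scope": ["research", "model_training"],
}
print(json.dumps(record, indent=2))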





Put employees in a pot of cool water, then gradually nudge up the heat.

https://sloanreview.mit.edu/article/the-hazards-of-putting-ethics-on-autopilot/

The Hazards of Putting Ethics on Autopilot

Our examination of the consequences of “nudging” techniques, used by companies to influence employees or customers to take certain actions, has implications for organizations adopting the new generation of chatbots and automated assistants. Companies implementing generative AI agents are encouraged to tailor them to increase managerial control. Microsoft, which has made copilots available across its suite of productivity software, offers a tool that enterprises can customize, thus allowing them to more precisely steer employee behavior. Such tools will make it much easier for companies to essentially put nudging on steroids — and based on our research into the effects of nudging, that may over time diminish individuals’ own willingness and capacity to reflect on the ethical dimension of their decisions.





Perspective.

https://www.bespacific.com/microsoft-linkedin-release-2024-work-trend-index-on-state-of-ai-at-work/

Microsoft LinkedIn release 2024 Work Trend Index on state of AI at work

On Wednesday May 8, 2024 Microsoft Corp. and LinkedIn released the 2024 Work Trend Index, a joint report on the state of AI at work titled “AI at work is here. Now comes the hard part.” The research — based on a survey of 31,000 people across 31 countries, labor and hiring trends on LinkedIn, trillions of Microsoft 365 productivity signals, and research with Fortune 500 customers — shows how, just one year in, AI is influencing the way people work, lead and hire around the world. Microsoft also announced new capabilities in Copilot for Microsoft 365, and LinkedIn made free more than 50 learning courses for LinkedIn Premium subscribers designed to empower professionals at all levels to advance their AI aptitude.

“The data is in: 2024 is the year AI at work gets real. Use of generative AI at work has nearly doubled in the past six months. LinkedIn is seeing a significant increase in professionals adding AI skills to their profiles, and most leaders say they wouldn’t hire someone without AI skills. But with many leaders worried their company lacks an AI vision, and employees bringing their own AI tools to work, leaders have reached the hard part of any tech disruption: moving from experimentation to tangible business impact.”… The report highlights three insights every leader and professional needs to know about AI’s impact on work and the labor market in the year ahead:

    • Employees want AI at work — and won’t wait for companies to catch up

    • For employees, AI raises the bar and breaks the career ceiling

    • The rise of the AI power user — and what they reveal about the future…”



Tuesday, May 07, 2024

Contacting potential voters online using AI has to be cheaper than going door to door. Can it also be more truthful?

https://apnews.com/article/ai-trump-campaign-2024-election-brad-parscale-3ff2c8eba34b87754cc25e96aa257c9d

Brad Parscale helped Trump win in 2016 using Facebook ads. Now he’s back, and an AI evangelist

Parscale, the digital campaign operative who helped engineer Trump’s 2016 presidential victory, vows that his new, AI-powered platform will dramatically overhaul not just polling, but campaigning. His AI-powered tools, he has boasted, will outperform big tech companies and usher in a wave of conservative victories worldwide.

It’s not the first time Parscale has proclaimed that new technologies will boost right-wing campaigns. He was the digital guru who teamed up with scandal-plagued Cambridge Analytica and helped propel Trump to the White House eight years ago. In 2020, he had a public blowup, then a private falling out, with his old boss after the Capitol riot. Now he’s back, playing an under-the-radar role to help Trump, the presumptive GOP nominee, in his race against Democratic President Joe Biden.

Parscale says his company Campaign Nucleus can use AI to help generate customized emails, parse oceans of data to gauge voter sentiment and find persuadable voters, then amplify the social media posts of “anti-woke” influencers, according to an Associated Press review of Parscale’s public statements, his company websites, slide decks, marketing materials and other documents not previously made public.



(Related) Imagine FDR proclaiming, “We have nothing to fear but Trump himself.”

https://restofworld.org/2024/dead-relatives-ai-deepfake-india/

Indian politicians are bringing the dead on the campaign trail, with help from AI

Digital rights activists have questioned the ethics of using “soft fakes” to resurrect the past and manage the future.





Even AI finds AI confusing…

https://www.wsj.com/tech/ai/openai-says-it-can-now-detect-images-spawned-by-its-softwaremost-of-the-time-83011149?st=hfix05fvfnpjfo1&reflink=desktopwebshare_permalink

OpenAI Says It Can Now Detect Images Spawned by Its Software—Most of the Time

Startup’s new tool detects 98% of pictures generated by its DALL-E 3 system, but success drops if the images are altered

OpenAI said its new tool is about 98% accurate in detecting content created by DALL-E 3 under most circumstances—if the image isn’t altered. When those images are screenshotted or cropped, the classifier is slightly less successful, but it can still often make an accurate identification.

The tool’s performance declines further under certain conditions, such as when the hue of those images is changed, said Sandhini Agarwal, an OpenAI researcher focused on policy, in an interview.
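
The failure pattern described (near-perfect accuracy on pristine images, degrading under edits) is straightforward to probe. Below is a sketch of such a robustness check, assuming Pillow and NumPy; detect_dalle3 is a hypothetical stand-in, since OpenAI’s classifier is not a public API callable here.

# Sketch of testing an AI-image detector against simple edits.
# Assumes Pillow and NumPy; detect_dalle3 is a hypothetical stand-in,
# since OpenAI's classifier isn't a public API we can call here.
import numpy as np
from PIL import Image

def detect_dalle3(img: Image.Image) -> float:
    """Placeholder detector; returns a fake 'AI-generated' score."""
    return 0.5  # a real classifier would go here

def crop_center(img: Image.Image, frac: float = 0.8) -> Image.Image:
    # Keep the central frac x frac region, discarding the borders.
    w, h = img.size
    dw, dh = int(w * (1 - frac) / 2), int(h * (1 - frac) / 2)
    return img.crop((dw, dh, w - dw, h - dh))

def shift_hue(img: Image.Image, offset: int = 30) -> Image.Image:
    # Rotate the hue channel, one of the edits said to degrade detection.
    hsv = np.array(img.convert("HSV"))
    hsv[..., 0] = (hsv[..., 0].astype(int) + offset) % 256
    return Image.fromarray(hsv, "HSV").convert("RGB")

original = Image.new("RGB", (256, 256), "purple")  # toy stand-in image
for name, variant in [("original", original),
                      ("cropped", crop_center(original)),
                      ("hue-shifted", shift_hue(original))]:
    print(name, detect_dalle3(variant))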





Tools & Techniques. (I just couldn’t pass this one up)

https://www.kdnuggets.com/a-comprehensive-guide-to-essential-tools-for-data-analysts?utm_source=rss&utm_medium=rss&utm_campaign=a-comprehensive-guide-to-essential-tools-for-data-analysts

A Comprehensive Guide to Essential Tools for Data Analysts

Data analyst tools encompass programming languages, spreadsheets, BI, and big data tools. Here are 9ish tools that cover all the tasks of data analysts well.



Monday, May 06, 2024

Anything truly new here? We (parents or society) must do this when technologies change. Unfortunately, we do it poorly.

https://www.dailymail.co.uk/sciencetech/article-13371301/Children-starting-using-AI.html

Children should start using AI at 6 years old so they don't become the lost generation of workers, expert recommends

Two-thirds of preschoolers will do jobs that don't currently exist. Trade jobs and being able to think without using the internet will be valued.



Sunday, May 05, 2024

Beyond a search for potential school shooters.

https://www.mdpi.com/2079-9292/13/9/1671

Artificial Intelligence in Social Media Forensics: A Comprehensive Survey and Analysis

Social media platforms have completely revolutionized human communication and social interactions. Their positive impacts are simply undeniable. What has also become undeniable is the prevalence of harmful antisocial behaviors on these platforms. Cyberbullying, misinformation, hate speech, radicalization, and extremist propaganda have caused significant harms to society and its most vulnerable populations. Thus, the social media forensics field was born to enable investigators and law enforcement agents to better investigate and prosecute these cybercrimes. This paper surveys the latest research works in the field to explore how artificial intelligence (AI) techniques are being utilized in social media forensics investigations. We examine how natural language processing can be used to identify extremist ideologies, detect online bullying, and analyze deceptive profiles. Additionally, we explore the literature on GNNs and how they are applied in social network modeling for forensic purposes. We conclude by discussing the key challenges in the field and suggest future research directions.
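
To make one of the surveyed techniques concrete: the simplest NLP baseline for abusive-content detection is a bag-of-words classifier. A minimal sketch, assuming scikit-learn and made-up training examples; production forensic systems use transformer models, far larger labeled corpora, and human review.

# Minimal NLP baseline for abusive-content detection, the kind of task
# the surveyed forensic systems tackle with far larger models and corpora.
# Assumes scikit-learn; the training examples are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "you are worthless and everyone hates you",    # bullying
    "nobody wants you here, just disappear",       # bullying
    "great game last night, congrats to the team",
    "loved the concert, thanks for sharing photos",
]
labels = [1, 1, 0, 0]  # 1 = abusive, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "you should just disappear, loser"
print("abuse probability:", model.predict_proba([new_post])[0][1])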