Thursday, May 09, 2024

Is AI a peer?

https://www.bespacific.com/researchers-warned-against-using-ai-to-peer-review-academic-papers/

Researchers warned against using AI to peer review academic papers

Semafor [read free on first click only]: “Researchers should not be using tools like ChatGPT to automatically peer review papers, warned organizers of top AI conferences and academic publishers worried about maintaining intellectual integrity. With recent advances in large language models, researchers have been increasingly using them to write peer reviews — a time-honored academic tradition that examines new research and assesses its merits, showing a person’s work has been vetted by other experts in the field. That’s why asking ChatGPT to analyze manuscripts and critique the research, without having read the papers, would undermine the peer review process. To tackle the problem, AI and machine learning conferences are now thinking about updating their policies, as some guidelines don’t explicitly ban the use of AI to process manuscripts, and the language can be fuzzy. The Conference and Workshop on Neural Information Processing Systems (NeurIPS) is considering setting up a committee to determine whether it should update its policies around using LLMs for peer review, a spokesperson told Semafor. At NeurIPS, researchers should not “share submissions with anyone without prior approval” for example, while the ethics code at the International Conference on Learning Representations (ICLR), whose annual confab kicked off Tuesday, states that “LLMs are not eligible for authorship.” Representatives from NeurIPS and ICLR said “anyone” includes AI, and that authorship covers both papers and peer review comments. A spokesperson for Springer Nature, an academic publishing company best known for its top research journal Nature, said that experts are required to evaluate research and leaving it to AI is risky. “Peer reviewers are accountable for the accuracy and views expressed in their reports and their expert evaluations help ensure the integrity, reproducibility and quality of the scientific record,” they said. 
“Their in-depth knowledge and expertise is irreplaceable and despite rapid progress, generative AI tools can lack up-to-date knowledge and may produce nonsensical, biased or false information.”





What if the AI does not agree? Will it be able to provide feedback?

https://venturebeat.com/ai/openai-posts-model-spec-revealing-how-it-wants-ai-to-behave/

OpenAI posts Model Spec revealing how it wants AI to behave

Today, OpenAI unveiled “Model Spec,” a framework document designed to shape the behavior of AI models used within the OpenAI application programming interface (API) and ChatGPT. The company is soliciting public feedback on the document via a web form, open until May 22.





Perspective.

https://www.infoworld.com/article/3715422/how-generative-ai-is-redefining-data-analytics.html

How generative AI is redefining data analytics 

Generative AI not only makes analytics tools easier to use, but also substantially improves the quality of automation that can be applied across the data analytics life cycle.

Our survey found that generative AI is already impacting the achievement of organizational goals at 80% of organizations. The #2 and #3 use cases were both analytics-related: creating analytics and synthesizing new insights for the organization. These use cases trailed only content generation in adoption.


