Thursday, November 30, 2023

Training data is all an AI knows. What if it’s the wrong data?

https://www.nytimes.com/2023/11/30/business/ai-data-standards.html

Big Companies Find a Way to Identify A.I. Data They Can Trust

Data is the fuel of artificial intelligence. It is also a bottleneck for big businesses, because they are reluctant to fully embrace the technology without knowing more about the data used to build A.I. programs.

Now, a consortium of companies has developed standards for describing the origin, history and legal rights to data. The standards are essentially a labeling system for where, when and how data was collected and generated, as well as its intended use and restrictions.

The data provenance standards, announced on Thursday, have been developed by the Data & Trust Alliance, a nonprofit group made up of two dozen mainly large companies and organizations, including American Express, Humana, IBM, Pfizer, UPS and Walmart, as well as a few start-ups.

“This is a step toward managing data as an asset, which is what everyone in industry is trying to do today,” said Ken Finnerty, president for information technology and data analytics at UPS. “To do that, you have to know where the data was created, under what circumstances, its intended purpose and where it’s legal to use or not.”

Surveys point to the need for greater confidence in data and for improved efficiency in data handling. In one poll of corporate chief executives, a majority cited “concerns about data lineage or provenance” as a key barrier to A.I. adoption. And a survey of data scientists found that they spent nearly 40 percent of their time on data preparation tasks.



(Related)

https://sloanreview.mit.edu/article/the-working-limitations-of-large-language-models/

The Working Limitations of Large Language Models

… While LLMs are incredibly powerful, their ability to generate humanlike text can invite us to falsely credit them with other human capabilities, leading to misapplications of the technology. With a deeper understanding of how LLMs work and their fundamental limitations, managers can make more informed decisions about how LLMs are used in their organizations, addressing their shortcomings with a mix of complementary technologies and human governance.





One clear and present danger of AI.

https://www.csoonline.com/article/1249838/almost-all-developers-are-using-ai-despite-security-concerns-survey-suggests.html

Almost all developers are using AI despite security concerns, survey suggests

While more than half of developers acknowledge that generative AI tools commonly create insecure code, 96% of development teams are using the tools anyway, with more than half using the tools all the time, according to a report released Tuesday by Snyk, maker of a developer-first security platform.

The report, based on a survey of 537 software engineering and security team members and leaders, also revealed that 79.9% of the survey’s respondents said developers bypass security policies to use AI.





Perspective.

https://theconversation.com/a-year-of-chatgpt-5-ways-the-ai-marvel-has-changed-the-world-218805

A year of ChatGPT: 5 ways the AI marvel has changed the world

… We’ve never seen a technology roll out so quickly before. It took about a decade before most people started using the web. But this time the plumbing was already in place.

As a result, ChatGPT’s impact has gone way beyond writing poems about Carol’s retirement in the style of Shakespeare. It has given many people a taste of our AI-powered future. Here are five ways this technology has changed the world.


