Saturday, April 26, 2025

Unlikely.

https://www.zdnet.com/article/researchers-sound-alarm-how-a-few-secretive-ai-companies-could-crush-free-society/

Researchers sound alarm: How a few secretive AI companies could crush free society

"Throughout the last decade, the rate of progress in AI capabilities has been publicly visible and relatively predictable," write lead author Charlotte Stix and her team in the paper, "AI behind closed doors: A primer on the governance of internal deployment."

That public disclosure, they write, has allowed "some degree of extrapolation for the future and enabled consequent preparedness." In other words, the public spotlight has allowed society to discuss regulating AI.

But "automating AI R&D, on the other hand, could enable a version of runaway progress that significantly accelerates the already fast pace of progress."





Not again?

https://www.huffpost.com/entry/mike-lindell-mypillow-ai-lawsuit_n_680bf302e4b036223d52149f

MyPillow CEO's Lawyer Embarrassed In Court After Judge Grills Him Over Using AI In Legal Filing

The judge overseeing Mike Lindell's case said there were nearly 30 defects in the court filings, which at some points cited cases that do not exist.

Wang ordered Lindell’s attorneys to “show cause” as to why the court should not sanction Lindell and his companies. She also ordered his attorneys to justify why the court should not refer them to disciplinary proceedings.

Kachouroff responded to Wang’s order on Friday in a motion obtained by HuffPost, saying “there is nothing wrong with using AI when used properly.” He also claimed that the brief his team filed was not the final copy, but a previous draft submitted “mistakenly.”

The lawyer also described being questioned in court about the document.

“The Court concluded by grilling me on whether the document was generated by AI,” he wrote. “I freely admitted that I used AI because it is a very helpful tool when used properly.”



Friday, April 25, 2025

It is better to look good than to feel good.

It is better to claim victory than to allow anyone to point out the facts.

https://www.bespacific.com/what-elon-musk-didnt-budget-for-firing-workers-costs-money-too/

What Elon Musk Didn’t Budget For: Firing Workers Costs Money, Too

The New York Times – “An expert on the federal work force estimates that the speed and chaos of Mr. Musk’s cuts to the bureaucracy will cost taxpayers $135 billion this fiscal year. President Trump and Elon Musk promised taxpayers big savings, maybe even a “DOGE dividend” check in their mailboxes, when the Department of Government Efficiency was let loose on the federal government. Now, as he prepares to step back from his presidential assignment to cut bureaucratic fat, Mr. Musk has said without providing details that DOGE is likely to save taxpayers only $150 billion. That is about 15 percent of the $1 trillion he pledged to save, less than 8 percent of the $2 trillion in savings he had originally promised and a fraction of the nearly $7 trillion the federal government spent in the 2024 fiscal year. The errors and obfuscations underlying DOGE’s claims of savings are well documented. Less known are the costs Mr. Musk incurred by taking what Mr. Trump called a “hatchet” to government and the resulting firings, agency lockouts and building seizures that mostly wound up in court. The Partnership for Public Service, a nonprofit organization that studies the federal work force, has used budget figures to produce a rough estimate that firings, re-hirings, lost productivity and paid leave of thousands of workers will cost upward of $135 billion this fiscal year. At the Internal Revenue Service, a DOGE-driven exodus of 22,000 employees would cost about $8.5 billion in revenue in 2026 alone, according to figures from the Budget Lab at Yale University. The total number of departures is expected to be as many as 32,000. Neither of these estimates includes the cost to taxpayers of defending DOGE’s moves in court. Of about 200 lawsuits and appeals related to Mr. Trump’s agenda, at least 30 implicate the department.

“Not only is Musk vastly overinflating the money he has saved, he is not accounting for the exponentially larger waste that he is creating,” said Max Stier, the chief executive of the Partnership for Public Service. “He’s inflicted these costs on the American people, who will pay them for many years to come.”…





I hope no one volunteered…

https://pogowasright.org/trump-administration-texted-college-professors-personal-phones-to-ask-if-theyre-jewish/

Trump Administration Texted College Professors’ Personal Phones to Ask If They’re Jewish

Akela Lacy reports:

Most professors at Barnard College received text messages on Monday notifying them that a federal agency was reviewing the college’s employment practices, according to copies of the messages reviewed by The Intercept.
The messages, sent to most Barnard professors’ personal cellphones, asked them to complete a voluntary survey about their employment.
“Please select all that apply,” said the second question in the Equal Employment Opportunity Commission, or EEOC, survey.
The choices followed: “I am Jewish”; “I am Israeli”; “I have shared Jewish/Israeli ancestry”; “I practice Judaism”; and “Other.”

Read more at The Intercept.





A very underutilized tool. Imagine seminars uploaded and available for everyone.

https://www.bespacific.com/what-we-discovered-on-deep-youtube/

What We Discovered on ‘Deep YouTube’

The Atlantic [no paywall] – The video site isn’t just a platform. It’s infrastructure [The article is an important read especially in this turbulent time]. “Until last month, nobody outside of YouTube had a solid estimate for just how many videos are currently on the site. Eight hundred million? One billion? It turns out that the figure is more like 14 billion—more than one and a half videos for every person on the planet—and that’s counting strictly those that are publicly visible. I have that number not because YouTube maintains a public counter and not because the company issued a press release announcing it. I’m able to share it with you now only because I’m part of a small team of researchers at the University of Massachusetts at Amherst who spent a year figuring out how to calculate it. Our team’s paper, which was published last month, provides what we believe is the most comprehensive analysis of the world’s most important video-sharing platform to date. The viral videos and popular conspiracy theorists are, of course, important. But the reality is that the number and perhaps even importance of those videos are dwarfed by hours-long church services, condo-board meetings, and other miscellaneous clips that you’ll probably never see. Unlike stereotypical YouTube videos—personality-driven and edited to engage the broadest possible audience—these videos aren’t uploaded with profit in mind. Instead, they illustrate some of the ways that people rely on YouTube for a much wider range of activities than you would find while casually scrolling through its algorithmically driven recommendations. YouTube may have started as a video platform, but it has since become the backbone of one of the 21st century’s core forms of communication. Despite its global popularity, YouTube (which is owned by Google) veils its inner workings. When someone studies, for example, the proliferation of extreme speech on YouTube, they can tell us about a specific sample of videos—their content, view count, what other videos they link to, and so on. But that information exists in isolation; they cannot tell us how popular those videos are relative to the rest of YouTube. To make claims about YouTube in its entirety, we either need key information from YouTube’s databases, which isn’t realistic, or the ability to produce a big-enough, random sample of videos to represent the website. That is what we did. We used a complicated process that boils down to making billions upon billions of random guesses at YouTube IDs (the identifiers you see in the URL when watching videos). We call it “dialing for videos,” inspired by the “random digit dialing” used in polling. It took a sophisticated cluster of powerful computers at the University of Massachusetts months to collect a representative sample; we then spent another few months analyzing those videos to paint what we think is the best portrait to date of YouTube as a whole. (We use a related, slightly faster method at this website to keep regularly updated data.)

So much of YouTube is effectively dark matter. Videos with 10,000 or more views account for nearly 94 percent of the site’s traffic overall but less than 4 percent of total uploads. Just under 5 percent of videos have no views at all, nearly three-quarters have no comments, and even more have no likes. Popularity is almost entirely algorithmic: We found little correlation between subscribers and views, reflecting how YouTube recommendations, and not subscriptions, are the primary drivers of traffic on the site. In other words, people tend to watch just a sliver of what YouTube has to offer, and, on the whole, they follow what the algorithm serves to them.
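
The “dialing for videos” method lends itself to a back-of-the-envelope sketch. The Python below is my own illustration, not the researchers’ code: it guesses random 11-character IDs, counts how many resolve to real videos via a placeholder video_exists check, and scales the hit rate up to the size of the ID space. The 64^11 keyspace figure is a simplifying assumption (the set of valid YouTube IDs is reportedly smaller), and the lookup itself is left unimplemented because the excerpt does not describe how the team performed it.

```python
import random
import string

# YouTube video IDs are 11 characters from the URL-safe base64 alphabet.
ID_ALPHABET = string.ascii_letters + string.digits + "-_"
ID_LENGTH = 11
KEYSPACE = 64 ** ID_LENGTH  # ~7.4e19 strings; an approximation of the real ID space

def random_video_id() -> str:
    """Guess an ID uniformly at random -- the 'dialing' step."""
    return "".join(random.choice(ID_ALPHABET) for _ in range(ID_LENGTH))

def video_exists(video_id: str) -> bool:
    """Placeholder: return True if the ID resolves to a public video.

    The study needed months on a compute cluster for this step; a naive
    per-ID check against youtube.com would be far too slow, so the
    actual lookup is omitted here.
    """
    raise NotImplementedError

def estimate_total_videos(num_guesses: int) -> float:
    """Scale the observed hit rate up to the full ID space."""
    hits = sum(video_exists(random_video_id()) for _ in range(num_guesses))
    return hits / num_guesses * KEYSPACE

# With ~14 billion public videos in a ~7.4e19 keyspace, a random guess
# lands on a real video roughly once every 5 billion tries -- which is
# why the sampling took billions of guesses and months of computation.
```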




Thursday, April 24, 2025

Why am I confused?

https://insight.kellogg.northwestern.edu/article/what-trump-wants-from-tariffs-and-what-the-us-might-get-instead#!

What Trump Wants From Tariffs … and What the U.S. Might Get Instead

The Trump administration’s tariffs, with their unprecedented scale and scope, have heightened global economic instability. The reasons behind the tariffs are twofold: returning manufacturing jobs to the United States and closing the trade deficit it currently runs. But these tariffs may not alleviate either issue. They also risk more-enduring effects on trust and trade partnerships.





Perhaps I need a new hobby…

https://www.nbcnews.com/tech/security/fbi-says-online-scams-raked-166-billion-last-year-rcna202358

FBI says online scams raked in $16.6 billion last year

Cybercriminals and online scammers stole a record $16.6 billion last year, the FBI said Wednesday.

The figure, from the FBI’s annual Internet Crime Complaint Center (IC3) report, is a sharp rise from the $12.5 billion reported in 2023, reflecting the increased prevalence of online scams, particularly ones including cryptocurrency and those targeting older Americans.

While the report is a leading look at how the United States is ravaged by cybercrime, its numbers are an undercount, as it only reflects people who take the time to file a report with the agency. The agency received 859,532 total complaints of scams and cybercrime last year.





Tools & Techniques.

https://www.techtonicjustice.org/resources/tips-for-identifying-ai-use

Tips for Identifying AI Use

Most of the time, government officials, landlords, employers, educators, and others who use AI to make decisions don’t announce it. This guide is meant to help you figure out if AI is being used and what you can do about it.



Wednesday, April 23, 2025

Do we understand the complete set of DOGE objectives?

https://pogowasright.org/doge-is-building-a-master-database-of-sensitive-information-top-oversight-democrat-says/

DOGE is building a ‘master database’ of sensitive information, top Oversight Democrat says

Natalie Alms reports:

The Department of Government Efficiency is building a single, cross-agency database of sensitive information from the IRS, Social Security Administration, Department of Health and Human Services and other agencies, according to new, whistleblower-informed oversight on Capitol Hill.
The effort is “unprecedented,” said a Thursday letter the top Democrat on the House Oversight and Accountability Committee, Rep. Gerry Connolly, D-Va., sent to SSA’s watchdog, whom he’s asking to open up an investigation.
DOGE’s work may run afoul of privacy law, the letter said. Experts that Nextgov/FCW spoke with agreed.
Already, associates of the government-slashing initiative led by Elon Musk have accessed sensitive data across numerous agencies even as federal employees object, resign or are fired in the process.
There are at least fourteen lawsuits alleging violations of federal privacy protections across agencies, according to the nonpartisan, nonprofit Center for Democracy and Technology.
Now, the DOGE team is building a single, cross-agency master database by combining sensitive information from various agencies, according to whistleblower information Democrats on the House’s oversight committee say they’ve received.
“It’s terrifying,” said John Davisson, senior counsel and director of litigation at the Electronic Privacy Information Center, which sued the Office of Personnel Management and Treasury Department in February over personnel records and payment system data that was taken.

Read more at NextGov.

(Note) Everything is so politicized these days that it’s really hard to know what reporting is accurate or credible and what has been distorted by political biases — Dissent.





Unintended consequences?

https://www.ft.com/content/6b041c11-fa31-4734-823e-e802c9d365f3?accessToken=zwAGM21fsEuokc9rBBwR-jFHNNOCPugCydNl8w.MEYCIQCEHC7M60_iE96sNrEc4EsBJFvMAumSCin3QfwNJS9BYgIhAMcT79FQiAPDzUblei5ARrp3oqkVpEYn7Ng66OhpIp-U&sharetype=gift&token=ba4f2dff-a8e5-4f73-9982-10824cbb314e

What would a US tariff on chips look like?

The US government will be “taking a look at Semiconductors and the WHOLE ELECTRONICS SUPPLY CHAIN”, President Donald Trump recently declared. Given his repeated promises to impose a tariff on imported chips, we must assume some action is coming. But what type, and to what end?

According to trade data, the US imports around $30bn in chips annually, largely from south-east Asia. Would tariffs lead companies to replace these imports with domestically made chips? Not necessarily. The US has hardly any of the labour-intensive assembly and packaging capabilities that have been offshored to Asia since the 1960s.

For that reason, if the US does impose a tariff on semiconductors, there is a chance that companies will respond by doing even more manufacturing offshore to offset the increased costs. Instead of importing chips and putting them into domestically manufactured appliances or cars, suppliers might move the entire process overseas. These finished products would still face a tariff, but at least the manufacturing would be low-cost.



Tuesday, April 22, 2025

Any kid could easily find a way around this.

https://www.zdnet.com/article/how-instagram-is-using-ai-to-uncover-teen-accounts-lying-about-their-age/

How Instagram is using AI to uncover teen accounts lying about their age

In a blog post, Instagram says that starting today it will use AI to detect users lying about their age and automatically move those accounts to one of the limited teen accounts that debuted last fall.

Meta offered several examples. First, it will monitor which profiles and content an account interacts with. Since people in the same age range generally enjoy similar content and interact with each other, if a non‑teen account interacts heavily with teen accounts and teen‑related content, Meta may flag that account.

Second, Meta will review what it calls "strong signals of age," or things like birthday messages -- for example, if another user posts something like "Screaming happy 15th birthday to my best friend."
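
As a rough illustration of how signals like these might combine, here is a toy heuristic. Every field name and threshold below is invented for the example; Meta has not published its actual features or model, so this is only a sketch of the general idea.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical stand-ins for the kinds of signals described above,
    # not Meta's real feature set.
    stated_age: int
    teen_interaction_share: float   # fraction of interactions with teen accounts/content
    birthday_messages: list[str]    # messages posted by other users

def looks_like_undisclosed_teen(acct: AccountSignals) -> bool:
    """Toy rule: flag adult-labelled accounts that show strong teen signals."""
    if acct.stated_age < 18:
        return False  # already a teen account
    # Signal 1: the account mostly interacts with teen accounts and teen-related content.
    if acct.teen_interaction_share > 0.6:
        return True
    # Signal 2: "strong signals of age," such as birthday messages naming a teen age.
    return any("happy 15th birthday" in msg.lower() for msg in acct.birthday_messages)

# Example
acct = AccountSignals(
    stated_age=21,
    teen_interaction_share=0.7,
    birthday_messages=["Screaming happy 15th birthday to my best friend"],
)
print(looks_like_undisclosed_teen(acct))  # True: candidate for a teen-account placement
```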





Is AI creating a not-so-artificial police state?

https://www.techpolicy.press/five-findings-from-an-analysis-of-the-us-department-of-homeland-securitys-ai-inventory/

Five Findings from an Analysis of the US Department of Homeland Security’s AI Inventory

Starting in early 2024, Just Futures Law and Mijente researched the United States Department of Homeland Security’s (DHS) use of artificial intelligence (AI) amidst growing concerns—even before the start of the second Trump administration—about the lack of transparency and public information available on the inventory of AI tools DHS maintains. Our initial findings, presented in the 2024 report “Automating Deportation,” exposed details of the DHS AI armory—most of which had never been seen—and how Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) are using it to surveil the millions of migrants entering and residing in the United States.

In the course of our research, we also discovered that DHS was already violating existing policies and laws related to transparency, oversight, and its obligations to monitor its products for AI harm. Our team met with DHS to share our findings and organized letters demanding that DHS shutter AI to mitigate further harm. The pressure exerted by national civil rights groups led to the termination of some AI programs and DHS’s review and assessment of its AI inventory, including direct responses to our inquiries and public pages that publicly named AI tools and uses that had never previously been identified.

In the last days of the Biden Administration, DHS released its most complete inventory, revealing new AI uses that it had kept hidden. It was the only requirement that DHS was able to meet out of a long list of requirements from the Biden administration’s Executive Orders on AI directed to federal agencies, many of which were fast-tracking AI without considering whether it would hurt the public or violate civil rights protections. Just Futures Law went through the most recent DHS inventory to share these insights with the public.



Monday, April 21, 2025

Nothing has changed economically since Adam Smith’s day.

https://thedailyeconomy.org/article/adam-smith-vs-the-tariff-state-a-timeless-case-for-free-trade/

Adam Smith vs. the Tariff State: A Timeless Case for Free Trade

If you maintain that over time, the United States has been the best country at exemplifying the teachings of Adam Smith, you would get no argument from me.  

Sadly, that imagined crown no longer fits. By one calculation, with President Trump’s new tariffs, the United States “is about to have the highest tariff rate of any advanced economy” with a rate of “around 22 percent — up from 1.5 percent in 2022.”

Smith’s teachings on markets and human nature established the foundation for a free trade policy. It would seem the fate of humanity is to forget timeless truths, endure the consequences, and struggle to recover those truths.  





Perspective.

https://knightcolumbia.org/content/ai-as-normal-technology

AI as Normal Technology

We articulate a vision of artificial intelligence (AI) as normal technology. To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are “normal” in our conception. But it is in contrast to both utopian and dystopian visions of the future of AI which have a common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity.

A note to readers. This essay has the unusual goal of stating a worldview rather than defending a proposition. The literature on AI superintelligence is copious. We have not tried to give a point-by-point response to potential counterarguments, as that would make the paper several times longer. This paper is merely the initial articulation of our views; we plan to elaborate on them in various follow-ups.





Tools & Techniques.

https://www.kdnuggets.com/10-free-machine-learning-books-for-2025

10 Free Machine Learning Books For 2025

Are you interested in enhancing your machine learning skills? We have put together an outstanding list of free machine learning books to aid your learning journey!



Sunday, April 20, 2025

Got me thinking…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5078574

Authorship.ai

When a "chatbot" produces text, who is the author? This question has vexed scholarly publication since the commercial release of ChatGPT. Copyright scholars, however, have been conducting a robust discussion of authorship issues raised by artificial intelligence for decades. With regard to the 2023-2024 wave of new generative AI tools such as ChatGPT, Claude, etc. there is an obviously correct answer. Generative AI outputs are authored by the prompter. So why does no one believe us?





AI as defendant?

https://ir.wgtn.ac.nz/items/abb2f249-3f50-4719-8286-484a6e19d501

Blame the Bot: An Assessment of Liability for Artificial Intelligence Defamation in New Zealand

Artificial intelligence (AI) defamation claims are appearing across common law jurisdictions, namely in the USA, Australia and Ireland. In the wake of these claims, it is worth assessing how a court would respond if a similar case arose in New Zealand. This paper evaluates the liability of an AI chatbot for defamation under New Zealand's law. Key issues are whether AI chatbots are publishers, whether any defences apply and whether harm is “more than minor.” On analysis, it is likely a plaintiff will succeed in proving defamation so long as they surpass the harm threshold. However, it is likely that in many instances, harm will be less than minor due to a lack of widespread publication. Following an assessment of liability, this paper then considers whether New Zealand should take any alternative action to respond to defamation harms caused by chatbots. An assessment of responses in the UK and the EU finds that Europe has not turned their mind to defamation harms. Alternative preventative methods to harm, such as stronger disclaimers or changes to the code provide options to manage harm but fail to balance the rights of AI firms. As New Zealand favours the innovation and use of AI, this paper concludes the best response to defamation harms is to use the courts as a mechanism for redress, as opposed to any regulatory or legislative amendment. The "more than minor" harm threshold bars trivial claims while allowing redress for the most serious cases.





Perspective.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5213562

Technology and Me and You: Getting Comfortable with AI

This short essay reflects on the author’s surprising dive into artificial intelligence (AI) despite his longstanding caution about adopting new technology. As a self-described tech-wary curmudgeon who avoids unnecessary upgrades and stays off social media, the author explores how AI – specifically, a custom-built RPS (Real Practice Systems) Negotiation and Mediation Coach – nonetheless has proved to be unexpectedly valuable.

Drawing from personal experience, the essay suggests how people can become comfortable using AI, suggesting how they can overcome hesitation and use AI productively. Rather than treating AI as a black box or magic solution, it emphasizes the importance of human control, iterative prompting, and critical judgment in generating useful results.

The insights in this essay are widely applicable to academics, practitioners, students, and others integrating AI in their work.





Tools & Techniques.

https://ojs.aaai.org/index.php/AAAI/article/view/35171

AI Toolkit: Libraries and Essays for Exploring the Technology and Ethics Behind AI

In this paper we describe the development and evaluation of AITK, the Artificial Intelligence Toolkit. This open-source project contains both Python libraries and computational essays (Jupyter notebooks) that together are designed to allow a diverse audience with little or no background in AI to interact with a variety of AI tools, exploring in more depth how they function, visualizing their outcomes, and gaining a better understanding of their ethical implications. These notebooks have been piloted at multiple institutions in a variety of humanities courses centered on the theme of responsible AI. In addition, we conducted usability testing of AITK. Our pilot studies and usability testing results indicate that AITK is easy to navigate and effective at helping diverse users gain a better understanding of AI and its ethical implications. Our goal, in this time of rapid innovations in AI, is for AITK to provide an accessible resource for faculty from any discipline looking to incorporate AI topics into their courses and for anyone eager to learn more about AI on their own.