Saturday, August 26, 2023

I can see it now, “Vote for me because all the other candidates are space aliens wearing those Mission Impossible rubber masks!”

https://www.washingtonpost.com/technology/2023/08/25/political-conspiracies-facebook-youtube-elon-musk/

Following Elon Musk’s lead, Big Tech is surrendering to disinformation

Social media companies are receding from their role as watchdogs against political misinformation, abandoning their most aggressive efforts to police online falsehoods in a trend expected to profoundly affect the 2024 presidential election.

An array of circumstances is fueling the retreat: Mass layoffs at Meta and other major tech companies have gutted teams dedicated to promoting accurate information online. An aggressive legal battle over claims that the Biden administration pressured social media platforms to silence certain speech has blocked a key path to detecting election interference.

And X CEO Elon Musk has reset industry standards, rolling back strict rules against misinformation on the site formerly known as Twitter. In a sign of Musk’s influence, Meta briefly considered a plan last year to ban all political advertising on Facebook. The company shelved it after Musk announced plans to transform rival Twitter into a haven for free speech, according to two people familiar with the plans who spoke on the condition of anonymity to describe sensitive matters.





Because, like, you gotta, y’know?

https://hbr.org/2023/08/how-to-reskill-your-workforce-in-the-age-of-ai

How to Reskill Your Workforce in the Age of AI

For this episode of the HBR video series “The New World of Work,” editor in chief Adi Ignatius sat down with Harvard Business School professor Raffaella Sadun, who wrote the HBR article “Reskilling in the Age of AI,” to discuss:

  • How leaders should use GenAI to augment their own decision making without entrusting it with the actual decisions.

  • Why, even in the age of AI, the top management skills will be a mixture of technical (“hard”) and social (“soft”) skills, and why those who excel will comprehend their organization’s complexity while communicating a clear vision to all employees.

  • How to handle change management when everyone is uncertain about the future and regular employees are especially fearful.



Friday, August 25, 2023

I guess you can call it a hack if you keep doing what the law bans.

https://www.schneier.com/blog/archives/2023/08/hacking-food-labeling-laws.html

Hacking Food Labeling Laws

This article discusses new food-labeling laws in Mexico, and the lengths to which food manufacturers are going to ensure that they are not effective. There are the typical high-pressure lobbying tactics and lawsuits. But there are also examples of companies hacking the laws:

Companies like Coca-Cola and Kraft Heinz have begun designing their products so that their packages don’t have a true front or back, but rather two nearly identical labels—except for the fact that only one side has the required warning. As a result, supermarket clerks often place the products with the warning facing inward, effectively hiding it.
[…]
Other companies have gotten creative in finding ways to keep their mascots, even without reformulating their foods, as is required by law. Bimbo, the international bread company that owns brands in the United States such as Entenmann’s and Takis, for example, technically removed its mascot from its packaging. It instead printed the mascot on the actual food product—a ready-to-eat pancake—and made the packaging clear, so the mascot is still visible to consumers.





Contrary to what a lot of school systems are saying.

https://www.pogowasright.org/nys-report-risks-of-facial-recognition-technology-in-schools-likely-outweigh-the-benefits/

NYS Report: Risks of facial recognition technology in schools likely outweigh the benefits

Joie Tyrrell reports:

Risks associated with using facial recognition technology in schools likely outweigh the benefits of the biometrics tool, and educators should be cautious about its use, a report from the state’s Office of Information Technology Services found.
The report, produced with assistance from the state’s Education Department and released earlier this month, examined the use of “biometric identifying technology” — where physical characteristics, including facial recognition and fingerprints, can be used in schools whether for security, administrative or classroom purposes.
The state’s education commissioner, Betty A. Rosa, will consider the report and its recommendations in determining whether to authorize the purchase or utilization of the technology in public schools. A determination will be made within the next few weeks, the Education Department said.
Long Island educators are skeptical about the technology being used here on students.

Read more at Newsday.

The report, Use of Biometric Identifying Technology in Schools, can be found at https://its.ny.gov/system/files/documents/2023/08/biometrics-report-final-2023.pdf



Thursday, August 24, 2023

Then what are we afraid of?

https://skventures.substack.com/p/ai-isnt-good-enough

AI Isn’t Good Enough

As we’ve written previously, what’s important about this wave of automation is how it is more skewed toward jobs that can be described as requiring “tacit knowledge,” where we know what to do but can’t always create programmatic ways of doing things. These jobs are not assembly lines, so simply throwing capital (orthodox automation) at the problem doesn’t work.

That is why, in a sense, the current wave of AI has come along at the perfect time. It is the first automation technology to be applicable to tacit knowledge, to tasks where we can’t describe in a linear ABC way how inputs turn into outputs. To that way of thinking, we should embrace and not fear the current wave of technological change, in that it can help with the observed shortages in U.S. workers, and do so in a replicable and continuous way, not unlike traditional automation. 

The trouble is—not to put too fine a point on it—current-generation AI is mostly crap. Sure, it is terrific at using its statistical models to come up with textual passages that read better than the average human’s writing, but that’s not a particularly high hurdle. Most humans are terrible writers and have no interest in getting better. Similarly, current LLM-based AI is very good at comparing input text to rules-based models, impairing the livelihood of cascading stylesheet pedants who mostly shouted at people (fine, at Paul) on StackExchange and Reddit. Now you can just ask LLMs to write that code for you or check crap code you’ve created yourself.
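
Since the passage is about having LLMs review code, here is a minimal sketch of that workflow, assuming the openai Python package’s pre-1.0 interface; the model name, the (deliberately sloppy) CSS, and the prompt are illustrative, not the authors’ own example:

import openai  # the library reads the OPENAI_API_KEY environment variable by default

css_snippet = """
.nav { float: left; width: 100% }
.nav li { display: inline; padding: 10px }
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a strict CSS reviewer."},
        {"role": "user", "content": f"Point out errors and bad practice in this CSS:\n{css_snippet}"},
    ],
)
print(response["choices"][0]["message"]["content"])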





A fallback position? Do we not have enough already?

https://www.bespacific.com/the-constitutional-case-for-barring-trump-from-the-presidency/

The Constitutional Case for Barring Trump from the Presidency

The New Yorker [free to read]: “Earlier this month, two conservative law professors announced that they would be publishing an article, which will appear in the University of Pennsylvania Law Review next year, arguing that Donald Trump is ineligible for the Presidency. The professors, William Baude and Michael Stokes Paulsen, make the case that unless Congress grants Trump amnesty, he cannot run for or hold the office of the Presidency again because of his behavior surrounding the events of January 6th. The argument rests on Baude and Paulsen’s interpretation of Section 3 of the Fourteenth Amendment, which states that officeholders, such as the President, who have taken an oath to “support” the Constitution and “shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof” will no longer hold such an office. Several days ago, Laurence Tribe, a liberal law professor, and J. Michael Luttig, a conservative former judge for the U.S. Court of Appeals, wrote an article for The Atlantic in which they essentially endorsed the view advanced by Baude and Paulsen: “The former president’s efforts to overturn the 2020 presidential election, and the resulting attack on the U.S. Capitol, place him squarely within the ambit of the disqualification clause, and he is therefore ineligible to serve as president ever again.”

…If you just step back from the legalese, the clause has its origins in the postbellum era. It was to disqualify persons who had previously taken an oath and then engaged in insurrection or rebellion. I have had so many people in the past forty-eight hours say to me, It just makes common sense, doesn’t it? And I say to them, Yes, I think it does in its application to the former President. He had taken an oath to support the Constitution, and he engaged in insurrection or rebellion, or he had provided assistance, aid, or comfort to a rebellion in or around January 6th, when he attempted to overturn the 2020 Presidential election. And he inspired and at least gave aid and comfort to the attack on the United States Capitol for the purpose of interfering with and preventing the joint session from counting the electoral votes for the Presidency, the former President knowing that the electoral votes had been cast for then-candidate Joe Biden. That’s a classic understanding of an insurrection or rebellion against the authority of the United States…”



Tools & Techniques. Automating teachers?

https://www.bespacific.com/5-free-ai-sites-that-use-chatgpt-to-generate-custom-online-courses/

5 Free AI Sites That Use ChatGPT to Generate Custom Online Courses

MakeUseOf: “There are millions of free online courses to choose from on any subject. But if you want something very specific, you can also turn to ChatGPT. These websites use the power of GPT-4 to create a guided online course on any topic you ask, often complete with quizzes, tests, and exercises…

101 School already hosts a library of GPT-generated courses for you to browse and enroll in. They’re classified into categories such as science and mathematics, engineering and technology, arts, literature, communication, social sciences, legal, admin, personal services, health and safety, business and management, etc.

Any course you start has a simple three-pane format. The first pane is the index, showing you each chapter or section. The middle pane displays the current unit’s content of text and images; so far, we haven’t found any GPT-generated course on 101 School with video content. The last pane is a ChatGPT window, where you can ask the AI to give you a test on the material you just read, conduct practical exercises, or ask for further reading. Of course, you can also ask any other question using the best prompting techniques for ChatGPT.

You can go through the whole course in one go or ask to get a daily, bi-daily, or weekly email containing the next unit. You’ll need to register for a free account to keep track of your progress in courses, or to create your own course…”



Wednesday, August 23, 2023

If you can’t trust AI, why use it at all?

https://www.bespacific.com/can-you-trust-ai-heres-why-you-shouldnt/

Can you trust AI? Here’s why you shouldn’t

Via LLRX: Can you trust AI? Here’s why you shouldn’t. Security expert Bruce Schneier and data scientist Nathan Sanders believe that people who come to rely on AIs will have to trust them implicitly to navigate daily life. That means they will need to be sure the AIs aren’t secretly working for someone else. Across the internet, devices and services that seem to work for you already secretly work against you: smart TVs spy on you, phone apps collect and sell your data, and many apps and websites manipulate you through dark patterns, design elements that deliberately mislead, coerce, or deceive website visitors. This is surveillance capitalism, and AI is shaping up to be part of it.





Assume current technology? Is manual review ever justified?

https://www.bespacific.com/millions-of-pages-of-documents-is-no-reason-to-delay-trumps-january-6-trial/

Millions of Pages of Documents Is No Reason to Delay Trump’s January 6 Trial

The Atlantic [read free] – We’ve litigated cases with far more paperwork than that. The task was manageable and, crucially, fair. By Norman L. Eisen and Andrew Weissmann. “Next Monday, Judge Tanya Chutkan is expected to decide the date of Donald Trump’s federal criminal trial for his attempt to overturn the 2020 presidential election. The two parties’ proposed dates are ages apart: Special Counsel Jack Smith has requested January 2024, and Trump has asked for more than two years later than that. Yesterday, Smith submitted a brief response to Trump’s filing. Both sides contend that their suggested schedule is what normal order requires. Smith has the better argument by far. Contemporary trials, civil and criminal, routinely involve the tsunami of data people create day in and day out, resulting in millions of pages of documents produced during discovery. As the government’s reply highlights, Trump’s argument, resting principally on the more than 11.5 million pages of evidence the government produced as an excuse for significant delay, is without merit.

Based on our experience in this field, it is simply disingenuous to use 19th- and 20th-century standards for paper cases in the modern era. The chart that Trump’s lawyers produced in their brief—visualizing a tower of physical paper they would have to review in a six-month span—is misleading. We—attorneys both—would be laughed out of court if we suggested delays for our side because a page-by-page document review of all discovery would take three years. Under that approach, no major civil or criminal case would ever be tried for years and years—which may be the Trump team’s actual goal.”





This could become more useful as the 2024 election heats up.

https://www.makeuseof.com/find-video-source/

How to Find the Source of a Video on the Web

Have you seen a random video clip recently and want to find and watch the complete video? Or have you heard breaking news about an incident in a video and want to check its authenticity? No matter why you want to uncover the source of a video, there are several ways to go about it.
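
The article covers several approaches; a common first step is to grab a few representative frames and run them through a reverse image search such as Google Images or TinEye. Here is a minimal sketch using the opencv-python package; the file name is a placeholder:

import cv2

cap = cv2.VideoCapture("clip.mp4")  # placeholder file name
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Save roughly five evenly spaced frames for reverse image searching.
for i, pos in enumerate(range(0, total, max(total // 5, 1))):
    cap.set(cv2.CAP_PROP_POS_FRAMES, pos)  # jump to frame number `pos`
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"frame_{i}.png", frame)

cap.release()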





This should be fun: HHS misinterprets HIPAA?

https://www.pogowasright.org/is-ocr-correct-that-website-metadata-is-regulated-by-hipaa-chicago-federal-court-asks/

Is OCR Correct That Website Metadata Is Regulated by HIPAA? Chicago Federal Court Asks

Scott Lashway and Matthew Stein of Manatt, Phelps & Phillips write:

The plaintiff’s bar continues to bring new wiretapping claims over pixels and analytics programs in courts around the country, including against hospitals and other entities covered by the Health Insurance Portability and Accountability Act (HIPAA). This comes, in part, on the heels of the Department of Health and Human Services’ (HHS) December 2022 bulletin on tracking technologies and the more recent joint HHS–Federal Trade Commission (FTC) letter to website and application providers on the subject. The courts are now beginning to discuss how those materials impact litigation against HIPAA-covered entities.

Read more at JDSupra.



Tuesday, August 22, 2023

Yes, AI will change things. How? How fast?

https://www.bespacific.com/thomson-reuters-future-of-professionals-report/

Thomson Reuters Future of Professionals Report

Thomson Reuters, a global content and technology company, today released its Future of Professionals Report. The survey of more than 1,200 individuals working internationally shares the predicted impact that generative AI will have on the future of professional work. The survey showed 67% of respondents believe AI will have a transformational or high impact on their profession in the next five years. What’s more, two-thirds of respondents (66%) predict AI will create new professional career paths, while 68% expect roles that do not require traditional legal or tax qualifications to increase over the next five years… 67% of respondents indicated their biggest personal motivator was “producing high-quality advice.” To continue this work in the era of generative AI, professionals need to reconsider and redefine what it means to be an advisor and evolve business models to prepare and serve customers for tomorrow – not just today.

“Generative AI will have a transformational impact on the work professionals do and how it is done, but it will never replace the human element when advising clients and stakeholders.”





A sign of things to come?

https://www.bespacific.com/scholarturbo/

ScholarTurbo

Use ChatGPT to chat with PDFs. ScholarTurbo lets you apply the capabilities of ChatGPT to PDFs: upload any PDF and start asking questions about it. Paid users get access to GPT-4, while free users get GPT-3.5; use is limited to 100 questions and three PDFs per day.
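
ScholarTurbo’s internals aren’t public, but the usual pattern behind “chat with a PDF” tools is to extract the document’s text and hand it to the model as context. A minimal sketch assuming the pypdf and openai (pre-1.0) packages; the file name, model, and crude 12,000-character cutoff are illustrative:

import openai
from pypdf import PdfReader

reader = PdfReader("paper.pdf")  # placeholder file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Truncate to fit the context window; real tools typically chunk
        # the text and retrieve only relevant pieces via embeddings.
        {"role": "system", "content": f"Answer using only this document:\n{text[:12000]}"},
        {"role": "user", "content": "What is the paper's main claim?"},
    ],
)
print(response["choices"][0]["message"]["content"])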





Tools & Techniques. (It’s not just for teachers.)

https://dailynous.com/2023/08/21/resources-for-teaching-in-the-age-of-chatgpt-other-llms/

Resources for Teaching in the Age of ChatGPT & other LLMs

How do large language models (LLMs) affect how we understand our job as teachers, and how do they affect what we should do in order to do that job well?

Zak Kopeikin, Ted Shear, and Julia Staffel (CU Boulder) have compiled some resources to help instructors teach well in an era in which college students have access to ChatGPT and other LLMs.

Please feel free to suggest others in the comments.



Monday, August 21, 2023

Think of the accountants who brought VisiCalc (on their Apple computers) into organizations where the IT departments refused to recognize anything but mainframes as “real computers.”

https://www.bespacific.com/beware-the-emergence-of-shadow-ai/

Beware the Emergence of Shadow AI

Tech Policy Press: “The enthusiasm for generative AI systems has taken the world by storm. Organizations of all sorts – including businesses, governments, and nonprofit organizations – are excited about its applications, while regulators and policymakers show varying levels of desire to regulate and govern it. Old hands in the field of cybersecurity and governance, risk & compliance (GRC) functions see a much more practical challenge as organizations move to deploy ChatGPT, DALL-E 2, Midjourney, Stable Diffusion, and dozens of other products and services to accelerate their workflows and gain productivity. An upsurge of unreported and unsanctioned generative AI use has brought forth the next iteration of the classic “Shadow IT” problem: Shadow AI…

Shadow AI refers to the AI systems, solutions, and services used or developed within an organization without explicit organizational approval or oversight. It can include anything from using unsanctioned software and apps to developing AI-based solutions in a skunkworks-like fashion. Wharton School professor Ethan Mollick has called such users the hidden AI cyborgs.





The first of a new wave?

https://www.cnn.com/2023/08/21/tech/khan-academy-ai-tutor/index.html

Meet your new AI tutor

More than 8,000 teachers and students will test education nonprofit Khan Academy’s artificial intelligence tutor in the classroom this upcoming school year, toying with its interactive features and funneling feedback to Khan Academy if the AI botches an answer.

The chatbot, Khanmigo, offers individualized guidance to students on math, science and humanities problems; a debate tool with suggested topics like student debt cancellation and AI’s impact on the job market; and a writing tutor that helps the student craft a story, among other features.

First launched in March to an even smaller pilot program of around 800 educators and students, Khanmigo also allows students to chat with a growing list of AI-powered historical figures, from George Washington to Cleopatra and Martin Luther King Jr., as well as literary characters like Winnie the Pooh and Hamlet.



Sunday, August 20, 2023

Imagine running your business by asking ChatGPT, “What would Stephen King do?”

https://www.theatlantic.com/technology/archive/2023/08/books3-ai-meta-llama-pirated-books/675063/

Revealed: The Authors Whose Pirated Books Are Powering Generative AI

Stephen King, Zadie Smith, and Michael Pollan are among thousands of writers whose copyrighted works are being used to train large language models.





Let the AI lawyers do it first…

https://borisbabic.com/research/AppealingAI.pdf

How AI Can Learn from the Law: Putting Humans in the Loop Only on Appeal

While the literature on putting a “human in the loop” in artificial intelligence (AI) and machine learning (ML) has grown significantly, limited attention has been paid to how human expertise ought to be combined with AI/ML judgments. This design question arises because of the ubiquity and quantity of algorithmic decisions being made today in the face of widespread public reluctance to forgo human expert judgment. To resolve this conflict, we propose that human expert judges be included via appeals processes for review of algorithmic decisions. Thus, the human intervenes only in a limited number of cases and only after an initial AI/ML judgment has been made. Based on an analogy with appellate processes in judiciary decision-making, we argue that this is, in many respects, a more efficient way to divide the labor between a human and a machine. Human reviewers can add more nuanced clinical, moral, or legal reasoning, and they can consider case-specific information that is not easily quantified and, as such, not available to the AI/ML at an initial stage. In doing so, the human can serve as a crucial error correction check on the AI/ML, while retaining much of the efficiency of AI/ML’s use in the decision-making process. In this paper we develop these widely applicable arguments while focusing primarily on examples from the use of AI/ML in medicine, including organ allocation, fertility care, and hospital readmission.
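
A toy sketch of the division of labor the paper proposes: the model decides every case, and a human expert weighs in only when a decision is appealed. The names and the 0.5 threshold are illustrative, not from the paper:

from dataclasses import dataclass

@dataclass
class Decision:
    case_id: int
    outcome: str
    decided_by: str

def model_decide(case_id: int, score: float) -> Decision:
    # The initial judgment is always made by the AI/ML system.
    outcome = "approve" if score >= 0.5 else "deny"
    return Decision(case_id, outcome, decided_by="model")

def appeal(decision: Decision, human_outcome: str) -> Decision:
    # Human expertise enters only here, after an appeal is filed, where
    # case-specific, hard-to-quantify information can be weighed.
    return Decision(decision.case_id, human_outcome, decided_by="human")

d = model_decide(case_id=42, score=0.31)  # model denies
d = appeal(d, human_outcome="approve")    # expert overrides on appeal
print(d)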





An overview of everything?

https://www.researchgate.net/profile/Keshav-Singh-17/publication/372958765_Navigating_the_Promise_and_Perils_of_Artificial_Intelligence_A_Comprehensive_Analysis_of_Risks_and_Benefits/links/64d166a391fb036ba6d5cd4c/Navigating-the-Promise-and-Perils-of-Artificial-Intelligence-A-Comprehensive-Analysis-of-Risks-and-Benefits.pdf

Navigating the Promise and Perils of Artificial Intelligence: A Comprehensive Analysis of Risks and Benefits

Artificial intelligence (AI) has become a popular topic in recent years due to the rapid advancements in technology. With the rise of AI, there are many potential benefits that it can bring, such as increased efficiency, improved decision-making, and personalized experiences. However, there are also numerous risks associated with AI, such as job displacement, loss of privacy, and even potential safety concerns. This research paper explores the ethical, legal, and social implications of AI, addresses the various risks and benefits of AI, and provides insights on how to mitigate the risks while maximizing the benefits. Humans have continuously produced and refined many technologies in their pursuit of sophistication. The purpose of this practice is to make sure that they can develop goods that can make it easier for them to carry out numerous tasks [1]. Since the beginning of time, humans have engaged in a variety of behaviours in an effort to increase their chances of succeeding in the many situations they have encountered. The industrial revolution, which began in the early 1760s, would bring the practice to an end. Several nations at the time believed it was feasible to produce various goods for the general public in order to satisfy the need for diverse goods brought on by expanding populations. Since then, thanks to the development and widespread application of artificial intelligence, humans have advanced considerably.





Could be useful in a dispute.

https://www.degruyter.com/document/isbn/9781503637047/html

A History of Fake Things on the Internet

As all aspects of our social and informational lives increasingly migrate online, the line between what is "real" and what is digitally fabricated grows ever thinner—and that fake content has undeniable real-world consequences. A History of Fake Things on the Internet takes the long view of how advances in technology brought us to the point where faked texts, images, and video content are nearly indistinguishable from what is authentic or true.

Computer scientist Walter J. Scheirer takes a deep dive into the origins of fake news, conspiracy theories, reports of the paranormal, and other deviations from reality that have become part of mainstream culture, from image manipulation in the nineteenth-century darkroom to the literary stylings of large language models like ChatGPT. Scheirer investigates the origins of Internet fakes, from early hoaxes that traversed the globe via Bulletin Board Systems (BBSs), USENET, and a new messaging technology called email, to today's hyperrealistic, AI-generated Deepfakes. An expert in machine learning and recognition, Scheirer breaks down the technical advances that made new developments in digital deception possible, and shares behind-the-screens details of early Internet-era pranks that have become touchstones of hacker lore. His story introduces us to the visionaries and mischief-makers who first deployed digital fakery and continue to influence how digital manipulation works—and doesn't—today: computer hackers, digital artists, media forensics specialists, and AI researchers. Ultimately, Scheirer argues that problems associated with fake content are not intrinsic properties of the content itself, but rather stem from human behavior, demonstrating our capacity for both creativity and destruction.





Removing the ‘artificial’ will help AI learn ethics?

https://www.psychologytoday.com/us/blog/psychology-through-technology/202308/how-machine-learning-differs-from-human-learning

How Machine Learning Differs from Human Learning

To instill values and morality into AI, the programmers might try to imitate the way children learn and develop notions of right and wrong. Children’s thinking seems to emerge in stages, sometimes undergoing remarkable mental leaps and growth spurts. It takes years for children to evolve adult-like thinking, emotional intelligence, theory of mind, and metacognition.

Most important, humans learn in the context of parents, teachers, peers, and others who adjust their helping behaviors to each child’s level and capacity (scaffolding). Should we even expect AI to think like a human or someday demonstrate empathy when it is not programmed to learn gradually, in stages, with human guidance? Can we ever expect AI to learn values and empathy, or develop morality, unless bots are carefully guided by others to think about “right vs. wrong” as human children do?

Furthermore, humans possess a unique natural curiosity. Children continually yearn to know more and strive to explore and understand the world and themselves. Therefore, it is not enough to simply program machines to learn. We must also endow AI with an innate curiosity—not just data hunger but something more similar to a human child’s biological drive to understand, organize, and adapt. Programmers are already working with deep-learning models to continually improve AI with human neurocognitive-inspired algorithms.(1)
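
For what “innate curiosity” usually means in machine learning terms, here is a toy sketch of curiosity-driven exploration: the agent treats its own prediction error as an intrinsic reward, so it keeps visiting the states it understands least. Purely illustrative; nothing here comes from the article:

import numpy as np

rng = np.random.default_rng(0)
world = rng.random(10)   # hidden values the agent is trying to model
model = np.zeros(10)     # the agent's current predictions
lr = 0.5                 # learning rate

for step in range(20):
    surprise = np.abs(world - model)        # prediction error per state
    s = int(np.argmax(surprise))            # "curiosity" picks the most surprising state
    model[s] += lr * (world[s] - model[s])  # learning there reduces future surprise

print("remaining surprise:", np.round(np.abs(world - model), 3))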