Saturday, March 30, 2024

Nearing a tipping point?

https://thehill.com/policy/technology/4557248-nearly-a-third-of-employed-americans-under-30-used-chatgpt-for-work-poll/

Nearly a third of employed Americans under 30 used ChatGPT for work: Poll

More employed Americans have used the artificial intelligence (AI) tool ChatGPT for work since last year, with the biggest increase among the younger portion of the workforce, according to a Pew Research poll released Tuesday.

The survey found that 31 percent of employed Americans between 18 and 29 surveyed in February said they have used ChatGPT for tasks at work, up from 12 percent who said the same last March.

The share of employed Americans who said they use ChatGPT for work decreased with age. Twenty-one percent of employed adults aged 30 to 49 said they use it, up from 8 percent last year, and just 10 percent of those aged 50 and older said the same, up from only 4 percent last year.





A model for future AI laws?

https://www.jdsupra.com/legalnews/utah-passes-artificial-intelligence-1386840/

Utah Passes Artificial Intelligence Legislation

Utah is among the first in the nation to pass legislation aimed at regulating the burgeoning field of artificial intelligence (AI). The bill (SB0149), known as the Artificial Intelligence Policy Act, was signed by Governor Cox on March 13, 2024, and is set to take effect on May 1, 2024. The new legislation is incorporated into Utah’s consumer protection laws and introduces certain disclosure obligations for entities and professionals using AI systems. It also establishes the Office of Artificial Intelligence Policy (Office) and the Artificial Intelligence Learning Laboratory Program (Program), which is tasked with analyzing and researching AI technologies to inform the state regulatory framework.

Such disclosure obligations fall into two categories with respect to the use of generative AI. The first category is a responsive disclosure obligation that applies to any act administered and enforced by the Utah Division of Consumer Protection (e.g., telemarketing, charitable solicitations, and other acts covered by U.C.A. 13-2-1). It requires a person who “uses, prompts, or otherwise causes generative AI to interact with a person” to clearly and conspicuously disclose that the person is interacting with generative AI, if asked or prompted by that person.

The second category applies to any person who provides services of a “regulated occupation” (a newly defined term): occupations regulated by the Utah Department of Commerce that require a license or state certification to practice (e.g., accountants, architects, therapists, and healthcare professionals, to name a few). Those providing such services must proactively disclose when a person is interacting with generative AI, or viewing material created by generative AI, in the provision of those services: verbally at the start of an oral exchange or conversation, or through electronic messaging before a written exchange.
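
Just to make the first (responsive) obligation concrete, here is a minimal, purely hypothetical Python sketch of how a chatbot front end might answer an “are you an AI?” question before handing the conversation to the model. The trigger phrases and function names are mine, not the statute’s, and this is an illustration, not legal guidance.

import re

# Hypothetical phrases a user might use to ask whether they are talking to
# generative AI; a real system would need much broader coverage.
AI_QUESTION_PATTERNS = [
    r"\bare you (a|an) (ai|bot|robot|computer)\b",
    r"\bam i (talking|chatting|speaking) (to|with) (a|an) (ai|bot|human|person)\b",
    r"\bis this (a|an) (ai|bot|human|person)\b",
]

DISCLOSURE = "You are interacting with generative AI, not a human."

def respond(user_message, generate_reply):
    # Answer with a clear disclosure when asked whether this is AI;
    # otherwise defer to the underlying generative model.
    text = user_message.lower()
    if any(re.search(p, text) for p in AI_QUESTION_PATTERNS):
        return DISCLOSURE
    return generate_reply(user_message)

# Stand-in for the real model call, just for demonstration.
fake_model = lambda msg: "(model reply to: " + msg + ")"
print(respond("Am I talking to a bot?", fake_model))      # -> the disclosure
print(respond("What are your store hours?", fake_model))  # -> model reply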



Friday, March 29, 2024

No good deed goes unpunished?

https://www.latimes.com/california/story/2024-03-22/lego-asks-murrieta-police-department-to-stop-using-companys-toy-heads-in-mug-shots

Lego asks Murrieta Police Department to stop using company’s toy heads in mug shots

The maker of Lego toys has asked the Murrieta Police Department of Riverside County to stop using digitally added Lego heads to hide the identities of suspects in mug shots.

The request comes after the department posted a photo on social media earlier this week of suspects with their faces hidden by the yellow heads, writing that they did so to comply with a new state law.

“The Murrieta Police Department prides itself in its transparency with the community, but also honors everyone’s rights & protections as afforded by law; even suspects,” the post said.





Toward an automated lawyer?

https://theconversation.com/generative-ai-is-changing-the-legal-profession-future-lawyers-need-to-know-how-to-use-it-225730

Generative AI is changing the legal profession – future lawyers need to know how to use it

Generative AI – technology such as ChatGPT that creates content when prompted – is affecting how solicitors, judges and barristers work. It’s also likely to change the work they are being asked to do.

This means that the way lawyers are trained needs to change, too. In education, there can be a tendency to see generative AI as a threat, including as a means for students to cheat. But if lawyers are using these technologies in practice, the training of future law graduates must reflect the demands of the profession.

Lord Justice Birss, a judge of the Court of Appeal of England and Wales specialising in intellectual property law, has described using ChatGPT to write part of a judgment, in particular, to generate a summary of a particular area of law. Finding the content generated acceptable, Lord Justice Birss described ChatGPT as “jolly useful” and explained that such technologies have “real potential”.

Specific generative AI technologies have been created for lawyers. Lexis+ AI can be used to draft legal advice and communications, and provides citations that link to legal authorities.

And as the use of AI grows, so too will the advice clients seek on AI-related legal issues. Areas of law already well established – such as liability or contract law – could be complicated by AI technologies.

For example, if generative AI is used to draft a contract, lawyers will have to be versed in how this works in order to address any disputes over the contract. It might, for instance, be inaccurate or lack important terminology.

It would be even more concerning if a legal professional had used the generative AI and not checked the drafted contract, owing to over-reliance on the accuracy of the technology.



(Related) AI could be dangerous for young lawyers?

https://www.reuters.com/technology/canadian-school-boards-sue-social-media-giants-over-4-bln-damages-2024-03-28/

Canadian school boards sue social media giants for over C$4 bln in damages

Four Canadian school boards have sought more than C$4 billion ($2.96 billion) in damages from social media firms such as Meta Platforms and Snap in a lawsuit alleging that their products harmed students.

The products are "negligently designed for compulsive use, have rewired the way children think, behave and learn", a joint statement by the boards said on Thursday.

That has caused learning and mental health crises in students, resulting in the schools having to invest more in support programs, they said.



(Related)

https://www.bespacific.com/using-chatgpt-for-homework-is-correlated-with-memory-loss-and-bad-grades/

Using ChatGPT for homework is correlated with memory loss and bad grades

Fast Company: The world is all abuzz about ChatGPT and the transformative powers it offers, but a new study published in the International Journal of Educational Technology in Higher Education warns that generative AI may not be a great tool for students. Study author Muhammad Abbas, an associate professor at the FAST School of Management at the National University of Computer and Emerging Sciences in Pakistan, told PsyPost that his inspiration for the research was based on his experiences as a professor. “For the last year, I observed an increasing, uncritical, reliance on generative-AI tools among my students for various assignments and projects ...” The researchers first developed a scale to measure ChatGPT use. Then they surveyed 494 university students in Pakistan on how much they used ChatGPT academically, their academic performance, procrastination, and memory loss. They conducted these surveys three times at an interval of one to two weeks…





Tools & Techniques.

https://www.bespacific.com/ai-fact-checking-tools-updated-march-28-2024/

AI fact-checking tools Updated March 28, 2024

The Journalist Toolbox – AI. Highlights dozens of sources, applications, tools and services, as well as fact-check training. Includes DeepFakes, Images and Multimedia Verification, AI search engines, and the Verification Handbook: How to Think About Deepfakes and Emerging Manipulation Technologies.



Thursday, March 28, 2024

Interesting. If the inventory is made public, I see a lot of time wasted on ‘justification.’

https://www.theverge.com/2024/3/28/24114105/federal-agencies-ai-responsible-guidance-omb-caio

Every US federal agency must hire a chief AI officer

New guidance from the Office of Management and Budget also requires a yearly inventory of all AI systems used by federal agencies.

Part of the responsibility of agencies’ AI officers and governance committees is to monitor their AI systems frequently. Young said agencies must submit an inventory of the AI products they use. If any AI system is considered “sensitive” enough to leave off the list, the agency must publicly provide a reason for its exclusion. Agencies also have to independently evaluate the safety risks of each AI platform they use.





Perspective.

https://sloanreview.mit.edu/article/how-ai-changes-your-workforce/

How AI Changes Your Workforce

In this concise video, Kiron and Altman discuss the impact AI will have on your organization’s work, workforce, and workforce ecosystem. Listen in and get a primer on the key issues, ahead of their deep dive on the topic at the upcoming MIT SMR Work/24 conference in May.



Wednesday, March 27, 2024

If I understand correctly, the DoJ would rather have Apple spend $77B on R&D so they can come up with products their competition can’t imagine?

https://www.ft.com/content/7cac2c98-6915-416c-9ddb-eb5c81003440

Apple’s buyback bonanza could be casualty of antitrust crackdown

Should companies be punished for handing money back to shareholders? The Department of Justice seems to think so.

In the midst of its landmark lawsuit against Apple, the antitrust enforcer pointed disapprovingly to the company’s $77bn buyback scheme in 2023: not because it was a lazy deployment of capital but as an indicator of what the DoJ dubbed “anti-competitive and exclusionary” conduct.





This sounds logical since AI is trained on existing data. (Not just art...)

https://www.newscientist.com/article/2423087-artists-who-use-ai-are-more-productive-but-less-original/

Artists who use AI are more productive but less original

Using artificial intelligence to create artworks increases artists’ productivity and generates more positive reactions, according to a study involving submissions to a popular art-sharing website by more than 50,000 users.

However, generative AI works are more likely to display stereotypical themes and depictions, reducing the novelty of the artist’s work.





Tools & Techniques. (A good summary of techniques.)

https://www.techopedia.com/best-machine-learning-algorithms

12 Best Machine Learning Algorithms Data Scientists Should Know in 2024

What are the best algorithms for budding data scientists and AI enthusiasts to learn about today?

To help answer this question, Techopedia has compiled a list of the top machine learning algorithms that AI enthusiasts should know, including a cheat sheet of the most widely used supervised and unsupervised machine learning algorithms.
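
As a quick taste of the supervised/unsupervised split such a cheat sheet covers, here is a minimal sketch using scikit-learn and its bundled iris dataset (my choice of library and data, not the article’s): a decision tree learns from labeled examples, while k-means groups the same measurements with no labels at all.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: a decision tree trained on labeled examples, scored on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("decision tree accuracy:", tree.score(X_te, y_te))

# Unsupervised: k-means clusters the same measurements without any labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("k-means cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])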



Tuesday, March 26, 2024

I do not agree. With electronic voting, each voter could be given a printout bearing a random number and a record of their vote, which could later be matched against the vote recorded on a public website. Prove to me that my vote was counted.

https://www.schneier.com/blog/archives/2024/03/on-secure-voting-systems.html

On Secure Voting Systems

Andrew Appel shepherded a public comment, signed by twenty election cybersecurity experts, including myself, on best practices for ballot marking devices and vote tabulation. It was written for the Pennsylvania legislature, but it’s general in nature.

From the executive summary:

We believe that no system is perfect, with each having trade-offs. Hand-marked and hand-counted ballots remove the uncertainty introduced by use of electronic machinery and the ability of bad actors to exploit electronic vulnerabilities to remotely alter the results. However, some portion of voters mistakenly mark paper ballots in a manner that will not be counted in the way the voter intended, or which even voids the ballot. Hand-counts delay timely reporting of results, and introduce the possibility for human error, bias, or misinterpretation.

Technology introduces the means of efficient tabulation, but also introduces a manifold increase in complexity and sophistication of the process. This places the understanding of the process beyond the average person’s understanding, which can foster distrust. It also opens the door to human or machine error, as well as exploitation by sophisticated and malicious actors.

Rather than assert that each component of the process can be made perfectly secure on its own, we believe the goal of each component of the elections process is to validate every other component.

Consequently, we believe that the hallmarks of a reliable and optimal election process are hand-marked paper ballots, which are optically scanned, separately and securely stored, and rigorously audited after the election but before certification. We recommend state legislators adopt policies consistent with these guiding principles, which are further developed below.
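
For what it’s worth, a minimal, purely hypothetical Python sketch of the receipt-matching idea in my comment above (not part of the experts’ recommendation): each voter gets a random receipt code on a printout, and a published list maps codes to recorded choices so the voter can check that their vote was counted as cast. The obvious objection, and one reason the experts prefer audited paper ballots, is that a receipt proving how you voted also enables vote selling and coercion.

import secrets

# Hypothetical in-memory "bulletin board": receipt code -> recorded choice.
bulletin_board = {}

def cast_vote(choice):
    # Record a vote and hand the voter a random receipt code (their printout).
    receipt = secrets.token_hex(8)   # unguessable, not tied to the voter's identity
    bulletin_board[receipt] = choice
    return receipt

def verify(receipt, expected_choice):
    # Voter later checks the published board: was my vote recorded as cast?
    return bulletin_board.get(receipt) == expected_choice

my_receipt = cast_vote("Candidate A")
print("receipt on printout:", my_receipt)
print("counted as cast?", verify(my_receipt, "Candidate A"))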



Monday, March 25, 2024

If I don’t claim to be a therapist, I’m immune?

https://apnews.com/article/chatbots-mental-health-therapy-counseling-ai-73feb819ff52a51d53fee117c3207219

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Download the mental health chatbot Earkick and you’re greeted by a bandana-wearing panda who could easily fit into a kids’ cartoon.

Start talking or typing about anxiety and the app generates the kind of comforting, sympathetic statements therapists are trained to deliver. The panda might then suggest a guided breathing exercise, ways to reframe negative thoughts or stress-management tips.

It’s all part of a well-established approach used by therapists, but please don’t call it therapy, says Earkick co-founder Karin Andrea Stephan.

“When people call us a form of therapy, that’s OK, but we don’t want to go out there and tout it,” says Stephan, a former professional musician and self-described serial entrepreneur. “We just don’t feel comfortable with that.”

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.



Sunday, March 24, 2024

Interesting. I intend to read this carefully.

https://www.preprints.org/manuscript/202403.1086/v1

Intention Recognition in Digital Forensics: Systematic Review

In this comprehensive review, we delve into the realm of intention recognition within the context of digital forensics and cybercrime. The rise of cybercrime has become a major concern for individuals, organizations, and governments worldwide. Digital forensics is a field that deals with the investigation and analysis of digital evidence in order to identify, preserve, and analyze information that can be used as evidence in a court of law. Intention recognition, by contrast, is a subfield of artificial intelligence that deals with identifying agents’ intentions based on their actions and changes of state. In the context of cybercrime, intention recognition can be used to identify the intentions of cybercriminals and even to predict their future actions. Employing a meticulous six-step systematic review approach, we curated research articles from reputable journals and categorized them into three distinct modeling approaches: logic-based, classical machine learning-based, and deep learning-based. Notably, intention recognition has transcended its historical confinement to network security and now addresses critical challenges across various subdomains, including social engineering attacks, AI black-box vulnerabilities, and physical security. While deep learning has emerged as the dominant paradigm, its inherent lack of transparency poses unique challenges in the digital forensics landscape. We advocate for hybrid solutions that blend deep learning’s power with interpretability. Furthermore, we propose the creation of a comprehensive taxonomy to precisely define intention recognition, paving the way for future advancements in this pivotal field.
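
To make the “classical machine learning-based” category concrete, here is a tiny, hypothetical sketch (mine, not the review’s): intention recognition framed as classifying an agent’s action trace, using a bag-of-actions representation and logistic regression from scikit-learn. The action names and intention labels are invented toy data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hand-made action traces (space-separated action tokens) with intention labels.
traces = [
    "port_scan brute_force_login download_db",
    "port_scan read_docs logout",
    "phishing_email credential_entry lateral_move",
    "read_docs search_files logout",
]
intentions = ["data_theft", "reconnaissance", "data_theft", "benign"]

# Bag-of-actions features plus logistic regression, a typical classical-ML baseline.
model = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),
    LogisticRegression(max_iter=1000),
)
model.fit(traces, intentions)

# Predict the likely intention behind a new, unseen action trace.
print(model.predict(["port_scan brute_force_login exfiltrate"]))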





Better get ready.

https://papers.academic-conferences.org/index.php/iccws/article/view/2099

Deepfakes: The Legal Implications

The development of deepfakes began in 2017, when a software developer on the Reddit online platform began posting his creations in which he swapped the faces of Hollywood celebrities onto the faces of adult film artists; in 2018, the comedic actor Jordan Peele posted a deepfake video of former U.S. President Obama insulting former U.S. President Trump and warning of the dangers of deepfake media. With the viral use of deepfakes by 2019, the U.S. House Intelligence Committee began hearings on the potential threats to U.S. security posed by deepfakes. Unfortunately, deepfakes have become even more sophisticated and difficult to detect. With easy access to deepfake applications, their usage has increased drastically over the last five years. Deepfakes are now designed to harass, intimidate, degrade, and threaten people, and they often lead to the creation and dissemination of misinformation as well as confusion about important state and non-state issues. A deepfake may also breach IP rights, e.g., by unlawfully exploiting a specific line, trademark, or label. Furthermore, deepfakes may cause more severe problems such as violations of human rights, the right of privacy, and personal data protection rights, apart from copyright infringement. While just a few governments have approved AI regulations, the majority have not, due to concerns around freedom of speech. And while most online platforms, such as YouTube, have implemented a number of legal mechanisms to control the content posted on their platforms, this remains a time-consuming and costly affair. A major challenge is that deepfakes often remain undetectable by the unaided human eye, which has led governments and private platforms to develop deepfake-detection technologies and regulations around their usage. This paper seeks to discuss the legal and ethical implications and responsibilities of the use of deepfake technologies, as well as to highlight the various social and legal challenges that both regulators and society face, while considering the potential role of online content dissemination platforms and governments in addressing deepfakes.





An unlikely solution?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759742

AI's Hippocratic Oath

Diagnosing diseases, creating artwork, offering companionship, analyzing data, and securing our infrastructure—artificial intelligence (AI) does it all. But it does not always do it well. AI can be wrong, biased, and manipulative. It has convinced people to commit suicide, starve themselves, arrest innocent people, discriminate based on race, radicalize in support of terrorist causes, and spread misinformation. All without betraying how it functions or what went wrong.

A burgeoning body of scholarship enumerates AI harms and proposes solutions. This Article diverges from that scholarship to argue that the heart of the problem is not the technology but its creators: AI engineers who either don’t know how to, or are told not to, build better systems. Today, AI engineers act at the behest of self-interested companies pursuing profit, not safe, socially beneficial products. The government lacks the agility and expertise to address bad AI engineering practices on its best day. On its worst day, the government falls prey to industry’s siren song. Litigation doesn’t fare much better; plaintiffs have had little success challenging technology companies in court.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?