Saturday, May 18, 2024

Did we get it right?

https://fpf.org/blog/colorado-enacts-first-comprehensive-u-s-law-governing-artificial-intelligence-systems/

COLORADO ENACTS FIRST COMPREHENSIVE U.S. LAW GOVERNING ARTIFICIAL INTELLIGENCE SYSTEMS

On May 17, Governor Polis signed the Colorado AI Act (CAIA) (SB-205) into law, establishing new individual rights and protections with respect to high-risk artificial intelligence systems. Building on existing best practices and prior legislative efforts, the CAIA is the first comprehensive United States law to explicitly establish guardrails against discriminatory outcomes from the use of AI. The Act will take effect on February 1, 2026.





Watch the video. Note how “human” GPT-4o sounds. What other training might benefit from this technology?

https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/05/17/new-chatgpt-eyed-better-learning

AI’s New Conversation Skills Eyed for Education

… ChatGPT’s newest version, GPT-4o (the “o” standing for “omni,” meaning “all”), has a more realistic voice and quicker verbal response time, both aiming to sound more human. The version, which should be available to free ChatGPT users in coming weeks—a change also hailed by educators—allows people to interrupt it while it speaks, simulates more emotions with its voice and translates languages in real time. It also can understand instructions in text and images and has improved video capabilities.

https://www.youtube.com/watch?v=_nSmkyDNulk&embeds_referring_euri=https%3A%2F%2Fwww.insidehighered.com%2F&source_ve_path=OTY3MTQ&feature=emb_imp_woyt



Friday, May 17, 2024

A security heads-up! Now that you “can,” it is inevitable that someone “will” point ChatGPT at your proprietary data.

https://venturebeat.com/ai/chatgpt-now-lets-you-import-files-directly-from-google-drive-microsoft-onedrive/

ChatGPT now lets you import files directly from Google Drive, Microsoft OneDrive

The news from OpenAI this week continues: today, the company announced it has updated its signature large language model (LLM) chatbot ChatGPT with the capability to import files directly from the external cloud storage services Google Drive and Microsoft OneDrive.

The capability is coming to paying ChatGPT Plus, Team, and Enterprise subscribers and will be available when using the new underlying GPT-4o model that OpenAI debuted on Monday, as well as older models.





What is the implied “or else” here? Sony will try to gain access to each LLM to determine what (trivial) percentage of the training data comes from Sony. Then it will try to determine whether that had an impact on the output. Then it will try to estimate the share of profits Sony is entitled to?

https://www.ft.com/content/c5b93b23-9f26-4e6b-9780-a5d3e5e7a409

Sony Music warns global tech and streamers over AI use of its artists

Sony Music is sending warning letters to more than 700 artificial intelligence developers and music streaming services globally in the latest salvo in the music industry’s battle against tech groups ripping off artists.

The Sony Music letter, which has been seen by the Financial Times, expressly prohibits AI developers from using its music — which includes artists such as Harry Styles, Adele and BeyoncĂ© — and opts out of any text and data mining of any of its content for any purposes such as training, developing or commercialising any AI system.

Sony Music is sending the letter to companies developing AI systems including OpenAI, Microsoft, Google, Suno and Udio, according to those close to the group.

The world’s second-largest music group is also sending separate letters to streaming platforms, including Spotify and Apple, asking them to adopt “best practice” measures to protect artists and songwriters and their music from scraping, mining and training by AI developers without consent or compensation. It has asked them to update their terms of service, making it clear that mining and training on its content is not permitted.





Did AI have a voice in this treaty?

https://www.coe.int/en/web/portal/-/council-of-europe-adopts-first-international-treaty-on-artificial-intelligence

Council of Europe adopts first international treaty on artificial intelligence

The Council of Europe has adopted the first-ever international legally binding treaty aimed at ensuring respect for human rights, the rule of law and democracy in the use of artificial intelligence (AI) systems. The treaty, which is also open to non-European countries, sets out a legal framework that covers the entire lifecycle of AI systems and addresses the risks they may pose, while promoting responsible innovation. The convention adopts a risk-based approach to the design, development, use, and decommissioning of AI systems, which requires carefully considering any potential negative consequences of using AI systems.

The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law was adopted in Strasbourg during the annual ministerial meeting of the Council of Europe's Committee of Ministers, which brings together the Ministers for Foreign Affairs of the 46 Council of Europe member states.



Thursday, May 16, 2024

The future of elections?

https://www.bbc.com/news/world-asia-india-68918330

AI and deepfakes blur reality in India elections

In November last year, Muralikrishnan Chinnadurai was watching a livestream of a Tamil-language event in the UK when he noticed something odd.

A woman introduced as Duwaraka, daughter of Velupillai Prabhakaran, the Tamil Tiger militant chief, was giving a speech.

The problem was that Duwaraka had died more than a decade earlier, in an airstrike in 2009 during the closing days of the Sri Lankan civil war. The then-23-year-old's body was never found.





Perspective. Will students form the same opinions?

https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/

A quarter of U.S. teachers say AI tools do more harm than good in K-12 education

As some teachers start to use artificial intelligence (AI) tools in their work, a majority are uncertain about or see downsides to the general use of AI tools in K-12 education, according to a Pew Research Center survey conducted in fall 2023. [How fast will it change? Bob]

A quarter of public K-12 teachers say using AI tools in K-12 education does more harm than good. About a third (32%) say there is about an equal mix of benefit and harm, while only 6% say it does more good than harm. Another 35% say they aren’t sure.



Wednesday, May 15, 2024

Toward the fully artificial lawyer?

https://www.forbes.com/sites/joshuadupuy/2024/05/15/neuro-symbolic-ai-could-redefine-legal-practices/?sh=1e76599d70f6

Neuro-Symbolic AI Could Redefine Legal Practices

In law school, grades are often viewed as predictors of future success: A students become law professors, B students become judges and C students become millionaires. But the adage may need updating. With neuro-symbolic AI, the coders and tech savants who master algorithms are poised to rule.

The pioneering developments in neuro-symbolic AI, exemplified by AlphaGeometry, serve as a promising blueprint for reshaping legal analysis. Unlike traditional legal AI systems constrained by keyword searches and static-rule applications, neuro-symbolic AI adopts a more nuanced and sophisticated approach. It integrates the robust data processing powers of deep learning with the precise logical structures of symbolic AI, laying the groundwork for devising legal strategies that are both insightful and systematically sound.
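The combination the article describes can be pictured in miniature: a statistical (“neural”) step ranks candidates by rough similarity, and a symbolic step then applies an explicit logical rule to the results. The sketch below is a toy illustration only, not any real legal AI system; the cases, the rule, and the word-overlap “embedding” are all invented stand-ins.

```python
# Toy neuro-symbolic sketch (invented data, not a real legal system):
# a statistical step ranks candidate precedents by crude text similarity,
# then a symbolic step filters them with an explicit logical rule.

def similarity(query: str, text: str) -> float:
    """Crude stand-in for a learned embedding: word-overlap (Jaccard) score."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q | t) if q | t else 0.0

# Hypothetical case database for illustration.
CASES = [
    {"name": "Doe v. Roe", "text": "breach of contract damages awarded", "binding": True},
    {"name": "In re Acme", "text": "contract breach claim dismissed", "binding": False},
    {"name": "Smith v. Jones", "text": "negligence tort liability", "binding": True},
]

def find_precedents(query: str, min_sim: float = 0.2) -> list[str]:
    # "Neural" step: rank cases by similarity to the query.
    ranked = sorted(CASES, key=lambda c: similarity(query, c["text"]), reverse=True)
    # "Symbolic" step: keep only cases satisfying an explicit rule
    # (here: the precedent must be binding AND sufficiently similar).
    return [c["name"] for c in ranked
            if c["binding"] and similarity(query, c["text"]) >= min_sim]

print(find_precedents("damages for breach of contract"))  # → ['Doe v. Roe']
```

The point of the two-stage design is that the statistical step handles fuzzy language while the rule step keeps the answer logically auditable, which is the property the article argues matters for legal work.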





Resources.

https://www.makeuseof.com/generative-ai-courses-best/

The 5 Best Generative AI Courses



Tuesday, May 14, 2024

Perspective.

https://sloanreview.mit.edu/audio/ai-hype-and-skepticism-economist-paul-romer/

AI Hype and Skepticism: Economist Paul Romer

Paul Romer once considered himself the most optimistic economist. He rightly predicted that technology would take off as an economic driver coming out of the inflation of the 1970s, but acknowledges he did not foresee the inequality that technological advances would bring.

On this episode of the Me, Myself, and AI podcast, Paul shares his views on AI advances and their implications for society. He is a proponent of keeping humans in the loop rather than pursuing full automation, and believes that technology need not be slowed down but can instead be pointed toward more meaningful and beneficial uses, citing education as an area ripe to benefit from AI.





Tools & Techniques.

https://www.inc.com/ben-sherry/openai-says-new-gpt-4o-model-is-twice-as-fast-costs-half-as-much-for-businesses.html

OpenAI Says New GPT-4o Model Is Twice as Fast and Costs Half as Much for Businesses

The company that ushered in the generative AI revolution with ChatGPT just announced its newest flagship model, GPT-4o. The model is said to be faster and have enhanced capabilities across text, vision, and audio. The new model will be rolled out to all ChatGPT users, both free and paid, over the next few weeks.

In a livestreamed presentation, OpenAI chief technology officer Mira Murati explained that the "o" in GPT-4o stands for "omnimodel," meaning that it is a multimodal tool with vision and audio capabilities natively built in. Previously, for ChatGPT to process images or audio, OpenAI would string multiple models together, each of which was specialized for a different media type, or "modality." Stringing these models together led to significant lag times, but by combining all the modalities into a single model, the process is made much faster. For example, the new model can translate a conversation between two people speaking different languages in real time, without any lag for loading or processing.
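The lag the article describes comes from sequential stages each adding their own latency. The sketch below illustrates that arithmetic with invented per-stage numbers; these figures are for illustration only and are not OpenAI's measurements.

```python
# Illustrative only: why chaining specialized models (speech-to-text, then a
# text LLM, then text-to-speech) adds lag. All latency figures are invented.

PIPELINE_MS = {"speech_to_text": 300, "llm_reasoning": 900, "text_to_speech": 250}
UNIFIED_MS = 1000  # hypothetical single multimodal model

def pipeline_latency(stages: dict[str, int]) -> int:
    # Stages run one after another, so their latencies simply add up.
    return sum(stages.values())

total = pipeline_latency(PIPELINE_MS)
print(total)               # → 1450 (three-stage chain, in ms)
print(total - UNIFIED_MS)  # → 450 (lag removed by unifying the stages)
```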



Monday, May 13, 2024

It used to be that only local weather conditions impacted planting. Now a solar storm stops everything.

https://www.404media.co/solar-storm-knocks-out-tractor-gps-systems-during-peak-planting-season/

Solar Storm Knocks Out Farmers' Tractor GPS Systems During Peak Planting Season

The solar storm that brought the aurora borealis to large parts of the United States this weekend also broke critical GPS and precision farming functionality in tractors and agricultural equipment during a critical point of the planting season, 404 Media has learned. These outages caused many farmers to fully stop their planting operations for the moment.

One chain of John Deere dealerships warned farmers that the accuracy of some of the systems used by tractors is “extremely compromised,” and that farmers who planted crops during periods of inaccuracy are going to face problems when they go to harvest, according to text messages obtained by 404 Media and an update posted by the dealership. The outages highlight how vulnerable modern tractors are to satellite disruptions, which experts have been warning about for years.



(Related) Could an enemy impact our agriculture?

https://www.ft.com/content/be9393db-cd63-4141-a4c8-c16b4fe1b6b0

How GPS warfare is playing havoc with civilian life

So-called GPS jamming and spoofing have largely been the preserve of militaries over the past two decades, used to defend sensitive sites against drone or missile attacks or mask their own activities.

But systematic interference by armed forces — particularly following Russia’s full-scale invasion of Ukraine and Israel’s offensive against Hamas in Gaza — has caused widespread issues for civilian populations as well. The footprint of corrupted signals has become vast.





Is the need to “control” moving us away from “On the Internet, nobody knows you’re a dog”?

https://fpf.org/blog/now-on-the-internet-will-everyone-know-if-youre-a-child/

NOW, ON THE INTERNET, WILL EVERYONE KNOW IF YOU’RE A CHILD?

As minors increasingly spend time online, lawmakers continue to introduce legislation to enhance the privacy and safety of kids’ and teens’ online experiences beyond the existing Children’s Online Privacy Protection Act (COPPA) framework. Proposals have proliferated in both the federal and state legislatures across the U.S. with varying approaches to minors’ privacy protections. Key pieces of this discussion are the age of individuals online, whether online sites and services know that an individual is a child, and how to balance kids’ and teens’ protections with anonymity online.



Sunday, May 12, 2024

I think he may have a point.

https://academic.oup.com/ojls/advance-article-abstract/doi/10.1093/ojls/gqae017/7665668

The Data Crowd as a Legal Stakeholder

This article identifies a new legal stakeholder in the data economy: the data crowd. A data crowd is a collective that: (i) is unorganised, non-deliberate and unable to form an agenda; (ii) relies on productive aggregation that creates an interdependency among participants; and (iii) is subjected to an external authority. Notable examples of crowds include users of a social network, users of a search engine and users of artificial intelligence-based applications. The law currently only protects users in the data economy as individuals, and in certain cases may address broad public concerns. However, it does not recognise the collective interests of the crowd of users and its unique vulnerability to platform power. The article presents and defends the crowd’s legal interests in a stable infrastructure for participation. It therefore reveals the need for a new approach to consumers’ rights in the data economy.





Tools & Techniques.

https://techcrunch.com/2024/05/11/u-k-agency-releases-tools-to-test-ai-model-safety/

U.K. agency releases tools to test AI model safety

The U.K. AI Safety Institute, the U.K.’s recently established AI safety body, has released a toolset designed to “strengthen AI safety” by making it easier for industry, research organizations and academia to develop AI evaluations.

Called Inspect, the toolset — which is available under an open source license, specifically an MIT License — aims to assess certain capabilities of AI models, including models’ core knowledge and ability to reason, and generate a score based on the results.

In a press release announcing the news on Friday, the Safety Institute claimed that Inspect marks “the first time that an AI safety testing platform which has been spearheaded by a state-backed body has been released for wider use.”
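At its core, an evaluation platform of the kind described runs a model over task samples and scores the outputs against reference answers. The sketch below shows that core loop only; it is not the Inspect API, and the sample questions and canned "model" are invented stand-ins for illustration.

```python
# Minimal sketch of what an AI evaluation harness does at its core:
# run a model over task samples and score outputs against reference answers.
# NOT the Inspect API; the dataset and toy model are invented stand-ins.

def toy_model(prompt: str) -> str:
    """Hypothetical model under test; a real harness would call an LLM."""
    canned = {
        "What is the capital of France?": "Paris",
        "What is 7 * 6?": "42",
        "Who wrote Hamlet?": "Dickens",  # deliberately wrong answer
    }
    return canned.get(prompt, "")

# (prompt, reference answer) pairs making up the evaluation dataset.
SAMPLES = [
    ("What is the capital of France?", "Paris"),
    ("What is 7 * 6?", "42"),
    ("Who wrote Hamlet?", "Shakespeare"),
]

def evaluate(model, samples) -> float:
    # Exact-match scorer: fraction of samples the model answers correctly.
    correct = sum(1 for prompt, target in samples if model(prompt) == target)
    return correct / len(samples)

print(round(evaluate(toy_model, SAMPLES), 2))  # → 0.67 (2 of 3 correct)
```

Real platforms layer datasets, model adapters, and multiple scorers (exact match, graded rubrics, model-judged answers) on top of this loop, which is why releasing a shared open-source toolset matters.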