Inevitable.
https://www.elgaronline.com/monochap/book/9781035307555/book-part-9781035307555-6.xml
Chapter 1: General introduction to Taxing Artificial Intelligence
Apples and oranges?
https://cacm.acm.org/opinion/ethical-ai-is-not-about-ai/
Ethical AI is Not about AI
The equation Ethics + AI = Ethical AI is questionable.
… Additivity between two entities requires ontological likeness. Adding ethics and AI is based on ontological assumptions about what AI is and what ethics is, namely that the two entities are of the same nature or, at least, some of their components are.
Economics is best understood in hindsight.
https://reason.com/2024/03/16/seattle-law-mandating-higher-delivery-driver-pay-is-a-disaster/
Seattle Law Mandating Higher Delivery Driver Pay Is a Disaster
Just two weeks after the law went into effect, Seattleites had to contend with $26 coffees and $32 sandwiches.
In 2022, Seattle's City Council passed an ordinance mandating a minimum earnings floor for app-based food delivery drivers in the city. The law finally went into effect in January 2024, but so far the main results have been customers deleting their delivery apps en masse, food orders plummeting, and driver pay cratering.
We’ve got lots of information. Some of it may be true.
https://www.washingtontimes.com/newsletters/curated/threat-status/issue/54/
AI 'hallucinations' spark U.S. intel concern
U.S. intelligence says new generative artificial intelligence (GAI) is providing opportunities for American spies collecting and analyzing open-source information, but also poses serious challenges, given that those tools sometimes produce "hallucinations," essentially inventing answers with demonstrably false information.
National Security Tech Reporter Ryan Lovelace has a deep-dive on the Intelligence Community Open-Source Intelligence (IC OSINT) Strategy for 2024-2026, an unclassified version of which was made public this month by Director of National Intelligence Avril Haines and CIA Director William Burns. The document says the intelligence community needs new tradecraft and training to prepare analysts and operatives to combat the dangers of GAI, including the production of false information that some warn could trigger bad decisions or a global disaster.
“OSINT tradecraft and training must be updated and refined to mitigate the potential risks of GAI, including inaccuracies and hallucinations,” the strategy document states.