Tuesday, January 18, 2022

Consider other applications. Scan my classroom and zap inattentive students.

https://hackaday.com/2022/01/17/machine-learning-detects-distracted-politicians/

MACHINE LEARNING DETECTS DISTRACTED POLITICIANS

[Dries Depoorter] has a knack for highly technical projects with a solid artistic bent to them, and this piece is no exception. The Flemish Scrollers is a software system that watches live streamed sessions of the Flemish government, and uses Python and machine learning to identify and highlight politicians who pull out phones and start scrolling. The results? Pushed out live on Twitter and Instagram, naturally. The project started back in July 2021, and has been dutifully running ever since, so by now we expect that holding one’s phone where the camera can see it is probably considered a rookie mistake.
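
Out of curiosity, here is a rough sketch of how such a pipeline could be wired up. This is not Depoorter's actual code; it assumes a pretrained COCO object detector (YOLOv5 loaded via PyTorch Hub, whose classes happen to include "cell phone") and a placeholder stream URL.

# Hypothetical sketch, NOT the real Flemish Scrollers code. Assumes a
# pretrained COCO detector (YOLOv5 via PyTorch Hub); the stream URL is
# a placeholder.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
cap = cv2.VideoCapture("https://example.com/parliament-live.m3u8")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # YOLOv5 expects RGB; OpenCV delivers BGR.
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = results.pandas().xyxy[0]

    # Keep only confident "cell phone" detections.
    phones = detections[(detections["name"] == "cell phone")
                        & (detections["confidence"] > 0.5)]

    for _, box in phones.iterrows():
        # Highlight the phone; the real project also identifies which
        # politician is scrolling before posting to Twitter/Instagram.
        cv2.rectangle(frame,
                      (int(box["xmin"]), int(box["ymin"])),
                      (int(box["xmax"]), int(box["ymax"])),
                      (0, 0, 255), 2)

    cv2.imshow("scrollers (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()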



A different way to look at ethics.

https://theconversation.com/how-to-be-a-god-we-might-one-day-create-virtual-worlds-with-characters-as-intelligent-as-ourselves-174978

How to be a god: we might one day create virtual worlds with characters as intelligent as ourselves

Most research into the ethics of Artificial Intelligence (AI) concerns its use for weaponry, transport or profiling. Although the dangers presented by an autonomous, racist tank cannot be overstated, there is another aspect to all this. What about our responsibilities to the AIs we create?

You want planet-sized computers? You can have them. You want computers made from human brain tissue? You can have them. Eventually, I believe we will have virtual worlds containing characters as smart as we are – if not smarter – and in full possession of free will. What will our responsibilities towards these beings be? We will after all be the literal gods of the realities in which they dwell, controlling the physics of their worlds. We can do anything we like to them.

So knowing all that…should we?


(Related) We need an AI ethicist.

https://thenextweb.com/news/why-giving-ai-human-ethics-probably-terrible-idea

Why giving AI ‘human ethics’ is probably a terrible idea

If you want artificial intelligence to have human ethics, you have to teach it to evolve ethics like we do. At least that’s what a pair of researchers from the International Institute of Information Technology in Bangalore, India, proposed in a pre-print paper published today.

Titled “AI and the Sense of Self,” the paper describes a methodology called “elastic identity” by which the researchers say AI might learn to gain a greater sense of agency while simultaneously understanding how to avoid “collateral damage.”

In short, the researchers are suggesting that we teach AI to be more ethically aligned with humans by allowing it to learn when it’s appropriate to optimize for self and when it’s necessary to optimize for the good of a community.
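
A toy way to picture that trade-off (my illustration, not the paper's actual formalism): give each agent a utility that blends its own payoff with the community's, with an "elasticity" knob controlling how far its sense of self stretches.

# Toy sketch of the "elastic identity" idea -- my own illustration, not
# the formalism from "AI and the Sense of Self".
from dataclasses import dataclass

@dataclass
class ElasticAgent:
    name: str
    elasticity: float  # 0.0 = pure self-interest, 1.0 = fully communal

    def utility(self, own_reward: float, community_reward: float) -> float:
        # Blend own payoff with the community's, weighted by how far
        # this agent's "sense of self" extends.
        return ((1 - self.elasticity) * own_reward
                + self.elasticity * community_reward)

# An action that benefits the agent but harms the group (the paper's
# "collateral damage" case):
selfish = ElasticAgent("selfish", elasticity=0.1)
communal = ElasticAgent("communal", elasticity=0.8)

own, community = 10.0, -20.0
print(selfish.utility(own, community))   #  7.0 -> takes the action
print(communal.utility(own, community))  # -14.0 -> declines it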



Perspective.

https://www.ft.com/content/e3f36d82-89b8-41df-ae11-0653ce7e7944

Artificial intelligence searches for the human touch

… It’s already clear that the impact of such systems cannot be understood simply by examining the underlying code or even the data used to build them. We must look to people for answers as well.

Two recent studies do exactly that. The first is an Ipsos Mori survey of more than 19,000 people across 28 countries on public attitudes to AI, the second a University of Tokyo study investigating Japanese people’s views on the morals and ethics of AI usage.

