Sunday, April 18, 2021

Has anyone got it right?

https://www.bricslawjournal.com/jour/article/view/452

Regulation of Artificial Intelligence in BRICS and the European Union

Global digitization and the emergence of Artificial Intelligence-based technologies pose challenges for all countries. The BRICS and European Union countries are no exception. BRICS as well as the European Union seek to strengthen their positions as leading actors on the world stage. At the present time, an essential means of doing so is for BRICS and the EU to implement smart policy and create suitable conditions for the development of digital technologies, including AI. For this reason, one of the most important tasks for BRICS and the EU is to develop an adequate approach to the regulation of AI-based technologies. This research paper is an analysis of the current approaches to the regulation of AI at the BRICS group level, in each of the BRICS countries, and in the European Union. The analysis is based on the application of comparative and formal juridical analysis of the legislation of the selected countries on AI and other digital technologies. The results of the analysis lead the authors to conclude that it is necessary to design a general approach to the regulation of these technologies for the BRICS countries similar to the approach chosen in the EU (the trustworthy approach) and to upgrade this legislation to achieve positive effects from digital transformation. The authors offer several suggestions for optimization of the provisions of the legislation, including designing a model legal act in the sphere of AI.





Reducing everything to that Cheshire Cat smile?

https://thenextweb.com/news/heres-why-we-should-never-trust-ai-to-identify-our-emotions-syndication

Here’s why we should never trust AI to identify our emotions

Imagine you are in a job interview. As you answer the recruiter’s questions, an artificial intelligence (AI) system scans your face, scoring you for nervousness, empathy and dependability. It may sound like science fiction, but these systems are increasingly used, often without people’s knowledge or consent.

Emotion recognition technology (ERT) is in fact a burgeoning multi-billion-dollar industry that aims to use AI to detect emotions from facial expressions. Yet the science behind emotion recognition systems is contested, and the systems themselves have biases built into them.

Like other forms of facial recognition, ERT raises questions about bias, privacy and mass surveillance. But ERT raises another concern: the science of emotion behind it is controversial. Most ERT is based on the theory of “basic emotions” which holds that emotions are biologically hard-wired and expressed in the same way by people everywhere.

This is increasingly being challenged, however. Research in anthropology shows that emotions are expressed differently across cultures and societies. In 2019, the Association for Psychological Science conducted a review of the evidence, concluding that there is no scientific support for the common assumption that a person’s emotional state can be readily inferred from their facial movements. In short, ERT is built on shaky scientific ground.





Face it.

https://link.springer.com/article/10.1007/s00146-021-01199-9

The ethical application of biometric facial recognition technology

Biometric facial recognition is an artificial intelligence technology involving the automated comparison of facial features, used by law enforcement to identify unknown suspects from photographs and closed circuit television. Its capability is expanding rapidly in association with artificial intelligence and has great potential to solve crime. However, it also carries significant privacy and other ethical implications that require law and regulation. This article examines the rise of biometric facial recognition, current applications and legal developments, and conducts an ethical analysis of the issues that arise. Ethical principles are applied to mediate the potential conflicts in relation to this information technology that arise between security, on the one hand, and individual privacy and autonomy, and democratic accountability, on the other. These can be used to support appropriate law and regulation for the technology as it continues to develop.





Let’s start the SPCAI? This book was already in my local library.

https://www.theguardian.com/technology/2021/apr/17/ai-ethicist-kate-darling-robots-can-be-our-partners

AI ethicist Kate Darling: ‘Robots can be our partners’

Dr Kate Darling is a research specialist in human-robot interaction, robot ethics and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab. In her new book, The New Breed, she argues that we would be better prepared for the future if we started thinking about robots and artificial intelligence (AI) like animals.





Perspective. Opportunity exists when only 14% like a format…

https://techcrunch.com/sponsor/kimo/dutch-ai-start-up-kimo-reinvents-online-learning/

Dutch AI start-up KIMO reinvents online learning

The World Economic Forum predicts that 50% of our global workforce needs reskilling in the coming decade.

Learning platforms today mostly sell MOOCs — Massive Open Online Courses — as a preferred way of studying. MOOCs are modeled after classic university courses, and are relatively easy to package, promote and sell.

However, only 14% of online students consider MOOCs their preferred content type for learning online. What's worse, MOOCs see a 95% dropout rate, with 52% of users never showing up after signing up.

KIMO, a name with Hawaiian origins, was built to provide guidance in the world of work in the 21st century. Realizing that MOOCs only appealed to some people, the founders set out to include other content formats (articles, videos, podcasts etc.) at every price point. Today, the KIMO system is able to self-organize the content into coherent topics and sub-topics, providing users with an instant overview of the domain and a choice on topics to include/exclude. Content can be personalized using filters for preferred content type, difficulty level, price point, content length, favourite industry and favourite sources.


