Sunday, February 14, 2021

Does it take an AI to audit an AI? My AI says, ‘No, trust me.’

https://arxiv.org/abs/2102.04661

Security and Privacy for Artificial Intelligence: Opportunities and Challenges

The increased adoption of Artificial Intelligence (AI) presents an opportunity to solve many socio-economic and environmental challenges; however, this cannot happen without securing AI-enabled technologies. In recent years, most AI models have proven vulnerable to advanced and sophisticated hacking techniques. This challenge has motivated concerted research efforts into adversarial AI, with the aim of developing robust machine and deep learning models that are resilient to different types of adversarial scenarios. In this paper, we present a holistic cyber security review that demonstrates adversarial attacks against AI applications, including aspects such as adversarial knowledge and capabilities, as well as existing methods for generating adversarial examples and existing cyber defence models. We explain mathematical AI models, especially new variants of reinforcement and federated learning, to demonstrate how attack vectors would exploit vulnerabilities of AI models. We also propose a systematic framework for demonstrating attack techniques against AI applications and review several cyber defences that would protect AI applications against those attacks. We also highlight the importance of understanding adversarial goals and capabilities, especially in recent attacks against industry applications, in order to develop adaptive defences that secure AI applications. Finally, we describe the main challenges and future research directions in the domain of security and privacy of AI technologies.
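For readers unfamiliar with the "methods for generating adversarial examples" the paper surveys, here is a minimal sketch of one standard technique, the Fast Gradient Sign Method (FGSM): perturb an input a small step in the direction of the sign of the loss gradient. The toy linear classifier, weights, and inputs below are hypothetical, chosen only so the gradient can be written in closed form with NumPy.

```python
import numpy as np

# Hypothetical toy linear classifier: score = w.x + b, class 1 if score > 0.
w = np.array([2.0, -1.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, y_true, epsilon):
    """FGSM: step in the direction that increases the loss for the true
    label. For logistic loss on a linear model with label y in {-1, +1},
    the input gradient -y * w * sigmoid(-y * score) has sign(-y * w)."""
    y = 1 if y_true == 1 else -1
    grad_sign = np.sign(-y * w)      # sign of d(loss)/dx
    return x + epsilon * grad_sign

x = np.array([1.0, 1.0])             # w.x + b = 1.5 > 0  -> class 1
assert predict(x) == 1
x_adv = fgsm_perturb(x, y_true=1, epsilon=1.0)
print(predict(x_adv))                # the structured perturbation flips the class
```

The point the abstract gestures at is that the perturbation is tiny but *structured*: a random step of the same size usually leaves the prediction unchanged, while the gradient-aligned step flips it.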



(Related)

https://link.springer.com/article/10.1007/s43681-021-00039-2

AI auditing and impact assessment: according to the UK information commissioner’s office

As the use of data and artificial intelligence systems becomes crucial to core services and business, it increasingly demands a multi-stakeholder and complex governance approach. The Information Commissioner's Office's 'Guidance on the AI auditing framework: Draft guidance for consultation' is a move forward in AI governance. The aim of this initiative is to produce guidance that encompasses both technical (e.g. system impact assessments) and non-engineering (e.g. human oversight) components of governance, and it represents a significant milestone in the movement towards standardising AI governance. This paper will summarise and critically evaluate the ICO effort, try to anticipate future debates, and present some general recommendations.





Curious to see how they plan to do this. Do they have a plan? As I read it, the bill simply says “don’t buy, build or use” AI that discriminates.

https://www.geekwire.com/2021/washington-state-lawmakers-seek-ban-government-using-ai-tech-discriminates/

Washington state lawmakers seek to ban government from using discriminatory AI tech

Washington state could become a national leader in regulating the technologies of the future, thanks in part to a bill up for debate that would establish new guardrails on government use of artificial intelligence.

On the heels of Washington’s landmark facial recognition bill enacted last year, state lawmakers and civil rights advocates are demanding new rules that ban discrimination from automated decision-making by public agencies. The bill would establish new regulations for government departments that use “automated decision systems,” a category that includes any algorithm that analyzes data to make or support government decisions.



(Related) Even non-AI governance is difficult.

https://www.theverge.com/22273071/podcast-moderation-apple-spotify-podbean-steve-bannon?scrolla=5eb6d68b7fedc32c19ef33b4

Can anyone moderate podcasts?

Apple, Spotify, and the impossible problem of moderating shows





Can they detect my skepticism?

https://venturebeat.com/2021/02/13/thought-detection-ai-has-infiltrated-our-last-bastion-of-privacy/

Thought-detection: AI has infiltrated our last bastion of privacy

Research published last week from Queen Mary University in London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals used like radar. In this research, participants watched a video while radio signals were transmitted towards them, and the reflections bouncing back were measured. Analysis of body movements revealed “hidden” information about an individual’s heart and breathing rates. From these findings, the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure. The researchers proposed this work could help with the management of health and wellbeing and be used to perform tasks like detecting depressive states.





A field to study.

https://books.google.com/books?hl=en&lr=&id=J-IaEAAAQBAJ&oi=fnd&pg=PA155&dq=%22artificial+intelligence%22++%2Bprivacy&ots=_S3k8bTYv3&sig=VZp45-2WOXFoydJy-zCck9cQJK4#v=onepage&q&f=false

AI and Deep Learning in Biometric Security: Trends, Potential, and Challenges


