Sunday, January 16, 2022

Weaponizing malware. (I told you they were just practicing…)

https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/

Destructive malware targeting Ukrainian organizations

Microsoft Threat Intelligence Center (MSTIC) has identified evidence of a destructive malware operation targeting multiple organizations in Ukraine. This malware first appeared on victim systems in Ukraine on January 13, 2022. Microsoft is aware of the ongoing geopolitical events in Ukraine and the surrounding region and encourages organizations to use the information in this post to proactively protect against any malicious activity.

While our investigation is continuing, MSTIC has not found any notable associations between this observed activity, tracked as DEV-0586, and other known activity groups. MSTIC assesses that the malware, which is designed to look like ransomware but lacks a ransom recovery mechanism, is intended to be destructive: it is designed to render targeted devices inoperable rather than to obtain a ransom.



This reverses my argument.

https://link.springer.com/article/10.1007/s00146-021-01384-w

Legal personhood for the integration of AI systems in the social context: a study hypothesis

In this paper, I shall set out the pros and cons of conferring legal personhood on artificial intelligence systems (AIs) under civil law. More specifically, I will provide arguments supporting a functionalist justification for conferring personhood on AIs, and I will try to identify what content this legal status might have from a regulatory perspective. Being a person in law implies the entitlement to one or more legal positions. I will focus mainly on liability, as it is one of the main grounds for the attribution of legal personhood, as it is for collective legal entities. A better distribution of responsibility for unpredictably illegal and/or harmful behaviour may be one of the main reasons to justify attributing personhood to AI systems as well. This would mean an efficient allocation of the risks and social costs associated with the use of AIs, ensuring the protection of victims, incentives for production, and technological innovation. However, the paper also considers other legal positions triggered by personhood in addition to responsibility: specific competencies and powers such as, for example, financial autonomy, the ability to hold property, enter into contracts, and sue (and be sued).



A GDPR failure? Interesting.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4004716

Data Privacy, Human Rights, and Algorithmic Opacity

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. However, machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society.

The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet, in today’s policy debate, algorithmic transparency in a privacy context is an issue that is equally important but managerially disregarded, commercially evasive, and legally unactualized. This Note illustrates how regulators should rethink strategies regarding transparency for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy issues caused by algorithmic opacity, and proposes new algorithmic transparency strategies toward privacy protection, along with a broad array of policy implications and suggested moves. The analytical results indicate that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, [Is that true? Bob] regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.



Would this incentivize the board to understand AI? I doubt it.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4002876

A Disclosure-Based Approach to Regulating AI in Corporate Governance

The use of technology, including artificial intelligence (AI), in corporate governance has been expanding, as corporations have begun to use AI systems for various governance functions such as effecting board appointments, enabling board monitoring by processing large amounts of data, and even helping with whistleblowing, all of which address the agency problems present in modern corporations. On the other hand, the use of AI in corporate governance also presents significant risks. These include privacy and security issues, the 'black box problem' (the lack of transparency in AI decision-making), and the undue power conferred on those who control decision-making regarding the deployment of specific AI technologies.

In this paper, we explore the possibility of deploying a disclosure-based approach as a regulatory tool to address the risks emanating from the use of AI in corporate governance. Specifically, we examine whether existing securities laws mandate that corporate boards disclose whether they rely on AI in their decision-making processes. Not only could such disclosure obligations ensure adequate transparency for the various corporate constituents, but they may also incentivize boards to pay sufficient regard to the limitations and risks of AI in corporate governance. At the same time, such a requirement would not prevent companies from experimenting with the potential uses of AI in corporate governance. Normatively, and given the likelihood of greater use of AI in corporate governance moving forward, we also explore the merits of devising a specific disclosure regime targeting the intersection of AI and corporate governance.



Does this sound familiar?

https://www.orfonline.org/expert-speak/the-future-of-the-battle-for-minds/

The future of the battle for minds

This piece is part of the series, Technology and Governance: Competing Interests

The Enlightenment arguably brought about the greatest changes to human life. While initially limited to Europe, its ideas have permeated the world over, through means both legitimate, such as trade and commerce, and illegitimate, such as colonialism, shaping thought processes and understandings of the world. An important distinguishing feature of the post-Enlightenment period is the emphasis upon individual free will, in addition to the ability to think critically and reason. Information became a resource for liberation. However, the human mind, akin to any machine in existence, can be “hacked”.

While such notions were initially conceived as mere science fiction, presented through the prism of cinema, the rise of surveillance capitalism has fostered a system in which commercial entities are incentivised to track users, personalise their experiences, predict their behaviour, and continuously experiment upon them through technologies such as behavioural analytics, Artificial Intelligence, Machine Learning, and Big Data Analytics. Such technologies have been used by a variety of players, from commercial entities attempting to provide “creepy”, overly specific personalised ads, to political parties in countries like Kenya and the UK attempting to leverage technology to win elections. All of these have one thing in common: they attempt to colonise one’s ability to think critically while selling one a narrative.

