Thursday, August 05, 2021

Interesting argument. If your hacker is a foreign government (or their unacknowledged criminal allies) are you immune from security negligence claims?

https://www.theregister.com/2021/08/04/solarwinds_lawsuit_shareholders_motion_dismiss/

SolarWinds urges US judge to toss out crap infosec sueball: We got pwned by actual Russia, give us a break

SolarWinds is urging a US federal judge to throw out a lawsuit brought against it by aggrieved shareholders who say they were misled about its security posture in advance of the infamous Russian attack on the business.

Insisting that it was "the victim of the most sophisticated cyberattack in history" in a court filing, SolarWinds described a lawsuit from some of its smaller shareholders as an attempt to "convert this sophisticated cyber-crime" into an unrelated securities fraud court case.

"The Court should dismiss the Complaint because it fails to satisfy the heightened standards for pleading a Section 10(b) claim imposed by the Private Securities Litigation Reform Act," it said [PDF].





A podcast (and transcript) for once and future crooks.

https://www.trendmicro.com/en_us/ciso/21/h/cybercrime-today-and-the-future.html

Cybercrime: Today and the Future

Trend Micro Research experts Erin Sindelar and Rik Ferguson use current trends and data to paint a picture of cybercrime in 2021 and shine a light on what it could look like in 2030.





You have to have a child abuse photo to locate copies of that photo. It’s easy to see what the next (AI driven?) step must be.

https://9to5mac.com/2021/08/05/report-apple-photos-casm-content-scanning/

Report: Apple to announce client-side photo hashing system to detect child abuse images in users' photo libraries

Apple is reportedly set to announce new photo identification features that will use hashing algorithms to match the content of photos in users' photo libraries with known child abuse materials, such as child pornography.

Apple’s system will happen on the client — on the user’s device — in the name of privacy, so the iPhone would download a set of fingerprints representing illegal content and then check each photo in the user’s camera roll against that list. Presumably, any matches would then be reported for human review.
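The reported flow can be sketched in a few lines. This is a toy, with hypothetical names: a real system would use a perceptual hash that survives resizing and re-encoding (and, per later reports, private set intersection), not the exact cryptographic hash used here for brevity.

```python
import hashlib

# Hypothetical fingerprint list the device would download (hex digests).
# This entry is simply sha256(b"foo"), standing in for a real fingerprint.
KNOWN_FINGERPRINTS = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(image_bytes: bytes) -> str:
    # Real systems use a *perceptual* hash robust to re-encoding;
    # an exact cryptographic hash stands in for it in this sketch.
    return hashlib.sha256(image_bytes).hexdigest()

def scan_library(photos: list[bytes]) -> list[int]:
    """Return indices of photos whose fingerprint is on the blocklist."""
    return [i for i, p in enumerate(photos) if fingerprint(p) in KNOWN_FINGERPRINTS]

matches = scan_library([b"foo", b"bar"])  # only the first photo matches
```

The point of the client-side design is that only matches (presumably above some threshold) ever leave the device for human review; non-matching photos are never reported.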





Another unhackable tool gets hacked?

https://gizmodo.com/master-face-researchers-say-theyve-found-a-wildly-succ-1847420710/amp

'Master Face': Researchers Say They've Found a Wildly Successful Bypass for Face Recognition Tech

In addition to helping police arrest the wrong person or monitor how often you visit the Gap, facial recognition is increasingly used by companies as a routine security procedure: it’s a way to unlock your phone or log into social media, for example. This practice comes with an exchange of privacy for the promise of comfort and security but, according to a recent study, that promise is basically bullshit.

… “Our results imply that face-based authentication is extremely vulnerable, even if there is no information on the target identity,” researchers write in their study. “In order to provide a more secure solution for face recognition systems, anti-spoofing methods are usually applied. Our method might be combined with additional existing methods to bypass such defenses,” they add.

According to the study, the vulnerability being exploited here is the fact that facial recognition systems use broad sets of markers to identify specific individuals. By creating facial templates that match many of those markers, a sort of omni-face can be created that is capable of fooling a high percentage of security systems. In essence, the attack is successful because it generates “faces that are similar to a large portion of the population.”
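That "similar to a large portion of the population" effect can be illustrated numerically. This is a toy with made-up embeddings, dimensions, and thresholds (no real face-recognition model involved): because enrolled templates cluster around a population mean, a probe aimed at that mean verifies against far more identities than a probe built from any one individual.

```python
import math
import random

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

random.seed(0)
DIM, THRESHOLD = 8, 0.94           # illustrative numbers, not from the study
mean = [1.0] * DIM                 # the "population average" face
# Enrolled templates scatter around the mean, as real embeddings do.
enrolled = [[m + random.gauss(0, 0.3) for m in mean] for _ in range(1000)]

def accepted(probe):
    """How many enrolled identities would verify this single probe?"""
    return sum(cosine(probe, t) > THRESHOLD for t in enrolled)

master_hits = accepted(mean)         # probe aimed at the population mean
single_hits = accepted(enrolled[0])  # probe built from one individual
# The mean-seeking "master" probe unlocks many more accounts than one face.
```

The actual research used a GAN to evolve realistic face images toward this same goal; the geometry above is only the intuition.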





Another anti-manipulation law. When everything is flagged, we’ll only find unchanged images suspicious.

https://www.makeuseof.com/what-is-norway-photo-retouching-law/

What Is Norway's New Photo Retouching Law?

Norway issued a new law on retouching photos to improve mental health. Here's everything you need to know about the latest regulations.

The internet is full of models exhibiting their flawless and unrealistic bodies, which can exacerbate body insecurities.

In an attempt to mitigate these unrealistic beauty standards, Norway has passed a law requiring influencers and advertisers to label their retouched photos. We're going to be taking a look at what that law is, and how it affects you.

The new law passed by the Norwegian government requires influencers who are sponsored for social media posts, as well as brands, to disclose any modifications to their photos using a ministry-approved label. Essentially, you'll now be told any time an image has been edited.





At first glance, rather vanilla.

https://www.meritalk.com/articles/dhs-st-releases-strategic-plan-for-ai-ml/

DHS S&T Releases Strategic Plan for AI & ML

The Department of Homeland Security (DHS) Science and Technology Directorate (S&T) released an artificial intelligence (AI) and machine learning (ML) strategic plan outlining the DHS approach to using these emerging technologies.

The plan has three goals: to “drive next-generation AI/ML technologies” for use across DHS, facilitate the use of AI and ML in the DHS missions, and build up an AI and ML workforce that is interdisciplinary.



Wednesday, August 04, 2021

Yes, be concerned about what they took. Be more concerned about what they left behind.

https://news.softpedia.com/news/chinese-military-hackers-launch-three-pronged-attack-on-major-telecom-carriers-533652.shtml

Chinese Military Hackers Launch Triple Cyberattack on Major Telecom Carriers

Emissary Panda (APT27), Naikon, and Soft Cell are the organizations that carried out various hacking activities on the same telecom carriers in Southeast Asia at the same time, according to Cybereason.

Once a carrier was compromised, the hackers gained access to sensitive information held in key network resources such as Domain Controllers (DC) and high-level corporate assets such as billing servers containing call detail record (CDR) data.





A very good overview!

https://fpf.org/blog/now-on-the-internet-everyone-knows-youre-a-dog/

NOW, ON THE INTERNET, EVERYONE KNOWS YOU’RE A DOG

An Introduction to Digital Identity



(Related) On a slippery slope toward “Papers, citizen!”

https://www.theverge.com/2021/8/3/22607690/microsoft-proof-vaccination-covid-19-us-buildings-office-reopening?scrolla=5eb6d68b7fedc32c19ef33b4

Microsoft will require proof of COVID-19 vaccination to enter buildings in the US

Microsoft has informed employees that it will require proof of vaccination for anyone entering a Microsoft building in the US starting in September.





Perhaps Amazon’s next technological breakthrough will be a cure for constipation, available in suppository form. An interesting way to ‘opt in.’

https://www.cpomagazine.com/data-privacy/does-amazons-sleep-tracking-technology-invade-bedroom-privacy-concerns-raised-about-data-sharing-opacity-of-intentions-for-collected-information/

Does Amazon’s Sleep Tracking Technology Invade Bedroom Privacy? Concerns Raised About Data Sharing, Opacity of Intentions for Collected Information

Amazon’s new sleep tracking technology proposes to cast an “electromagnetic bubble” over customers, monitoring their movements throughout the night in an attempt to improve quality of rest. Critics have already raised multiple concerns, from exactly what Amazon intends to do with the data it collects about sleep habits to the amount of radiation it would need to emit to function.

The apparent market demand for Amazon’s new sleep tracking tech stems from reports of widespread sleep disturbance during the Covid-19 pandemic; studies find that as many as half of all respondents say they have had trouble getting a full night of rest since early last year.





Explaining AI.

https://fpf.org/blog/the-spectrum-of-ai-companion-to-the-fpf-ai-infographic/

THE SPECTRUM OF AI: COMPANION TO THE FPF AI INFOGRAPHIC

In December of 2020, FPF published the Spectrum of Artificial Intelligence – An Infographic Tool, designed to visually display the variety and complexity of Artificial Intelligence (AI) systems, the fields this science is based on, and a small sample of the use cases these technologies support for consumers. Today, we are releasing the white paper: The Spectrum of Artificial Intelligence – Companion to the FPF AI Infographic to expand on the information included in this educational resource, and to describe in more detail how the graphic can be used as an aid in education or in developing legislation or other regulatory guidance around AI-based systems. We identify additional, specific use cases for various AI technologies and explain how differing algorithmic architectures and data demands present varying risks and benefits. We discuss the spectrum of algorithmic technology and demonstrate how design factors, data use, and model training processes should be considered for specific regulatory approaches.





Until we make lawyers obsolete, I suppose we have to train them.

https://www.bespacific.com/explainable-artificial-intelligence-lawyers-perspective/

Explainable artificial intelligence, lawyer’s perspective

Explainable artificial intelligence, lawyer’s perspective. Łukasz Górski, Shashishekar Ramakrishna. ICAIL ’21: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, June 2021, Pages 60–68. https://doi.org/10.1145/3462757.3466145. Published: 21 June 2021.

Explainable artificial intelligence (XAI) is a research direction that has already come under scrutiny, in particular in the AI & Law community. While there have been notable developments in the area of (general, not necessarily legal) XAI, user experience studies of such methods, as well as more general studies of how users understand the concept of explainability, are still lagging behind. This paper first assesses the performance of different explainability methods (Grad-CAM, LIME, SHAP) in explaining the predictions for a legal text classification problem; those explanations were then judged by legal professionals according to their accuracy. Second, the same respondents were asked to give their opinion on the desired qualities of an (explainable) artificial intelligence (AI) legal decision system and to present their general understanding of the term XAI. This part was treated as a pilot study for a larger one regarding lawyers’ position on AI, and XAI in particular.



(Related)

https://www.bespacific.com/a-dataset-for-evaluating-legal-question-answering-on-private-international-law/

A dataset for evaluating legal question answering on private international law

A dataset for evaluating legal question answering on private international law. Francesco Sovrano, Monica Palmirani, Biagio Distefano, Salvatore Sapienza, Fabio Vitali. ICAIL ’21: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, June 2021, Pages 230–234. https://doi.org/10.1145/3462757.3466094. Published: 21 June 2021.

Private International Law (PIL) is a complex legal domain that presents frequent conflicts of norms between the hierarchy of legal sources, legal domains, and the adopted procedures. Scientific research on PIL reveals the need to create a bridge between European and national laws. In this context, legal experts have to access heterogeneous sources and be able to recall all the norms and combine them using case law, following the principles of interpretation theory. This clearly poses a daunting challenge to humans whenever regulations change frequently or are big enough in size. Automated reasoning over legal texts is not a trivial task, because legal language is very specific and in many ways different from commonly used natural language. When applying state-of-the-art language models to understanding legalese, one of the challenges is figuring out how to optimally use the available amount of data. That data scarcity makes it hard to apply state-of-the-art sub-symbolic question answering algorithms to legislative texts, especially PIL texts. In this paper we expand on previous work on legal question answering, publishing a larger and more curated dataset for the evaluation of automated question answering on PIL.





Something to consider when it comes to Colorado.

https://www.zdnet.com/article/intel-dell-bring-ai-for-workforce-program-to-18-community-colleges/

Intel, Dell bring "AI for Workforce" program to 18 community colleges

Intel on Tuesday announced that it's partnering with Dell Technologies to expand its AI for Workforce Program, which helps community colleges develop AI certificates, augment existing courses or launch full AI associate degree programs. With Dell providing technical and infrastructure expertise, the program will expand to 18 schools across 11 states.

So far, more than 80 community college professors have received professional development from Intel and have been certified as Intel AI trainers. Dell is helping the schools configure AI labs for teaching in-person, hybrid and online students.

Intel has plans to expand to 50 more community and vocational colleges in 2022.



Tuesday, August 03, 2021

Looking into your encrypted messages, we think you might be interested in a good criminal lawyer...

https://www.theinformation.com/articles/facebook-researchers-hope-to-bring-together-two-foes-encryption-and-ads

Facebook Researchers Hope to Bring Together Two Foes: Encryption and Ads

Facebook is bulking up a team of artificial intelligence researchers, including a key hire from Microsoft, to study ways of analyzing encrypted data without decrypting it, the company confirmed. The research could allow Facebook to target ads based on encrypted messages on its WhatsApp messenger, or to encrypt the data it collects on billions of users without hurting its ad-targeting capabilities, outside experts say.

Facebook is one of several technology giants, including cloud computing providers Microsoft, Amazon and Google, now researching an emerging field known as homomorphic encryption. Researchers hope the technology will allow companies to analyze personal information, including medical records and financial data, while keeping the information encrypted.
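The core trick of "computing on data without decrypting it" can be shown with textbook RSA, which happens to be multiplicatively homomorphic. This is only a sketch with tiny, insecure parameters; the schemes the article describes (fully homomorphic encryption) are vastly more elaborate and support richer computations.

```python
# Textbook RSA satisfies Enc(a) * Enc(b) mod n == Enc(a * b):
# whoever holds only ciphertexts can still multiply the plaintexts.
p, q = 61, 53
n = p * q                           # modulus (3233) - toy-sized, insecure
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n   # multiply ciphertexts, never seeing a or b
assert dec(c) == a * b      # the holder of the key recovers the product
```

An ad-targeting use would look similar in spirit: the server combines encrypted signals and only an authorized decryption step (or none at all, in some designs) ever sees plaintext.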





Because we can? Because encryption equals criminal?

https://www.eff.org/deeplinks/2021/08/cryptocurrency-surveillance-provision-buried-infrastructure-bill-disaster-digital

The Cryptocurrency Surveillance Provision Buried in the Infrastructure Bill is a Disaster for Digital Privacy

The forthcoming Senate draft of Biden's infrastructure bill—a 2,000+ page bill designed to update the United States’ roads, highways, and digital infrastructure—contains a poorly crafted provision that could create new surveillance requirements for many within the blockchain ecosystem. This could include developers and others who do not control digital assets on behalf of users.

While the language is still evolving, the proposal would seek to expand the definition of “broker” under section 6045(c)(1) of the Internal Revenue Code of 1986 to include anyone who is “responsible for and regularly providing any service effectuating transfers of digital assets” on behalf of another person. These newly defined brokers would be required to comply with IRS reporting requirements for brokers, including filing form 1099s with the IRS. That means they would have to collect user data, including users’ names and addresses.





Utility overrules privacy?

https://www.pogowasright.org/amazon-will-pay-you-10-in-credit-for-your-palm-print-biometrics/

Amazon will pay you $10 in credit for your palm print biometrics

Zack Whittaker reports:

How much is your palm print worth? If you ask Amazon, it’s about $10 in promotional credit if you enroll your palm prints in its checkout-free stores and link it to your Amazon account.
Last year, Amazon introduced its new biometric palm print scanners, Amazon One, so customers can pay for goods in some stores by waving their palm prints over one of these scanners.

Read more on TechCrunch.

Do you really need this site to tell you what we think of this idea, or why, years from now, you might regret any cooperation with it?





Could be useful.

https://www.bespacific.com/law-society-lawtech-and-ethics-principles-july-2021/

Law Society Lawtech and Ethics Principles July 2021

UK Law Society – Law Society Lawtech and Ethics Principles, July 2021 – “The world has evolved – it is changing still. By some estimates there have been 5.3 years of digital transformation in the last year. Thankfully, our jurisdiction is one of flexibility where the regulatory environment has enabled the legal services community to adapt to challenges, serve the public and provide trust in the wider economy. However, digital transformation can only be successful when the capabilities of people are built and the functionality, limits and benefits of tools are understood. Over the last year, we have interviewed the country’s largest law firms to understand how they have transformed, assessed solutions and navigated ethical considerations. We cannot thank the contributors enough for so willingly sharing insights and expertise with us. This paper’s main aim is to empower our profession to understand the main considerations they should make when designing, developing or deploying Lawtech, and aims to encourage greater dialogue between the profession and Lawtech providers in the development of future products and services. Although applicable to the whole profession, we hope that the framework, guidance and model procurement process in the paper will be of particular value to those firms and sole practitioners who do not have much experience of procuring Lawtech, and want support on how to get started. The paper helps solicitors to unlock the benefits brought by digital transformation by providing a starting point to assess the compatibility of Lawtech products and services against professional duties. Likewise, it also aims to help Lawtech providers understand the regulatory parameters of solicitors’ practice, embed trust and build market ready solutions…”





Tools & Techniques.

https://www.bespacific.com/how-to-download-any-video-from-the-internet-20-free-methods/

How to Download Any Video From the Internet: 20 Free Methods

Make Use Of: “Do you want to download videos from the internet? If you see a video you like on Facebook, YouTube, Vimeo, or any of the other leading video sites, you might want to create a copy so you can keep it forever. Thankfully, downloading videos off the internet is surprisingly easy. And here are the best free ways to download any video off the internet...



Monday, August 02, 2021

Too popular too fast? More like they didn’t think things through.

https://www.reuters.com/technology/zoom-reaches-85-mln-settlement-lawsuit-over-user-privacy-zoombombing-2021-08-01/

Zoom reaches $85 mln settlement over user privacy, 'Zoombombing'

Zoom Video Communications Inc agreed to pay $85 million and bolster its security practices to settle a lawsuit claiming it violated users' privacy rights by sharing personal data with Facebook, Google and LinkedIn, and letting hackers disrupt Zoom meetings in a practice called Zoombombing.

A preliminary settlement filed on Saturday afternoon requires approval by U.S. District Judge Lucy Koh in San Jose, California.

Subscribers in the proposed class action would be eligible for 15% refunds on their core subscriptions or $25, whichever is larger, while others could receive up to $15.

Though Zoom collected about $1.3 billion in Zoom Meetings subscriptions from class members, the plaintiffs' lawyers called the $85 million settlement reasonable given the litigation risks. They intend to seek up to $21.25 million for legal fees.

Zoombombing is where outsiders hijack Zoom meetings and display pornography, use racist language or post other disturbing content.

Koh said Zoom was "mostly" immune for Zoombombing under Section 230 of the federal Communications Decency Act, which shields online platforms from liability over user content.





Keeping up with a changing world.

https://www.huntonprivacyblog.com/2021/08/02/new-connecticut-breach-notification-requirements-and-cybersecurity-safe-harbor-effective-october-2021/

New Connecticut Breach Notification Requirements and Cybersecurity Safe Harbor Effective October 2021

Connecticut recently passed two cybersecurity laws that will become effective on October 1, 2021. The newly passed laws modify Connecticut’s existing breach notification requirements and establish a safe harbor for businesses that create and maintain a written cybersecurity program that complies with applicable state or federal law or industry-recognized security frameworks.

HB 5310, An Act Concerning Data Privacy Breaches.

HB 6607, An Act Incentivizing the Adoption of Cybersecurity Standards for Businesses.





Imagine an AI programmed to subtly influence children or voters…

https://www.latimes.com/opinion/story/2021-08-02/artificial-intelligence-morality-technology-corruption

Op-Ed: How AI’s growing influence can make humans less moral

The question of how to make AI ethical is front and center in the public debate. For starters, the machine itself must not make unethical decisions: ones that reinforce existing racial and gender biases in hiring, lending, judicial sentencing and in facial detection software deployed by police and other public agencies.

What is less discussed, however, are the ways in which machines might make humans themselves less ethical.

People behave unethically when they can justify it to others, when they observe or believe that others cut ethical corners too, and when they can do so jointly with others (versus alone). In short, the magnetic field of social influence strongly sways people’s moral compass.

AI can also influence people as an advisor that recommends unethical action. Research shows that people will follow dishonesty-promoting advice provided by AI systems as much as they follow similar advice from humans.





Worth reading.

https://venturebeat.com/2021/08/01/4-conversations-every-company-needs-to-be-having-about-ai/

4 conversations every company needs to be having about AI


In a recent survey of 700 IT pros across the globe, a whopping 95% said they believe their companies would benefit from embedding AI into daily operations, products, and services, and 88% want to use AI as much as possible.

In the trenches, IT staffers see AI as a way to help them do their jobs faster and better, and they’re gravitating toward it as naturally as consumers have gravitated toward smart speakers at home.

However, a mere 6% of C-level leaders who responded to the survey reported actual adoption of AI-powered solutions across their company.

Why, then, is it so challenging to adopt AI and make it stick? An AI implementation strategy has many moving parts, and no doubt some companies feel overwhelmed by what may seem like multi-faceted obstacles to adoption. But, in fact, riding the AI wave doesn’t have to be that hard. Kick-starting AI efforts is a lot easier if companies can ask and answer four key questions.





Perspective.

https://www.politico.com/news/2021/08/02/trump-gettr-social-media-isis-502078

Jihadists flood pro-Trump social network with propaganda

Just weeks after its launch, the pro-Trump social network GETTR is inundated with terrorist propaganda spread by supporters of Islamic State, according to a POLITICO review of online activity on the fledgling platform.

The social network — started a month ago by members of former President Donald Trump’s inner circle — features reams of jihadi-related material, including graphic videos of beheadings, viral memes that promote violence against the West and even memes of a militant executing Trump in an orange jumpsuit similar to those used in Guantanamo Bay.

The rapid proliferation of such material is placing GETTR in the awkward position of providing a safe haven for jihadi extremists online as it attempts to establish itself as a free speech MAGA-alternative to sites like Facebook and Twitter.





Perspective. An entire industry that grew under my radar.

https://www.reuters.com/technology/square-buy-australias-afterpay-29-billion-2021-08-01/

Twitter's Dorsey leads $29 billion buyout of lending pioneer Afterpay

Square Inc, the payments firm of Twitter Inc (TWTR.N) co-founder Jack Dorsey, will purchase buy now, pay later (BNPL) pioneer Afterpay Ltd for $29 billion, creating a global transactions giant in the biggest buyout of an Australian firm.

The takeover underscores the popularity of a business model that has upended consumer credit by charging merchants a fee to offer small point-of-sale loans which their shoppers repay in interest-free instalments, bypassing credit checks.

It also locks in a remarkable share-price run for Afterpay, whose stock traded below A$10 in early 2020 and has since soared as the COVID-19 pandemic - and stimulus payments to a workforce stuck at home - saw a rapid shift to shopping online.

The all-stock buyout would value the shares at A$126.21 ($92.65), the companies said in a joint statement on Monday.



Sunday, August 01, 2021

Like “SWATting” but with more serious weapons?

https://www.wired.com/story/fake-warships-ais-signals-russia-crimea/

Phantom Warships Are Courting Chaos in Conflict Zones

According to analysis conducted by conservation technology nonprofit SkyTruth and Global Fishing Watch, over 100 warships from at least 14 European countries, Russia, and the US appear to have had their locations faked, sometimes for days at a time, since August 2020. Some of these tracks show the warships approaching foreign naval bases or intruding into disputed waters, activities that could escalate tension in hot spots like the Black Sea and the Baltic. Only a few of these fake tracks have previously been reported, and all share characteristics that suggest a common perpetrator.





Will the US follow the EU, again?

https://researchportal.helsinki.fi/en/publications/damages-liability-for-harm-caused-by-artificial-intelligence-eu-l

Damages Liability for Harm Caused by Artificial Intelligence – EU Law in Flux

Artificial intelligence (AI) is an integral part of our everyday lives, able to perform a multitude of tasks with little to no human intervention. Many legal issues related to this phenomenon have not been comprehensively resolved yet. In that context, the question arises whether the existing legal rules on damages liability are sufficient for resolving cases involving AI. The EU institutions have started evaluating if and to what extent new legislation regarding AI is needed, envisioning a European approach to avoid fragmentation of the Single Market. This article critically analyses the most relevant preparatory documents and proposals with regard to civil liability for AI issued by EU legislators. In addition, we discuss the adequacy of existing legal doctrines on private liability in terms of resolving cases where AI is involved. While existing national laws on damages liability can be applied to AI-related harm, the risk exists that case outcomes are unpredictable and divergent, or, in some instances, unjust. The envisioned level playing field throughout the Single Market justifies harmonisation of many aspects of damages liability for AI-related harm. In the process, particular AI characteristics should be carefully considered in terms of questions such as causation and burden of proof.





Who would have thought politicians could be moral?

https://journals.aom.org/doi/abs/10.5465/AMBPP.2021.13567abstract

Moral legitimisation in science, technology and innovation policies

Worldwide, governments and institutions are formulating AI strategies that try to square the aspiration of exploiting the potential of machine learning with safeguarding their communities against the perceived ills of unchecked artificial systems. We make the claim that this new class of documents is an interesting showcase for a recent turn in policy work and formulation, which increasingly tries to intertwine moral sentiment with strategic dimensions. This process of moralizing is interesting and unprecedented coming from governmental actors, as these are guidance documents but not law. Given the significant leeway in the development trajectories of open meta-technologies such as artificial intelligence, we argue that these more moralizing elements within policy documents are illustrative of a new class of policy writing, meant to catalyze and shape public opinion and thus, by proxy, development trajectories.



(Related)

https://hrcak.srce.hr/ojs/index.php/eclic/article/view/18352

EU LEGAL SYSTEM AND CLAUSULA REBUS SIC STANTIBUS

We are witnesses to and participants in Copernican changes in the world, which result in major crises/challenges (economic, political, social, climate, demographic, migratory, MORAL) that significantly change “normal” circumstances. The law, as a large regulatory system, must find answers to these challenges.

We believe that the most current definition of law is that law is the negation of the negation of morality. It follows that morality is the most important category of social development. Legitimacy, and then legality, relies on morality. In other words, the rules of conduct must be highly correlated with morality - legitimacy - legality. What is legal follows the rules; what is lawful follows the moral substance and ethical permissibility. Therefore, only a fair and intelligent mastery of a highly professional and ethical teleological interpretation of law is a conditio sine qua non for overcoming current anomalies of social development. The juridical code of legal and illegal is a transformation of the moral, legitimate and legal into YES, and the immoral, illegitimate and illegal into NO. The future of education aims to generate a program for global action and a discussion on learning and knowledge for the future of humanity and the planet in a world of increasing complexity, uncertainty and insecurity.





Perhaps it isn’t moral, but social?

https://arxiv.org/abs/2107.12977

The social dilemma in AI development and why we have to solve it

While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI accelerates, even though there is no shortage of ethical guidelines. We argue that a main underlying cause for this is that AI developers face a social dilemma in AI development ethics, preventing the widespread adaptation of ethical best practices. We define the social dilemma for AI development and describe why the current crisis in AI development ethics cannot be solved without relieving AI developers of their social dilemma. We argue that AI development must be professionalised to overcome the social dilemma, and discuss how medicine can be used as a template in this process.





There may be a reflexive response to perceived threats to privacy?

https://arxiv.org/abs/2107.11029

User Perception of Privacy with Ubiquitous Devices

Privacy is important for all individuals in everyday life. Emerging technologies such as AR-equipped smartphones, social networking applications, and artificial-intelligence-driven modes of surveillance tend to intrude on that privacy. This study aimed to explore various concerns related to the perception of privacy in this era of ubiquitous technologies. It employed an online survey questionnaire to study user perspectives on privacy; purposive sampling was used to collect data from 60 participants, and inductive thematic analysis was used to analyze the data. Our study discovered key themes such as attitudes toward privacy in public and private spaces, privacy awareness, consent seeking, dilemmas and confusions related to various technologies, and the impact of attitudes and beliefs on individuals’ actions to protect themselves from invasions of privacy in both public and private spaces. These themes interacted among themselves and influenced the formation of various actions; they acted as core principles that molded actions preventing invasion of privacy for both participant and bystander. The findings of this study would be helpful in improving the privacy and personalization of various emerging technologies. The study contributes to privacy by design and positive design by considering the psychological needs of users, and its findings can be applied in the areas of experience design, positive technologies, social computing, and behavioral interventions.





Just because these articles on eliminating lawyers (and judges) amuse me.

https://repositorio.uautonoma.cl/handle/20.500.12728/9128

Robot Judges? Artificial Intelligence and the Law

The increasing application of artificial intelligence in our day-to-day lives highlights the extraordinary development of this technology. Today we can observe the use of artificial intelligence to assist with and execute tasks of the most diverse nature, including in the legal field. Different programs and platforms, fed with information of legal interest, are already in use that allow legal cases to be resolved in a short time. This raises the possibility of going one step further and incorporating robot judges into the administration of justice, which raises as many concerns as expectations, issues that this research analyzes.



(Related)

https://lida.hse.ru/article/view/12791

On the Prospects of Digitalization of Justice

The article considers the digitalization of judicial activity in the Russian Federation and abroad. Given that elements of digital (electronic) justice are gaining widespread adoption in the modern world, the article analyzes its fundamental principles and distinguishes between electronic methods of supporting procedural activity and the digitalization of justice as an independent direction in the transformation of public relations at the present stage. To illustrate the first direction, the article presents the experience of foreign countries, Russian legislative approaches, and legislative initiatives currently under development that aim to improve interaction among participants in proceedings through the use of information technologies. The authors conclude that the implemented approaches and proposed amendments merely modernize the form in which justice is administered, offering new ways to carry out the same actions (identification of persons participating in a case, notification, participation in court sessions, etc.) without changing the essential characteristics of the proceedings. The second direction, electronic (digital) justice proper, is examined from the standpoint of the prospects and risks of using artificial intelligence technologies to make legally significant decisions on the merits. The authors argue that the digitalization of justice requires developing and implementing the category of justice in machine-readable law, as well as special security measures of both a technological and a legal nature.





I’m going to suggest that having a human in the loop slows down the decision process and delays taking action. Does that not increase liability?

https://www.tandfonline.com/doi/full/10.1080/13600834.2021.1958860

Approaching the human in the loop – legal perspectives on hybrid human/algorithmic decision-making in three contexts

Public and private organizations are increasingly implementing various algorithmic decision-making systems. Through legal and practical incentives, humans will often need to be kept in the loop of such decision-making to maintain human agency and accountability, provide legal safeguards, or perform quality control. Introducing such human oversight results in various forms of semi-automated, or hybrid, decision-making, where algorithmic and human agents interact. Building on previous research, we illustrate the legal dependencies forming an impetus for hybrid decision-making in the policing, social welfare, and online moderation contexts. We highlight the further need to situate hybrid decision-making in a wider legal environment of data protection, constitutional and administrative legal principles, as well as the need for contextual analysis of such principles. Finally, we outline a research agenda to capture contextual legal dependencies of hybrid decision-making, pointing to the need to go beyond legal doctrinal studies by adopting socio-technical perspectives and empirical studies.
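The hybrid pattern the abstract describes, algorithmic triage with a human kept in the loop, is often implemented as a confidence-based routing rule: the system decides on its own only when it is confident, and escalates everything else to a reviewer. A minimal sketch (the threshold, names, and labels here are illustrative assumptions, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str      # "approve", "deny", or "escalate"
    decided_by: str   # "algorithm" or "human"

def hybrid_decide(score: float, threshold: float = 0.9) -> Decision:
    """Route a case based on a model's score in [0, 1].

    The algorithm decides only at high confidence in either direction;
    the ambiguous middle band is escalated to a human reviewer.
    """
    if score >= threshold:
        return Decision("approve", "algorithm")
    if score <= 1 - threshold:
        return Decision("deny", "algorithm")
    return Decision("escalate", "human")
```

The escalation band is exactly where the latency trade-off in the introductory comment above bites: every case routed to a human waits in a queue, which is the price paid for oversight and accountability.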



(Related) You can never fully rely on the machine?

https://journals.aom.org/doi/abs/10.5465/AMBPP.2021.14636abstract

Artificial Intelligence and Business Ethics: Goal Setting and Value Alignment as Management Concerns

Rapid advances in the development and use of artificial intelligence (AI) are having a profound effect on both organizations and society at large. While AI is already used extensively in organizations to promote rational decision making, its emergence is also giving rise to profound ethical concerns. We argue that organizational research has a crucial role to play in promoting the beneficial development and use of this technology. Given the rapidly increasing autonomy and impact of AI systems, we draw attention to the fundamental importance of goal setting and value alignment in determining the ethical desirability of outcomes of this development. Importantly, we claim that goal setting necessitates ethical considerations that are not amenable to technology. Further, while the pursuit of goals can be partially delegated to AI, challenges relating to the representation of goals and (ethical) constraints imply that human involvement is crucial in preventing unforeseen consequences. Finally, we discuss issues relating to the malicious misuse and heedless overuse of AI, arguing that the importance of human agency and inclusion in decision making in fact increases with adoption of the technology, owing to the escalating scale and impact of the decisions made. Given the profound impact of these decisions on numerous stakeholders, we suggest that organizational research stands to contribute to the relevance, comprehensiveness, and integrity of AI ethics in the face of this revolutionary technology.
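The abstract's core claim, that goal pursuit can be delegated to AI while the goals and ethical constraints must be set and represented by humans, can be illustrated with a toy example (everything here, including the revenue/deception scenario, is an invented illustration, not the authors' model):

```python
def choose_action(actions, utility, is_permissible):
    """Delegated goal pursuit under a human-specified constraint.

    Maximizes a utility function (the delegated goal) but only over
    actions the constraint permits; returns None when nothing passes,
    forcing the decision back to a human.
    """
    permitted = [a for a in actions if is_permissible(a)]
    if not permitted:
        return None  # no acceptable option: defer to a human
    return max(permitted, key=utility)

# Toy scenario: maximize revenue, but forbid actions flagged as deceptive.
actions = [
    {"name": "honest_ad",    "revenue": 5, "deceptive": False},
    {"name": "dark_pattern", "revenue": 9, "deceptive": True},
]
best = choose_action(
    actions,
    utility=lambda a: a["revenue"],
    is_permissible=lambda a: not a["deceptive"],
)
# The constraint excludes the higher-revenue option, so "honest_ad" wins.
```

The interesting failure mode, which the paper's argument targets, is everything this sketch hides: whether `utility` is the goal we actually meant, and whether `is_permissible` captures the ethical constraint at all, are judgments no optimizer can make for us.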





Microsoft is being a bad boy? Imagine that!

https://www.theatlantic.com/ideas/archive/2021/07/microsofts-antitrust/619599/?scrolla=5eb6d68b7fedc32c19ef33b4

The Invisible Tech Behemoth

Microsoft is the company that could truly test the Biden-era commitment to anti-bigness, and, as a lawyer friend of mine put it, define the limiting principle of new Federal Trade Commission Chair Lina Khan’s antitrust theory. Since its own brush with antitrust regulation decades ago, Microsoft has slipped past significant scrutiny. The company is reluctantly guilty of the sin of bigness, yes, but it is benevolent, don’t you see? Reformed, even! No need to cast your pen over here!