Saturday, January 22, 2022

Do you know an act of war when you see it?

https://news.bloomberglaw.com/privacy-and-data-security/mercks-1-4-billion-insurance-win-splits-cyber-from-act-of-war

Merck’s $1.4 Billion Insurance Win Splits Cyber From ‘Act of War’

Merck & Co.’s victory in a legal dispute with insurers over coverage for $1.4 billion in losses from malware known as NotPetya is expected to force insurance policies to more clearly confront responsibility for the fallout from nation-state cyberattacks.

The multinational pharmaceutical company sued its insurers, who had denied coverage for NotPetya’s impact on its computer systems, citing a policy exclusion for acts of war. The 2017 malware attack was attributed to Russia’s military intelligence agency and deployed as part of a conflict with Ukraine.

New Jersey Superior Court Judge Thomas J. Walsh ruled Jan. 13 that Merck’s insurers can’t claim the war exclusion because its language is meant to apply to armed conflict. The ruling noted that insurers didn’t change the war language to put companies like Merck “on notice” that cyberattacks wouldn’t be covered, despite a trend of attacks by countries like Russia hitting private sector companies.

The case is Merck & Co. Inc. v. Ace American Insurance Co. et al., N.J. Super. Ct., No. L-002682-18, summary judgment 1/13/22.



I would be a bit upset too.

https://www.pogowasright.org/returning-travellers-made-to-hand-over-phones-and-passcodes-to-australian-border-force/

Returning travellers made to hand over phones and passcodes to Australian Border Force

Josh Taylor reports:

A man who was forced to hand over his phone and passcode to Australian Border Force after returning to Sydney from holiday has labelled the tactic “an absolute gross violation of privacy”, as tech advocates call for transparency and stronger privacy protections for people’s devices as they enter the country.
Software developer James and his partner returned from a 10-day holiday in Fiji earlier this month and were stopped by border force officials at Sydney airport. They were taken aside, and after emptying their suitcases, an official asked them to write their phone passcodes on a piece of paper, before taking their phones into another room.

Read more at The Guardian.

[From the article:

Under the Customs Act, ABF officers can force people to hand over their passcodes [How much force? Bob] to allow a phone search, as part of their powers to examine people’s belongings at the border, including documents and photos on mobile phones.

The spokesperson said people can be questioned and their phone searched “if they suspect the person may be of interest for immigration, customs, biosecurity, health, law-enforcement or national security reasons”.



This national ID card is only to protect your health, citizen.

https://www.pogowasright.org/illinois-lawmakers-considering-mandatory-vaccine-registry-elsewhere-campus-vaccine-and-mask-surveillance-upsets-students/

Illinois lawmakers considering mandatory vaccine registry; Elsewhere, campus vaccine and mask surveillance upsets students

Steve Korris reports:

Registration of immunizations in Illinois would change from voluntary to mandatory under a bill Rep. Bob Morgan (D-Highwood) introduced last week.
Critics of HB 4244 include more than 10,000 citizens who have filed opposition slips describing the measure as “asinine” and “unconstitutional.” Many say they are “disgusted” that lawmakers are even considering the action.
The House committee on Health and Human Services planned to take it up on Wednesday, Jan. 19.

Read more at Madison-St. Clair Record.

But vaccine surveillance isn’t the only type of surveillance that is feeling oppressive these days. This week, Peter Cordi reported on student reactions to mask and vaccine mandates in colleges across the country:

Campus Reform spoke to a number of students about their school’s COVID restrictions and health surveillance measures.
Daniel Cona, a junior at the College at Brockport, State University of New York (SUNY Brockport), told Campus Reform, “Having to submit proof of my vaccinations to Brockport feels like an invasion of my privacy.”
Michael Gannon, who attends Stony Brook University, which is part of the SUNY system, said he has “considered dropping out so I don’t have to inject myself with drugs that I don’t want in my body.”

Read more at Campus Reform.



Use of AI is becoming more visible.

https://chicago.suntimes.com/cubs/2022/1/21/22895308/robot-umps-lets-leave-baseball-to-real-live-human-beings-instant-replays-artificial-intelligence-mlb

Robot umpires? Let’s leave baseball to real, live human beings

The latest assault on our humanity came Thursday, when news broke that Major League Baseball would use an automated strike zone at Triple-A this season. It means robot umpires will be one heartbeat from the big leagues — a ‘‘heartbeat’’ being that thing once used to deduce whether a ‘‘person’’ was alive.



Not funny.

https://dilbert.com/strip/2022-01-22



Friday, January 21, 2022

Is data the same as cash?

https://techcrunch.com/2022/01/20/mercedes-future-vehicles-will-have-luminar-lidar-under-new-deal/

Mercedes’ future vehicles will have Luminar lidar under new deal

Mercedes-Benz said Thursday it plans to use Luminar’s lidar technology in future vehicles as part of a broader deal that includes data sharing and the automaker taking a small stake in the company.

Daimler will share certain data from development and production vehicles equipped with Luminar’s lidars, to be used for continuous product improvement and updates.



Excellent summary?

https://www.gibsondunn.com/2021-artificial-intelligence-and-automated-systems-annual-legal-review/

2021 Artificial Intelligence and Automated Systems Annual Legal Review

2021 was a busy year for policy proposals and lawmaking related to artificial intelligence (“AI”) and automated technologies. The OECD identified 700 AI policy initiatives in 60 countries, and many domestic legal frameworks are taking shape. With the new Artificial Intelligence Act, which is expected to be finalized in 2022, it is likely that high-risk AI systems will be explicitly and comprehensively regulated in the EU. While there have been various AI legislative proposals introduced in Congress, the United States has not embraced a comprehensive approach to AI regulation as proposed by the European Commission, instead focusing on defense and infrastructure investment to harness the growth of AI.

Nonetheless, mirroring recent developments in data privacy laws, there are some tentative signs of convergence in US and European policymaking, emphasizing a risk-based approach to regulation and a growing focus on ethics and “trustworthy” AI, as well as enforcement avenues for consumers. In the U.S., President Biden’s administration announced the development of an “AI bill of rights.” Moreover, the U.S. Federal Trade Commission (“FTC”) has signaled a particular zeal in regulating consumer products and services involving automated technologies and large data volumes, and appears poised to ramp up both rulemaking and enforcement activity in the coming year. Additionally, the new California Privacy Protection Agency will likely be charged with issuing regulations governing AI by 2023, which can be expected to have far-reaching impact. Finally, governance principles and technical standards for ensuring trustworthy AI and machine learning (ML) are beginning to emerge, although it remains to be seen to what extent global regulators will reach consensus on key benchmarks across national borders.



Perspective.

https://www.theguardian.com/technology/2022/jan/20/facebook-second-life-the-unstoppable-rise-of-the-tech-company-in-africa

Facebook’s second life: the unstoppable rise of the tech company in Africa

Across Africa, Facebook is the internet. Businesses and consumers depend heavily on it because access to the app and site is free on many African telecoms networks, meaning you don’t need any phone credit to use it. In 2015, Facebook launched Free Basics, an internet service that gives users credit-free access to the platform. Designed to work on low-cost mobile phones, which make up the vast majority of devices on the continent, it offers a limited format, with no audio, photo, or video content. Over the past five years, Free Basics has been rolled out in 32 African countries. Facebook’s ambition does not end there. Where there are no telecoms providers to partner with, or where infrastructure is poor, the company has been developing satellites that can beam internet access to remote areas.



Perspective. Outcomes are not equal. Worth reading.

https://www.brookings.edu/blog/up-front/2022/01/20/gone-digital-technology-diffusion-in-the-digital-era/

Gone digital: Technology diffusion in the digital era

Productivity growth allows economies to increase output without increasing inputs and is a key driver of economic growth and of income per capita. However, productivity growth has been slowing in recent decades, depressing economic growth. This might appear paradoxical given the fast advancement in technological progress and the spread of digital technologies.

Firms’ varied performances during this period of digital transformation help explain this puzzling paradox. While firms at the global frontier of productivity have continued to increase their productivity steadily, the rest of the business population has not kept pace.



Tools & Techniques.

https://www.makeuseof.com/apps-for-recorded-lectures/

The 4 Best Apps and Services for Recording a Lecture Remotely


Thursday, January 20, 2022

Oh, the horror! You could lose your Internet connection!

https://gizmodo.com/senate-weighs-bill-to-protect-satellites-from-getting-h-1848384237

Senators Introduce Bill to Protect Satellites From Getting Hacked

A newly proposed law would enhance cybersecurity for commercial satellites—a move designed to protect them from criminal hacking, which is, apparently, a real threat we need to worry about now.

The Satellite Cybersecurity Act, introduced by Senators Gary Peters (D-Michigan) and John Cornyn (R-Texas), would empower the U.S. Cybersecurity and Infrastructure Security Agency (CISA) to develop “voluntary satellite cybersecurity recommendations” for the private sector, essentially providing a list of “best practices” for how to keep systems secure.



Apparently there is not yet a “Best Practice” model for legislation.

https://www.databreaches.net/pa-senate-passes-bills-aimed-at-ransomware-data-breaches/

PA Senate passes bills aimed at ransomware, data breaches

AP reports:

Pennsylvania’s state Senate passed a package of legislation on Wednesday aimed at preventing data security breaches and requiring victims and law enforcement officials to be notified when they do happen.
The bills’ passage comes barely two weeks after the state’s unemployment compensation system acknowledged that hackers changed bank account information in some recipients’ accounts, so that payments went to the hackers instead.

Read more at New Canaan Advertiser.

[From the article:

One bill would require the state to develop a strategy to prevent and respond to ransomware attacks. It also would bar state and local governments from using public money to pay for an extortion attempt during a ransomware attack.

It includes an exception for the governor to allow it while a disaster emergency declaration is in force.

The bill, however, does allow state agencies to buy insurance coverage for ransomware attacks. The bill also sets criminal penalties for perpetrators and allows victims to sue for damages.



Will this require the NSA to look at domestic targets?

https://www.wsj.com/articles/biden-to-expand-national-security-agency-role-in-government-cybersecurity-11642604412?mod=djemalertNEWS

Biden to Expand National Security Agency Role in Government Cybersecurity

President Biden on Wednesday expanded the National Security Agency’s role in protecting the U.S. government’s most sensitive computer networks, issuing a directive intended to bolster cybersecurity within the Defense Department and intelligence agencies.

It effectively aligns the cybersecurity standards imposed on national security agencies with those previously established for civilian agencies under an executive order Mr. Biden signed last May.



Your government loves to see your face.

https://krebsonsecurity.com/2022/01/irs-will-soon-require-selfies-for-online-access/

IRS Will Soon Require Selfies for Online Access

If you created an online account to manage your tax records with the U.S. Internal Revenue Service (IRS), those login credentials will cease to work later this year. The agency says that by the summer of 2022, the only way to log in to irs.gov will be through ID.me, an online identity verification service that requires applicants to submit copies of bills and identity documents, as well as a live video feed of their faces via a mobile device.

Some 27 states already use ID.me to screen for identity thieves applying for benefits in someone else’s name, and now the IRS is joining them. The service requires applicants to supply a great deal more information than typically requested for online verification schemes, such as scans of their driver’s license or other government-issued ID, copies of utility or insurance bills, and details about their mobile phone service.

When an applicant doesn’t have one or more of the above — or if something about their application triggers potential fraud flags — ID.me may require a recorded, live video chat with the person applying for benefits.



Perspective. I guess I missed the law requiring surveillance of plants.

https://hightimes.com/sponsored/cloudastructures-artificial-intelligence-is-revolutionizing-cannabis-surveillance/

Cloudastructure’s Artificial Intelligence is Revolutionizing Cannabis Surveillance

State and federal laws require that businesses keep their daily operations under constant video surveillance so criminality and malpractice can be stopped in their tracks.

Cloudastructure’s cloud-based AI video surveillance platform means footage is uploaded via a secure Cloud Video Recorder (CVR) that is inaccessible to the outside world. Once the data is secured in the cloud, a multi-tiered permissions system enables credentialed staff to download the footage from their mobile phone or any device with internet access within minutes. The software is intuitively designed and user-friendly, and Cloudastructure’s employees are ready to help with both the installation and any queries. The company even offers turnkey solutions that ensure the cameras not only meet even the most obscure compliance guidelines but also maximize their usefulness in detecting crime.

Best of all is Cloudastructure’s Artificial Intelligence (AI), which the company has integrated in such a way so that cannabis entrepreneurs can monitor everything that goes on at their facilities, from dispensaries to farms and warehouses. Their rapid search capabilities include facial recognition, advanced license plate recognition, “elastic searches” which combine those powers, and much more. Thanks to their advanced AI algorithms, Cloudastructure’s surveillance systems can do things their competitors struggle to offer: identifying and tracking suspicious or after-hours visitors, and notifying you immediately of a breach of your facility.

Cloudastructure’s surveillance not only tells you who is on your property, but also how many people there are at any given time. The cameras are trained to pick up on other important details as well, such as a person’s age and gender.



Old does not mean out of touch, but it might mean we need translators.

https://www.bespacific.com/what-the-kids-are-reading/

What the Kids Are Reading

Paul Musgrave – Engaging with the new generation and its media consumption: “…The idea that today’s college students are digital natives who can seamlessly navigate the online world is, well, doubtful. But even more dubious is the unthinking presumption a lot of faculty and institutions fall into of assuming that they were familiar with the now-vanished world of three broadcast networks and Time magazine. The students are fluent in a different world that has nothing to do with the world that we olds were inducted into, and little to do with the digital world that we middle-olds inhabit. And it’s an open question about what world we should be helping students join—or whether we even know much about their world to begin with. When I say the generations live in different worlds, I mean it seriously. Here’s a pair of graphs from recent polls showing two ways of measuring preferences for news sources by generation. Senior citizens live in a world of cable television and even print. Middles live in a world of transition. Yutes live in a post-TV, post-print world of digital sources…”



Does simple law require simple lawyers?

https://www.bespacific.com/legal-simplification-and-ai/

Legal Simplification and AI

Eliot, Lance, Legal Simplification and AI (November 3, 2021). Available at SSRN: https://ssrn.com/abstract=3955411 or http://dx.doi.org/10.2139/ssrn.3955411

“Many advocates are urging that our existing laws are overly complex and need to be simplified. Interestingly, the advent of AI in the law can readily enter into this same discourse. It might be easier to intertwine AI and the law if the law was simplified, and/or the act of intertwining AI with the law could spur the simplification of the law. Tradeoffs though must be considered.”



Tools & Techniques. Follow your favorite Twit?

https://www.bespacific.com/how-to-search-twitter-with-r/

How to search Twitter with R

YouTube – “See how to search, filter, and sort tweets using R and the rtweet package. It’s a great way to follow conference hashtags. If you want to follow along with the optional code for creating clickable URL links, make sure to install the purrr package if it’s not already on your system.”
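The video works in R with rtweet; as a rough Python analogue, the same search-and-sort workflow can be sketched against the Twitter API v2 recent-search endpoint. The endpoint URL, query operators, and field names below follow the v2 docs, while the `sort_by_likes` helper and sample hashtag are purely illustrative.

```python
# Build a recent-search request for a conference hashtag, then sort
# the fetched tweets by like count (actually sending the request would
# need an authenticated HTTP client, omitted here).

def build_search_request(hashtag, max_results=100):
    """Assemble the URL and query parameters for a hashtag search."""
    return (
        "https://api.twitter.com/2/tweets/search/recent",
        {
            "query": f"#{hashtag.lstrip('#')} -is:retweet",  # skip retweets
            "max_results": max_results,                      # 10-100 per page
            "tweet.fields": "created_at,public_metrics",
        },
    )

def sort_by_likes(tweets):
    """Sort fetched tweet objects by like count, most-liked first."""
    return sorted(tweets, key=lambda t: t["public_metrics"]["like_count"],
                  reverse=True)

url, params = build_search_request("rstats")
print(params["query"])  # → #rstats -is:retweet
```

The `-is:retweet` operator mirrors rtweet's common `include_rts = FALSE` filtering step.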


Wednesday, January 19, 2022

Possible roadmap for cyberwar?

https://www.csoonline.com/article/3647072/a-timeline-of-russian-linked-cyberattacks-on-ukraine.html#tk.rss_all

Russia-linked cyberattacks on Ukraine: A timeline

Cyber incidents are playing a central role in the Russia-Ukraine conflict. Here's how events are unfolding along with unanswered questions.



Perspective. Think globally, act locally.

https://www.defenseone.com/ideas/2022/01/china-watching-ukraine-lot-interest/360774/

China Is Watching Ukraine With a Lot of Interest

Biden’s handling of Putin may tell Xi Jinping how resolutely the U.S. would defend Taiwan.



Ready or not, plug me in!

https://fpf.org/blog/brain-computer-interfaces-data-protection-understanding-the-technology-and-data-flows/

BRAIN-COMPUTER INTERFACES & DATA PROTECTION: UNDERSTANDING THE TECHNOLOGY AND DATA FLOWS

This post is the first in a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.

Click here for FPF and IBM’s full report: Privacy and the Connected Mind. Additionally, FPF-curated resources, including policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are here.

Today, Brain-Computer Interfaces (BCIs) are primarily used in the health-care context for purposes including rehabilitation, diagnosis, symptom management, and accessibility. While BCI technologies are not yet widely adopted in the consumer space, there is increasing interest and proliferation of new direct-to-consumer neurotechnologies from gaming to education. It is important to understand how these technologies use data to provide services to individuals and institutions, as well as how the emergence of such technologies across sectors can create privacy risks. As organizations work to build BCIs while mitigating privacy risks, it is paramount for policymakers, consumers, and other stakeholders to understand the state of the technology today and associated neurodata and its flows.



Should also work for your organization.

https://www.defenseone.com/technology/2022/01/new-report-offers-glimpse-how-ai-will-remake-spywork/360872/

New Report Offers Glimpse Of How AI Will Remake Spywork

Unless the intelligence community changes the way it defines intelligence and adopts cloud computing, it will wind up behind adversaries, private interests, and even the public in knowing what might happen, according to a new report from the Center for Strategic and International Studies.

Intelligence collection to predict broad geopolitical and military events has historically been the job of well-funded and expertly staffed government agencies like the CIA or the NSA. But, the report argues, the same institutional elements that allowed the government to create those agencies are now slowing them down in a time of large publicly-available datasets and enterprise cloud capabilities.

The report, scheduled to be released Wednesday, looks at a hypothetical “open-source, cloud-based, AI-enabled reporting,” or OSCAR, tool for the intelligence community, a tool that could help the community much more rapidly detect and act on clues about major geopolitical or security events. The report lists the various procedural, bureaucratic, and cultural barriers within the intelligence community that block its development and use by U.S. spy agencies.



The results are too addictive to ignore…

https://breachmedia.ca/canadian-police-expanding-surveillance-powers-via-new-digital-operations-centres/

Canadian police expanding surveillance powers via new digital “operations centres”

Canadian police have been establishing municipal surveillance centres to support law enforcement, deploying digital technologies that expand surveillance powers with the help of major US corporations, according to government documents seen by The Breach.

Working around-the-clock in special rooms or wings of police stations, these so-called “real-time operations centres” are the cornerstone of a shift to confront what police call the “new challenges” of a digital age.

They are intended to provide “virtual backup” for police officers in any situation, supplying them with information drawn from deep social media monitoring, private and public closed-circuit televisions (CCTV), open-ended data collection, and algorithmic mining.

According to government documents, the centres are modeled after fusion centres created by the U.S. Department of Homeland Security post-9/11. The U.S. fusion centres, which began with a focus on combatting terrorism but later expanded to criminal and political activity, have been criticized for indiscriminate surveillance and civil rights violations.



How do I surveil thee, let me count the ways…

https://www.zdnet.com/article/how-todays-technologies-become-weapons-in-modern-domestic-abuse/#ftag=RSSbaffb68

How tech is a weapon in modern domestic abuse -- and how to protect yourself

Through technology, it is possible to stalk someone with little effort. This can involve anything from sleuthing to find out information about your Tinder date to checking a potential work candidate's social profiles to planting spyware on your partner's phone.

In short, technology has provided new avenues for stalking to take place.


(Related)

https://www.pogowasright.org/california-and-florida-contribute-to-web-of-state-genetic-privacy-protections/

California and Florida contribute to web of state genetic privacy protections

Melissa Bianchi, Scott Loughlin, Melissa Levine, and Fleur Oke of Hogan Lovells write:

California’s Genetic Information Privacy Act (“GIPA”), which came into effect on January 1, 2022, imposes obligations on direct-to-consumer (“DTC”) genetic testing companies and others that collect and process genetic information. These new obligations, combined with the many differing obligations in other states, may require all organizations processing genetic information to reevaluate their genetic information policies and practices.
[…]
Florida’s Protecting DNA Privacy Act, which came into effect in October, amends its previous genetic privacy law and regulates the use, retention, disclosure, or transfer of a person’s DNA samples or analysis results. Under the law as revised, it is unlawful to collect, retain, submit for analysis, analyze, sell or transfer a person’s DNA sample, or sell or transfer a person’s DNA analysis results without that person’s express consent. Florida’s law is unique in that it imposes criminal penalties when a person engages in any of the following activities without obtaining the express consent of the individual….

Read more about the two state laws at Engage.



Perspective. If this keeps improving, eventually AI might explain science to politicians! (Or summarizing the law for juries?)

https://www.theverge.com/2022/1/18/22889180/ai-language-summary-scientific-research-tldr-papers

A NEW USE FOR AI: SUMMARIZING SCIENTIFIC RESEARCH FOR SEVEN-YEAR-OLDS

Academic writing often has a reputation for being hard to follow, but what if you could use machine learning to summarize arguments in scientific papers so that even a seven-year-old could understand them? That’s the idea behind tl;dr papers — a project that leverages recent advances in AI language processing to simplify science.

Work on the site began two years ago by university friends Yash Dani and Cindy Wu as a way to “learn more about software development,” Dani tells The Verge, but the service went viral on Twitter over the weekend when academics started sharing AI summaries of their research. The AI-generated results are sometimes inaccurate or simplified to the point of idiocy. But just as often, they are satisfyingly and surprisingly concise, cutting through academic jargon to deliver what could be mistaken for child-like wisdom.
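tl;dr papers leans on a large language model; for contrast, the kind of crude pre-LLM extractive baseline it leaves behind fits in a few lines of Python: score each sentence by the frequency of its words and keep the best one. This is entirely illustrative and not how tl;dr papers works.

```python
# Toy extractive summarizer: frequent words mark "important" sentences.
from collections import Counter
import re

def extract_summary(text, n=1):
    """Return the n highest-scoring sentences, in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score each sentence as the sum of its word frequencies.
    scored = sorted(sentences, reverse=True, key=lambda s: sum(
        freq[w] for w in re.findall(r"[a-z']+", s.lower())))
    keep = set(scored[:n])
    return " ".join(s for s in sentences if s in keep)

paper = ("Machine learning models transform inputs into predictions. "
         "Large machine learning models can summarize long scientific papers. "
         "Tuning helps.")
print(extract_summary(paper))
# → Large machine learning models can summarize long scientific papers.
```

Unlike an LLM, this baseline can only quote sentences verbatim; it cannot rephrase jargon for a seven-year-old, which is exactly the gap tl;dr papers demonstrates.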


Tuesday, January 18, 2022

Consider other applications. Scan my classroom and zap inattentive students.

https://hackaday.com/2022/01/17/machine-learning-detects-distracted-politicians/

MACHINE LEARNING DETECTS DISTRACTED POLITICIANS

[Dries Depoorter] has a knack for highly technical projects with a solid artistic bent to them, and this piece is no exception. The Flemish Scrollers is a software system that watches live streamed sessions of the Flemish government, and uses Python and machine learning to identify and highlight politicians who pull out phones and start scrolling. The results? Pushed out live on Twitter and Instagram, naturally. The project started back in July 2021, and has been dutifully running ever since, so by now we expect that holding one’s phone where the camera can see it is probably considered a rookie mistake.
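The project's internals aren't spelled out in the post, but any such system needs an attribution step: once an object detector returns bounding boxes for people and phones in a frame, each phone must be assigned to the politician holding it. A minimal Python sketch, using an assumed nearest-box-center heuristic (the real project's logic may differ):

```python
# Assign each detected phone to the nearest detected person by
# comparing bounding-box centers. Boxes are (x1, y1, x2, y2) tuples.
import math

def center(box):
    """Center point of a bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def assign_phones(person_boxes, phone_boxes):
    """Map each phone index to the index of the nearest person."""
    assignments = {}
    for i, phone in enumerate(phone_boxes):
        cx, cy = center(phone)
        nearest = min(
            range(len(person_boxes)),
            key=lambda j: math.dist((cx, cy), center(person_boxes[j])),
        )
        assignments[i] = nearest
    return assignments

people = [(0, 0, 100, 200), (300, 0, 400, 200)]  # two seated figures
phones = [(320, 120, 350, 160)]                  # one phone, on the right
print(assign_phones(people, phones))  # → {0: 1}
```

In a real pipeline the boxes would come from an off-the-shelf detector run on each video frame; the heuristic above is just the glue between detection and the "which politician is scrolling" verdict.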



A different way to look at ethics.

https://theconversation.com/how-to-be-a-god-we-might-one-day-create-virtual-worlds-with-characters-as-intelligent-as-ourselves-174978

How to be a god: we might one day create virtual worlds with characters as intelligent as ourselves

Most research into the ethics of Artificial Intelligence (AI) concerns its use for weaponry, transport or profiling. Although the dangers presented by an autonomous, racist tank cannot be understated, there is another aspect to all this. What about our responsibilities to the AIs we create?

You want planet-sized computers? You can have them. You want computers made from human brain tissue? You can have them. Eventually, I believe we will have virtual worlds containing characters as smart as we are – if not smarter – and in full possession of free will. What will our responsibilities towards these beings be? We will after all be the literal gods of the realities in which they dwell, controlling the physics of their worlds. We can do anything we like to them.

So knowing all that…should we?


(Related) We need an AI ethicist.

https://thenextweb.com/news/why-giving-ai-human-ethics-probably-terrible-idea

Why giving AI ‘human ethics’ is probably a terrible idea

If you want artificial intelligence to have human ethics, you have to teach it to evolve ethics like we do. At least that’s what a pair of researchers from the International Institute of Information Technology in Bangalore, India proposed in a pre-print paper published today.

Titled “AI and the Sense of Self,” the paper describes a methodology called “elastic identity” by which the researchers say AI might learn to gain a greater sense of agency while simultaneously understanding how to avoid “collateral damage.”

In short, the researchers are suggesting that we teach AI to be more ethically aligned with humans by allowing it to learn when it’s appropriate to optimize for self and when it’s necessary to optimize for the good of a community.
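As a toy rendering of that tradeoff (my illustration, not the authors' actual formalism), an agent's utility can be blended between self and community with a single "elasticity" weight governing how far its sense of self extends:

```python
# Blend self-interest and community interest with one elasticity knob.

def elastic_utility(self_payoff, community_payoff, elasticity):
    """Weighted utility; elasticity in [0, 1].
    0 = purely selfish, 1 = fully identifies with the community."""
    assert 0.0 <= elasticity <= 1.0
    return (1 - elasticity) * self_payoff + elasticity * community_payoff

# An action that helps the agent (+5) but harms the group (-10):
selfish_view = elastic_utility(5, -10, elasticity=0.0)  # positive: take it
broad_view = elastic_utility(5, -10, elasticity=0.8)    # negative: avoid it
print(round(selfish_view, 2), round(broad_view, 2))  # → 5.0 -7.0
```

The "collateral damage" point from the paper falls out naturally: at high elasticity, actions that harm the community become negative-utility even when they benefit the agent.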



Perspective.

https://www.ft.com/content/e3f36d82-89b8-41df-ae11-0653ce7e7944

Artificial intelligence searches for the human touch

… It’s already clear that the impact of such systems cannot be understood simply by examining the underlying code or even the data used to build them. We must look to people for answers as well.

Two recent studies do exactly that. The first is an Ipsos Mori survey of more than 19,000 people across 28 countries on public attitudes to AI, the second a University of Tokyo study investigating Japanese people’s views on the morals and ethics of AI usage.


Monday, January 17, 2022

What is the best response? “Why do you want to spy on children?” “Stop spending my tax dollars (pounds) on lies!”

https://www.rollingstone.com/culture/culture-news/revealed-uk-government-publicity-blitz-to-undermine-privacy-encryption-1285453/

Revealed: UK Gov’t Plans Publicity Blitz to Undermine Privacy of Your Chats

The Home Office has hired a high-end ad agency to mobilize public opinion against encrypted communications — with plans that include some shockingly manipulative tactics

The UK government is set to launch a multi-pronged publicity attack on end-to-end encryption, Rolling Stone has learned. One key objective: mobilizing public opinion against Facebook's decision to encrypt its Messenger app.

… “We have engaged M&C Saatchi to bring together the many organisations who share our concerns about the impact end-to-end encryption would have on our ability to keep children safe,” a Home Office spokesperson said in a statement.

Successive Home Secretaries of different political parties have taken strong anti-encryption stances, claiming the technology — which is essential for online privacy and security — will diminish the effectiveness of UK bulk surveillance capabilities, make fighting organized crime more difficult, and hamper the ability to stop terror attacks. The American FBI has made similar arguments in recent years — claims which have been widely debunked by technologists and civil libertarians on both sides of the Atlantic.



This could be real handy.

https://scholar.googleblog.com/

Save papers to read later

Found an interesting paper and don’t have time to read it right now? Today we are adding a reading list to your Scholar Library to help you save papers and read them later.

You can also use it to save papers you find off-campus but want to read on-campus where you have access to the full text, or papers you find on your smartphone but want to read on a larger screen.

To add a paper to your reading list, click “Save” and add the “Reading list” label. To use this feature, you need to be signed in to your Google account.


Sunday, January 16, 2022

Weaponizing malware. (I told you they were just practicing…)

https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/

Destructive malware targeting Ukrainian organizations

Microsoft Threat Intelligence Center (MSTIC) has identified evidence of a destructive malware operation targeting multiple organizations in Ukraine. This malware first appeared on victim systems in Ukraine on January 13, 2022. Microsoft is aware of the ongoing geopolitical events in Ukraine and surrounding region and encourages organizations to use the information in this post to proactively protect from any malicious activity.

While our investigation is continuing, MSTIC has not found any notable associations between this observed activity, tracked as DEV-0586, and other known activity groups. MSTIC assesses that the malware, which is designed to look like ransomware but lacks a ransom recovery mechanism, is intended to be destructive and designed to render targeted devices inoperable rather than to obtain a ransom.
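MSTIC's distinction can be sketched in a few lines of Python: genuine ransomware transforms data reversibly, so that a key held by the attacker can restore it after payment, while a wiper masquerading as ransomware destroys the data outright and keeps nothing that could recover it. This is a harmless, illustrative simulation on in-memory bytes only; the function names and toy data are mine, not drawn from the actual malware.

```python
import os

def ransomware_style(data: bytes, key: bytes) -> bytes:
    """Reversible transform (toy XOR cipher): whoever holds `key` can
    restore the original data, so a 'ransom for recovery' offer is possible."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wiper_style(data: bytes) -> bytes:
    """Irreversible overwrite: the original bytes are replaced with random
    noise and no key is retained, so there is nothing a payment could buy."""
    return os.urandom(len(data))

original = b"quarterly-results.xlsx contents"
key = os.urandom(16)

# Ransomware-style: applying the same keyed transform twice round-trips.
encrypted = ransomware_style(original, key)
restored = ransomware_style(encrypted, key)
assert restored == original  # recoverable, given the key

# Wiper-style: the output bears no recoverable relationship to the input.
wiped = wiper_style(original)
```

The point of MSTIC's assessment is exactly this second case: DEV-0586's payload presents a ransom note, but internally behaves like `wiper_style`, leaving victims with no recovery path even if they pay.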



This reverses my argument.

https://link.springer.com/article/10.1007/s00146-021-01384-w

Legal personhood for the integration of AI systems in the social context: a study hypothesis

In this paper, I shall set out the pros and cons of assigning legal personhood to artificial intelligence systems (AIs) under civil law. More specifically, I will provide arguments supporting a functionalist justification for conferring personhood on AIs, and I will try to identify what content this legal status might have from a regulatory perspective. Being a person in law implies the entitlement to one or more legal positions. I will mainly focus on liability, as it is one of the main grounds for the attribution of legal personhood, as with collective legal entities. A better distribution of responsibilities resulting from unpredictably illegal and/or harmful behaviour may be one of the main reasons to justify the attribution of personhood to AI systems as well. This means an efficient allocation of the risks and social costs associated with the use of AIs, ensuring the protection of victims, incentives for production, and technological innovation. However, the paper also considers other legal positions triggered by personhood in addition to responsibility: specific competencies and powers such as, for example, financial autonomy, the ability to hold property, make contracts, and sue (and be sued).



A GDPR failure? Interesting.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4004716

Data Privacy, Human Rights, and Algorithmic Opacity

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. However, machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society.

The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet, in today’s policy debate, algorithmic transparency in a privacy context is an issue that is equally important but managerially disregarded, commercially evasive, and legally unactualized. This Note illustrates how regulators should rethink strategies regarding transparency for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy issues caused by algorithmic opacity, and proposes new algorithmic transparency strategies toward privacy protection, along with a broad array of policy implications and suggested moves. The analytical results indicate that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, [Is that true? Bob] regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.



Would this incentivize the board to understand AI? I doubt it.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4002876

A Disclosure-Based Approach to Regulating AI in Corporate Governance

The use of technology, including artificial intelligence (AI), in corporate governance has been expanding, as corporations have begun to use AI systems for various governance functions such as effecting board appointments, enabling board monitoring by processing large amounts of data, and even helping with whistleblowing, all of which address the agency problems present in modern corporations. On the other hand, the use of AI in corporate governance also presents significant risks. These include privacy and security issues, the 'black box problem' or the lack of transparency with AI decision-making, and undue power conferred on those who control decision-making regarding the deployment of specific AI technologies.

In this paper, we explore the possibility of deploying a disclosure-based approach as a regulatory tool to address the risks emanating from the use of AI in corporate governance. Specifically, we examine whether existing securities laws mandate corporate boards to disclose whether they rely on AI in their decision-making process. Not only could such disclosure obligations ensure adequate transparency for the various corporate constituents, but they may also incentivize boards to pay sufficient regard to the limitations or risks of AI in corporate governance. At the same time, such a requirement will not constrain companies from experimenting with the potential uses of AI in corporate governance. Normatively, and given the likelihood of greater use of AI in corporate governance moving forward, we also explore the merits of devising a specific disclosure regime targeting the intersection between AI and corporate governance.



Does this sound familiar?

https://www.orfonline.org/expert-speak/the-future-of-the-battle-for-minds/

The future of the battle for minds

This piece is part of the series, Technology and Governance: Competing Interests

The Enlightenment arguably brought about the greatest changes to human life. While initially limited to Europe, the ideas of this revolution have permeated the world over through means both legitimate, such as trade and commerce, and illegitimate, such as colonialism, impacting thought processes and understandings of the world. An important distinguishing feature of the post-Enlightenment period is the emphasis upon individual free will, in addition to the ability to think critically and reason. Information became a resource for liberation. However, the human mind, akin to any machine in existence, can be “hacked”.

While such notions were initially conceived as mere science fiction, presented through the prism of cinema, the rise of surveillance capitalism has fostered a system where commercial entities are incentivised to track users, personalise their experiences, predict their behaviour, and continuously experiment upon them through technologies such as behavioural analytics, Artificial Intelligence, Machine Learning, and Big Data Analytics. Such technologies have been used by a variety of players, from commercial entities attempting to provide “creepy”, overly specific personalised ads, to political parties in countries like Kenya and the UK attempting to leverage technology to win elections. All of these have one thing in common: They attempt to colonise one’s ability to think critically, while selling one a narrative.