Saturday, July 23, 2022

I know we were training people not to do this as far back as the 1960s. This is why we include history in our security classes. There is always some young whipper-snapper who thinks people can be trusted.

https://www.bleepingcomputer.com/news/security/atlassian-confluence-hardcoded-password-was-leaked-patch-now/

Atlassian: Confluence hardcoded password was leaked, patch now!

Australian software firm Atlassian warned customers to immediately patch a critical vulnerability that provides remote attackers with hardcoded credentials to log into unpatched Confluence Server and Data Center servers.

As the company revealed this week, the Questions for Confluence app (installed on over 8,000 servers) creates a disabledsystemuser account with a hardcoded password to help admins migrate data from the app to the Confluence Cloud.

One day after releasing security updates to address the vulnerability (tracked as CVE-2022-26138), Atlassian warned admins to patch their servers as soon as possible, given that the hardcoded password had been found and shared online.
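None of this is hard to catch mechanically. Below is a minimal secret-scanning sketch in Python; the regex patterns and the sample snippet are invented for illustration (this is not Atlassian's code), and a real project should use a dedicated secret scanner rather than this heuristic.

```python
import re

# Patterns that commonly indicate hardcoded credentials. This is a
# simplified heuristic, not a substitute for a real secret scanner.
SECRET_PATTERNS = [
    re.compile(r"password\s*[:=]\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]+['\"]", re.IGNORECASE),
]

def find_hardcoded_secrets(text):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# Hypothetical source fragment for demonstration only.
sample = 'user = "disabledsystemuser"\npassword = "hunter2"\ntimeout = 30'
print(find_hardcoded_secrets(sample))
```

Running a check like this in CI is cheap; the expensive part, as this story shows, is cleaning up after the password leaks.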





You still need a deep understanding of the field to be sure…

https://news.mit.edu/2022/explained-how-tell-if-artificial-intelligence-working-way-we-want-0722

Explained: How to tell if artificial intelligence is working the way we want it to

The most popular types of local explanation methods fall into three broad categories.

The first and most widely used type of explanation method is known as feature attribution. Feature attribution methods show which features were most important when the model made a specific decision.

Features are the input variables that are fed to a machine-learning model and used in its prediction. When the data are tabular, features are drawn from the columns in a dataset (they are transformed using a variety of techniques so the model can process the raw data). For image-processing tasks, on the other hand, every pixel in an image is a feature. If a model predicts that an X-ray image shows cancer, for instance, the feature attribution method would highlight the pixels in that specific X-ray that were most important for the model’s prediction.

Essentially, feature attribution methods show what the model pays the most attention to when it makes a prediction.

“Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted,” says Zhou.
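For a linear model the idea reduces to something very concrete: each feature's contribution to one prediction is just weight times value. A toy sketch (all feature names and numbers are invented for illustration; real attribution methods such as SHAP or saliency maps are more involved):

```python
# Toy feature attribution for a linear model: for a single prediction,
# the contribution of each feature is weight * value.
features = {"income": 52.0, "credit_score": 680.0, "num_defaults": 1.0}
weights  = {"income": 0.02, "credit_score": 0.004, "num_defaults": -1.5}

def attribute(features, weights):
    """Return per-feature contributions to the score, largest magnitude first."""
    contribs = {f: weights[f] * v for f, v in features.items()}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

for name, contrib in attribute(features, weights):
    print(f"{name}: {contrib:+.2f}")
```

The sorted list is exactly "what the model pays the most attention to" for this one input.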

A second type of explanation method is known as a counterfactual explanation. Given an input and a model’s prediction, these methods show how to change that input so it falls into another class. For instance, if a machine-learning model predicts that a borrower would be denied a loan, the counterfactual explanation shows what factors need to change so her loan application is accepted. Perhaps her credit score or income, both features used in the model’s prediction, need to be higher for her to be approved.

“The good thing about this explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn’t get it, this explanation would tell them what they need to do to achieve their desired outcome,” he says.
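The loan example can be sketched as a search for the smallest change that flips the decision. The approval rule and all numbers below are invented for illustration; real counterfactual methods optimize over many features at once.

```python
def approved(income, credit_score):
    # Toy linear approval rule; the 1e-9 guards against float rounding.
    return 0.02 * income + 0.004 * credit_score >= 4.0 - 1e-9

def counterfactual_credit_score(income, credit_score, step=1.0, max_score=850.0):
    """Raise credit_score until the decision flips; return the new score or None."""
    score = credit_score
    while score <= max_score:
        if approved(income, score):
            return score
        score += step
    return None

# Denied at (income=52, credit_score=680); what score would flip it?
print(counterfactual_credit_score(52.0, 680.0))
```

The returned value is the counterfactual: "with this credit score, the same applicant would have been approved."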

The third category of explanation methods are known as sample importance explanations. Unlike the others, this method requires access to the data that were used to train the model.

A sample importance explanation will show which training sample a model relied on most when it made a specific prediction; ideally, this is the most similar sample to the input data. This type of explanation is particularly useful if one observes a seemingly irrational prediction. There may have been a data entry error that affected a particular sample that was used to train the model. With this knowledge, one could fix that sample and retrain the model to improve its accuracy.
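In the simplest case, "the training sample the model relied on most" can be approximated by the nearest training example to the input. A sketch with invented data (real sample-importance methods, such as influence functions, weigh samples by their effect on the learned model rather than by raw distance):

```python
# Crude stand-in for sample importance: find the training example most
# similar to the input under Euclidean distance.
def nearest_training_sample(train, x):
    """Return the index of the training point closest to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(train)), key=lambda i: dist2(train[i], x))

train = [(1.0, 1.0), (4.0, 4.0), (9.0, 0.0)]
print(nearest_training_sample(train, (3.5, 4.2)))
```

If the returned sample turns out to contain a data entry error, that is exactly the "fix and retrain" opportunity the article describes.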



Friday, July 22, 2022

Is the next step mandatory insurance?

https://www.cpomagazine.com/cyber-security/obtain-and-keep-cyber-insurance-with-two-magic-words-zero-trust/

Obtain (And Keep) Cyber Insurance With Two Magic Words: Zero Trust

Businesses looking to obtain cyber insurance would be wise to adhere to the principles of Zero Trust Architecture (ZTA). The concept of ZTA is simple: ‘never trust, always verify.’ Underwriters are scrutinizing businesses’ security protocols to make sure they have proper identity verification solutions in place. For example, multifactor authentication (MFA), a key component of ZTA, is now a requirement for cyber insurance coverage.
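The MFA the underwriters ask about is mostly mundane machinery. A sketch of TOTP, the time-based one-time password scheme (RFC 6238) behind most authenticator apps, using only the Python standard library; the seed shown is the published RFC test seed, not a real secret.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at_time is None else at_time) // step)
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test seed ("12345678901234567890" base32-encoded); illustration only.
TEST_SEED = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(TEST_SEED, at_time=59))
```

A server verifies by computing the same code from its copy of the secret and comparing (with `hmac.compare_digest`, allowing a window of adjacent time steps for clock skew).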

Demand for cyber insurance is skyrocketing – growing by 46% in 2020 alone, according to a study by the Government Accountability Office. And to add fuel to the fire, insurance premiums have shot through the roof, while the coverage offered by insurers has gone down. In 2020, insurance costs surged, up 29% from the prior year.





As these laws spread from state to state will the only information be offshore?

https://www.bespacific.com/south-carolina-bill-outlaws-websites-that-tell-how-to-get-an-abortion/

South Carolina bill outlaws websites that tell how to get an abortion

Washington Post: “Shortly after the Supreme Court ruling that overturned the right to abortion in June, South Carolina state senators introduced legislation that would make it illegal to “aid, abet or conspire with someone” to obtain an abortion. The bill aims to block more than abortion: Provisions would outlaw providing information over the internet or phone about how to obtain an abortion. It would also make it illegal to host a website or “[provide] an internet service” with information that is “reasonably likely to be used for an abortion” and directed at pregnant people in the state. Legal scholars say the proposal is likely a harbinger of other state measures, which may restrict communication and speech as they seek to curtail abortion. The June proposal, S. 1373, is modeled off a blueprint created by the National Right to Life Committee (NRLC), an antiabortion group, and designed to be replicated by lawmakers across the country…”

“These are not going to be one-offs,” said Michelle Goodwin, the director of the Center for Biotechnology and Global Health Policy at the University of California at Irvine Law School. “These are going to be laws that spread like wildfire through states that have shown hostility to abortion.” …




Good idea or better than nothing or the least they can do?

https://thehill.com/opinion/congress-blog/3568525-federal-privacy-legislation-that-protects-civil-rights-is-critical-for-all-americans/

Federal privacy legislation that protects civil rights is critical for all Americans

We should celebrate the fact that Congress is considering legislation that would give all Americans robust privacy protections. Equally important, pending privacy legislation would implement the first significant, nationwide expansion of civil rights protections in over a decade. In addition to provisions that would give individuals more control over their information, the bill would bar businesses and nonprofits from using personal data in a manner that discriminates on the basis of race, color, religion, national origin, sex, or disability. While work remains to ensure Congress’s efforts are protective and practical, we must seize the opportunity to advance the civil rights of all Americans in this digital era. Risks stemming from digital services have never been more complex, and the need for meaningful safeguards has never been more urgent.



(Related) With the Privacy bill we don’t need the FCC?

https://arstechnica.com/tech-policy/2022/07/fcc-orders-top-carriers-to-explain-how-they-use-and-share-phone-location-data/

FCC chair tries to find out how carriers use phone geolocation data

Federal Communications Commission Chairwoman Jessica Rosenworcel has ordered mobile carriers to explain what geolocation data they collect from customers and how they use it. Rosenworcel's probe could be the first step toward stronger action—but the agency's authority in this area is in peril because Congress is debating a data privacy law that could preempt the FCC from regulating carriers' privacy practices.



(Related) California will find a new way to lead...

https://fpf.org/blog/adppa-will-surpass-californias-laws-but-improvements-remain/

ADPPA WOULD SURPASS CALIFORNIA’S LAWS, BUT IMPROVEMENTS REMAIN

The American Data Privacy and Protection Act (ADPPA) passed through the House Energy and Commerce Committee yesterday, a proposal that experts and advocates agree is long overdue. However, objections from California leaders may threaten the bill’s passage.

Stacey Gray, the FPF’s Director of Legislative Research & Analysis, argues otherwise in a new editorial for Lawfare. Gray explains how the ADPPA compares to – and surpasses – state privacy protections established by California’s Privacy Protection Agency (CPPA) and Privacy Rights Act (CPRA).

To learn more, read Stacey’s op-ed here.





For reasons which remain unclear. Perhaps certain faces were recognized in certain places?

https://www.wwltv.com/article/news/crime/new-orleans-city-council-facial-recognition-technology-approved/289-9c6bf4ee-e249-43b1-b945-434d0c18cfdd

New Orleans Council votes to allow facial recognition software by police

After three hours of debate on Thursday, the New Orleans City Council reversed a ban on a controversial tool to fight crime.

The ordinance that passed in a 4-2 vote allows police to use facial recognition to assist in the investigation of certain crimes.

… "We keep hearing NOPD needs this. This is the tool. This is the silver bullet that's going to stop crime. This facial recognition," Harris said, "But you have no data sitting here today telling me that this actually works."

… An amendment to prevent the surveillance technology from being used against those seeking abortions and same-sex couples failed to pass.

The amendment also required NOPD to provide data on the effectiveness of the technology.

"That amendment for those guardrails failed and I'm quite frankly disappointed," Harris said.

Meanwhile, Councilmember Green said amendments can be added later.



Thursday, July 21, 2022

Because we all know what “serious” means...

https://helenair.com/news/state-and-regional/govt-and-politics/montana-agencies-dont-handcuff-investigations-with-facial-recognition-regulations/article_c3250568-a9ba-59cc-8e28-506879d6eaa1.html

Montana agencies: Don't 'handcuff' investigations with facial recognition regulations

Law enforcement officials on Tuesday urged state lawmakers to remain even-handed as they begin developing new regulations for facial recognition technology.

The draft legislation still under construction by the committee would restrict state government agencies' use of the software, but does allow law enforcement to use the facial recognition technology while investigating a "serious crime." Such crimes included in the draft bill range from assault with a weapon to deliberate homicide.





Come on Bob, try to keep up!

https://www.bespacific.com/congress-might-pass-an-actually-good-privacy-bill/

Congress Might Pass an Actually Good Privacy Bill

Wired: “Usually, when Congress is working on major tech legislation, the inboxes of tech reporters get flooded with PR emails from politicians and nonprofits either denouncing or trumpeting the proposed statute. Not so with the American Data Privacy and Protection Act. A first draft of the bill seemed to pop up out of nowhere in June. Over the next month, it went through so many changes that no one could say for sure what it was even designed to do. For such an important topic, the bill’s progress has been surprisingly under the radar. Now comes an even bigger surprise: A new version of the ADPPA has taken shape, and privacy advocates are mostly jazzed about it. It just might have enough bipartisan support to become law—meaning that, after decades of inaction, the United States could soon have a real federal privacy statute. Perhaps the most distinctive feature of the new bill is that it focuses on what’s known as data minimization. Generally, companies would only be allowed to collect and make use of user data if it’s necessary for one of 17 permitted purposes spelled out in the bill—things like authenticating users, preventing fraud, and completing transactions. Everything else is simply prohibited. Contrast this with the type of online privacy regime most people are familiar with, which is all based on consent: an endless stream of annoying privacy pop-ups that most people click “yes” on because it’s easier than going to the trouble of turning off cookies. That’s pretty much how the European Union’s privacy law, the GDPR, has played out…”



Wednesday, July 20, 2022

I bet the Chinese sold these cheap. I would have…

https://techcrunch.com/2022/07/19/micodus-gps-tracker-exposing-vehicle-locations/

Security flaws in a popular GPS tracker are exposing a million vehicle locations

Security vulnerabilities in a popular Chinese-built GPS vehicle tracker can be easily exploited to track and remotely cut the engines of at least a million vehicles around the world, according to new research. Worse, the company that makes the GPS trackers has made no effort to fix them.

Cybersecurity startup BitSight said it found six vulnerabilities in the MV720, a hardwired GPS tracker built by MiCODUS, a Shenzhen-based electronics maker, which claims more than 1.5 million GPS trackers in use today across more than 420,000 customers worldwide, including companies with fleets of vehicles, law enforcement agencies, militaries and national governments. BitSight said in its report that it also found the GPS trackers used by Fortune 50 companies and a nuclear power plant operator.

But the security flaws can be easily and remotely exploited to track any vehicle in real time, access past routes and cut the engines of vehicles in motion.





Global warming. (Al Gore strikes again!) What temperature triggers the same thing in your neighborhood?

https://www.theregister.com/2022/07/19/google_oracle_cloud/

Google, Oracle cloud servers wilt in UK heatwave, take down websites

Cloud services and servers hosted by Google and Oracle in the UK have dropped offline due to cooling issues as the nation experiences a record-breaking heatwave.

When the mercury hit 40.3C (104.5F) in eastern England, the highest ever registered in a country not used to these conditions, datacenters couldn't take the heat. Selected machines were powered off to avoid long-term damage, causing some resources, services, and virtual machines to become unavailable, taking down unlucky websites and the like.





Perspective.

https://www.huntonprivacyblog.com/2022/07/19/california-privacy-protection-agency-issues-memo-opposing-federal-privacy-legislation-and-california-democrats-join-the-cause/

California Privacy Protection Agency Issues Memo Opposing Federal Privacy Legislation, and California Democrats Join the Cause

On July 1, 2022, the California Privacy Protection Agency (“CPPA”) sent U.S. House of Representatives Speaker Nancy Pelosi a memo outlining how H.R. 8152, the bipartisan American Data Privacy and Protection Act (“ADPPA” or the “Act”), would lessen privacy protections for Californians, and California Democrats have joined the cause.





Fairly basic security. Select a program at random. Can IT tell you what application it belongs to? When it was last updated? Are there multiple copies used in many applications?

https://www.csoonline.com/article/3667309/what-is-an-sbom-software-bill-of-materials-explained.html#tk.rss_all

What is an SBOM? Software bill of materials explained

An SBOM is a detailed guide to what's inside your software. It helps vendors and buyers alike keep track of software components for better software supply chain security.

An SBOM is a formal, structured record that not only details the components of a software product, but also describes their supply chain relationship. An SBOM outlines both what packages and libraries went into your application and the relationship between those packages and libraries and other upstream projects—something that's of particular importance when it comes to reused code and open source.
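The "components plus relationships" structure can be sketched as a small record. Real SBOMs use standardized formats such as SPDX or CycloneDX; every name and field below is invented purely for illustration.

```python
# A minimal SBOM-like record: components plus their supply chain relationships.
sbom = {
    "application": "example-app",
    "components": [
        {"name": "libfoo", "version": "1.4.2", "source": "github.com/example/libfoo"},
        {"name": "libbar", "version": "0.9.1", "source": "github.com/example/libbar"},
    ],
    "relationships": [
        {"from": "example-app", "depends_on": "libfoo"},
        {"from": "libfoo", "depends_on": "libbar"},  # transitive dependency
    ],
}

def components_using(sbom, name):
    """Answer the audit question: which components depend on `name`?"""
    return [r["from"] for r in sbom["relationships"] if r["depends_on"] == name]

print(components_using(sbom, "libbar"))
```

This is exactly the query that matters when the next Log4j-style advisory lands: "where, anywhere in our applications, does this library appear?"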





Resources. (Search for AI)

https://www.bespacific.com/mit-press-opens-access-to-3480-books/

MIT Press opens access to 3480 books

Via @RobertaArielli, https://www.robertadalessandro.it/@mitpress as opened the access to 3480 books, within the MIT Press Direct program. There are 196 #linguistics books, and 3480 books in all disciplines, and counting. https://direct.mit.edu/books/search-r





Tools & Techniques.

https://scitechdaily.com/a-beginners-guide-to-quantum-programming/

A Beginner’s Guide to Quantum Programming

As quantum computers proliferate and become more widely available, would-be quantum programmers are left scratching their heads over how to get started in the field. A new beginner’s guide offers a complete introduction to quantum algorithms and their implementation on existing hardware.

“Writing quantum algorithms is radically different from writing classical computing programs and requires some understanding of quantum principles and the mathematics behind them,” said Andrey Y. Lokhov, a scientist at Los Alamos National Laboratory and lead author of the recently published guide in ACM Transactions on Quantum Computing. “Our guide helps quantum programmers get started in the field, which is bound to grow as more and more quantum computers with more and more qubits become commonplace.”
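To see how different it is, here is a toy two-qubit statevector simulator in plain Python that prepares a Bell state, the "hello world" of quantum programming. This is a pedagogical sketch of the underlying math, not code from the guide.

```python
from math import sqrt

# Amplitudes indexed by basis states |00>, |01>, |10>, |11>.
# H on qubit 0 followed by CNOT yields the Bell state (|00> + |11>) / sqrt(2).
def apply_h_q0(state):
    """Hadamard on the first qubit of a 2-qubit state."""
    s = 1 / sqrt(2)
    a00, a01, a10, a11 = state
    return [s * (a00 + a10), s * (a01 + a11), s * (a00 - a10), s * (a01 - a11)]

def apply_cnot(state):
    """CNOT with qubit 0 as control, qubit 1 as target: swaps |10> and |11>."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

state = [1.0, 0.0, 0.0, 0.0]           # start in |00>
state = apply_cnot(apply_h_q0(state))  # entangle into the Bell state
print([round(a, 3) for a in state])
```

Measuring this state gives 00 or 11 with equal probability and never 01 or 10; that correlation, with no classical analogue, is the kind of behavior quantum algorithms exploit.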



Tuesday, July 19, 2022

We should have this element of security down pat. How did it drop off of our checklists?

https://www.csoonline.com/article/3667279/unauthorized-access-jumped-4x-in-2021.html#tk.rss_all

Unauthorized access jumped 4x in 2021

The 2022 Consumer Identity and Breach Report from ForgeRock found unauthorized access to be the leading infection vector in 2021

Security breaches from issues associated with supply chain and third-party suppliers have recorded an unprecedented jump of 297%, representing about a fourth of all the security breaches in 2021 in the US, according to a study by digital identity and access management platform ForgeRock.

The 2022 Consumer Identity and Breach Report found unauthorized access to be the leading infection vector for the breaches, accounting for 50% of all records compromised in 2021.

The average cost of a breach in the US, according to the report, was $9.5 million, which is the highest in the world and up 16% from $8.2 million in 2020.





Perhaps I could sell an App that tells you how suspicious your search keywords are? But searching for it is really suspect.

https://www.cpomagazine.com/data-privacy/reverse-google-searches-face-increased-scrutiny-as-fears-of-keyword-warrants-for-abortion-seekers-grow/

Reverse Google Searches Face Increased Scrutiny as Fears of Keyword Warrants for Abortion Seekers Grow

The Roe v. Wade decision has put “keyword warrants” back in the spotlight, as fears grow that law enforcement will comb through Google searches to identify women seeking abortions.

Law enforcement is able to issue warrants for specific Google searches, potentially sweeping up the queries of hundreds or thousands of unrelated individuals. But the practice is facing new legal arguments that it is a violation of Constitutional rights protecting against arbitrary and unreasonable searches.

Keyword warrants are facing their first direct challenge in federal court, but the case does not involve abortion. Still, privacy and abortion advocates are keeping careful tabs on it as the eventual ruling could determine the extent to which law enforcement is allowed to go on “fishing expeditions” for abortion seekers.

The case involves a group of teenagers charged with a residential arson in Denver that killed a family of five. The teenagers were identified by police via Google searches for the address at which the arson took place. Lawyers for the teenagers are arguing that this is a violation of Fourth Amendment protection against unreasonable searches, as police would have had to trawl an unknown number of Google searches from unrelated parties to hit upon this information.





But we knew this, right? If you are not an elected official or you do not carry a badge and gun, you must be a second class citizen and we need to keep an eye on you. (Like China does.)

https://www.nbcnews.com/politics/immigration/dhs-spent-millions-cellphone-data-track-americans-foreigners-us-says-a-rcna38684

DHS spent millions on cellphone data to track Americans and foreigners inside and outside U.S., ACLU report says

The Department of Homeland Security has paid millions of dollars since 2017 to purchase, without warrants, cellphone location data from two companies to track the movements of both Americans and foreigners inside the U.S., at U.S. borders and abroad, according to a new report released by the American Civil Liberties Union on Monday.

The report published a large collection of contracts between U.S. Customs and Border Protection, Immigration and Customs Enforcement and other parts of DHS to buy location data collected by companies Venntel and Babel Street. The contracts and other documents were obtained via the Freedom of Information Act.



Monday, July 18, 2022

I wonder what security checklist they used that did not include such basics?

https://www.wsj.com/articles/alibaba-executives-called-in-by-china-authorities-as-it-investigates-historic-data-heist-11657812800?mod=djemalertNEWS

Alibaba Executives Called In by China Authorities as It Investigates Historic Data Heist

Cybersecurity companies say Alibaba’s cloud platform that hosted Shanghai’s police database used outdated systems that didn’t offer the ability to set a password

… Cybersecurity researchers said a dashboard for managing the database had been left open on the public internet without a password for more than a year, making it easy to pilfer and erase its contents.





Inevitable perhaps, fast unlikely.

https://www.cpomagazine.com/data-privacy/different-approaches-to-data-privacy-why-eu-us-privacy-alignment-in-the-months-to-come-is-inevitable/

Different Approaches to Data Privacy: Why EU-US Privacy Alignment in the Months To Come Is Inevitable

Even though it is hardly disputable that the origins of modern data privacy, as well as of computer technology, are to be found in the US, it is currently the EU with its GDPR that sets the global tone in terms of what is the generally accepted privacy standard, especially for multinational companies operating worldwide.

The reasons for this are many, but in brief the US still does not have a comprehensive, federal privacy law for the private sector. It has been discussed for many years now, but there are no signs of anything definite just yet, even though substantial progress has been made in recent months. Having said that, FTC enforcement against companies failing to protect personally identifiable information, as well as a plethora of state laws, most notably the California Consumer Privacy Act, results in a de facto privacy standard which in some ways meets or exceeds EU practices. One interesting example would be the NIST standards and frameworks which, even though primarily intended for federal agencies, are widely adopted on a voluntary basis by private organizations and enable very refined and mature ways to govern privacy and cybersecurity. Of course, there are still many areas where US privacy falls behind its EU counterpart.

So why is the EU-US privacy alignment in the immediate future not only possible but de facto inevitable?





Change one little law, impact many others?

https://www.pogowasright.org/anonymization-v-de-identification-post-dobbs-rumblings-from-the-ftc/

Anonymization v. De-Identification, Post-Dobbs; Rumblings from the FTC

Christopher Escobedo Hart of Foley Hoag writes:

When is personal data “anonymized”? The answer to this question has largely been based on jurisdiction. If your business is in the U.S., so long as HIPAA or the CCPA does not govern, then generally aggregated or de-identified data could often be considered “anonymized” for legal compliance purposes. (Both HIPAA and the CCPA have specific requirements for what counts as “de-identified” data.) Under the GDPR, the story has been much more complicated: merely “de-identified” data is not the same as “anonymous” data, and is still governed by the GDPR as “pseudonymous” data in many instances. The point, under the GDPR, is that if it’s still possible to combine or analyze that aggregated or de-identified data in such a way that allows for identification of an individual, then it cannot be truly anonymous.
But businesses should be aware that, post-Dobbs v. Jackson Women’s Health Org. (overturning Roe v. Wade), the U.S. might look more like Europe where the differences between anonymization and de-identification are concerned.

Read more at Security, Privacy and the Law
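The GDPR's point about re-identification can be made concrete with a k-anonymity check, a common yardstick for de-identified data: if any combination of quasi-identifiers (ZIP code, age band, and so on) appears fewer than k times, those records can single someone out. The records below are invented for illustration.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"zip": "80014", "age_band": "30-39", "diagnosis": "A"},
    {"zip": "80014", "age_band": "30-39", "diagnosis": "B"},
    {"zip": "80015", "age_band": "40-49", "diagnosis": "C"},
]
print(is_k_anonymous(records, ["zip", "age_band"], 2))
```

Here the third record is a group of one, so the table fails even k=2: "de-identified" in the HIPAA/CCPA sense, yet trivially linkable, which is precisely the gap between de-identification and true anonymization.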



Sunday, July 17, 2022

Is they is or is they ain’t the same? Isn’t that the goal?

https://link.springer.com/chapter/10.1007/978-94-6265-523-2_2

Artificial Intelligence Versus Biological Intelligence: A Historical Overview

The discipline of artificial intelligence originally aimed to replicate human-level intelligence in a machine. It could be argued that the best way to replicate the behavior of a system is to emulate the mechanisms producing this behavior. But whether we should try to replicate the human brain or the cognitive faculties to accomplish this is unclear. Early symbol-based AI systems paid little regard to neuroscience and were rather successful. However, since the 1980s, artificial neural networks have become a powerful AI technique that show remarkable resemblance to what we know about the human brain. In this chapter, we highlight some of the similarities and differences between artificial and human intelligence, the history of their interconnection, what they both excel at, and what the future may hold for artificial general intelligence.





I’ll keep gathering ideas...

https://link.springer.com/article/10.1007/s43681-022-00194-0

Reconsidering the regulation of facial recognition in public spaces

This paper contributes to the discussion on effective regulation of facial recognition technologies (FRT) in public spaces. In response to the growing universalization of FRT in the United States and Europe as merely intrusive technology, we propose to distinguish scenarios in which the ethical and social risks of using FRT are unacceptable from other scenarios in which FRT can be adjusted to improve our everyday lives. We suggest that a general ban on FRT in public spaces is not an inevitable solution. Instead, we advocate for a risk-based approach, with emphasis on different use cases, that weighs moral risks and identifies appropriate countermeasures. We introduce four use cases that focus on the presence of FRT at entrances to public spaces: (1) checking identities in airports; (2) authorisation to enter office buildings; (3) checking visitors in stadiums; and (4) monitoring passers-by on open streets, to illustrate the diverse ethical and social concerns and possible responses to them. Based on the different levels of ethical and societal risks and the applicability of respective countermeasures, we call for a distinction between semi-open public spaces and open public spaces. We suggest that this distinction could not only be helpful in more effective regulation and assessment of FRT in public spaces, but also that knowledge of the different risks and countermeasures will lead to better transparency and public awareness of FRT in diverse scenarios.



(Related)

https://ojs.victoria.ac.nz/wfeess/article/view/7645

Ethics of Facial Recognition Technology in Law Enforcement: A Case Study

Facial Recognition Technology (FRT) has promising applications in law enforcement due to its efficiency and cost-effectiveness. However, this technology poses significant ethical concerns that overshadow its benefits. Responsible use of FRT requires consideration of these ethical concerns that legislation fails to cover. This study investigates the ethical issues of FRT use and relevant ethical frameworks and principles designed to combat these issues. Drawing on this, we propose and discuss a code of ethics for FRT to ensure its ethical use in the context of New Zealand law enforcement.





Similar to what ICE and TSA are doing?

https://obiter.mandela.ac.za/article/view/14254

THE LEGAL ISSUES REGARDING THE USE OF ARTIFICIAL INTELLIGENCE TO SCREEN SOCIAL MEDIA PROFILES FOR THE HIRING OF PROSPECTIVE EMPLOYEES

The fourth industrial revolution has introduced advancements in technology that have affected many commercial sectors in South Africa, and the employment sector is no exception. One of these advancements is the creation of artificial intelligence technologies that can assist humans in making everyday tasks quicker and more efficient. It has become common for organisations to screen social media profiles in order to gain information about a prospective employee. With the aid of artificial intelligence, employers can use such systems to easily sift through social media profiles and access the data they need. Although these technological creations have many successful outcomes, artificial intelligence systems can also have drawbacks, such as inadvertently discriminating against certain groups of people when data is collected, processed and stored. Issues surrounding privacy breaches are also raised where artificial intelligence systems seek to access personal information from social media profiles. Prospective employees will need to be informed that their social media profiles are being screened, and the artificial intelligence system needs to be programmed properly to ensure that data is correctly and fairly collected and processed.





The Terminator on trial?

https://link.springer.com/chapter/10.1007/978-94-6265-523-2_8

Prosecuting Killer Robots: Allocating Criminal Responsibilities for Grave Breaches of International Humanitarian Law Committed by Lethal Autonomous Weapon Systems

The fast-growing development of highly automated and autonomous weapon systems has become one of the most controversial sources of discussion in the international sphere. One of the many concerns that surface with this technology is the existence of an accountability gap. This fear stems from the complexity of holding a human operator criminally responsible for a potential failure of the weapon system. Thus, the question arises of who is to be held criminally liable for grave breaches of international humanitarian law when these crimes are not intentional. This chapter explains how we will need to rethink the responsibilities, command structure, and everyday operations within our military when engaging in the use of fully autonomous weapon systems to allow our existing legal framework to assign criminal responsibility. For this purpose, this chapter analyses the different types of criminal responsibilities that converge in the process of employing lethal autonomous weapons and determines which of them is the most appropriate for grave breaches of international humanitarian law in this case.



(Related) Who programmed the Terminator?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4159762

Are programmers in or 'out of' control? The individual criminal responsibility of programmers of autonomous weapons and self-driving cars

The increasing use of autonomous systems technology in cars and weapons could lead to a rise of harmful incidents on the roads and in the battlefield potentially amounting to crimes. Such a rise has led to questions as to who is criminally responsible for these crimes – be it the users or the programmers? This chapter seeks to clarify the role of programmers in crimes committed with autonomous systems by focusing on the use of autonomous vehicles and autonomous weapons. In assessing whether a programmer could be criminally responsible for crimes committed with autonomous technology, it is necessary to determine whether the programmer had control over this technology. Risks inherent in the use of these autonomous technologies may allow for a programmer to escape criminal liability but some risks may be foreseeable and thus considered under the programmer’s control. The central question is whether programmers exercise causal control over a chain of events leading to the commission of a crime. This chapter contends that programmers’ control begins at the initial stage of the autonomous system development process but continues in the use phase, extending to the behaviour and effects of autonomous systems technology. Based on criminal responsibility requirements and causation theories, this chapter develops a notion of meaningful human control (MHC) that may function to trace back responsibility to the programmers who could understand, foresee, and anticipate the risk of a crime being committed with autonomous systems technology.





It’s not my fault, the computer did it!

https://link.springer.com/chapter/10.1007/978-94-6265-523-2_14

Contractual Liability for the Use of AI under Dutch Law and EU Legislative Proposals

In this chapter, the contractual liability of a company (the ‘user’) using an AI system to perform its contractual obligations is analysed from a Dutch law and EU law perspective. In particular, we discuss three defences which, in the event of a breach, the user can put forward against the attribution of that breach to the user and which relate to the characteristics of AI systems, especially their capacity for autonomous activity and self-learning:

(1) the AI system was state-of-the-art when deployed,

(2) the user had no control over the AI system, and

(3) an AI system is not a tangible object and its use in the performance of contractual obligations can thus not give rise to strict liability under Article 6:77 of the Dutch Civil Code.

Following a classical legal analysis of these defences under Dutch law and in light of EU legislative proposals, the following conclusions are reached. Firstly, the user is strictly liable, subject to an exception based on unreasonableness, if the AI system was unsuitable for the purpose for which it was deployed as at the time of deployment. Advancements in scientific knowledge play no role in determining suitability. Secondly, a legislative proposal by the European Parliament allows the user to escape liability for damage caused by a non-high-risk AI system if the user took due care with respect to the selection, monitoring and maintenance of that system. Thirdly, the defence that the user is not liable because an AI system is not a tangible object is unlikely to hold.





Bigger must mean better?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4152035

Big Data Policing Capacity Measurement

Big data, algorithms, and computing technologies are revolutionizing policing. Cell phone data. Transportation data. Purchasing data. Social media and internet data. Facial recognition and biometric data. Use of these and other forms of data to investigate, and even predict, criminal activity is law enforcement’s presumptive future. Indeed, law enforcement in several major cities have already begun to develop a big data policing mindset, and new forms of data have played a central role in high-profile matters featured in the “Serial” and “To Live and Die in LA” podcasts, as well as in the Supreme Court’s leading Carpenter v. U.S. opinion. Although the ascendancy of big data policing appears inevitable, important empirical questions on local law enforcement agency capacity remain insufficiently answered. For example, do agencies have adequate capacity to facilitate big data policing? If not, how can policymakers best target resources to address capacity shortfalls? Are certain categories of agencies in a comparatively stronger position in terms of capacity? Answering questions such as these requires empirical measurement of phenomena that are notoriously difficult to measure. This Article presents a novel multidimensional measure of big data policing capacity in U.S. local law enforcement agencies: the Big Data Policing Capacity Index (BDPCI). Analysis of the BDPCI provides three principal contributions. First, it offers an overall summary of more than 2,000 local agencies’ inadequacy in big data policing capacity using a large-N dataset. Second, it identifies factors that are driving lack of capacity in agencies. Third, it illustrates how differences between groups of agencies might be analyzed based on size and location, including an illustrative ranking of the fifty U.S. states. This Article is meant to inform stakeholders on agencies’ current positions, advise on how best to improve such positions, and drive further research into empirical measurement and big data policing.
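The abstract does not spell out how the BDPCI is constructed, but a multidimensional index of this kind is typically built by normalizing each dimension across units and then aggregating into a single score. A minimal sketch of that general approach, with invented agencies, dimensions, and raw scores (none of which come from the Article):

```python
# Hedged sketch of a multidimensional capacity index in the spirit of the
# BDPCI. All agency names, dimension names, and scores below are invented
# for illustration; the Article's actual construction is not reproduced.

# Raw scores per agency on three hypothetical capacity dimensions.
raw = {
    "Agency A": {"analysts": 12, "it_budget": 4.0, "data_systems": 7},
    "Agency B": {"analysts": 3,  "it_budget": 1.5, "data_systems": 2},
    "Agency C": {"analysts": 8,  "it_budget": 2.5, "data_systems": 5},
}
dims = ["analysts", "it_budget", "data_systems"]

def minmax(values):
    """Rescale a list of raw scores to the [0, 1] interval."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Normalize each dimension across agencies, then average into one index.
names = list(raw)
normalized = {d: minmax([raw[a][d] for a in names]) for d in dims}
index = {
    a: sum(normalized[d][i] for d in dims) / len(dims)
    for i, a in enumerate(names)
}

# Rank agencies from highest to lowest measured capacity.
ranking = sorted(index, key=index.get, reverse=True)
print(ranking)  # → ['Agency A', 'Agency C', 'Agency B']
```

Equal weighting and min-max scaling are only the simplest choices here; a real index would justify its weights and handle missing survey responses.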





Should your CPO be an AI?

https://aisel.aisnet.org/amcis2022/sig_sec/sig_sec/8/

Exploring the Characteristics and Needs of the Chief Privacy Officer in Organizations

Over the past two decades, the growth in technology (e.g. social networking, big data, smartphones, Internet of Things, artificial intelligence) and the increased collection of customer data, mixed with various data breaches, have increased the need to focus more on information privacy. Various laws and regulations have been established, such as the GDPR in Europe and various state-level regulations in the United States, to ensure the protection of customers and their data. The Chief Privacy Officer role was established in the 1990s, with a strong research focus in the early 2000s. However, little attention has been given to the role of the CPO in the past decade. Due to the increases in technology, private data collection, breaches, and privacy regulations, there is a need to reevaluate the role of the CPO and the evolving responsibilities it entails.





Looking at what we’re looking at.

https://link.springer.com/chapter/10.1007/978-94-6265-523-2_23

Ask the Data: A Machine Learning Analysis of the Legal Scholarship on Artificial Intelligence

In recent decades, the study of the legal implications of artificial intelligence (AI) has increasingly attracted the attention of the scholarly community. The proliferation of articles on the regulation of algorithms has gone hand in hand with the acknowledgment of the existence of substantial risks associated with current applications of AI. These relate to the widening of inequality, the deployment of discriminatory practices, the potential breach of fundamental rights such as privacy, and the use of AI-powered tools to surveil people and workers. This chapter aims to map the existing legal debate on AI and robotics by means of bibliometric analysis and unsupervised machine learning. By using structural topic modeling (STM) on abstracts of 1,298 articles published in peer-reviewed legal journals from 1982 to 2020, the chapter explores what the dominant topics of discussion are and how the academic debate on AI has evolved over the years. The analysis results in a systematic computation of 13 topics of interest among legal scholars, showing research trends and potential areas for future work.