Monday, May 23, 2022

What could you do with an Australian driver's license?

https://www.schneier.com/blog/archives/2022/05/forging-australian-drivers.html

Forging Australian Driver’s Licenses

The New South Wales digital driver’s license has multiple implementation flaws that allow for easy forgeries.

This file is encrypted using AES-256-CBC encryption combined with Base64 encoding.
A 4-digit application PIN (which gets set during the initial onboarding when a user first installs the application) is the encryption password used to protect or encrypt the licence data.
The problem here is that an attacker who has access to the encrypted licence data (whether that be through accessing a phone backup, direct access to the device or remote compromise) could easily brute-force this 4-digit PIN by using a script that would try all 10,000 combinations….
[…]
The second design flaw that is favourable for attackers is that the Digital Driver Licence data is never validated against the back-end authority which is the Service NSW API/database.
This means that the application has no native method to validate the Digital Driver Licence data that exists on the phone and thus cannot perform further actions such as warn users when this data has been modified.
As the Digital Licence is stored on the client’s device, validation should take place to ensure the local copy of the data actually matches the Digital Driver’s Licence data that was originally downloaded from the Service NSW API.
As this verification does not take place, an attacker is able to display the edited data on the Service NSW application without any preventative factors.
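To make the first flaw concrete, here is a minimal sketch of the 10,000-combination brute force. Everything in it is an assumption for illustration: the key derivation, the salt, and the JSON marker are invented, and a toy XOR stream cipher stands in for AES-256-CBC (Python's standard library has no AES; with a real AES library the loop is identical, only the decrypt call changes).

```python
import base64
import hashlib
import itertools

# Assumed known-plaintext prefix used to recognise a successful decrypt.
MARKER = b'{"licence"'

def derive_key(pin):
    # Stand-in key derivation; the real app's KDF is not public. A fast
    # hash makes the weakness obvious, but even a slow KDF only multiplies
    # the cost of 10,000 guesses by a constant factor.
    return hashlib.sha256(b"demo-salt" + pin.encode()).digest()

def xor_stream(key, data):
    # Toy XOR "cipher" standing in for AES-256-CBC. It is symmetric,
    # so the same function encrypts and decrypts in this sketch.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def encrypt(pin, plaintext):
    # Mimics the described storage format: encrypt, then Base64-encode.
    return base64.b64encode(xor_stream(derive_key(pin), plaintext)).decode()

def brute_force(blob_b64):
    # Try every 4-digit PIN (0000..9999) against the stored blob.
    blob = base64.b64decode(blob_b64)
    for pin in (f"{n:04d}" for n in range(10_000)):
        if xor_stream(derive_key(pin), blob).startswith(MARKER):
            return pin
    return None

blob = encrypt("4071", b'{"licence": {"name": "Jane Citizen"}}')
print(brute_force(blob))  # prints 4071
```

The second flaw compounds the first: because the app never re-validates the local blob against the Service NSW back end, an attacker who recovers the PIN can edit the JSON, re-encrypt it with the same PIN, and the app will display the altered licence without complaint.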

There’s a lot more in the blog post.





I saw this as inevitable, even though I have been waiting over 40 years for it.

https://www.cpomagazine.com/cyber-security/personal-liability-for-directors-who-disregard-cybersecurity/

Personal Liability for Directors Who Disregard Cybersecurity

In recent months, a trend has begun to emerge among plaintiffs’ lawyers seeking to file cybersecurity incident-related shareholder derivative lawsuits – attorneys are increasingly now filing claims specifically based on failures surrounding duty of oversight. In November of 2021, a shareholder derivative lawsuit was filed against T-Mobile USA’s board of directors, pointing to a lack of monitoring and acting upon obvious red flags. Kevin M. Lacroix excellently outlines this trend in The D&O Diary. Directors should take notice.





I can see by your face that you live in the UK...

https://www.theverge.com/2022/5/23/23137603/clearview-ai-ordered-delete-data-uk-residents-ico-fine

Clearview AI ordered to delete facial recognition data belonging to UK residents

Controversial facial recognition company Clearview AI has been ordered to delete all data belonging to UK residents by the country’s privacy watchdog, the Information Commissioner’s Office (ICO). The ICO also fined Clearview £7.5 million ($9.4 million) for failing to follow the UK’s data protection laws.

It’s the fourth time Clearview has been ordered to delete national data in this way, following similar orders and fines issued in Australia, France, and Italy.

However, although ICO has issued a fine against Clearview and ordered the company to delete UK data, it’s unclear how this might be enforced if Clearview has no business or customers in the country to sanction. In response to a similar deletion order and fine issued in Italy under EU law earlier this year, Clearview’s CEO Hoan Ton-That responded that the US-based company was simply not subject to EU legislation. We’ve reached out to both the ICO and Clearview for further clarity on these points.





More intrusive than I had imagined, but I guess even a simple license plate reader wants to offer more and better technology.

https://www.stltoday.com/news/local/crime-and-courts/meet-the-falcon-ai-powered-license-readers-multiply-as-police-tool-in-st-louis-suburbs/article_25ee76f8-836a-5610-9d0e-613be652c55c.html

Meet the Falcon: AI-powered license readers multiply as police tool in St. Louis suburbs

In the hours after Metro bus driver Jonathan Cobb was shot on Dec. 3, detectives started with a broad lead: The shooter drove a red or maroon PT Cruiser.

Cobb was shot and critically injured seemingly at random that night while ferrying a bus full of passengers in the Normandy area.

… Falcon cameras from Atlanta-based startup Flock Safety have over the past three years proliferated on area roadways. They record license plates, but also use artificial intelligence to collect what the company calls a “vehicle fingerprint” — the make, model, color and identifying features from each passing car.

In the Metro shooting case, Flock allowed police to search for every PT Cruiser matching the description that passed a growing network of Falcon cameras in the St. Louis suburbs.

… Martin is among the law enforcement officials pushing for more Flock cameras in the region. He said license plate readers have been used in 75% of homicide arrests in the cooperative’s jurisdiction since 2018.

Critics worry the databases Falcon cameras create of each passing car are invasive and ripe for abuse by police and private entities with access.

“Law enforcement will always cite stories where the tool saves the day, but I think that as citizens we have to go beyond the question: Will this ever solve crimes?” said Jay Stanley, a senior policy analyst with the American Civil Liberties Union who wrote a paper published in March on the spread of Flock Safety. “There’s no question that if you record everyone all the time you could solve more crimes. We could solve crimes if you let the government put cameras in everybody’s bedrooms, but we’re not willing to go there. Are we willing to let cameras change the nature of our public spaces?”

Last month, Flock introduced the Raven, a gunshot audio detection tool that competes with ShotSpotter, which is already used to cover miles of areas with high crime rates in St. Louis and St. Louis County.

Flock says the Raven can use artificial intelligence to identify “sounds that indicate crimes in progress,” including screeching tires, the sawing of catalytic converters or the breaking of glass.





This raises another question: What other “crimes” might attract similar increases in surveillance? Entering a mosque? Registering to vote? Eating a vegan diet?

https://www.csoonline.com/article/3661689/data-protection-concerns-spike-as-states-get-ready-to-outlaw-abortion.html#tk.rss_all

Data protection concerns spike as states get ready to outlaw abortion

The use of personal data from brokers, apps, smartphones, and browsers to identify those seeking an abortion raises new data protection and privacy risks.

… Enforcement of the law will likely hinge on increased digital surveillance by authorities to more efficiently identify, arrest, and prosecute pregnant people who contemplate or seek abortions.





Resource. (I hope to improve all the way up to “Not too bad!”)

https://www.makeuseof.com/reedsy-poetry-next-level/

How Reedsy Can Help You Take Your Poetry to the Next Level

Reedsy can guide you in three key areas: understanding poetry formats, practicing the creative process, and publishing your work.



Sunday, May 22, 2022

Want always comes before need?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4112042

Ai-Powered Public Surveillance Systems: Why We (Might) Need Them and How We Want Them

In this article, we address the introduction of AI-powered surveillance systems in our society by looking at the deployment of real-time facial recognition technologies (FRT) in public spaces and public health surveillance technologies, in particular contact tracing applications. Both cases of surveillance technologies assist public authorities in the enforcement of the law by allowing the tracking of individual movements and extrapolating results towards monitoring and predicting social behavior. Therefore, they are considered as potentially useful tools in response to societal crises, such as those generated by crime and health-related pandemics. To approach the assessment of the potentials and threats of such tools, we offer a framework with three dimensions: a function dimension examines the type, quality and quantity of data the system needs to employ to work effectively; a consent dimension considers the user’s right to be informed about and reject the use of surveillance, questioning whether consent is achievable and whether the user can decide fully autonomously/independently; and a societal dimension that frames vulnerabilities and the impacts of the increased empowerment of established political regimes through new means to control populations based on data surveillance. Our analysis framework can assist public authorities in their decisions on how to design and deploy public surveillance tools in a way that enables compliance with the law while highlighting individual and societal tradeoffs.





Probably not probable?

https://scholarlycommons.law.wlu.edu/wlulr/vol79/iss2/7/

The Computer Got It Wrong: Facial Recognition Technology and Establishing Probable Cause to Arrest

Facial recognition technology (FRT) is a popular tool among police, who use it to identify suspects using photographs or still-images from videos. The technology is far from perfect. Recent studies highlight that many FRT systems are less effective at identifying people of color, women, older people, and children. These race, gender, and age biases arise because FRT is often “trained” using non-diverse faces. As a result, police have wrongfully arrested Black men based on mistaken FRT identifications. This Note explores the intersection of facial recognition technology and probable cause to arrest.

Courts rarely, if ever, examine FRT’s role in establishing probable cause. This Note suggests a framework for how courts can evaluate FRT and probable cause. Case law about drug-sniffing dogs provides a starting point for assessing what role an FRT identification should play in probable cause determinations. But drug dogs are not a perfect analogue for FRT. Two important differences between these two policing tools warrant treating FRT with greater scrutiny than drug dogs. First, FRT has baked-in racial, gender, and age biases that drug dogs lack. Second, FRT is a digital policing tool, which recent Supreme Court precedent suggests merits more judicial scrutiny than non-digital police tools like dogs.

Giving FRT a closer look leads to the conclusion that an FRT identification alone is insufficient to establish probable cause. FRT relies on flawed inputs (non-diverse data) that lead to flawed outputs (demographic discrepancies in misidentifications). These problematic inputs and outputs provide complementary reasons why an FRT identification alone cannot provide probable cause.



(Related)

https://arxiv.org/abs/2205.07299

Regulating Facial Processing Technologies: Tensions Between Legal and Technical Considerations in the Application of Illinois BIPA

Harms resulting from the development and deployment of facial processing technologies (FPT) have been met with increasing controversy. Several states and cities in the U.S. have banned the use of facial recognition by law enforcement and governments, but FPT are still being developed and used in a wide variety of contexts where they primarily are regulated by state biometric information privacy laws. Among these laws, the 2008 Illinois Biometric Information Privacy Act (BIPA) has generated a significant amount of litigation. Yet, with most BIPA lawsuits reaching settlements before there have been meaningful clarifications of relevant technical intricacies and legal definitions, there remains a great degree of uncertainty as to how exactly this law applies to FPT. What we have found through applications of BIPA in FPT litigation so far, however, points to potential disconnects between technical and legal communities. This paper analyzes what we know based on BIPA court proceedings and highlights these points of tension: areas where the technical operationalization of BIPA may create unintended and undesirable incentives for FPT development, as well as areas where BIPA litigation can bring to light the limitations of solely technical methods in achieving legal privacy values. These factors are relevant for (i) reasoning about biometric information privacy laws as a governing mechanism for FPT, (ii) assessing the potential harms of FPT, and (iii) providing incentives for the mitigation of these harms. By illuminating these considerations, we hope to empower courts and lawmakers to take a more nuanced approach to regulating FPT and developers to better understand privacy values in the current U.S. legal landscape.





This is a scary App…

https://ieeexplore.ieee.org/abstract/document/9773277

A Mental Trespass? Unveiling Truth, Exposing Thoughts and Threatening Civil Liberties with Non-Invasive AI Lie Detection

Imagine an app on your phone or computer that can tell if you are being dishonest, just by processing affective features of your facial expressions, body movements, and voice. People could ask about your political preferences, your sexual orientation, and immediately determine which of your responses are honest and which are not. In this paper we argue why artificial intelligence-based, non-invasive lie detection technologies are likely to experience a rapid advancement in the coming years, and that it would be irresponsible to wait any longer before discussing their implications. To understand the perspective of a “reasonable” person, we conducted a survey of 129 individuals, and identified accuracy and consent as the critical factors. In our analysis, we distinguish two types of lie detection technologies: “truth metering” and “thought exposing.” We generally find that truth metering is already largely within the scope of existing US federal and state laws, albeit with some notable exceptions. In contrast, we find that current regulation of thought exposing technologies is ambiguous and inadequate to safeguard civil liberties. In order to rectify these shortcomings, we introduce the legal concept of “mental trespass” and use this concept as the basis for proposed legislation.





Convinced it will, or afraid it will?

https://www.military.com/daily-news/2022/05/21/milley-tells-west-point-cadets-technology-will-transform-war.html

Milley Tells West Point Cadets Technology Will Transform War

The top U.S. military officer challenged the next generation of Army soldiers on Saturday to prepare America's military to fight future wars that may look little like the wars of today.

Army Gen. Mark Milley, chairman of the Joint Chiefs of Staff, painted a grim picture of a world that is becoming more unstable, with great powers intent on changing the global order. He told graduating cadets at the U.S. Military Academy at West Point that they will bear the responsibility to make sure America is ready.



(Related)

https://warontherocks.com/2022/05/is-artificial-intelligence-made-in-humanitys-image-lessons-for-an-ai-military-education/

IS ARTIFICIAL INTELLIGENCE MADE IN HUMANITY’S IMAGE? LESSONS FOR AN AI MILITARY EDUCATION

Artificial intelligence is not like us. For all of AI’s diverse applications, human intelligence is not at risk of losing its most distinctive characteristics to its artificial creations.

Yet, when AI applications are brought to bear on matters of national security, they are often subjected to an anthropomorphizing tendency that inappropriately associates human intellectual abilities with AI-enabled machines. A rigorous AI military education should recognize that this anthropomorphizing is irrational and problematic, reflecting a poor understanding of both human and artificial intelligence. The most effective way to mitigate this anthropomorphic bias is through engagement with the study of human cognition — cognitive science.



(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4109202

Aspects of Realizing (Meaningful) Human Control: A Legal Perspective

The concept of ‘meaningful human control’ (MHC) has progressively emerged as a key frame of reference to conceptualize the difficulties posed by military applications of artificial intelligence (AI), and to identify solutions to mitigate these challenges. At the same time, this notion remains relatively indeterminate and difficult to operationalize. If MHC is to support the existing framework of international law applicable to military AI, it needs to be clarified in order to deal with the challenges of AI broadly construed, not limited to ‘autonomous weapons systems’ (AWS). This chapter seeks to refine the notion of MHC by exploring its nature and purpose, and reflecting on how MHC relates to core concepts of human agency and responsibility. Building on this analysis, we propose ways to operationalize MHC, in particular by putting greater emphasis on pre-deployment stages. A legal ‘compliance by design’ approach is advanced by the authors as a means to address the complex realities when military decision-making processes are mediated by AI-enabled technologies.



Saturday, May 21, 2022

Providing both the tools and rules that enabled an authorized user to “win” millions of dollars could make the lawsuits a bit confusing.

https://www.bloomberg.com/news/features/2022-05-19/crypto-platform-hack-rocks-blockchain-community

The Math Prodigy Whose Hack Upended DeFi Won’t Give Back His Millions

An 18-year-old graduate student exploited a weakness in Indexed Finance’s code and opened a legal conundrum that’s still rocking the blockchain community. Then he disappeared.

Medjedovic hasn’t officially responded to either suit; he told me he doesn’t even have a lawyer in Ontario. But in our email exchanges, he argued that he’d executed a perfectly legal series of trades. Nothing he did “involves getting access to a system I was not allowed access into,” he said. “I did not steal anyone’s private keys. I interacted with the smart contract according to its very own publicly available rules. The people who lost internet tokens in this trade were other people seeking to use the smart contract to their own advantage and taking on risky trading positions that they, apparently, did not fully understand.” Medjedovic added that he’d taken on “substantial risk” in pursuing this strategy. If he’d failed he would have lost “a pretty large chunk of my portfolio.”

The case raises several tricky questions about how people should be allowed to interact with code on the blockchain. For instance, the plaintiffs allege that Medjedovic made a “false representation” by manipulating the value of the tokens in the pools. But did Medjedovic do this, or did the algorithm? Barry Sookman, a lawyer in Toronto specializing in information technology, says it’s a distinction without a difference: “Individuals are responsible for the activities of technologies they control.”

And if Medjedovic was engaged in deception, who was being deceived? That’s one basis on which Andrew Lin, a Dallas-based lawyer who advises Medjedovic but isn’t formally involved in the Ontario cases, rejects the false representation argument. “It’s unclear who he made a misrepresentation to,” Lin says. “He set forth lines of code. The code itself is neither true nor false.”





Always useful to know who is playing for the other side.

https://www.databreaches.net/major-cyber-organizations-of-the-russian-intelligence-services/

Major Cyber Organizations of the Russian Intelligence Services

The Office of Information Security Securing One HHS and Health Sector Security Coordination Center (HC3) have released slides from:

Major Cyber Organizations of the Russian Intelligence Services (pdf, 27 pp) TLP: WHITE, ID# 202205191300 May 19, 2022

• Russian Intelligence Services’ Structure

• Russian Intelligence Services’ Mandates





Is this something a small country (or a US state) could cheerfully ignore?

https://www.cpomagazine.com/cyber-security/could-a-cyber-attack-overthrow-a-government-conti-ransomware-group-now-threatening-to-topple-costa-rican-government-if-ransom-not-paid/

Could a Cyber Attack Overthrow a Government? Conti Ransomware Group Now Threatening To Topple Costa Rican Government if Ransom Not Paid

The spate of ransomware attacks on critical infrastructure companies in 2021 was seen as a major escalation by cyber criminal groups. The Conti ransomware gang appears to be attempting to skip several steps by threatening to overthrow the government of Costa Rica, having established a presence throughout its national agencies.

The threat is almost certainly hollow, but it showcases the boldness with which major ransomware groups are operating even after international law enforcement operations took out previous line-crossers REvil and DarkSide among others.





You can provide all the Privacy and Security features you advertise as long as you don’t really provide all those Privacy and Security features. Encryption is Okay as long as we get copies of the plaintext.

https://www.cpomagazine.com/data-privacy/vpn-providers-ordered-by-indian-government-to-hold-all-customer-data-for-five-years-hand-over-to-government-upon-request/

VPN Providers Ordered by Indian Government To Hold All Customer Data for Five Years, Hand Over to Government Upon Request

Virtual private networks (VPN) sell themselves on their ability to anonymize traffic and protect user identities from any prying eyes. A new order from the Indian government could essentially undermine the business of VPN providers in the country, requiring the personal information of all users to be collected and this profile of customer data to be held for up to five years.

The country’s Computer Emergency Response Team (CERT-In), an office of the Ministry of Electronics and Information Technology tasked with taking point on cybersecurity threats, would also require VPN providers to grant it access to this customer data upon request.





Rethinking war. Why would anyone think that nothing would change?

https://breakingdefense.com/2022/05/ukraine-shows-that-city-hopping-is-the-new-era-of-defensive-warfare/

Ukraine shows that city hopping is the ‘new era’ of defensive warfare

The future of land warfare may not be hordes of missiles raining down on an opposing force, crushing it and giving the attacker the advantage. Instead, the war in Ukraine may demonstrate that the advantage has swung to the defender, who can strike from hiding using tactical weapons in part because of the power of drone surveillance.

Maj. Gen. Scott Winter, commander of Australia’s 1st Division, told the more than 2,000 attendees at AUSA’s Pacific Land Warfare Conference that land warfare now increasingly resembles the island hopping strategy America followed in the Pacific during World War II. Drones create what he called “massive no-man’s lands,” stretching thousands of kilometers. Major attacking forces then get struck by smaller units hiding in urban areas, and suffering losses and disruptions to their crucial supply lines as they move between cities, tracked all the way by unmanned cameras in the sky.



Friday, May 20, 2022

This is not a “Get out of jail free” card for my Ethical Hackers. More a “Stay out of jail, IF ...” card.

https://www.theregister.com/2022/05/20/cfaa_rule_change/

US won’t prosecute ‘good faith’ security researchers under CFAA

The US Justice Department has directed prosecutors not to charge "good-faith security researchers" with violating the Computer Fraud and Abuse Act (CFAA) if their reasons for hacking are ethical — things like bug hunting, responsible vulnerability disclosure, or above-board penetration testing.

Good-faith, according to the policy [PDF], means using a computer "solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability."





Illustrating complexity for my students.

https://www.cpomagazine.com/data-protection/data-privacy-conundrum-when-different-states-play-by-different-rules/

Data Privacy Conundrum: When Different States Play by Different Rules. . .

It’s been less than two and a half years since the California Consumer Privacy Act, also known as CCPA, went into effect, but the influence of that signature legislation is already incalculable. Like the General Data Protection Regulation (GDPR), the European mandate that came before it, this set of wide-ranging regulations has fundamentally changed the conversation on data privacy and reset the clock on what government can and should do to protect consumers’ personal information.

Even CCPA won’t be CCPA much longer—when 2024 arrives, it’ll be CPRA, or the California Privacy Rights Act, which encompasses its predecessor while establishing more stringent measures (and enforcement bodies to make sure they stick). However, there are even bigger changes on the horizon, and they potentially affect every company doing business in every state.





To Bio or not to Bio? (And other interesting questions)

https://fpf.org/blog/when-is-a-biometric-no-longer-a-biometric/

WHEN IS A BIOMETRIC NO LONGER A BIOMETRIC?

In October 2021, the White House Office of Science and Technology Policy (OSTP) published a Request for Information (RFI) regarding uses, harms, and recommendations for biometric technologies. Over 130 entities responded to the RFI, including advocacy organizations, scientists, experts in healthcare, lawyers, and technology companies. While most commenters agreed on core concepts of biometric technologies used to identify or verify identity (with differences in how to address it in policy), there was clear division as to what extent the law should apply to emerging technologies used for physical detection and characterization (such as skin cancer detection or diagnostic tools). These comments reveal that there is no general consensus on what “biometrics” should entail and thus what the applicable scope of law should be.





...and humans shall have the rights AI shall grant them, and no more.

https://www.bespacific.com/human-rights-and-algorithmic-opacity/

Data Privacy, Human Rights, and Algorithmic Opacity

Lu, Sylvia Si-Wei, Data Privacy, Human Rights, and Algorithmic Opacity (May 6, 2022). California Law Review, Vol. 110, 2022 Forthcoming, Available at SSRN: https://ssrn.com/abstract=4004716

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. However, machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society. “The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet, in today’s policy debate, algorithmic transparency in a privacy context is an issue that is equally important but managerially disregarded, commercially evasive, and legally unactualized. This Note illustrates how regulators should rethink strategies regarding transparency for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy issues caused by algorithmic opacity, and proposes new algorithmic transparency strategies toward privacy protection, along with a broad array of policy implications and suggested moves. 
The analytical results indicate that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.”





Perspective.

https://www.techrepublic.com/article/ai-remains-priority-ceos-gartner-survey/

AI remains priority for CEOs, according to new Gartner survey

For the third year running, AI is the top priority for CEOs, according to a survey of CEOs and senior executives released by Gartner on Wednesday.

The survey, “2022 CEO Survey — The Year Perspectives Changed,” gauged the opinions of CEOs and top executives on a range of issues from the workforce to the environment and digitalization. The findings also revealed that the metaverse, which has received a lot of hype in the last year, especially since the rebranding of Facebook to Meta, is not as relevant to business leaders – 63% say that they do not see the metaverse as a key technology for their organization.





For my students.

https://insights.dice.com/2022/05/20/are-there-a-lot-of-artificial-intelligence-a-i-jobs-right-now/

Are There a Lot of Artificial Intelligence (A.I.) Jobs Right Now?

Interested in a career in machine learning and artificial intelligence (A.I.)? Curious about the number of opportunities out there? A new breakdown shows that A.I. remains a highly specialized field with relatively few job openings—but that will almost certainly change in coming years.

CompTIA’s monthly Tech Jobs Report reveals that states with the largest tech hubs—including California, Texas, Washington, and Massachusetts—lead when it comes to A.I.-related job postings.



Thursday, May 19, 2022

It’s called collateral damage. This is not the first time it has happened.

https://www.businessinsider.com/russian-cyberattacks-on-ukraine-may-have-gotten-out-of-hand-2022-5?r=US&IR=T

Cyberattacks quietly launched by Russia before its invasion of Ukraine may have been more damaging than intended

Russian hackers went after a variety of Ukrainian targets in the private and public sectors, but one cyber weapon aimed at a specific military target spilled over and affected tens of thousands of devices outside Ukraine.

A few hours before the Russian invasion began on February 24, Russian hackers launched a cyberweapon against Viasat, an American satellite communications company that has been providing communication services to the Ukrainian military.

Named "AcidRain," the cyberweapon was a kind of malware known as a "wiper" that targeted Viasat modems and routers and erased all their data before permanently disabling them.

However, the Russian hackers appear to have let AcidRain run amok, either not able or not caring to limit the attack to Ukrainian devices.





Interesting language to describe a “research” vessel...

https://www.scmp.com/news/china/science/article/3178382/chinas-world-first-drone-carrier-new-marine-species-using-ai

China’s world-first drone carrier is a new ‘marine species’ using AI for unmanned maritime intelligence

On Wednesday, China launched the world’s first drone carrier capable of operating on its own.

The unmanned ship, which can be controlled remotely and navigate autonomously in open water, will be a powerful tool for the nation to carry out marine scientific research and observation, according to the state-run Science and Technology Daily.

It comes as artificial intelligence plays an increasingly important role in maintaining maritime security, controlling sea lanes and competing for marine resources. China aims to use AI technology to expand its maritime influence.

The wide deck of the ship can carry dozens of unmanned vehicles, including drones, unmanned ships and submersibles, and the equipment will be able to form a network to observe targets, according to the report.

Last year, Zhuhai Yunzhou Intelligence Technology Co, a leading developer of unmanned surface vehicles, announced the company had developed an unmanned high-speed vessel, a breakthrough in its “dynamic cooperation confrontation technology”, according to the state-owned Global Times.

The report said the vessel could quickly intercept, besiege and expel invasive targets and it marked a milestone in the development of unmanned maritime intelligence equipment.





Do lawyers often use such pretexts to extract data? Just asking, because there may be something profitable here…

https://www.pogowasright.org/a-sham-website-chhabria-questions-legitimacy-of-plaintiff-in-subpoena-to-unveil-anonymous-twitter-user/

‘A Sham Website’?: Chhabria Questions Legitimacy of Plaintiff in Subpoena to Unveil Anonymous Twitter User

Meghann M. Cuniff reports:

A federal judge has said he’s ready to quash a subpoena to Twitter over an anonymous user after pressing for more information about the limited liability company behind it, accusing its website of being a “sham” and suggesting its attorney doesn’t want an investigation into the people behind it.

Lawrence Hadley, a Glaser Weil Fink Howard Avchen & Shapiro attorney representing Bayside LLC, told U.S. District Judge Vince Chhabria of the Northern District of California he doesn’t wish to submit further evidence in support of the subpoena, but Chhabria wondered if he can push for it, mentioning his ability to issue sanctions and suggesting he has “an independent duty to explore whether Bayside has abused the judicial process.”

Read more at The Recorder.





Governance or another layer of bureaucracy?

https://www.airforcemag.com/new-pentagon-office-overseeing-data-and-ai-nearing-foc/

New Pentagon Office Overseeing Data and AI Nearing FOC

As the Defense Department looks to accelerate use of artificial intelligence and to connect its sensors and shooters into one massive data network, a new office overseeing those efforts will reach full operating capability in the coming weeks.

The Office of the Chief Data and Artificial Intelligence Officer (CDAO) will reach FOC by June 1, John Sherman, the Pentagon’s chief information officer and acting CDAO, told lawmakers May 18.

In the meantime, those already in the office are working to define its structure. To this point, AI projects across the Pentagon have formed a massive, sprawling enterprise—there are more than 600 efforts currently underway, Defense Secretary Lloyd J. Austin III has said—making consolidation a key priority.

And it’s not just the larger DOD-wide offices and efforts that need to be coordinated—the services have their own AI ambitions. The Department of the Air Force, in particular, has already named its new chief data and AI officer, Brig. Gen. John M. Olson, and pursued projects to integrate AI into unmanned autonomous aircraft and target identification.

… “Just as when the world came to terms with the horrors of chemical weapons in World War I, and the Geneva Convention was the result, I think this is a second Geneva Convention moment,” said Moulton, who served as an officer in the Marine Corps. “… I get that this basically falls under the State Department. But I don’t think enough people in State appreciate how important this is, and as one of the leaders in our government on the use and employment of AI, I would strongly encourage you to help mount an effort to work on this broader problem.”

Palmieri agreed with Moulton and revealed that DOD is “in the last few weeks of coordination” in developing a strategy for responsible AI.





The IRS is looking at a “face tax”?

https://krebsonsecurity.com/2022/05/senators-urge-ftc-to-probe-id-me-over-selfie-data/

Senators Urge FTC to Probe ID.me Over Selfie Data

Some of the more tech-savvy Democrats in the U.S. Senate are asking the Federal Trade Commission (FTC) to investigate identity-proofing company ID.me for “deceptive statements” the company and its founder allegedly made over how they handle facial recognition data collected on behalf of the Internal Revenue Service, which until recently required anyone seeking a new IRS account online to provide a live video selfie to ID.me.





Resources. Completing my SciFi collection.

https://www.makeuseof.com/best-websites-second-hand-books/

The 5 Best Websites to Buy Second-Hand Books

If you're looking to score a bargain on expanding your book collection, second-hand sites are a good way to go. Here are five of the best.





Perspective.

https://www.bespacific.com/robophobia/

Robophobia

Robophobia, by Andrew Keane Woods, University of Colorado Law Review, Volume 93, Issue 1.

“Robots—machines, algorithms, artificial intelligence—play an increasingly important role in society, often supplementing or even replacing human judgment. Scholars have rightly become concerned with the fairness, accuracy, and humanity of these systems. Indeed, anxiety about machine bias is at a fever pitch. While these concerns are important, they nearly all run in one direction: we worry about robot bias against humans; we rarely worry about human bias against robots. This is a mistake. Not because robots deserve, in some deontological sense, to be treated fairly—although that may be true—but because our bias against nonhuman deciders is bad for us. For example, it would be a mistake to reject self-driving cars merely because they cause a single fatal accident. Yet all too often this is what we do. We tolerate enormous risk from our fellow humans but almost none from machines. A substantial literature—almost entirely ignored by legal scholars concerned with algorithmic bias—suggests that we routinely prefer worse-performing humans over better-performing robots. We do this on our roads, in our courthouses, in our military, and in our hospitals. Our bias against robots is costly, and it will only get more so as robots become more capable. This Article catalogs the many different forms of antirobot bias and suggests some reforms to curtail the harmful effects of that bias. The Article’s descriptive contribution is to develop a taxonomy of robophobia. Its normative contribution is to offer some reasons to be less biased against robots. The stakes could hardly be higher. We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers.”





Perspective. (Lawyer automation)

https://www.jdsupra.com/legalnews/legal-ai-series-chapter-nine-early-case-9773382/

Legal AI Series [Chapter Nine]: Early Case Assessment Software: AI’s “Inner Eye” to Discovery Processes

Artificial intelligence has given legal professionals an arsenal of tools to help them tackle the challenges of ESI and its unprecedented growth in the modern world. However, so far in our AI Legal Revolution series, the tools we’ve discussed have largely been reactive; solutions that attempt to resolve problems rather than anticipate them.

In other words, an alley-oop to attorneys who are desperately scrambling to play catch up.

Don’t get us wrong, these “catch up” tools are a much-needed boost over some of document review’s biggest hurdles. But what if AI software could do more than just react… what if, instead, it could act?

With early case assessment (ECA) software, attorneys now have the ability to do just that.

Here’s a closer look at the clairvoyant powers of ECA software, and how this technology can be used to improve discovery processes for legal professionals around the globe.