Saturday, May 07, 2022

I assume it is all about the assumptions programmed in.

https://www.brookings.edu/techstream/understanding-the-errors-introduced-by-military-ai-applications/

Understanding the errors introduced by military AI applications

On March 22, 2003, two days into the U.S.-led invasion of Iraq, American troops fired a Patriot interceptor missile at what they assumed was an Iraqi anti-radiation missile designed to destroy air-defense systems. Acting on the recommendation of their computer-powered weapon, the Americans fired in self-defense, thinking they were shooting down a missile coming to destroy their outpost. What the Patriot missile system had identified as an incoming missile was in fact a UK Tornado fighter jet, and when the Patriot struck the aircraft, it killed the two crew members on board instantly. The deaths were the first losses suffered by the Royal Air Force in the war and the tragic result of friendly fire.

A subsequent RAF Board of Inquiry investigation concluded that the shoot-down was the result of a combination of factors: how the Patriot missile classified targets, rules for firing the missiles, autonomous operation of Patriot missile batteries, and several other technical and procedural factors, like the Tornado not broadcasting its “friend or foe” identifier at the time of the friendly fire. The destruction of Tornado ZG710, the report concluded, represented a tragic error enabled by the missile’s computer routines.





Another opportunity for AI-generated errors.

https://venturebeat.com/2022/05/06/employment-ai-regulations-5-takeaways-for-technical-decision-makers/

5 ways to address regulations around AI-enabled hiring and employment

In November, the New York City Council passed the first bill in the U.S. to broadly address the use of AI in hiring and employment. It would require hiring vendors to conduct annual bias audits of artificial intelligence (AI) use in the city’s hiring processes and tools.

But that was just the beginning for proposed regulations on the use of employment AI tools. The European Commission recently drafted proposals that would protect gig workers from AI-enabled monitoring. And this past April, California introduced the Workplace Technology Accountability Act, or Assembly Bill 1651, which proposes that employees be notified before data is collected, monitoring tools are used, or algorithms are deployed, with the right to review and correct collected data. It would limit the use of monitoring technologies to job-related use cases and valid business practices, and it would require employers to conduct impact assessments on their use of algorithms and data collection.

This kind of legislation around the use of AI in hiring and employment is becoming more common, Beena Ammanath, executive director of the Global Deloitte AI Institute, told VentureBeat. The question is, what should HR departments and technical decision-makers be thinking about as AI regulation evolves?





Interesting, but I don’t think it will spread as completely as they hope.

https://arstechnica.com/information-technology/2022/05/how-apple-google-and-microsoft-will-kill-passwords-and-phishing-in-1-stroke/

How Apple, Google, and Microsoft will kill passwords and phishing in one stroke

The program that Apple, Google, and Microsoft are rolling out will finally organize the current disarray of MFA services in some significant ways. Once it’s fully implemented, I’ll be able to use my iPhone to store a single token that will authenticate me on any of those three companies' services (and, one expects, many more follow-on services). The same credential can also be stored on a device running Android or Windows.

By presenting a facial scan or fingerprint to the device, I’ll be able to log in without having to type a password, which is faster and much more convenient. Equally important, the credential can be stored online so that it’s available when I replace or lose my current phone, solving another problem that has plagued some MFA users—the risk of being locked out of accounts when phones are lost or stolen. The recovery process works by using an already authenticated device to download the credential, with no password required.





Tools & Techniques.

https://www.makeuseof.com/tag/how-to-trace-your-emails-back-to-the-source/

How to Trace Emails Back to Their Source IP Address
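
The core technique here is reading a message’s Received headers: each mail server that handles a message prepends one, so walking them from bottom to top traces the path back toward the origin. A minimal Python sketch of the idea (the filename and the IP-matching regex are my own illustration; note that the earliest hops can be forged, so only headers added by servers you trust are reliable):

import re
from email import policy
from email.parser import BytesParser

# Parse a raw message saved to disk (e.g., via "Show original" / "View source").
with open("message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

# Each relay prepends a Received header, so the last one in the list
# is the hop closest to the original sender.
for hop in reversed(msg.get_all("Received", [])):
    ips = re.findall(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", str(hop))
    if ips:
        print(ips[0], "<-", hop.split(";")[0][:80])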



Friday, May 06, 2022

Is Facebook really this clever? Would lawmakers notice?

https://www.wsj.com/articles/facebook-deliberately-caused-havoc-in-australia-to-influence-new-law-whistleblowers-say-11651768302?mod=djemalertNEWS

Facebook Deliberately Caused Havoc in Australia to Influence New Law, Whistleblowers Say

Last year when Facebook blocked news in Australia in response to potential legislation making platforms pay publishers for content, it also took down the pages of Australian hospitals, emergency services and charities. It publicly called the resulting chaos “inadvertent.”

Internally, the pre-emptive strike was hailed as a strategic masterstroke.

Facebook documents and testimony filed to U.S. and Australian authorities by whistleblowers allege that the social-media giant deliberately created an overly broad and sloppy process to take down pages—allowing swaths of the Australian government and health services to be caught in its web just as the country was launching Covid vaccinations.





A great summary.

https://www.gibsondunn.com/artificial-intelligence-and-automated-systems-legal-update-1q22/

Artificial Intelligence and Automated Systems Legal Update (1Q22)

While news about AI-related legal developments often remained buried beneath the more pressing news of other major world events in the first quarter of 2022, that is not to say that nothing notable occurred. Indeed, each of the three branches of the U.S. government took a number of significant steps toward developing more focused AI strategies, legislation, regulations, and principles of governance. As highlighted below in this quarter’s update, Congress, the Department of Defense, the Department of Energy, the intelligence directorates, NIST, the FTC, and the EEOC were all active players in matters relating to AI in early 2022. In addition, the EU continued to advance efforts toward a union-wide, general AI policy and regulation, which, if and when ultimately adopted, seems likely to strongly influence the ongoing U.S. debate over the need for a national approach. Meanwhile, state and local governments in the U.S. continue to fill some of the perceived gaps left by the piecemeal regulatory approach taken to date by the federal government.





Must be something interesting here. You don’t have to read them, but you should have the right to read them.

https://www.bespacific.com/everylibrary-launches-the-banned-book-store/

EveryLibrary launches the Banned Book Store

EveryLibrary is excited to launch the Banned Book Store at bannedbookstore.co as the most comprehensive list of currently banned and challenged books in the United States. Many of the book challenges come from individuals who have never read the books and who have been encouraged by national right-wing organizations to present excerpts out of context to villainize and demonize librarians while building a case for horrific legislation that allows the government to ban books that don’t agree with their current political ideologies. According to a report by PEN America, book bans have targeted 1,145 unique book titles by 874 different authors, 198 illustrators, and 9 translators, impacting the literary, scholarly, and creative work of 1,081 people altogether. Among titles in PEN’s index:

  • 467 titles (41%) included protagonists or prominent secondary characters who were people of color;

  • 247 titles (22%) directly address issues of race and racism;

  • 379 titles (33%) explicitly address LGBTQ+ themes, or have protagonists or prominent secondary characters who are LGBTQ+;

  • 184 titles (16%) are history books or biographies, and 107 (9%) have explicit or prominent themes related to rights and activism;

  • 42 children’s books were censored, including biographies of Rosa Parks, Martin Luther King Jr., Ruby Bridges, Duke Ellington, Katherine Johnson, Neil deGrasse Tyson, Cesar Chavez, Sonia Sotomayor, Nelson Mandela, and Malala Yousafzai.

Many other censorship measures in several US states could keep you from being allowed to read some books in our Nation’s libraries. These books are being banned by government organizations in libraries across the country simply because a handful of extremists disagree with the content of the books. Some of these measures could also lead to the arrest of librarians as a result of their commitment to free speech and access to library materials. Some provide monetary incentives to ban books. Yet, we know that exposure to a wide range of developmentally appropriate reading materials has significant benefits for the health, livelihood, and well-being of our nation’s children. Books help develop empathy for others. They help children imagine lives and experiences that are new to them or different from their own. In fact, a 2014 study found that children became more empathetic toward LGBTQ+ folks, immigrants, and refugees after reading Harry Potter, a story of a child who is different from his peers.





Tools & Techniques.

https://www.makeuseof.com/best-text-to-speech-firefox-add-ons/

The 5 Best Text-to-Speech Add-ons for Firefox

Whether you use text-to-speech for accessibility or convenience, it's hard to find the feature everywhere online. So, here are five Firefox add-ons.





Tools for when the worst happens.

https://www.makeuseof.com/apps-to-help-ukrainians/

5 Apps Ukrainians Are Relying on During War

With Ukraine undergoing foreign invasion, these are the vital Android and iOS apps that locals are using to stay informed and connected.



Thursday, May 05, 2022

White hat forensics.

https://www.makeuseof.com/google-dorking-how-hackers-use-it/

What Is Google Dorking and How Hackers Use It to Hack Websites

Google dorking, or Google hacking, is the technique of feeding advanced search queries into the Google search engine to hunt for sensitive data, such as usernames, passwords, and log files, exposed by websites that Google is indexing due to site misconfiguration. This data is publicly visible and, in some cases, downloadable.

A regular Google search starts from a seed keyword, sentence, or question. In Google dorking, by contrast, an attacker uses special search operators to narrow the results and direct the search engine toward very specific files or directories on the internet. In most cases, these are log files or website misconfigurations.
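
A few illustrative dork patterns make the idea concrete (the operators are standard Google syntax; the example targets are hypothetical):

  • filetype:log inurl:password (indexed log files that mention passwords)

  • intitle:"index of" backup (open directory listings that expose backup files)

  • site:example.com filetype:sql (database dumps left crawlable on a specific site)

The operators themselves (site:, filetype:, inurl:, intitle:) only narrow Google’s index; the actual vulnerability is the misconfiguration that let such files be crawled in the first place.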





Law any government might want.

https://news.softpedia.com/news/india-forces-all-vpn-providers-to-log-and-store-user-data-535322.shtml

India Forces All VPN Providers to Log and Store User Data

One of the benefits of using a VPN service to connect to the web is enhanced privacy, as such a solution basically makes it possible to stay anonymous without revealing any information about you or your device.

Of course, most people are looking for VPN services that don’t collect any data about their activity while connected to the servers, and in the last couple of years, more and more providers have been betting big on such capabilities.

India, however, has had enough of no-log VPNs, as the country has passed a new law that will require all providers not only to store information about their users but also to share it with the government when required.

Coming into effect on June 27, the new directive forces VPN services to store the data on their servers for no less than five years, as per Neowin. This applies even after a user no longer has a subscription.





How easy will it be to describe a ‘recommender algorithm’? “Visible” is not always “understandable.”

https://www.cpomagazine.com/data-protection/finalized-eu-digital-services-act-promises-transparency-in-recommender-algorithms-new-restrictions-on-targeted-advertising/

Finalized EU Digital Services Act Promises Transparency in Recommender Algorithms, New Restrictions on Targeted Advertising

EU legislators have agreed to final terms on the Digital Services Act, a new law that focuses on large social media and retail platforms. The full text has yet to be released to the public, but the European Parliament and European Commission have outlined some of its central terms; these include new restrictions on how targeted advertising can use sensitive personal information, a ban on dark patterns and a requirement that the inner workings of recommender algorithms be visible to the public.





Useful perspectives?

https://www.bespacific.com/10-lessons-from-bellingcats-logan-williams-on-digital-forensic-techniques/

10 Lessons from Bellingcat’s Logan Williams on Digital Forensic Techniques

Global Investigative Journalism Network: “Logan Williams is a data scientist on the Bellingcat investigative technology team. He spoke about digital forensic reporting labs at the 2022 International Journalism Festival in Perugia, Italy. GIJN attended the panel and caught up with Williams afterward to hear his top tips and advice for using digital forensic techniques in your reporting.”



Wednesday, May 04, 2022

Imagine these guys in your systems. How would you catch them?

https://www.csoonline.com/article/3659001/chinese-apt-group-winnti-stole-trade-secrets-in-years-long-undetected-campaign.html#tk.rss_all

Chinese APT group Winnti stole trade secrets in years-long undetected campaign

The Operation CuckooBees campaign used zero-day exploits to compromise networks and leveraged Windows' Common Log File System to avoid detection.

Security researchers have uncovered a cyberespionage campaign that has remained largely undetected since 2019 and focused on stealing trade secrets and other intellectual property from technology and manufacturing companies across the world. The campaign uses previously undocumented malware and is attributed to a Chinese state-sponsored APT group known as Winnti.

"With years to surreptitiously conduct reconnaissance and identify valuable data, it is estimated that the group managed to exfiltrate hundreds of gigabytes of information," researchers from security firm Cybereason said in a new report. "The attackers targeted intellectual property developed by the victims, including sensitive documents, blueprints, diagrams, formulas, and manufacturing-related proprietary data."





Be aware, be very aware.

https://www.cpomagazine.com/cyber-security/avoiding-data-breaches-a-guide-for-boards-and-c-suites/

Avoiding Data Breaches: A Guide for Boards and C-Suites

Litigation against corporate board members and C-level executives over data privacy and security claims is on the rise. Specifically, the number of suits stemming from data breaches and other cybersecurity incidents has increased as such breaches and incidents have become more common. Recently, plaintiffs have targeted corporate board members and C-level executives, alleging that data privacy harms resulted from a breach of fiduciary duties. For example, plaintiffs may allege that the board’s or C-suite’s breach of fiduciary duties caused or contributed to the data breach through a failure to implement an effective system of internal controls or a failure to heed cybersecurity-associated red flags. Even if a breach does not lead to litigation or enforcement action against board members or C-level executives, data breaches can tarnish a corporation’s name and lead to increased scrutiny from regulators. This year alone, the U.S. Department of Health and Human Services Office for Civil Rights has recorded over 100 breaches of unsecured electronic protected health information, or ePHI. The department noted that most cyberattacks could be prevented or substantially mitigated by implementing appropriate security measures.





Is “Fair” the right goal?

https://www.nature.com/articles/d41586-022-01202-3

To make AI fair, here’s what we must learn to do

Beginning in 2013, the Dutch government used an algorithm to wreak havoc in the lives of 25,000 parents. The software was meant to predict which people were most likely to commit childcare-benefit fraud, but the government did not wait for proof before penalizing families and demanding that they pay back years of allowances. Families were flagged on the basis of ‘risk factors’ such as having a low income or dual nationality. As a result, tens of thousands were needlessly impoverished, and more than 1,000 children were placed in foster care.

From New York City to California and the European Union, many artificial intelligence (AI) regulations are in the works. The intent is to promote equity, accountability and transparency, and to avoid tragedies similar to the Dutch childcare-benefits scandal.

But these won’t be enough to make AI equitable. There must be practical know-how on how to build AI so that it does not exacerbate social inequality. In my view, that means setting out clear ways for social scientists, affected communities and developers to work together.



(Related)

https://apnews.com/article/child-welfare-algorithm-investigation-9497ee937e0053ad4144a86c68241ef1

An algorithm that screens for child neglect raises concerns

Inside a cavernous stone fortress in downtown Pittsburgh, attorney Robin Frank defends parents at one of their lowest points – when they risk losing their children.

The job is never easy, but in the past she knew what she was up against when squaring off against child protective services in family court. Now, she worries she’s fighting something she can’t see: an opaque algorithm whose statistical calculations help social workers decide which families should be investigated in the first place.

“A lot of people don’t know that it’s even being used,” Frank said. “Families should have the right to have all of the information in their file.”

From Los Angeles to Colorado and throughout Oregon, as child welfare agencies use or consider tools similar to the one in Allegheny County, Pennsylvania, an Associated Press review has identified a number of concerns about the technology, including questions about its reliability and its potential to harden racial disparities in the child welfare system. Related issues have already torpedoed some jurisdictions’ plans to use predictive models, such as the tool notably dropped by the state of Illinois.

According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children. The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.





A different angle, same argument.

https://puck.news/the-hollywood-a-i-i-p-supernova/

The Hollywood A.I.-I.P. Supernova

Will the robots replace us all one day? Who knows, but chances are they will eventually learn how to create a superhero movie. Ergo, the start of one of the great legal debates in Hollywood history.

The A.I. Wars are almost here. No, I’m not talking about Terminator or even a crackdown on Twitter bots. Instead, we’ll soon be witnessing a series of extraordinary test cases designed to force the American legal system to reconsider the concept of authorship as artificial intelligence begins to write short stories or pop songs. It may sound like a Zuckerbergian fever dream, but A.I. could soon be creating blockbuster movies and life-saving pharmaceuticals, too—multi-billion dollar products with no human creator.

The legal battle has already begun. Sometime in the next couple of weeks, I’ve learned, a lawsuit will be filed that challenges the U.S. Copyright Office’s recent decision to deny registration to a work whose “author” was identified as “Creativity Machine.” Then, a few weeks later, a federal appeals court will hear oral arguments in Thaler v. Hirshfeld, an under-the-radar but potentially blockbuster case concerning whether A.I. can be listed as the “inventor” in a patent application. Meanwhile, authorities in the European Union and 15 other countries are being asked to make similar determinations to properly credit the achievements of A.I.

… Abbott is obsessed with our technological future. In writings including The Reasonable Robot, he outlines how we should discriminate between A.I. and human behavior under the law. For example, if businesses get taxed on the wages of their human workers but not their robot workers, he asks, do we incentivize automation? And if we hold the suppliers of autonomous driving software to a punishing tort standard (i.e. strict liability rather than negligence), will there come a time when we’re actually discouraging the adoption of technology that would prevent accidents on the road?

… This new case focusing on copyright protection for A.I.-generated work could become meaningful for the creative industry as studios and filmmakers explore A.I.’s potential. In recent years, for example, Warner Bros. has used A.I. to guide its decision-making about what projects to pursue. In Japan, a new film about a boy’s dislike of tomatoes, based on a script by A.I., is now hitting the festival circuit. There’s now an A.I. tool out there that, sensing the tone of any video, recomposes music for a score. Sony, in fact, has tried to use A.I. to make new music that sounds like The Beatles, and Spotify is experimenting too. And as anyone who has seen the deepfake “Tom Cruise” knows, A.I. can do a pretty good job of replicating actors (something that’s of increasing concern to actor unions). Put it all together, and we’ll likely soon be seeing A.I. act as the auteur on a major motion picture. And not just for movies either. A.I. is increasingly involved in video game development, too.



Tuesday, May 03, 2022

Good illustration of open source intelligence.

https://breakingdefense.com/2022/05/to-maximize-ukraine-coverage-blacksky-shifted-orbits-for-its-newest-satellites/

To maximize Ukraine coverage, BlackSky shifted orbits for its newest satellites

"We're talking to a lot of different organizations in the government — Space Force, the Army, the Air Force — many that are looking at how you could leverage commercial technologies and tactical ISR," BlackSky CEO Brian O'Toole told Breaking Defense.

When Russia invaded Ukraine in February, remote sensing firm BlackSky made a “business decision” to change the planned orbits of its two newest satellites to better keep tabs on the war — even though they were scheduled to blast off just about a month later, CEO Brian O’Toole told Breaking Defense.





The need (and cost?) for adequate security just went up. Taking away options will do that.

https://www.databreaches.net/north-carolina-becomes-first-state-to-prohibit-public-entities-from-paying-ransoms/

North Carolina Becomes First State to Prohibit Public Entities from Paying Ransoms

Hunton Andrews Kurth writes:

On April 5, 2022, North Carolina became the first state in the U.S. to prohibit state agencies and local government entities from paying a ransom following a ransomware attack.
North Carolina’s new law, which was passed as part of the state’s 2021-2022 budget appropriations, prohibits government entities from paying a ransom to an attacker who has encrypted their IT systems and subsequently offers to decrypt that data in exchange for payment. The law prohibits government entities from even communicating with the attacker, instead directing them to report the ransomware attack to the North Carolina Department of Information Technology in accordance with G.S. 143B-1379.

Read more at Privacy & Information Security Law Blog.





A most interesting question. If you tell me the AI created the content, and the AI can’t copyright it, I can use it all I want for free, right?

https://thenextweb.com/news/is-ethical-use-ai-generated-content-without-crediting-the-machine

Is it ethical to use AI-generated content without crediting the machine?





Tools & Techniques.

https://www.makeuseof.com/how-to-find-property-lines-with-landglide-app-android-ios/

Skip the Surveyor and Find Your Property Lines With LandGlide

The LandGlide app is designed to help people find their property lines using their smartphones. Thanks to advanced parcel data and the GPS on your device, LandGlide features up-to-date data that can show you exactly where your property lines are.
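
Under the hood, this kind of app reduces to a point-in-polygon test: your GPS fix checked against a parcel-boundary polygon. A minimal sketch of the standard ray-casting algorithm in Python (the parcel coordinates are made up for illustration; a real app would use county parcel data):

def point_in_polygon(lat, lon, polygon):
    # Ray casting: count how many polygon edges a ray from the point
    # crosses; an odd count means the point is inside.
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        if (lat1 > lat) != (lat2 > lat):  # edge spans the point's latitude
            crossing = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < crossing:
                inside = not inside
    return inside

# Hypothetical rectangular parcel and a GPS fix inside it.
parcel = [(39.700, -104.990), (39.700, -104.980),
          (39.710, -104.980), (39.710, -104.990)]
print(point_in_polygon(39.705, -104.985, parcel))  # True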





Resource.

https://news.slashdot.org/story/22/05/02/2137230/google-makes-100000-worth-of-tech-training-free-to-every-us-business

Google Makes $100,000 Worth of Tech Training Free To Every US Business

Alphabet’s Google will provide any U.S. business with over $100,000 worth of online courses in data analytics, design and other tech skills for its workers, free of charge, the search company said on Monday. Reuters reports:

The offer marks a big expansion of Google's Career Certificates, a program the company launched in 2018 to help people globally boost their resumes by learning new tools at their own pace. Over 70,000 people in the United States and 205,000 globally have earned at least one certificate, and 75% receive a benefit such as a new job or higher pay within six months, according to Google.
The courses, designed by Google and sold through online education service Coursera, each typically cost students about $39 a month and take three to six months to finish. Google will now cover costs for up to 500 workers at any U.S. business, and it valued the grants at $100,000 because people usually take up to six months to finish. Lisa Gevelber, founder of Grow with Google, the company unit overseeing certificates, said course completion rates are higher when people pay out of pocket but that the new offer was still worthwhile if it could help some businesses gain digital savvy. Certificates also are available in IT support, project management, e-commerce and digital marketing. They cover popular software in each of the fields, including Google advertising services.
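
The valuation is roughly consistent with the listed prices: 500 workers × $39 per month × up to six months of study comes to about $117,000, which Google appears to round down to the $100,000 figure.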



Monday, May 02, 2022

Inevitable, but not as impactful as I suspect a full-on ‘state sponsored’ attack would be.

https://www.databreaches.net/hacking-russia-was-off-limits-the-ukraine-war-made-it-a-free-for-all/

Hacking Russia was off-limits. The Ukraine war made it a free-for-all.

Joseph Menn reports:

… the third month of war finds Russia, not the United States, struggling under an unprecedented hacking wave that entwines government activity, political voluntarism and criminal action.

Digital assailants have plundered the country’s personal financial data, defaced websites and handed decades of government emails to anti-secrecy activists abroad. One recent survey showed more passwords and other sensitive data from Russia were dumped onto the open Web in March than information from any other country.

Read more at WaPo.



(Related) Perhaps both sides are showing restraint. These are not abnormally aggressive attacks either.

https://thehackernews.com/2022/05/russian-hackers-targeting-diplomatic.html

Russian Hackers Targeting Diplomatic Entities in Europe, Americas, and Asia

A Russian state-sponsored threat actor has been observed targeting diplomatic and government entities as part of a series of phishing campaigns commencing on January 17, 2022.

Threat intelligence and incident response firm Mandiant attributed the attacks to a hacking group tracked as APT29 (aka Cozy Bear), with some set of the activities associated with the crew assigned the moniker Nobelium (aka UNC2452/2652).

"This latest wave of spear phishing showcases APT29's enduring interests in obtaining diplomatic and foreign policy information from governments around the world," the Mandiant said in a report published last week.





Always a difficult problem. Some good suggestions…

https://www.csoonline.com/article/3658118/cybersecurity-metrics-corporate-boards-want-to-see.html#tk.rss_all

Cybersecurity metrics corporate boards want to see

It may be helpful to set a baseline of what board members really want to know about cybersecurity in any company. Here are their top five questions:

1. Are we secure? This question is the bane of many a cybersecurity pro’s existence because the answer now and always will be “no” from a literal 100% protection standpoint. If we rework the question to “what is our exposure level?” we can start to make headway (see the sketch after this list).
2. Are we compliant? This question is often easily answered with audit results but may provide no real comfort due to its “point-in-time” perspective that can change at a moment’s notice. Better to assess our cybersecurity program using a control framework.
3. Have we had any (significant) incidents? Board members will be well-aware of any significant incidents, so this question is usually answered with details as well as estimates regarding costs and potential liability.

I said there are five questions, but the three above are the ones that are typically articulated. These final two are implied as a standard element of good board management:

4. How effective is our security program? Quality first.
5. How efficient is our security program? And then quantity.
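
For question 1, “exposure level” is easier to brief as a few concrete ratios than as a yes/no answer. A hypothetical sketch in Python (the metric names, numbers, and thresholds are my own illustration, not from the article; real inputs would come from asset inventory and vulnerability scans):

# Illustrative board-level exposure metrics.
assets_total = 1200
assets_scanned = 1150
critical_vulns_open = 18
critical_vulns_past_sla = 4  # unpatched beyond the agreed remediation window

coverage = assets_scanned / assets_total
sla_compliance = 1 - critical_vulns_past_sla / max(critical_vulns_open, 1)

print(f"Scan coverage:      {coverage:.0%}")        # 96%
print(f"Critical-vuln SLA:  {sla_compliance:.0%}")  # 78%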





Looks like my face has some value. I wonder if I should get a © tattoo?

https://finance.yahoo.com/news/global-facial-recognition-market-forecast-095300708.html

Global Facial Recognition Market Forecast Report 2021-2028: 3D Face Recognition Systems Gaining Traction & Adoption of Cloud-Based Facial Recognition Technology

The Facial Recognition market is projected to reach US$ 12,670.22 million by 2028 from US$ 5,012.71 million in 2021; it is expected to grow at a CAGR of 14.2% from 2021 to 2028.
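
The arithmetic is internally consistent: 14.2% annual growth over the seven years from 2021 to 2028 multiplies the base by 1.142^7 ≈ 2.53, and $5,012.71 million × 2.53 ≈ $12,700 million, in line with the projected figure.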

The use of facial recognition in law enforcement and non-law enforcement applications is predicted to increase rapidly during the forecast period. Furthermore, facial recognition is often preferred over other biometric technologies, such as voice recognition, skin texture recognition, iris identification, and fingerprint scanning, due to its contactless procedure and easy deployment.





To be or not to be, AI asks the question.

https://www.csoonline.com/article/3658831/firms-struggling-with-non-person-identities-in-the-cloud.html#tk.rss_all

Firms struggling with non-person identities in the cloud

The explosion of non-human identities in public cloud deployments has decision makers turning to new identity and access management tools to keep their environments secure, according to a new study performed by Forrester Consulting for Sonrai Security and Amazon Web Services (AWS).

The study released Thursday found that more than half the 154 North American IT and security decision makers surveyed for the report acknowledged that they were struggling with machine and other non-person identities running rampant in the cloud.

"When you secure stuff in the traditional data center model, you form networks, which form the perimeter for the model," Sonrai CISO Eric Kedrosky tells CSO. "In the cloud, those networks disappear, and identities become central to securing the cloud."

"What a lot of organizations that have moved to the cloud are finding is they're thinking a lot about those person identities but they're not thinking about those non-person identities, which are magnitudes greater than person identities," Kedrosky continues. "It's a real blind spot for organizations. They are blind to the risks that identities pose to their cloud."



Sunday, May 01, 2022

If autonomous weapons are outlawed, only outlaws (and AI) will have autonomous weapons.

https://researchers.cdu.edu.au/en/publications/weaponized-artificial-intelligence-ai-and-the-laws-of-armed-confl

Weaponized Artificial Intelligence (AI) and the Laws of Armed Conflict (LOAC) - the RAILE Project

Today much has already been written about Artificial Intelligence (AI), robotics and autonomous systems, in particular the increasingly prevalent autonomous vehicles, i.e. cars, trucks, trains and, to a lesser extent, aeroplanes. This article looks at an emerging technology that has a fundamental impact on our society: the use of artificial intelligence (AI) in lethal autonomous weapon systems (LAWS), or weaponized AI, as used by the armed forces. It specifically approaches the question of how laws and policy for this specific form of emerging technology, the military application of autonomous weapon systems (AWS), could be developed. The article focuses on how potential solutions may be found rather than on the well-established issues. Currently, there are three main streams in the debate around how to deal with LAWS: the ‘total ban’, the ‘wait and see’ and the ‘pre-emptive legislation’ paths. The recent increase in the development of LAWS has led Human Rights Watch (HRW) to take a strong stance against ‘killer robots’, promoting a total ban. This causes legal issues already at the first stage, the definition of autonomous weapons, which is inconsistent but often refers to the HRW three-step listing: human-in/on/out-of the loop. However, the fact remains that LAWS already exist and continue to be developed, which raises the question of how to deal with them. From a civilian perspective, the initial legal issues have focused on liability in relation to accidents. On the military side, international legislation has been, and still is, striving through a series of treaties between states to regulate the behaviour of troops on the fields of armed conflict. These treaties, at times referred to as the Laws of Armed Conflict (LOAC) and at times as International Humanitarian Law (IHL), share four fundamental core principles: distinction, proportionality, humanity and military necessity. With LAWS an unavoidable fact in today’s fields of armed conflict, and rules governing troop behaviour existing in the form of international treaties, what is the next step? This article presents a short description of each stream of the debate, drawing on relevant literature and a selection of arguments raised by prominent authors in the field of AWS and international law. The question for this article is: how do we achieve AWS/AI programming that adheres to the LOAC/IHL’s intended core principles of distinction, proportionality, humanity and military necessity?



(Related)

https://www.mdpi.com/2078-2489/13/5/215/htm

Editorial for the Special Issue on Meaningful Human Control and Autonomous Weapons Systems

Global discussions on the legality and ethics of using artificial intelligence (AI) technology in warfare, particularly the use of autonomous weapons systems (AWS), continue to be hotly debated. Despite the push for a ban on these types of systems, unanimous agreement remains out of reach. Much of the discord comes from the lack of a common understanding of fundamental notions of what it means for these types of systems to be autonomous. Similarly, there is dispute as to what it means, if it is at all possible, for humans to have meaningful control over these systems.





Liability is liable to change in law?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4080883

Bridging the liability gaps: why AI challenges the existing rules on liability and how to design human-empowering solutions

This chapter explores the so-called ‘liability gaps’ that occur when, in applying existing contractual, extracontractual, or strict liability rules to harms caused by AI, the inherent characteristics of AI may result in unsatisfying outcomes, in particular for the damaged party. The chapter explains the liability gaps, investigating which features of AI challenge the application of traditional legal solutions and why. Subsequently, this chapter explores the challenges connected to the different possible solutions, including contract law, extracontractual law, product liability, mandatory insurance, company law, and the idea of granting legal personhood to AI and robots. The analysis is carried out using hypothetical scenarios, to highlight both the abstract and practical implications of AI, based on the roles and interactions of the various parties involved. As a conclusion, this chapter offers an overview of the fundamental principles and guidelines that should be followed to elaborate a comprehensive and effective strategy to bridge the liability gaps. The argument made is that the guiding principle in designing legal solutions to the liability gaps must be the protection of individuals, particularly their dignity, rights and interests.





Automating lawyers… (Will all AI lawyers be called ‘Sue?’)

https://cadmus.eui.eu/handle/1814/74443

Data protection and judicial automation

The words "judicial automation" invoke a broad range of images, ranging from time-saving tools to decision-aiding tools or even quixotic ideas of robot judges. As the development of artificial intelligence technologies expands the range of possible automation, it also raises questions about the extent to which automation is admissible in judicial contexts and the safeguards required for the safe use of AI in judicial contexts. This chapter argues that these applications raise specific challenges for data protection law, as the use of personal data for judicial automation requires the adoption of safeguards against risks to the right to a fair trial. The chapter discusses current and proposed uses of judicial automation, identifying how they use personal data in their operation and the issues that arise from this use, such as algorithmic biases and system opacity. By connecting these issues to the safeguards required for automated decision-making and data protection by design, the chapter shows how data protection law may contribute to a fair trial in contexts of judicial automation and highlights open research questions in the interface between procedural rights and data protection.





A topic that may yet lead to AI personhood…

https://scholarship.law.uc.edu/cgi/viewcontent.cgi?article=1043&context=ipclj

The Patentability of Inventions with Artificial Intelligence Listed as an Inventor Following Thaler v. Hirshfeld

Computers have become an integral part of daily life for a plethora of individuals in the United States and around the world. For many, computers create ease and improve quality of life as they provide a variety of different functions. From allowing individuals to communicate with one another across the globe, to providing a medium for individuals to work and learn, to many things never previously imaginable, computers have completely transformed many aspects of daily life. Following the development of computers, many inventors and developers have been consistently looking for methods to make them faster, better, smarter, and able to solve problems. This drive eventually led to the creation of Artificial Intelligence (“AI”). AI essentially utilizes computers and machines in order to mimic the problem-solving and decision-making capabilities that are present within the human mind.

AI can serve a wide range of purposes and applications, encompassing everything from various types of speech recognition to customer service, among many others. Some individuals are even harnessing the power of AI to help create new inventions, solve problems, and innovate new methods of improving society. For example, AI has been used to detect defects in pharmaceutical products, to develop new compositions for green technology products, and to analyze biological samples in the manufacturing process, along with many other applications. As a result, when inventors seek intellectual property protection for their new inventions, specifically patent protection, some choose to list the AI as the inventor when filing patent applications.





...because your car will become a risk.

https://ieeexplore.ieee.org/abstract/document/9762777

Security and Privacy Issues in Autonomous Vehicles: A Layer-Based Survey

Artificial Intelligence (AI) is changing every technology we are used to dealing with. Autonomy has long been a sought-after goal in vehicles, and now more than ever we are very close to that goal. Big auto manufacturers are likewise investing billions of dollars to produce Autonomous Vehicles (AVs). This new technology has the potential to provide more safety for passengers, less crowded roads, congestion alleviation, optimized traffic, fuel savings and less pollution, as well as an enhanced travel experience, among other benefits. But this new paradigm shift comes with newly introduced privacy issues and security concerns. Vehicles were once dumb mechanical devices; now they are becoming smart, computerized, and connected. They collect huge troves of information, which need to be protected from breaches. In this work, we investigate security challenges and privacy concerns in AVs. We examine different attacks in a layer-based approach, conceptualizing the architecture of AVs as a four-layered model. We then survey security and privacy attacks and some of the most promising countermeasures to tackle them. Our goal is to shed light on the open research challenges in the area of AVs as well as offer directions for future research.