Saturday, November 05, 2022

Increase the value of your large data collection by suing the people who use it?

https://www.vice.com/en/article/bvm3k5/github-users-file-a-class-action-lawsuit-against-microsoft-for-training-an-ai-tool-with-their-code

GitHub Users File a Class-Action Lawsuit Against Microsoft for Training an AI Tool With Their Code

This lawsuit represents a growing concern from programmers, artists, and other people that AI systems may be using their code, artwork, and other data without permission.

GitHub programmers have filed a class-action lawsuit against GitHub, its parent Microsoft, and its technology partner, OpenAI, for allegedly violating their open-source licenses and using their code to train Microsoft’s latest AI tool, called Copilot.



(Related)

https://www.vice.com/en/article/3ad58k/ai-is-probably-using-your-images-and-its-not-easy-to-opt-out

AI Is Probably Using Your Images and It's Not Easy to Opt Out

Viral image-generating AI tools like DALL-E and Stable Diffusion are powered by massive datasets of images that are scraped from the internet, and if one of those images is of you, there’s no easy way to opt out, even if you never explicitly agreed to have it posted online.





I find this difficult to believe.

https://www.cpomagazine.com/data-privacy/study-shows-privacy-awareness-is-the-new-normal-for-consumers-online-behavior-is-much-more-guarded/

Study Shows Privacy Awareness Is the “New Normal” for Consumers, Online Behavior Is Much More Guarded

There had already been growing privacy awareness among consumers for some time, but the Cambridge Analytica scandal of 2018 seemed to be the incident that really pushed data handling practices into the mainstream. A mounting body of evidence has suggested that the average person is now very aware of how companies track their online behavior, and a new report from DataGrail indicates that they will cease doing business with companies and services that they do not trust.

The consumer sentiment survey was conducted in July and included about 4,000 respondents divided between the United States and several countries in Europe. The results are compared to a similar survey conducted in 2020, and both concerns about personal privacy and the actions people are willing to take to protect it are on the rise. Online behavior is also increasingly trust-driven, with consumers opting out of sharing their data with companies that are not transparent about how it is handled or that have been busted in the past for mishandling it.





As we identify “state-backed hackers” do they automatically void any insurance when they commit a “hostile or warlike action in time of peace?” (See yesterday’s blog)

https://therecord.media/microsoft-accuses-china-of-abusing-vulnerability-disclosure-requirements/

Microsoft accuses China of abusing vulnerability disclosure requirements

Microsoft on Friday accused state-backed hackers in China of abusing the country’s vulnerability disclosure requirements in an effort to discover and develop zero-day exploits.



Friday, November 04, 2022

I think this could be important as more ‘hackers’ are identified as agents of governments.

https://www.csoonline.com/article/3678970/mondelez-and-zurich-s-notpetya-cyber-attack-insurance-settlement-leaves-behind-no-legal-precedent.html#tk.rss_all

Mondelez and Zurich’s NotPetya cyber-attack insurance settlement leaves behind no legal precedent

Mondelez International and Zurich American Insurance settled a keenly watched lawsuit over how cyberattack insurance applies to intrusions from nation states during wartime. A private agreement, its resolution sheds no light on how the issue will play out.

Multinational food and beverage company Mondelez International and Zurich American Insurance have settled their multiyear litigation surrounding the cyberattack coverage – or lack of such coverage – following the NotPetya malware attack that damaged the Mondelez network and infrastructure. The specifics of the settlement are unknown, but that it would come mid-trial has caught everyone’s attention.

The pain was felt on June 27, 2017, when NotPetya wiped out 24,000 laptops and 1,700 servers within the Mondelez network. The malware, designed to destroy, did just that. Mondelez estimated damages would approach $100 million USD.





Control of citizens.

https://www.wired.com/story/algorithms-quietly-run-the-city-of-dc-and-maybe-your-hometown/

Algorithms Quietly Run the City of DC—and Maybe Your Hometown

Washington, DC, is the home base of the most powerful government on earth. It’s also home to 690,000 people—and 29 obscure algorithms that shape their lives. City agencies use automation to screen housing applicants, predict criminal recidivism, identify food assistance fraud, determine if a high schooler is likely to drop out, inform sentencing decisions for young people, and many other things.

That snapshot of semiautomated urban life comes from a new report from the Electronic Privacy Information Center (EPIC). The nonprofit spent 14 months investigating the city’s use of algorithms and found they were used across 20 agencies, with more than a third deployed in policing or criminal justice.





Control of employees.

https://www.pogowasright.org/nlrb-general-counsel-memo-on-electronic-monitoring-of-employees/

NLRB General Counsel Memo on Electronic Monitoring of Employees

Jonathan J. Spitz, Richard F. Vitarelli, Joseph J. Lazzarotti, and Chad P. Richter of JacksonLewis write:

Responding in part to the nature of the post-COVID-19 remote workplace, NLRB GC Jennifer Abruzzo has released a memo on employers’ use of electronic monitoring and automated management in the workplace. The memo also directs NLRB Regions to submit to the Division of Advice any cases involving intrusive or abusive electronic surveillance and algorithmic management that interferes with the exercise of NLRA Section 7 rights.
Read the full article on Jackson Lewis’ Labor & Collective Bargaining.





Control of fans.

https://www.wired.com/story/soccer-world-cup-biometric-surveillance/

Soccer Fans, You’re Being Watched

Stadiums around the world, including at the 2022 World Cup in Qatar, are subjecting spectators to invasive biometric surveillance tech.





A response to Elon Musk as Head Twit.

https://www.makeuseof.com/tag/twitter-alternative-social-networks/

12 Twitter Alternatives Worth Considering



(Related)

https://www.technologyreview.com/2022/11/03/1062752/twitter-may-have-lost-more-than-a-million-users-since-elon-musk-took-over

Twitter may have lost more than a million users since Elon Musk took over



Thursday, November 03, 2022

Will this result in rules at some point?

https://www.vice.com/en/article/5d3qk3/nlrbs-top-lawyer-wants-to-crack-down-on-electronic-surveillance-in-the-workplace

NLRB’s Top Lawyer Wants to Crack Down on Electronic Surveillance in the Workplace

On Monday, the National Labor Relations Board’s top lawyer issued a new memo announcing she would be pushing for the organization to crack down on electronic surveillance and automated management practices that interfere with workers' labor rights.

NLRB General Counsel Jennifer Abruzzo said that she was motivated by a concern that employers could deploy workplace technologies to disrupt union organizing or other federally-protected activities.





Resource.

https://www.unesco.org/en/articles/ai-decoded-new-online-course-seeks-demystify-artificial-intelligence-all

AI Decoded: New online course seeks to demystify Artificial Intelligence for all

Destination AI has been designed with precisely this risk-benefit and case-based approach in mind, running across 12 modules which address aspects such as AI’s risks, benefits, and societal impact, as well as the steps involved in an AI-based project and the fundamentals of machine learning.

Destination AI: Introduction to Artificial Intelligence https://openclassrooms.com/en/courses/7078811-destination-ai-introduction-to-artificial-intelligence/7169791-identify-common-applications-of-artificial-intelligence



Wednesday, November 02, 2022

A log of the decision points encountered to reach a conclusion would help a lot, but that increases the complexity and storage requirements.

https://www.vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works

Scientists Increasingly Can’t Explain How AI Works

… The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNNs)—made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains—often seem to mirror not just human intelligence but also human inexplicability.





Clearing things up or not? (I am prepared to register works created by my AI without mentioning its involvement.)

https://ipwatchdog.com/2022/11/01/us-copyright-office-backtracks-registration-partially-ai-generated-work/id=152451/

U.S. Copyright Office Backtracks on Registration of Partially AI-Generated Work

This action from the USCO may serve as an early warning that anyone filing works that contain any portions generated by artificial intelligence must disclose such portions and be prepared to support their registration and prove a degree of human authorship. At this time, it’s unclear exactly what amount of human authorship the USCO is seeking for these types of registration applications. Earlier this year, the USCO upheld the refusal to register a work purportedly generated entirely by machine. However, aside from the still unsettled case, the line on the creation-generation spectrum at which the USCO is making its determination is unclear.





A vast collection of articles and resources.

https://www.bespacific.com/2023-privacy-guide/

2023 Privacy Guide

Via LLRX 2023 Privacy Guide The fundamental concept of privacy has changed dramatically as more individuals have shifted most of their data to online platforms. There are however a wide range of personal, professional, corporate and legal issues that present significant barriers to the goal of maintaining privacy on the internet. Online privacy is not a right or even a choice when you use email, browsers and search engines, social media, ecommerce sites, online subscriptions…the list goes on and on. Trying to achieve even a modicum of online privacy now involves the use of multiple applications and services, specific software and hardware, time, due diligence, and flexibility – as the challenges continue to evolve. This pathfinder by Marcus P. Zillman will assist in your efforts to secure additional privacy when using email, conducting research, while on social media, completing online learning programs, transferring health records, shopping online, and with many other online services and systems with which you interact daily. Even if you only choose to start using several applications or services that Zillman has referenced, this will establish a foundation on which you can build and execute a more effective privacy and security plan. Think about starting with choosing a new browser, search engine and email provider, and move forward from there. This is a journey, and it will take time, but it is worth the effort.





Privacy check!

https://www.bespacific.com/facebook-probably-has-your-phone-number-even-if-you-never-shared-it/

Facebook probably has your phone number, even if you never shared it.

Business Insider – Now it has a secret tool to let you delete it. “Facebook’s parent firm Meta has quietly rolled out a new service that lets people check whether the firm holds their contact information, such as their phone number or email address, and delete and block it. The tool has been available since May 2022, Insider understands, although Meta does not seem to have said anything publicly about it. A tipster pointed us to the tool, which is well-hidden and apparently only available via a link that is embedded 780 words into a fairly obscure page in Facebook’s help section for non-users. The linked text gives no indication that it’s sending you to a privacy tool, and simply reads: “Click here if you have a question about the rights you may have.”…



Tuesday, November 01, 2022

What would access to your network be worth?

https://www.databreaches.net/hackers-selling-access-to-576-corporate-networks-for-4-million/

Hackers selling access to 576 corporate networks for $4 million

Bill Toulas reports:

A new report shows that hackers are selling access to 576 corporate networks worldwide for a total cumulative sales price of $4,000,000, fueling attacks on the enterprise.
The research comes from Israeli cyber-intelligence firm KELA which published its Q3 2022 ransomware report, reflecting stable activity in the sector of initial access sales but a steep rise in the value of the offerings.

Read more at BleepingComputer.





AI ain’t easy.

https://www.theregister.com/2022/10/31/deloitte_ai_enterprise/

Enterprises are rolling out more AI – to 'middling results'

The 5th Edition of Deloitte's State of AI in the Enterprise report is based on a survey of 2,620 business leaders from organizations around the globe, all of whom are responsible for AI technology spending or managing its implementation.

According to the authors, the AI race (if such a thing ever existed) is no longer about adopting AI or automating processes for efficiency, but has now moved on to realizing value, driving outcomes, and unleashing the potential of AI to drive new opportunities.

However, the topline findings are that many organizations are struggling with "middling results" despite increased deployment activity since the last edition of the report.

According to Deloitte, 79 percent of respondents claimed to have achieved full-scale deployment of three or more types of AI applications, up from 62 percent last year. But also up was the percentage of those rating their organizations as "underachievers" – 22 percent in this report compared with 17 percent last time.



Monday, October 31, 2022

I wonder if something like this could be made to work if the rules and basis for removal were available for public review?

https://www.theregister.com/2022/10/30/asia_in_brief/

Indian government creates body with power to order social media content takedowns

India's government has given itself the power to compel social networks to take down content.

Amendments to the nation's Information Technology Rules gazetted [PDF] last Friday allow the creation of Grievance Appellate Committees (GACs) that citizens can petition if social networks and other online services don't act on their takedown requests.

India's minister of state for electronics and information technology, Rajeev Chandrasekhar, said the GACs are needed because India's previous attempt at regulating social media – requiring the networks to appoint a grievance officer – has not delivered.

But India's Internet Freedom Foundation characterized the Committees as "a government censorship body for social media that will make bureaucrats arbiters of our online free speech."

"Given that the GAC would hear appeals against the decisions of social media platforms to remove content or not, it will incentivize platforms to remove/suppress/label any speech unpalatable to the government, or those exerting political pressure," the Foundation argued.



(Related) Which approach is likely to work?

https://theintercept.com/2022/10/31/social-media-disinformation-dhs/

TRUTH COPS

Leaked Documents Outline DHS’s Plans to Police Disinformation

The Department of Homeland Security is quietly broadening its efforts to curb speech it considers dangerous, an investigation by The Intercept has found. Years of internal DHS memos, emails, and documents — obtained via leaks, Freedom of Information Act requests, and an ongoing lawsuit, as well as public reports — illustrate an expansive effort by the agency to influence tech platforms.

The work, much of which remains unknown to the American public, came into clearer view earlier this year when DHS announced a new “Disinformation Governance Board”: a panel designed to police misinformation (false information spread unintentionally), disinformation (false information spread intentionally), and malinformation (factual information shared, typically out of context, with harmful intent) that allegedly threatens U.S. interests. While the board was widely ridiculed, immediately scaled back, and then shut down within a few months, other initiatives are underway as DHS pivots to monitoring social media now that its original mandate — the war on terror — has been wound down.





A reaction to Elon Musk?

https://www.bespacific.com/how-to-download-a-backup-copy-of-your-twitter-data/

How to download a backup copy of your Twitter data (or deactivate your account)

Ars Technica: “Big changes are underway at Twitter as we speak—including new leadership—and some people are nervous about what the future might bring for the social network. Things may end up completely fine, but even in tranquil times, it’s good to know how to get a copy of your Twitter data for local safekeeping—or to deactivate your Twitter account if you choose. This puts control of your data in your hands. Before we start, it’s important to know that the process of getting a copy of your Twitter data can take 24 hours or more. Twitter does this both for safety reasons and ostensibly to give its servers time to gather the detailed data it will send you. Also, you’ll need an email address or mobile phone number registered to your Twitter account so the site can send you a special confirmation code to complete the process. Once you have the data, you’ll get a local copy of all of your tweets that you can store indefinitely without needing to log in to Twitter…”

See also Ars Technica: “Report: Musk names himself Twitter CEO and intends to reverse Trump ban. Musk previously called the Trump ban ‘morally wrong and flat-out stupid.’”





Background.

https://www.unite.ai/what-is-computational-thinking/

What is Computational Thinking?

Computational thinking, often referred to as CT, is a problem-solving technique that computer programmers use when writing computer programs and algorithms. In the case of programmers, they break down complex problems into more bite-sized pieces, which makes it easier to fully understand them and develop solutions that work for both computers and humans.

When looking at computational thinking, there are four key techniques that should be understood:

  • Decomposition: breaking down complex problems into smaller, more manageable pieces.

  • Pattern Recognition: identifying similarities among and within problems.

  • Abstraction: focusing on important information while leaving out irrelevant details.

  • Algorithms: developing a step-by-step solution or certain rules that should be followed to solve the problem.

Each one of these techniques is just as important as the next. If you’re missing one, then the entire system is likely to collapse.
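The four techniques above can be sketched in a short Python example. The problem and function names here are hypothetical, chosen purely for illustration — a toy "find the most common word" task, decomposed into small steps:

```python
import re
from collections import Counter

# Decomposition: the problem is broken into small, single-purpose functions.

def normalize(text):
    # Abstraction: discard irrelevant detail (letter case, punctuation).
    return re.sub(r"[^a-z\s]", "", text.lower())

def tokenize(text):
    # Pattern recognition: every word is treated the same way,
    # so one rule covers all of them.
    return normalize(text).split()

def most_common_word(text):
    # Algorithm: a step-by-step rule that yields the solution —
    # count each word, then return the one with the highest count.
    counts = Counter(tokenize(text))
    return counts.most_common(1)[0][0]

print(most_common_word("The cat saw the other cat. The cats ran."))  # → the
```

Each function maps to one technique; remove any of them and the pipeline no longer solves the problem cleanly, which is the article's point about the techniques depending on one another.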





The true ethical dilemma.

https://dilbert.com/strip/2022-10-31



Sunday, October 30, 2022

I would probably argue the opposite.

https://ruor.uottawa.ca/handle/10393/44188

Artificial Intelligence & the Machine-ation of the Rule of Law

In this dissertation, I argue that the Rule of Law is made vulnerable by technological innovations in artificial intelligence (AI) and machine learning (ML) that take power previously delegated to legal decision-makers and put it in the hands of machines. I assert that we need to interrogate the potential impacts of AI and ML in law: without careful scrutiny, AI and ML's wide-ranging impacts might erode certain fundamental ideals. Our constitutional democratic framework is dependent upon the Rule of Law: upon a contiguous narrative thread linking past legal decisions to our future lives. Yet, incursions by AI and ML into legal process – including algorithms and automation; profiling and prediction – threaten longstanding legal precepts in state law and constraints against abuses of power by private actors. The spectre of AI over the Rule of Law is most apparent in proposals for "self-driving laws," or the idea that we might someday soon regulate society entirely by machine. Some academics have posited an approaching "legal singularity," in which the entire corpus of legal knowledge would be viewed as a complete data set, thereby rendering uncertainty obsolete. Such "regulation by machine" advocates would then employ ML approaches on this legal data set to refine and improve the law. In my view, such proposals miss an important point by assuming machines can necessarily outperform humans, without first questioning what such performance entails and whether machines can be meaningfully said to participate in the normative and narrative activities of interpreting and applying the law. Combining insights from three distinct areas of inquiry – legal theory, law as narrative scholarship, and technology law – I develop a taxonomy for analysing Rule of Law problems. This taxonomy is then applied to three different technological approaches powered by AI/ML systems: sentencing software, facial recognition technology, and natural language processing. 
Ultimately, I seek the first steps towards developing a robust normative framework to prevent a dangerous disruption to the Rule of Law.





An extreme example on many levels.

https://www.pogowasright.org/warrantless-drone-surveillance-lawsuit-appealed-to-michigan-supreme-court/

Warrantless Drone Surveillance Lawsuit Appealed to Michigan Supreme Court

Matt Powers reports:

Can the government pilot a low-flying drone over your property without a warrant and then use the evidence against you in court? That’s the question at the heart of an application for appeal filed with the Michigan Supreme Court today on behalf of Todd and Heather Maxon. For two years, the government flew a sophisticated drone over Todd and Heather Maxons’ property to take detailed photographs and videos, all without ever seeking a warrant. Now, the Maxons, represented by the Institute for Justice (IJ), are asking the Michigan Supreme Court to hold that the government violated their Fourth Amendment rights and can’t use its illegally obtained photos and videos to punish them in court.

Read more at Institute for Justice.





Ethics are ethics, no matter where applied…

https://link.springer.com/article/10.1007/s13347-022-00591-7

To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constantly reinventing the wheel, and (c) there is danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.



(Related) Is this article in agreement or not?

https://www.researchgate.net/profile/Mattis-Jacobs-2/publication/364666895_Reexamining_computer_ethics_in_light_of_AI_systems_and_AI_regulation/links/63568ebc96e83c26eb4ccc44/Reexamining-computer-ethics-in-light-of-AI-systems-and-AI-regulation.pdf

Reexamining computer ethics in light of AI systems and AI regulation

This article argues that the emergence of AI systems and AI regulation showcases developments that have significant implications for computer ethics and make it necessary to reexamine some key assumptions of the discipline. Focusing on design- and policy-oriented computer ethics, the article investigates new challenges and opportunities that occur in this context. The main challenges concern how an AI system’s technical, social, political, and economic features can hinder a successful application of computer ethics. Yet, the article demonstrates that features of AI systems that potentially interfere with successfully applying some approaches to computer ethics are (often) only contingent, and that computer ethics can influence them. Furthermore, it shows how computer ethics can make use of how power manifests in an AI system’s technical, social, political, and economic features to achieve its goals. Lastly, the article outlines new interdependencies between policy- and design-oriented computer ethics, manifesting as either conflicts or synergies.





Inevitable.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4252778

All Rise for the Honourable Robot Judge? Using Artificial Intelligence to Regulate AI

There is a rich literature on the challenges that AI poses to the legal order. But to what extent might such systems also offer part of the solution? China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition, or if it must remain a site of contestation, of politics, and inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.





My take: The tools exist, who else will teach students how to use them? This seems similar to saying that math students should never use calculators.

https://www.theinformation.com/articles/students-are-using-ai-text-generators-to-write-papers-are-they-cheating

Students Are Using AI Text Generators to Write Papers—Are They Cheating?