Friday, October 14, 2022

Perspective.

https://www.bbc.com/news/technology-63228466

AI tools fail to reduce recruitment bias - study

"There is growing interest in new ways of solving problems such as interview bias," the Cambridge University researchers say, in the journal Philosophy and Technology

The use of AI is becoming widespread - but its analysis of candidate videos or applications is "pseudoscience".

"These tools can't be trained to only identify job-related characteristics and strip out gender and race from the hiring process, because the kinds of attributes we think are essential for being a good employee are inherently bound up with gender and race," she said. [Some jobs must have attributes like “White” and “Male?” Bob]





Tools & Techniques.

https://www.bespacific.com/review-haikubox/

Review: Haikubox

Wired – “This AI-enabled device can identify the species around your home by their songs and alert you when new ones arrive. For bird watchers, being able to identify birds by their song is the holy grail. Some people seem to be naturals, hearing a song once and remembering it forever. If you’re like me—not one of those people—you’ve probably had the thought, “Why isn’t there a Shazam for birds?” Surely if Shazam can identify a song with a few seconds of bad audio playing over some blown-out speakers, someone can figure out how to do the same for a bird singing clearly in a nearby tree. That, in a nutshell, is what the creators of the Haikubox have done—created the Shazam of birdsong. That in itself is welcome and remarkable, but the Haikubox turns out to be much more than that. It’s one of the rare pieces of technology that actually increases your connection to the world around you, rather than cutting you off…”



Thursday, October 13, 2022

An argument with no basis?

https://www.theregister.com/2022/10/13/clientside_scanning_csam_anderson/

Scanning phones to detect child abuse evidence is harmful, 'magical' thinking

Laws in the UK and Europe have been proposed that would give authorities the power to undermine strong end-to-end encryption in the pursuit of, in their minds, justice.

If adopted, these rules would – according to a top British computer security expert – authorize the reading and analysis of people's previously private communication for the sake of potentially preventing the spread of child sex abuse material and terrorism communications.

Ross Anderson, professor of security engineering in the Department of Computer Science and Technology at the UK's University of Cambridge, argues that these proposed regulations – which, frankly, rely on technical solutions such as device-side message scanning and crime-hunting machine-learning algorithms in place of police, social workers, and teachers – amount to magical thinking and unsound policy.

In a paper titled Chat Control or Child Protection?, to be distributed via arXiv, Anderson offers a rebuttal to arguments advanced in July by UK government cyber and intelligence experts Ian Levy, technical director of the UK National Cyber Security Centre, and Crispin Robinson, technical director of cryptanalysis at Government Communications Headquarters (GCHQ), the UK's equivalent to the NSA.

That pro-snoop paper, penned by Levy and Robinson and titled Thoughts on Child Safety on Commodity Platforms, was referenced on Monday by EU Commissioner for Home Affairs, Ylva Johansson, before the European Parliament’s Civil Liberties (LIBE) Committee in support of the EU Child Sexual Abuse Regulation (2022/0155), according to Anderson.
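
For readers unfamiliar with the "device-side message scanning" Anderson critiques, here is a deliberately simplified sketch of the mechanism. Real proposals use perceptual hashes designed to survive resizing and re-encoding (for example Apple's shelved NeuralHash design); plain SHA-256 stands in here, and the blocklist contents and function names are hypothetical.

```python
# Client-side scanning in miniature: before a message is encrypted and
# sent, the client checks each attachment against a blocklist of hashes.
import hashlib

# In a deployed system this set would be supplied by authorities and
# opaque to the user -- a core point of Anderson's objection.
BLOCKLIST = {hashlib.sha256(b"known-illegal-image-bytes").hexdigest()}

def scan_before_encrypt(attachment: bytes) -> bool:
    """Return True if the attachment matches the blocklist and gets reported."""
    return hashlib.sha256(attachment).hexdigest() in BLOCKLIST

attachment = b"holiday-photo-bytes"
print("reported" if scan_before_encrypt(attachment) else "sent encrypted")
```

The dispute is not over the matching logic, which is trivial, but over the blocklist and the reporting channel: the list is unauditable by users, expandable by decree, and it relocates surveillance to the very endpoint that end-to-end encryption was supposed to protect.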





Podcast.

https://www.pbs.org/wgbh/nova/video/computers-vs-crime/

Computers v. Crime

In police departments and courts across the country, artificial intelligence is being used to help decide who is policed, who gets bail, how offenders should be sentenced, and who gets parole. But is it actually making our law enforcement and court systems fairer and more just? This timely investigation digs into the hidden biases, privacy risks, and design flaws of this controversial technology.





Perspective.

https://mitsloan.mit.edu/ideas-made-to-matter/secs-gary-gensler-how-artificial-intelligence-changing-finance

SEC’s Gary Gensler on how artificial intelligence is changing finance

“I think that we’re living in a truly transformational time,” said Gensler, who spoke at the recent AI Policy Forum summit at MIT. Artificial intelligence is “every bit as transformational as the internet,” especially when it comes to predictive data analytics, “but it comes with some risks.”

During the conversation, Gensler shared his thoughts on how artificial intelligence is changing finance. Here are four of his takeaways:





Translating technical terms for managers…

https://dilbert.com/strip/2022-10-13



Wednesday, October 12, 2022

A problem in the short term only.

https://slate.com/technology/2022/10/artificial-intelligence-superintelligence-gullibility.html

The Real Threat From A.I. Isn’t Superintelligence. It’s Gullibility.

… Don’t worry about superintelligent A.I.s trying to enslave us; worry about ignorant and venal A.I.s designed to squeeze every penny of online ad revenue out of us.

And worry about police agencies that gullibly think A.I.s can anticipate crimes before they occur—when in reality all they do is perpetuate harmful stereotypes about minorities.

The reality is that no A.I. could ever harm us unless we explicitly provide it the opportunity to do so—yet we seem hellbent on putting unqualified A.I.s in powerful decision-making positions where they could do exactly that.





What could possibly go wrong?

https://www.vice.com/en/article/pkgma8/police-are-using-dna-to-generate-3d-images-of-suspects-theyve-never-seen

Police Are Using DNA to Generate 3D Images of Suspects They've Never Seen

On Tuesday, the Edmonton Police Service (EPS) shared a computer-generated image of a suspect created with DNA phenotyping, a technique it used for the first time in hopes of identifying a suspect from a 2019 sexual assault case. Using DNA evidence from the case, a company called Parabon NanoLabs created the image of a young Black man. The composite image did not factor in the suspect’s age, BMI, or environmental factors, such as facial hair, tattoos, and scars. The EPS then released this image to the public, both on its website and on social media platforms including its Twitter, claiming it to be “a last resort after all investigative avenues have been exhausted.”





The laws creative AI should know.

https://www.natlawreview.com/article/key-rules-and-cases-patent-practitioners-working-ai-patent-applications

Key Rules and Cases for Patent Practitioners Working on AI Patent Applications

On September 22, 2022, the U.S. Patent and Trademark Office (USPTO) directed patent practitioners to current case law and sections of the Manual of Patent Examining Procedure (MPEP) as reminders for practitioners who continue to work in the Artificial Intelligence (AI) technology space. A summary of these reminders (and links to more information) is provided herein.





Tools & Techniques. (Because I have no artistic talent…)

https://beebom.com/best-ai-text-to-image-art-generators/

12 Best AI Art Generators You Should Use (Free & Paid)



(Related)

https://www.makeuseof.com/ai-art-generators-things-you-can-create/

5 Things You Can Create With AI Art Generators



Tuesday, October 11, 2022

Should lawyers be trusted with technology?

https://www.bespacific.com/do-we-need-more-technologies-in-courts-mapping-concerns-for-legal-technologies-in-courts/

Do We Need More Technologies in Courts? Mapping Concerns for Legal Technologies in Courts

Barysė, Dovilė, Do We Need More Technologies in Courts? Mapping Concerns for Legal Technologies in Courts (September 6, 2022). Available at SSRN: https://ssrn.com/abstract=4218897 or http://dx.doi.org/10.2139/ssrn.4218897

“Courts use progressively more technologies, and there is no consensus on how much and what technologies would benefit or harm courts and in what ways. The analysis of the variety of concerns in law is gaining momentum. However, there is little data on lawyers’ beliefs and attitudes toward technologies in courts. In this study, practicing lawyers and researchers from three countries were interviewed to map their main concerns for technologies in courts. Thematic analysis was conducted. The main reasons for skepticism toward technologies in courts are based on the lack of knowledge, research, and regulation. The primary concerns involve specific properties of technologies, effects on human decision-making, issues in the legal system, lack of research, advantages and disadvantages in access, equality, effectiveness, and fairness, and the “human factor”. The latter includes the need for human interaction, flexible decision-making, and perceived fairness. More focus on humans in human-automation interaction is needed.”





Resources.

https://www.bespacific.com/a-list-of-text-only-news-sites-updated-2022/

A List Of Text-Only News Sites (Updated 2022)

Greycoder: “Text-only websites are quite useful, especially today. Web pages are increasingly filled with ads, videos, and bandwidth-heavy content. Here is a list of text-only, clutter-free news sites…”





Tools & Techniques. (I’m still trying to predict next week)

https://www.openculture.com/2022/10/how-to-predict-what-the-world-will-look-like-in-2122-insights-from-futurist-peter-schwartz.html

How to Predict What the World Will Look Like in 2122: Insights from Futurist Peter Schwartz

“It’s very easy to imagine how things go wrong,” says futurist Peter Schwartz in the video above. “It’s much harder to imagine how things go right.” So he demonstrated a quarter-century ago with the Wired magazine cover story he co-wrote with Peter Leyden, “The Long Boom.” Made in the now techno-utopian-seeming year of 1997, its predictions of “25 years of prosperity, freedom, and a better environment for a whole world” have since become objects of ridicule. But in the piece Schwartz and Leyden also provide a set of less-desirable alternative scenarios whose details — a new Cold War between the U.S. and China, climate change-related disruptions in the food supply, an “uncontrollable plague” — look rather more prescient in retrospect.

The intelligent futurist, in Schwartz’s view, aims not to get everything right. “It’s almost impossible. But you test your decisions against multiple scenarios, so you make sure you don’t get it wrong in the scenarios that actually occur.” The art of “scenario planning,” as Schwartz calls it, requires a fairly deep rootedness in the past.





Tools & Techniques. (Has potential but needs work)

https://www.bespacific.com/consensus-evidence-based-answers-faster/

Consensus – Evidence-Based Answers, Faster

Consensus: “Consensus only searches through peer-reviewed scientific research to find the most credible insights for your queries. We recommend asking questions related to topics that have likely been studied by scientists. Consensus has subject matter coverage that ranges from medical research and physics to social sciences and economics. Consensus is NOT meant to be used to ask questions about basic facts such as: “How many people live in Europe?” or “When is the next leap year?” as there would likely not be research dedicated to investigating these subjects.”



Monday, October 10, 2022

I must be misinterpreting some of this...

https://www.insideprivacy.com/european-union-2/the-digital-markets-act-for-privacy-professionals/

The Digital Markets Act for Privacy Professionals

This post is the first of a series of blog posts about the Digital Markets Act (“DMA”), which was adopted on July 18, 2022, and it deals specifically with those provisions of the DMA that are relevant to organizations’ privacy programs.

The DMA sets out the following obligations and restrictions on gatekeepers that are relevant to compliance with privacy rules:

  1. it restricts the GDPR legal bases gatekeepers may rely on to process personal data in certain cases;

  2. it prohibits the processing of certain data generated or received from other businesses or their end users for the purpose of competing with other businesses;

  3. it requires the sharing of end users’ personal data with businesses operating on a gatekeeper’s platform, and with advertising companies the gatekeeper works with, at their request;

  4. it requires gatekeepers to port end users’ data at their request; and

  5. it requires gatekeepers to share independently audited information about profiling techniques with the European Commission.

Below we explain these obligations and prohibitions in more detail.





Is this too unusual to set a useful precedent?

https://nltimes.nl/2022/10/09/dutch-employee-fired-us-firm-shutting-webcam-awarded-eu75000-court

Dutch employee fired by U.S. firm for shutting off webcam awarded €75,000 in court

… He worked for the American firm for over a year and a half, but on 23 August he was ordered to take part in a virtual training period called a "Corrective Action Program." He was told that during the period he would have to remain logged in for the entire workday with screen-sharing turned on and his webcam activated.

The telemarketing worker replied two days later, “I don't feel comfortable being monitored for 9 hours a day by a camera. This is an invasion of my privacy and makes me feel really uncomfortable. That's the reason why my camera isn't on. You can already monitor all activities on my laptop and I am sharing my screen.” He was summarily fired on 26 August for “refusal to work” and “insubordination.”





I admit I know very little about crypto…

https://www.makeuseof.com/best-cryptocurrency-podcasts/

The 5 Best Podcasts to Learn More About Cryptocurrency



Sunday, October 09, 2022

Any poorly defined goal invites a process optimized for the wrong thing.

https://philpapers.org/rec/EDIAAT-3

AI and the Law: Can Legal Systems Help Us Maximize Paperclips while Minimizing Deaths?

This Chapter provides a short undergraduate introduction to ethical and philosophical complexities surrounding the law’s attempt (or lack thereof) to regulate artificial intelligence. Swedish philosopher Nick Bostrom proposed a simple thought experiment known as the paperclip maximizer. What would happen if a machine (the “PCM”) were given the sole goal of manufacturing as many paperclips as possible? It might learn how to transact money, source metal, or even build factories. The machine might also eventually realize that humans pose a threat. Humans could turn the machine off at any point, and then it wouldn’t be able to make as many paperclips as possible! Taken to the logical extreme, the result is quite grim—the PCM might even start using humans as raw material for paperclips. The predicament only deepens once we realize that Bostrom’s thought experiment overlooks a key player. The PCM and algorithms like it do not arise spontaneously (at least, not yet). Most likely, some corporation—say, Office Corp.—designed, owns, and runs the PCM. The more paperclips the PCM manufactures, the more profits Office Corp. makes, even if that entails converting some humans (but preferably not customers!) into raw materials. Less dramatically, Office Corp. may also make more money when PCM engages in other socially sub-optimal behaviors that would otherwise violate the law, like money laundering, sourcing materials from endangered habitats, manipulating the market for steel, or colluding with competitors over prices. The consequences are predictable and dire. If Office Corp. isn’t held responsible, it will not stop with the PCM. Office Corp. would have every incentive to develop more maximizers—say for papers, pencils, and protractors. This chapter issues a challenge for tech ethicists, social ontologists, and legal theorists: How can the law help mitigate algorithmic harms without overly compromising the potential that AI has to make us all healthier, wealthier, and wiser? The answer is far from straightforward.
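
A toy sketch of the objective-misspecification problem the chapter builds on (all names and numbers are illustrative): an optimizer scored only on paperclip output treats every resource, including the ones we actually care about, as interchangeable raw material.

```python
# Paperclip maximizer in miniature: the objective counts paperclips and
# nothing else, so the greedy policy consumes the whole "world" dict.
def greedy_maximizer(resources: dict, steps: int = 10) -> int:
    paperclips = 0
    for _ in range(steps):
        # Pick whichever remaining resource yields the most paperclips;
        # "habitats" and "humans" are just entries like any other.
        best = max(resources, key=resources.get, default=None)
        if best is None:
            break  # world fully converted
        paperclips += resources.pop(best)
    return paperclips

world = {"metal": 100, "factories": 50, "habitats": 30, "humans": 10}
print(greedy_maximizer(dict(world)))  # 190 -- everything became paperclips
```

Nothing in the objective says habitats or humans matter, so nothing in the optimization protects them; the chapter's legal question is who answers for that omission when Office Corp. profits from it.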





We want to protect our children in the worst way…

https://reason.com/2022/10/06/a-california-law-designed-to-protect-childrens-digital-privacy-could-lead-to-invasive-age-verification/

A California Law Designed To Protect Children's Digital Privacy Could Lead to Invasive Age Verification

While the California Age-Appropriate Design Code Act was hailed as a victory for digital privacy, critics warn of a litany of unintended consequences.

The California Age-Appropriate Design Code Act was signed last month by California Gov. Gavin Newsom (D). The law requires that online businesses create robust privacy protections for users under 18.

However, critics of the law have raised concerns about its vague language, which leaves unclear what kinds of businesses might be subject to the law's constraints and what specific actions companies must take to comply.

For instance, due to the law's strict age requirements, online businesses may resort to invasive age verification regimes—such as face-scanning or checking government-issued IDs.





They are coming. Think about dealing with them.

https://elibrary.verlagoesterreich.at/article/10.33196/ealr202201003201

The Use of Autonomous Weapons – fortunium dei?

Weapons were created to inflict damage, whether to property or to people. With the “help” of innovation and digitalisation, the development and promotion of autonomous weapons have increased rapidly. Among other things, pictures and videos of people are used to feed the artificial intelligence with material, letting it process that material and develop itself further.

This paper describes the issues raised by the use of such weapons under international humanitarian and human rights law. It is divided into four chapters, starting with the functioning and description of weapons in general, autonomous weapons in particular, and the types of such armament. The arguments for and against these systems are then examined in detail within the scope of international human rights and humanitarian law and, more briefly, philosophical standards. Lastly, the conclusion summarises the results, suggestions for improvement, and concerns about the future use of such weapons. Because the paper focuses on the extent to which the use of autonomous weapon systems is permissible under human rights and international humanitarian law, it intentionally does not deal with liability and immunity, which would go beyond the scope of this essay.





Ethics for Designers, a basic class for computer scientists?

https://journals.sagepub.com/doi/abs/10.1177/09716858221119546

Artificial Intelligent Systems and Ethical Agency

The article examines the challenges involved in developing artificial ethical agents. The process involves the creators or designing professionals, the procedures to develop an ethical agent, and the artificial systems themselves. There are two possibilities available to create artificial ethical agents: (a) programming ethical guidance into artificial intelligence (AI)-equipped machines and/or (b) allowing AI-equipped machines to learn ethical decision-making by observing humans. However, it is difficult to realise either possibility due to the subjective nature of ethical decision-making. The challenge related to the developers is that they themselves lack training in ethical skills. The creators who develop an artificial ethical agent should be able to foresee the ethical issues and have knowledge of ethical decision-making to improve the ethical use of AI-equipped machines. The suggestion is that the focus should be on training the professionals involved in developing these artificial systems in ethics, rather than on developing artificial ethical agents and thereby attributing ethical agency to them.
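
A minimal sketch (hypothetical features, labels, and thresholds) contrasting the article's two routes: (a) hand-coded ethical rules, which presuppose ethically trained designers, and (b) a model fitted to human ethical judgments, which inherits the subjectivity of its labels.

```python
from sklearn.tree import DecisionTreeClassifier

# (a) Programmed guidance: a rule the designers must write correctly up front.
def rule_based_ok(deceives_user: int, harm_score: float) -> bool:
    return deceives_user == 0 and harm_score < 0.3

# (b) Learned guidance: fit to a few labelled human judgments of actions
# described as [deceives_user, harm_score].
X = [[0, 0.1], [1, 0.2], [0, 0.8], [1, 0.9]]
y = [1, 0, 0, 0]  # human label: acceptable?
learned = DecisionTreeClassifier(random_state=0).fit(X, y)

case = [[0, 0.2]]  # no deception, modest harm
print(rule_based_ok(0, 0.2))           # True: passes the hand-written rule
print(bool(learned.predict(case)[0]))  # False: the tree drew a stricter line
```

The two routes can disagree on the same case; either way the ethical content comes from humans, which supports the article's suggestion to train the professionals rather than chase ethical agency in the artifact.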