Tuesday, March 15, 2022

A war-like act or simply a childish reaction?

https://www.bespacific.com/russia-says-its-businesses-can-steal-patents-from-anyone-in-unfriendly-countries/

Russia says its businesses can steal patents from anyone in ‘unfriendly’ countries

Washington Post: “Russia has effectively legalized patent theft from anyone affiliated with countries “unfriendly” to it, declaring that unauthorized use will not be compensated. The decree, issued this week, illustrates the economic war waged around Russia’s invasion of Ukraine, as the West levies sanctions and pulls away from Russia’s huge oil and gas industry. Russian officials have also raised the possibility of lifting restrictions on some trademarks, according to state media, which could allow continued use of brands such as McDonald’s that are withdrawing from Russia in droves. The effect of losing patent protections will vary by company, experts say, depending on whether they have a valuable patent in Russia. The U.S. government has long warned of intellectual property rights violations in the country; last year Russia was among nine nations on a “priority watch list” for alleged failures to protect intellectual property. Now Russian entities could not be sued for damages if they use certain patents without permission…”





This could hurt. Will they have to prove that any replacement algorithm does not include any objectionable aspects of the old algorithm?

https://www.protocol.com/policy/ftc-algorithm-destroy-data-privacy

The FTC’s new enforcement weapon spells death for algorithms

The Federal Trade Commission has struggled over the years to find ways to combat deceptive digital data practices using its limited set of enforcement options. Now, it’s landed on one that could have a big impact on tech companies: algorithmic destruction. And as the agency gets more aggressive on tech by slowly introducing this new type of penalty, applying it in a settlement for the third time in three years could be the charm.

In a March 4 settlement order, the agency demanded that WW International — formerly known as Weight Watchers — destroy the algorithms or AI models it built using personal information collected through its Kurbo healthy eating app from kids as young as 8 without parental permission. The agency also fined the company $1.5 million and ordered it to delete the illegally harvested data.

When it comes to today’s data-centric business models, algorithmic systems and the data used to build and train them are intellectual property, products that are core to how many companies operate and generate revenue. While in the past the FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices, forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.





Another opportunity for bias. Always “correct” a New Jersey accent but never modify a Texas accent.

https://techcrunch.com/2022/03/14/sayso-accent-changing/

Sayso is launching an API to dial down people’s accents a wee bit

Struggling to understand your heavily accented co-worker? Can’t follow what the customer support person at the other end of the phone is saying? Technology rushes to the rescue. It turns out that listening to an accent you’re not familiar with can dramatically increase the cognitive load (and, by extension, the amount of energy you expend to understand someone). Sayso is attempting to tackle this problem by giving developers an API that can change accented English from one accent to another in near real time.





Another study to watch. Can technology change the way the world thinks?

https://www.usu.edu/today/story/the-future-of-governance-usu-professors-studying-effect-of-ai-enabled-surveillance-in-government

The Future of Governance: USU Professors Studying Effect of AI-Enabled Surveillance in Government

Just how much influence can artificial intelligence-enabled surveillance technology have on how a society is governed? This is the key question Utah State University researchers Jeannie Johnson and Briana Bowen are looking to answer, thanks to a three-year, $1.49 million grant from the Department of Defense and its Minerva Research Initiative. Johnson and Bowen are studying the effect of AI surveillance technology and how its adoption in certain governments could change societal structure and norms.

Taking a case-study approach, Johnson and Bowen are heading up a multidisciplinary, multi-institution team to study the export of AI-enabled surveillance technology originating in and exported from China to a number of Latin American countries. The Chinese government is a major world supplier of AI-driven surveillance systems and also has been testing the technology domestically in certain pilot cities, often with transparency to its own citizens.

“The question is, if you export these digital technologies, do you also export political norms?” Bowen said. “Or are you exporting a tool — and societies will use the tool however they want and remain relatively untouched by the social ecosystem from which those technologies originated?”





Coming from an Audit background, I like to start with “What is the system supposed to do?”

https://twin-cities.umn.edu/news-events/meaningful-standards-auditing-high-stakes-artificial-intelligence

Meaningful Standards for Auditing High-Stakes Artificial Intelligence

It is important to ask: can AI tools ever be truly unbiased decision-makers? In response to claims of unfairness and bias in tools used in hiring, college admissions, predictive policing, health interventions, and more, the University of Minnesota recently developed a new set of auditing guidelines for AI tools.

The auditing guidelines, published in the American Psychologist, were developed by Richard Landers, associate professor of psychology at the University of Minnesota, and Tara Behrend from Purdue University.

The researchers developed guidelines for AI auditing by first considering the ideas of fairness and bias through three major lenses of focus:

  • How individuals decide if a decision was fair and unbiased

  • How societal legal, ethical and moral standards present fairness and bias

  • How individual technical domains — like computer science, statistics and psychology — define fairness and bias internally





Another summary?

https://cosmosmagazine.com/technology/ai/ai-ethics-good-in-the-machine/

Where AI and ethics meet

How can we make “good” artificial intelligence? What does it mean for a machine to be ethical, and how can we use AI ethically? Good in the Machine – 2019’s SCINEMA International Science Film Festival entry – delves into these questions, the origins of our morality, and the interplay between artificial agency and our own moral compass.



