Saturday, October 01, 2022

Clearly we are on the road to Skynet! First bugs, then the people who bug us…

https://www.extremetech.com/extreme/339945-scientists-create-ai-powered-laser-to-neutralize-cockroaches

Scientists Create AI-Powered Laser to ‘Neutralize’ Cockroaches

Researchers in Scotland have devised a way to “neutralize” creepy crawlies in the coolest way possible: by shooting them with a laser. Ildar Rakhmatulin, a research associate at Edinburgh’s Heriot-Watt University, recently partnered with a group of engineers, biologists, and machine learning specialists to create a cockroach-compromising, AI-powered laser device.

… The system begins with a single-board Jetson Nano, a small computer capable of running deep learning algorithms. Using 1,000 images of cockroaches in different lighting, Rakhmatulin and his team trained the Nano to recognize its target and track the insect’s movement. Once the two cameras attached to the device have located a roach, the Nano calculates its target’s distance within 3D space. It then sends this information to a galvanometer, which uses mirrors to adjust the laser’s direction. The laser can then be shot at the target.
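To make the targeting chain concrete, here is a minimal Python sketch of the detect-then-triangulate-then-aim pipeline the excerpt describes. The camera geometry, the detector stub, and the galvanometer interface are all assumptions for illustration, not the researchers' actual code.

# Hypothetical sketch of the detect -> triangulate -> aim pipeline described above.
# Camera geometry, the detector stub, and the galvo interface are assumptions.
import numpy as np

BASELINE_M = 0.10        # assumed distance between the two cameras (meters)
FOCAL_PX = 800.0         # assumed focal length in pixels
IMG_CENTER = (320, 240)  # assumed principal point for 640x480 frames

def detect_roach(frame: np.ndarray) -> tuple[int, int] | None:
    """Placeholder for the trained detector running on the Jetson Nano.
    A real system would return the pixel coordinates of the detected roach."""
    raise NotImplementedError("substitute the trained detection model here")

def triangulate(px_left: tuple[int, int], px_right: tuple[int, int]) -> np.ndarray:
    """Estimate the target's 3D position from a stereo pair (simple pinhole model)."""
    disparity = (px_left[0] - px_right[0]) or 1e-6   # avoid divide-by-zero
    z = FOCAL_PX * BASELINE_M / disparity            # depth from stereo disparity
    x = (px_left[0] - IMG_CENTER[0]) * z / FOCAL_PX
    y = (px_left[1] - IMG_CENTER[1]) * z / FOCAL_PX
    return np.array([x, y, z])

def aim_galvo(target_xyz: np.ndarray) -> tuple[float, float]:
    """Convert a 3D target position into the two mirror angles for the galvanometer."""
    x, y, z = target_xyz
    pan = np.degrees(np.arctan2(x, z))
    tilt = np.degrees(np.arctan2(y, z))
    return pan, tilt

In this sketch the two detections feed the triangulation step, and the resulting angles would be sent to the galvanometer driver before firing the laser.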





I’m thinking of starting a non-profit so I can profit from the need for Copyright exempt data…

https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/

AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability

The academic researchers who compiled the Shutterstock dataset acknowledged the copyright implications in their paper, writing, “The use of data collected for this study is authorised via the Intellectual Property Office’s Exceptions to Copyright for Non-Commercial Research and Private Study.”

But then Meta is using those academic non-commercial datasets to train a model, presumably for future commercial use in their products. Weird, right?

Not really. It’s become standard practice for technology companies working with AI to commercially use datasets and models collected and trained by non-commercial research entities like universities or non-profits.

In some cases, they’re directly funding that research.





Perspective. Unlikely to be the kiss of death for Fox, but suggests that a rating system might be useful...

https://slate.com/technology/2022/09/wikipedia-fox-news-reliability.html?scrolla=5eb6d68b7fedc32c19ef33b4

Wikipedia’s Fox News Problem

The final result: Li found consensus that Fox be deemed a “marginally reliable” source for information about politics and science. This means that its use as a reference in Wikipedia articles will not be permitted for “exceptional claims” that require heightened scrutiny, but that its reliability will be evaluated on a case-by-case basis for other claims.



Friday, September 30, 2022

I suspect the intelligence agencies are already using this technology. Has it been approved for use in the courts?

https://www.engadget.com/ai-is-already-better-at-lip-reading-that-we-are-183016968.html

AI is already better at lip reading than we are

a 2009 study found that most people can only read lips with around 20 percent accuracy and the CDC’s Hearing Loss in Children Parent’s Guide estimates that, “a good speech reader might be able to see only 4 to 5 words in a 12-word sentence.” Similarly, a 2011 study out of the University of Oklahoma saw only around 10 percent accuracy in its test subjects.

For humans, lip reading is a lot like batting in the Major Leagues — consistently get it right even just three times out of ten and you’ll be among the best to ever play the game. For modern machine learning systems, lip reading is more like playing Go — just round after round of beating up on the meatsacks that created and enslaved you — with today’s state-of-the-art systems achieving well over 95 percent sentence-level word accuracy. And as they continue to improve, we could soon see a day where tasks from silent-movie processing and silent dictation in public to biometric identification are handled by AI systems.





Perhaps to recommend someone for a sensitive position? To ‘certify’ that new security software? Or maybe to exchange some security details in strict confidence?

https://krebsonsecurity.com/2022/09/fake-ciso-profiles-on-linkedin-target-fortune-500s/

Fake CISO Profiles on LinkedIn Target Fortune 500s

Someone has recently created a large number of fake LinkedIn profiles for Chief Information Security Officer (CISO) roles at some of the world’s largest corporations. It’s not clear who’s behind this network of fake CISOs or what their intentions may be. But the fabricated LinkedIn identities are confusing search engine results for CISO roles at major companies, and they are being indexed as gospel by various downstream data-scraping sources.



(Related)

https://www.theregister.com/2022/09/30/microsoft_north_korea_zinc_threat/

Microsoft warns of North Korean crew posing as LinkedIn recruiters

Microsoft has claimed a North Korean crew poses as LinkedIn recruiters to distribute poisoned versions of open source software packages.



Thursday, September 29, 2022

Incident response guides help you create a plan or check the one you have.

https://www.trendmicro.com/en_us/ciso/22/i/incident-response-services.html

Incident Response Services & Playbooks Guide

No matter the size of a business, it faces the risk of a cyberattack. Over 50% of organizations have experienced a cyberattack. And while proactive protection is ideal, there is no silver bullet when it comes to security—meaning you should plan for incident response as well. Yet, 63% of C-level executives in the US do not have an incident response plan, according to a report by Shred-It.





Strange that Dyson et al. can detect Child Porn, but Twitter cannot…

https://www.reuters.com/technology/exclusive-brands-blast-twitter-ads-next-child-pornography-accounts-2022-09-28/

Exclusive: Brands blast Twitter for ads next to child pornography accounts

Some major advertisers including Dyson, Mazda, Forbes and PBS Kids have suspended their marketing campaigns or removed their ads from parts of Twitter because their promotions appeared alongside tweets soliciting child pornography, the companies told Reuters.





How ‘backed-up’ do you need to be? Consider what you would do if you lost access to your computer…

https://www.bespacific.com/how-to-back-up-your-digital-life/

How to Back Up Your Digital Life

Wired: “Nowadays I back up my data at least three times, in three physically separate places. I know what you’re thinking—wow, he is really bummed about missing out on that mai tai. It may sound excessive, but it costs next to nothing and happens without me lifting a finger, so why not? If the perfect backup existed, then sure, three would be overkill, but there is no perfect backup. Things go wrong with backups too. You need to hedge your bets. At the very least, you should have two backups, one local and one remote. For most people, this strikes the best balance between safety, cost, and effort…”
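For readers who want to put the "one local, one remote" rule into practice, here is a minimal Python sketch under assumed paths: it archives a folder once and copies the archive to a hypothetical external drive and a hypothetical synced cloud folder. A real setup would add scheduling, versioning, and restore testing.

# Minimal sketch of the "one local, one remote" backup rule from the excerpt.
# All paths are hypothetical; the remote target is assumed to be a mounted
# network drive or a folder synced to the cloud by a separate tool.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "Documents"                    # what to protect
LOCAL_BACKUP = Path("/mnt/external_drive/backups")    # hypothetical external drive
REMOTE_BACKUP = Path("/mnt/cloud_sync/backups")       # hypothetical synced cloud folder

def back_up(source: Path, destinations: list[Path]) -> None:
    """Create one dated archive and copy it to every destination."""
    archive = shutil.make_archive(f"/tmp/backup-{date.today()}", "zip", source)
    for dest in destinations:
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(archive, dest)   # an independent copy at each location

if __name__ == "__main__":
    back_up(SOURCE, [LOCAL_BACKUP, REMOTE_BACKUP])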





Find alternatives to “censorship.”

https://www.insideprivacy.com/technology/fifth-circuit-upholds-texas-law-restricting-online-censorship/

Fifth Circuit Upholds Texas Law Restricting Online “Censorship”

On September 16, the Fifth Circuit issued its decision in NetChoice L.L.C. v. Paxton, upholding Texas HB 20, a law that limits the ability of large social media platforms to moderate content and imposes various disclosure and appeal requirements on them. The Fifth Circuit vacated the district court’s preliminary injunction, which previously blocked the Texas Attorney General from enforcing the law. NetChoice is likely to ask the U.S. Supreme Court to review the Fifth Circuit’s decision.

HB 20 prohibits “social media platforms” with “more than 50 million active users” from “censor[ing] a user, a user’s expression, or a user’s ability to receive the expression of another person” based on the “viewpoint” of the user or another person, or the user’s location. HB 20 also includes various transparency requirements for covered entities, for example, requiring them to publish information about their algorithms for displaying content, to publish an “acceptable use policy” with information about their content restrictions, and to provide users an explanation for each decision to remove their content, as well as a right to appeal the decision.



(Related) (Only in Texas…)

https://www.theatlantic.com/ideas/archive/2022/09/netchoice-paxton-first-amendment-social-media-content-moderation/671574/

Is This the Beginning of the End of the Internet?

Occasionally, something happens that is so blatantly and obviously misguided that trying to explain it rationally makes you sound ridiculous. Such is the case with the Fifth Circuit Court of Appeals’s recent ruling in NetChoice v. Paxton. Earlier this month, the court upheld a preposterous Texas law stating that online platforms with more than 50 million monthly active users in the United States no longer have First Amendment rights regarding their editorial decisions. Put another way, the law tells big social-media companies that they can’t moderate the content on their platforms. YouTube purging terrorist-recruitment videos? Illegal. Twitter removing a violent cell of neo-Nazis harassing people with death threats? Sorry, that’s censorship, according to Andy Oldham, a judge of the United States Court of Appeals and the former general counsel to Texas Governor Greg Abbott.





No one said good laws come easily.

https://www.cpomagazine.com/data-protection/indonesia-data-protection-law-includes-potential-prison-time-asset-seizure-right-to-compensation-for-data-breaches/

Indonesia Data Protection Law Includes Potential Prison Time, Asset Seizure, Right to Compensation for Data Breaches

An Indonesia data protection law that has been in development since 2016 includes some of the harshest penalties yet seen in national data privacy regulations, allowing for prison time for illegally obtaining or falsifying data along with large fines and the potential for asset forfeiture. Residents of Indonesia will also be granted a right to compensation for data breaches.

However, in spite of these terms, some privacy analysts remain unconvinced that the law will be effective. The central issue is that there are existing privacy protection terms scattered throughout a number of other laws that potentially conflict with the new bill, yet the new bill makes clear that these existing terms remain valid.





Tools & Techniques. Could one of these write this Blog for me? Perhaps the great AImerican novel?

https://www.makeuseof.com/best-ai-writing-extensions-chrome/

The 5 Best AI Writing Extensions for Google Chrome



Wednesday, September 28, 2022

I suspect this is because misinformation seems to work…

https://www.bespacific.com/us-politicians-tweet-far-more-misinformation-than-those-in-the-uk-and-germany/

US politicians tweet far more misinformation than those in the UK and Germany

The Conversation: “Politicians from mainstream parties in the UK and Germany post far fewer links to untrustworthy websites on Twitter and this has remained constant since 2016, according to our new research. By contrast, US politicians posted a much higher percentage of untrustworthy content in their tweets, and that share has been increasing steeply since 2020. We also found systematic differences between the parties in the US, where Republican politicians were found to share untrustworthy websites more than nine times as often as Democrats…”



(Related)

https://www.bespacific.com/fcc-targeting-and-eliminating-unlawful-text-messages/

FCC Targeting and Eliminating Unlawful Text Messages

The Federal Communications Commission today proposed new rules to fight back against malicious robotext campaigns. The agency will take public comment on ideas to apply caller ID authentication standards to text messaging and require providers to find and actively block illegal texts before they get to consumers…The Notice of Proposed Rulemaking released today proposes and seeks comment on applying caller ID authentication standards to text messaging. It proposes requiring mobile wireless providers to block texts, at the network level, that purport to be from invalid, unallocated, or unused numbers, and numbers on a Do-Not-Originate (DNO) list. It also seeks input on other actions the Commission might take to address illegal texts, including enhanced consumer education. The FCC’s Robocall Response Team recently issued a Consumer Alert on the growing problem of scam robotexts. This warning noted the increase in consumer complaints to the FCC about unwanted text messages. It explained how scammers use texts to solicit information, defraud consumers, and/or spur responses from the consumers to possibly sell their number as a target. Consumers should look out for signs of possible scam texts including unknown numbers, misleading or incomplete information, misspellings, mysterious links, and sales pitches.
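As a rough illustration of the network-level blocking the FCC proposal describes, the following Python sketch rejects texts whose originating number is malformed, on a Do-Not-Originate list, or outside an allocated block. The lists, prefixes, and number format are hypothetical placeholders, not FCC specifications.

# Illustrative sketch of network-level robotext filtering: block texts that
# purport to come from invalid, unallocated, or Do-Not-Originate numbers.
import re

DNO_LIST = {"+18005551212"}               # hypothetical Do-Not-Originate entries
ALLOCATED_PREFIXES = {"+1303", "+1720"}   # hypothetical allocated number blocks

def should_block(originating_number: str) -> bool:
    """Return True if a text from this number should be blocked at the network level."""
    if not re.fullmatch(r"\+\d{8,15}", originating_number):
        return True   # malformed / invalid number
    if originating_number in DNO_LIST:
        return True   # on the Do-Not-Originate list
    if not any(originating_number.startswith(p) for p in ALLOCATED_PREFIXES):
        return True   # not in an allocated block
    return False

print(should_block("+18005551212"))  # True: DNO-listed
print(should_block("+13035550100"))  # False: allocated and not DNO-listed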





Getting to the new normal.

https://hbr.org/2022/09/4-steps-to-start-monetizing-your-companys-data

4 Steps to Start Monetizing Your Company’s Data

As artificial intelligence becomes ubiquitous in business, non-tech companies need to learn how to use their data to gain a competitive edge. Companies that are trying to decide where and how to use AI should take four steps: 1) survey the data they have and how other companies are generating and using data, 2) look for data- and AI-focused companies, such as startups, that can help jumpstart your data strategy, 3) buy, don’t build, and 4) start building a data moat.





Is this the equivalent of requiring auto makers to include a buggy whip with each new car?

https://www.bespacific.com/france-delivery-fee-for-online-book-sales-to-help-stores-compete-with-amazon/

France sets delivery fee for online book sales to help stores compete with Amazon

Reuters: “France plans to impose a minimum delivery fee of 3 euros ($2.93) for online book orders of less than 35 euros to level the playing field for independent bookstores struggling to compete against e-commerce giants, the government said on Friday. A 2014 French law already prohibits free book deliveries, but Amazon and other vendors such as Fnac have circumvented this by charging a token 1 cent per delivery. Local book stores typically charge up to 7 euros for shipping a book. Legislation was passed in December 2021 to close the one-cent loophole through a minimum shipping fee, but the law could not take effect until the government had decided on the size of that fee. “This will adapt the book industry to the digital era by restoring an equilibrium between large e-commerce platforms, which offer virtually free delivery for books whatever the order size, and bookstores that cannot match these delivery prices,” the culture and finance ministries said in a joint statement…”





Tools & Techniques.

https://petapixel.com/2022/09/27/palette-is-a-free-web-based-ai-powered-photo-colorizer/

Palette is a Free Web-Based AI-Powered Photo Colorizer

A new artificial intelligence-powered web-based tool called Palette is able to take any black and white photo and colorize it. The creator is so confident in the results that he is billing it “the Dall-E of color.”





Resource. For kids with Kindles?

https://www.bespacific.com/archive-of-7000-historical-childrens-books-all-digitized-free-to-read-online/

Archive of 7,000 Historical Children’s Books, All Digitized & Free to Read Online

Open Culture: “We can learn much about how a historical period viewed the abilities of its children by studying its children’s literature. Occupying a space somewhere between the purely didactic and the nonsensical, most children’s books published in the past few hundred years have attempted to find a line between the two poles, seeking a balance between entertainment and instruction. However, that line seems to move closer to one pole or another depending on the prevailing cultural sentiments of the time. And the very fact that children’s books were hardly published at all before the early 18th century tells us a lot about when and how modern ideas of childhood as a separate category of existence began…by examining the children’s literature of the Victorian era, perhaps the most innovative and diverse period for children’s literature thus far by the standards of the time. And we can do so most thoroughly by surveying the thousands of mid- to late 19th century titles at the University of Florida’s Baldwin Library of Historical Children’s Literature. Their digitized collection currently holds over 7,000 books free to read online from cover to cover, allowing you to get a sense of what adults in Britain and the U.S. wanted children to know and believe…”



Tuesday, September 27, 2022

I’m learning new things about the Fifth Amendment.

https://www.politico.com/news/magazine/2022/09/22/trump-attorney-general-fraud-case-00058377

Opinion | Trump Made N.Y. Attorney General’s Fraud Case Virtually Unbeatable

He should have settled early, but he got boxed into taking the Fifth — and that can be used against him now.





Do you think Putin has held anything back in his earlier cyber attacks? How far is too far?

https://www.theregister.com/2022/09/27/russia_plans_massive_cyberattacks_ukraine/

Ukraine fears 'massive' Russian cyberattacks on power, infrastructure



(Related)

https://www.protocol.com/bulletins/meta-russia-china-takedowns

‘Smash and grab’: Meta uncovers Russia's ‘largest and most complex’ info op since the war began

Russia set up a sprawling and sophisticated network of websites impersonating mainstream media outlets, which it used to spread anti-Ukrainian messaging that was amplified via fake social media accounts, Meta has found. In a new report published Tuesday, Meta called it Russia’s “largest and most complex” influence operation since the war in Ukraine began.





Perspective.

https://sloanreview.mit.edu/audio/the-three-roles-of-the-chief-data-officer-adps-jack-berkowitz/

The Three Roles of the Chief Data Officer: ADP’s Jack Berkowitz

As chief data officer of payroll and benefits management company ADP, Jack Berkowitz has three primary responsibilities. One is to oversee the organization’s data overall, ensuring that functions like data governance, security, and analytics are running well. Another is to build ADP’s data products, such as people analytics and benchmark tools. But the responsibility that’s of most interest to Me, Myself, and AI hosts Sam Ransbotham and Shervin Khodabandeh is Jack’s oversight of the organization’s use of artificial intelligence.





Perspective.

https://www.wired.com/story/2022-itu-secretary-general-election/

This Vote Could Change the Course of Internet History

This week in Romania, a US State Department candidate is facing a Russian challenger in an election for the leadership of one of the most important international technology bodies in the world.

Who wins could determine whether the internet remains a relatively decentralized and open platform—or begins to centralize into the hands of nation-states and state-run companies that may want greater control over what their citizens see and do online.



Monday, September 26, 2022

I need to invent anti-social media. Add an AI that constantly asks, “do you really want to say that?”

https://www.cpomagazine.com/cyber-security/why-social-media-is-a-weak-spot-for-companies-cybersecurity/

Why Social Media Is a Weak Spot for Companies’ Cybersecurity

Social media can be great for many things. It can keep you connected to friends and relatives far away, help you find like-minded individuals, and provide access to valuable tips from experts. After all, it’s why 4.62 billion people (or 58.4% of the world’s population) use social media. But, if you’re a business owner, that amount of social media activity can pose a major cyber security risk.

But why are these popular platforms so dangerous? Here are three reasons.

Easy access to employees

Increased opportunity

Tapping into psychological weakness





Are they more afraid of the government or their customers?

https://www.wired.com/story/vpn-firms-flee-india-data-collection-law/

VPN Providers Flee India as a New Data Law Takes Hold

Ahead of the deadline to comply with the Indian government’s new data-collection rules, VPN companies from across the globe have pulled their servers out of the country in a bid to protect their users’ privacy.

Starting today, the Indian Computer Emergency Response Team, or CERT—a body appointed by the Indian government to deal with cybersecurity and threats—will require VPN operators to collect and maintain customer information including names, email addresses, and IP addresses for at least five years, even after they have canceled their subscription or account.

In April, CERT said it needed to implement these rules because “the requisite information is not found available” with the security provider during investigations into cybersecurity threats, thereby thwarting inquiries. The new rules, CERT claims, will “strengthen cyber security in India” and are “in the interest of sovereignty or integrity of India.”





Another missing link. Thanks for pointing that out...

https://thenextweb.com/news/common-sense-test-for-ai-smarter-machines

A new ‘common sense’ test for AI could lead to smarter machines

So why haven’t scientists been able to crack the common sense code thus far?

Called the “dark matter of AI,” common sense is both crucial to AI’s future development and, thus far, elusive. Equipping computers with common sense has actually been a goal of computer science since the field’s very start; in 1958, pioneering computer scientist John McCarthy published a paper titled “Programs with common sense” which looked at how logic could be used as a method of representing information in computer memory. But we’ve not moved much closer to making it a reality since.

Common sense includes not only social abilities and reasoning but also a “naive sense of physics” — this means that we know certain things about physics without having to work through physics equations, like why you shouldn’t put a bowling ball on a slanted surface. It also includes basic knowledge of abstract things like time and space, which lets us plan, estimate, and organize. “It’s knowledge that you ought to have,” says Michael Witbrock, AI researcher at the University of Auckland.

All this means that common sense is not one precise thing, and therefore cannot be easily defined by rules.





Brilliant! But unlikely.

https://www.bespacific.com/lawless-surveillance/

Lawless Surveillance

Friedman, Barry, Lawless Surveillance (February 1, 2022). 97 N.Y.U. L. Rev. (2022), NYU School of Law, Public Law Research Paper No. 22-28, Available at SSRN: https://ssrn.com/abstract=4111547

Here in the United States, policing agencies are engaging in mass collection of personal data, building a vast architecture of surveillance. License plate readers collect our location information. Mobile forensics data terminals suck in the contents of cell phones during traffic stops. CCTV maps our movements. Cheap storage means most of this is kept for long periods of time—sometimes into perpetuity. Artificial intelligence makes searching and mining the data a snap. For most of us whose data is collected, stored, and mined, there is no suspicion whatsoever of wrongdoing. This growing network of surveillance is almost entirely unregulated. It is, in short, lawless. The Fourth Amendment touches almost none of it, either because what is captured occurs in public, and so is supposedly “knowingly exposed,” or because of doctrine that shields information collected from third parties. It is unregulated by statutes because legislative bodies—when they even know about these surveillance systems—see little profit in taking on the police. In the face of growing concern over such surveillance, this Article argues there is a constitutional solution sitting in plain view. In virtually every other instance in which personal information is collected by the government, courts require that a sound regulatory scheme be in place before information collection occurs. The rulings on the mandatory nature of regulation are remarkably similar, no matter under which clause of the Constitution collection is challenged. This Article excavates this enormous body of precedent and applies it to the problem of government mass data collection. It argues that before the government can engage in such surveillance, there must be a regulatory scheme in place. And by changing the default rule from allowing police to collect absent legislative prohibition, to banning collection until there is legislative action, legislatures will be compelled to act (or there will be no surveillance). The Article defines what a minimally-acceptable regulatory scheme for mass data collection must include, and shows how it can be grounded in the Constitution.”





Is this something the AI Lawyers can use?

https://arstechnica.com/information-technology/2022/09/artist-receives-first-known-us-copyright-registration-for-generative-ai-art/

Artist receives first known US copyright registration for latent diffusion AI art

It's likely that artists have registered works created by machine or algorithms before because the history of generative art extends back to the 1960s. But this is the first time we know of that an artist has registered a copyright for art created by the recent round of image synthesis models powered by latent diffusion, which has been a contentious subject among artists.

Zarya of the Dawn, which features a main character with an uncanny resemblance to the actress Zendaya, is available for free through the AI Comic Books website. AI artists often use celebrity names in their prompts to achieve consistency between images, since there are many celebrity photographs in the data set used to train Midjourney.



Sunday, September 25, 2022

I wish them well.

https://www.databreaches.net/denver-suburb-wont-cough-up-millions-in-ransomware-attack-that-closed-city-hall/

Denver suburb won’t cough up millions in ransomware attack that closed city hall

John Aguilar reports:

The demand was big: $5 million to unlock Wheat Ridge’s municipal data and computer systems seized by a shadowy overseas ransomware operation.
The response was defiant: We’ll keep our money and fix the mess you made ourselves.

Read more at The Denver Post.





A very good ‘bad example.’

https://qmro.qmul.ac.uk/xmlui/handle/123456789/80559

Facial Recognition Technology vs Privacy: The Case of Clearview AI

In January 2020, the New York Times revealed the existence of Clearview AI, a company that had developed a facial recognition tool of unprecedented performance. Various actors were fast in declaring the loss of privacy accompanying the deployment of the application. This paper analyses how the economic motives behind facial recognition technologies challenge the established understanding and purpose of the fundamental right to privacy by the example of the EU. It argues that Clearview AI’s business model, based on the surveillance of the company’s data subjects, forcibly entails a violation of the latter’s fundamental right to privacy. The traditional vertical application of fundamental rights in cyberspace disregards the power asymmetry existing between private individuals and private companies with state-like power in the Digital Age, thus resulting in legal ineffectiveness in face of this violation. The author concludes that the most fruitful approach to safeguard privacy would be the horizontal application of the fundamental right to privacy.





Humans need to talk ethics to their AI.

https://link.springer.com/article/10.1007/s43681-022-00214-z

Ethics in human–AI teaming: principles and perspectives

Ethical considerations are the fabric of society, and they foster cooperation, help, and sacrifice for the greater good. Advances in AI create a greater need to examine ethical considerations involving the development and implementation of such systems. Integrating ethics into artificial intelligence-based programs is crucial for preventing negative outcomes, such as privacy breaches and biased decision making. Human–AI teaming (HAIT) presents additional challenges, as the ethical principles and moral theories that provide justification for them are not yet computable by machines. To that effect, models of human judgments and decision making, such as the agent-deed-consequence (ADC) model, will be crucial to inform the ethical guidance functions in AI team mates and to clarify how and why humans (dis)trust machines. The current paper will examine the ADC model as it is applied to the context of HAIT, and the challenges associated with the use of human-centric ethical considerations when applied to an AI context.





Easily understood. Most “AI” is aimed at very specific commercial goals, not broad ‘general intelligence’ issues.

https://www.zdnet.com/article/metas-ai-guru-lecun-most-of-todays-ai-approaches-will-never-lead-to-true-intelligence/

Meta's AI guru LeCun: Most of today's AI approaches will never lead to true intelligence

Yann LeCun, chief AI scientist of Meta Platforms, owner of Facebook, Instagram, and WhatsApp, is likely to tick off a lot of people in his field.

With the posting in June of a think piece on the Open Review server, LeCun offered a broad overview of an approach he thinks holds promise for achieving human-level intelligence in machines.

Implied if not articulated in the paper is the contention that most of today's big projects in AI will never be able to reach that human-level goal.





Interesting (appropriate?) application of facial recognition. Protecting Taylor Swift...

https://via.library.depaul.edu/cgi/viewcontent.cgi?article=4202&context=law-review

The Legal and Ethical Considerations of Facial Recognition Technology in the Business Sector

Anonymity is no longer possible since most individuals have photo identifications and social media posts with pictures available for public viewing. Surveillance methods also continue to develop, which has resulted in individuals being exposed to the greater use of facial recognition techniques without their awareness or permission. This system references equipment with the dual purpose of “connecting faces to identities,” and permitting the “distribution of those identities across computer networks.” The software is primarily employed for “identification and access control or for identifying individuals who are under surveillance.” This technology is premised upon the idea that their inherent physical or behavioral characteristics can be used to correctly recognize every individual.





Averaging ethics?

https://www.mdpi.com/2673-2688/3/3/45

Bridging East-West Differences in Ethics Guidance for AI and Robotics

Societies of the East are often contrasted with those of the West in their stances toward technology. This paper explores these perceived differences in the context of international ethics guidance for artificial intelligence (AI) and robotics. Japan serves as an example of the East, while Europe and North America serve as examples of the West. The paper’s principal aim is to demonstrate that Western values predominate in international ethics guidance and that Japanese values serve as a much-needed corrective. We recommend a hybrid approach that is more inclusive and truly ‘international’. Following an introduction, the paper examines distinct stances toward robots that emerged in the West and Japan, respectively, during the aftermath of the Second World War, reflecting history and popular culture, socio-economic conditions, and religious worldviews. It shows how international ethics guidelines reflect these disparate stances, drawing on a 2019 scoping review that examined 84 international AI ethics documents. These documents are heavily skewed toward precautionary values associated with the West and cite the optimistic values associated with Japan less frequently. Drawing insights from Japan’s so-called ‘moonshot goals’, the paper fleshes out Japanese values in greater detail and shows how to incorporate them more effectively in international ethics guidelines for AI and robotics.



(Related)

https://onlinelibrary.wiley.com/doi/full/10.1111/meta.12583

Flourishing Ethics and identifying ethical values to instill into artificially intelligent agents

The present paper uses a Flourishing Ethics analysis to address the question of which ethical values and principles should be “instilled” into artificially intelligent agents. This is an urgent question that is still being asked seven decades after philosopher/scientist Norbert Wiener first asked it. An answer is developed by assuming that human flourishing is the central ethical value, which other ethical values, and related principles, can be used to defend and advance. The upshot is that Flourishing Ethics can provide a common underlying ethical foundation for a wide diversity of cultures and communities around the globe; and the members of each specific culture or community can add their own specific cultural values—ones which they treasure, and which help them to make sense of their moral lives.





Backgrounders…

https://www.taylorfrancis.com/chapters/edit/10.4324/9781003280392-13/data-privacy-artificial-intelligence-ai-lars-erik-casper-ferm-sara-quach-park-thaichon

Data privacy and artificial intelligence (AI)

Artificial intelligence (AI) has disrupted the ways customers and firms interact. However, AI runs on data and, in the case of this chapter, customers’ personal informational data. Yet, even in the face of a new paradigm for AI and customers’ online behaviors, there is a need for clarification of key concepts and definitions in this domain. The main objective of this chapter is threefold. First, to provide a concrete definition of data privacy which will conceptually drive this chapter. Second, we focus on the three popular types of AI prevalent in the marketing domain (natural language processing, machine learning, and deep learning) and identify how they give way to AI data privacy issues. Third, this study will provide two case studies (Clearview AI and Hello Barbie) which document and explicate the means by which AI collects customer data and how this gives way to data privacy issues. Overall, this chapter will provide a conceptual alignment and understanding of the complex arena of AI and data privacy.



(Related)

https://www.taylorfrancis.com/chapters/edit/10.4324/9781003280392-14/solutions-artificial-intelligence-ai-privacy-lars-erik-casper-ferm-park-thaichon-sara-quach

Solutions to artificial intelligence (AI) and privacy

The main objective of this study is to understand the implications of artificial intelligence (AI) and its influence on data privacy. Via a series of case studies and discussions, this chapter strives to provide insights into data privacy issues facing marketers and customers in the digital space through their usage of AI. To achieve this, this chapter will align and extend AI and data privacy knowledge in two ways. First, by providing information on AI types (natural language processing, machine learning, and deep learning) and how/what customer data AI uses (e.g., segmentation and targeting, personalization, and customer service) along with a discussion of the accompanying arising data privacy issues. Then, this chapter will provide potential solutions to the problems presented and identified throughout this chapter (e.g., data value propositions, degree of personalization, and federated learning). The identified privacy issues and accompanying solutions within this chapter will hopefully aid the understanding of current and future marketing practitioners and academics in their use and understanding of AI.





One of my favorite topics for (heated) debate. (AI is only a tool?)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4220854

Untangling the Author/Inventor(Ship) Issues in the Artificial Intelligence-Intellectual Output

The growing sophistication and diffusion of Artificial Intelligence (AI) in creative tasks is triggering the assumption that the AI machine (rather than a human) should be considered the author/inventor of the intellectual output produced by or with the help of AI techniques. Due to this paradigm shift, the orthodox conceptions of authorship and inventorship under the intellectual property (IP) regime are being challenged in ways that have never been experienced before. Since most of the existing literature on IP policy has not examined the technological machinery of AI systems in depth, this study primarily investigates various technical concepts and aspects behind the working mechanism of an AI system. It provides a brief analysis of AI-human interaction in the production process of AI-intellectual output (AIIO) and points out that the human actor (designer, programmer, etc.) uses the AI system as a problem-solving tool to produce the desired intellectual output. Furthermore, the study examines authorship and inventorship norms under the current IP realm and posits that only the human actor should be considered the author and inventor of the AIIO.