Saturday, September 24, 2022

I was sure that everyone had safeguards built in by now… What a disappointment!

https://www.counton2.com/news/south-carolina-news/hackers-steal-south-carolina-fire-departments-paychecks/#:~:text=SPARTANBURG%20COUNTY%2C%20S.C.,report%20of%20fraud%2Fpayroll%20theft.

Hackers steal South Carolina fire department’s paychecks

Evatt told deputies that six members of the fire department failed to receive their direct deposit on Wednesday.

Deputies said they discovered that someone hacked and/or gained remote access to the Assistant Chief’s employee email and gained access to employee direct deposit information and payroll accounts.

The unknown subject(s) then edited the direct deposit information of the six employees resulting in their payroll earnings being deposited into reloadable pre-paid debit cards.

Greene Finney and Cauley, the CPA firm that manages payroll for the fire department, said the IP addresses tied to the illegal sign-ins to the assistant chief’s account were traced back to Nigeria, California and Florida.





Tools & Techniques. You might want to bookmark a couple…

https://www.popsci.com/diy/powerful-websites-you-should-know/

19 free online tools you’ll want to bookmark right now



Friday, September 23, 2022

A Colorado story I have not been following.

https://gizmodo.com/my-pillow-election-2020-trump-dominion-1849568341

MyPillow CEO Is Under Federal Investigation for Potential Ties to Colorado Election Security Breach

MyPillow CEO and proverbial yeller at clouds Mike Lindell is under investigation by the Department of Justice for potential identity theft and intent to damage a protected computer potentially connected to a 2020 Colorado voting equipment security breach.

Lindell’s legal team published a copy of the search and seizure warrant issued against him as part of Lindell’s broader lawsuit against the FBI and Justice Department seeking the return of his cell phone. FBI agents seized Lindell’s phone while he was sitting in a Hardee’s drive-through last week. According to The New York Times, Lindell claims those agents questioned him about his ties to a Colorado county clerk who’s facing an indictment for allegedly attempting to pull data from Dominion Voting Systems.





How do we measure bias? Is there a NYC standard for “un-bias?”

https://www.forbes.com/sites/lanceeliot/2022/09/23/ai-ethics-and-the-looming-debacle-when-that-new-york-city-law-requiring-ai-biases-audits-kicks-into-gear/?sh=5a32056ab5dd

AI Ethics And The Looming Debacle When That New York City Law Requiring Audits For AI Biases Kicks Into Gear

Let’s take a close look at a new law in New York City regarding Artificial Intelligence (AI) that will take effect on January 1, 2023. You could easily win a sizable bet that all manner of confusion, consternation, and troubles will arise once the law comes into force. Though the troubles are not by design, they will indubitably occur as a result of poor design, or at least an insufficient stipulation of necessary details that could and should easily have been devised and explicitly stated.
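So how do you measure bias? One metric auditors commonly report is a selection-rate impact ratio: each group’s selection rate divided by the most-favored group’s rate. A minimal sketch (the group names and numbers below are invented for illustration, not taken from the NYC law):

```python
# Hypothetical illustration of one bias metric an audit might report:
# the "impact ratio" -- a group's selection rate divided by the
# selection rate of the most-favored group. All data here is invented.

def selection_rate(outcomes):
    """Fraction of candidates selected (1 = selected, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def impact_ratios(groups):
    """Map each group to its selection rate over the best group's rate."""
    rates = {g: selection_rate(o) for g, o in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Invented screening outcomes for two demographic groups.
groups = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 selected -> rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],   # 3/8 selected -> rate 0.375
}
for g, ratio in impact_ratios(groups).items():
    print(g, round(ratio, 2))  # group_a 1.0, group_b 0.5
```

Under the old EEOC “four-fifths rule” heuristic, a ratio below 0.8 (as in this toy data) would flag adverse impact; whether the NYC law adopts that exact threshold is one of the underspecified details the article complains about.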



Thursday, September 22, 2022

Artificial Intelligence creates images that are Almost Indistinguishable from normal photos. (Could my AI sue to reverse this blatant discrimination?)

https://arstechnica.com/information-technology/2022/09/fearing-copyright-issues-getty-images-bans-ai-generated-artwork/

Fearing copyright issues, Getty Images bans AI-generated artwork

Getty sidesteps potential legal problems from unresolved rights and ethics issues.

Getty Images has banned the sale of AI generative artwork created using image synthesis models such as Stable Diffusion, DALL-E 2, and Midjourney through its service, The Verge reports.

To clarify the new policy, The Verge spoke with Getty Images CEO Craig Peters. "There are real concerns with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the imagery, the image metadata and those individuals contained within the imagery," Peters told the publication.

Getty Images is a large repository of stock and archival photographs and illustrations, often used by publications (such as Ars Technica) to illustrate articles after paying a license fee.

Getty's move follows image synthesis bans by smaller art community sites earlier this month, which found their sites flooded with AI-generated work that threatened to overwhelm artwork created without the use of those tools. Getty Images competitor Shutterstock allows AI-generated artwork on its site (and although Vice recently reported the site was removing AI artwork, we still see the same amount as before—and Shutterstock's content submission terms have not changed).



(Related)

https://gizmodo.com/dall-e-ai-openai-deep-fakes-image-generators-1849557604

DALL-E Users Can Now Upload and Edit Real Human Faces. What Could Possibly Go Wrong?

OpenAI believes it’s ready to start letting DALL-E users edit images of real human faces, a possibility previously blocked over concerns of potential sexual and political deepfakes proliferating from the AI.

In a letter to users on Monday spotted by TechCrunch and shared with Gizmodo, OpenAI said it would reintroduce the ability to upload and edit real human faces to its advanced AI image generator after building new detection and response techniques meant to prevent misuse and ultimately minimize “the potential of harm.” Users are reportedly still barred from uploading images of people without their consent as well as images they don’t have legal rights to.



(Related)

https://www.latimes.com/projects/artificial-intelligence-generated-art-ownership-bias-dall-e-midjourney/

How AI-generated art is changing the concept of art itself

This is one way that artificial intelligence can output a selection of images based on words and phrases one feeds it. The program gathers possible outputs from its dataset references that it learned from — typically pulled from the internet — to provide possible images.





This could be useful!

https://www.insideprivacy.com/artificial-intelligence/cnil-tests-tools-to-audit-ai-systems/

CNIL Tests Tools to Audit AI Systems

With the growing use of AI systems and the increasing complexity of the legal framework relating to such use, the need for appropriate methods and tools to audit AI systems is becoming more pressing both for professionals and for regulators. The French Supervisory Authority (“CNIL”) has recently tested tools that could potentially help its auditors understand the functioning of an AI system.

Overview of the tools tested by the CNIL
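The post doesn’t detail the tools, but a typical black-box audit technique is permutation importance: shuffle one input feature and measure how much the model’s accuracy drops. A hypothetical sketch (the model and data are invented, and this is not necessarily one of the tools the CNIL actually tested):

```python
import random

# Permutation importance: if shuffling a feature barely changes accuracy,
# the model isn't really using it. Toy model and data, invented here.

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature, trials=20, seed=0):
    """Average accuracy drop when the given feature column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, rows, labels)
    drops = []
    for _ in range(trials):
        col = [r[feature] for r in rows]
        rng.shuffle(col)
        shuffled = [r[:feature] + (v,) + r[feature + 1:]
                    for r, v in zip(rows, col)]
        drops.append(base - accuracy(model, shuffled, labels))
    return sum(drops) / trials

# Toy model: predicts 1 when feature 0 exceeds 0.5; ignores feature 1.
model = lambda r: 1 if r[0] > 0.5 else 0
rows = [(0.9, 0.1), (0.8, 0.9), (0.2, 0.8), (0.1, 0.2)]
labels = [model(r) for r in rows]

print(permutation_importance(model, rows, labels, feature=0))
# Feature 1 is ignored by the model, so its importance is exactly 0.0:
print(permutation_importance(model, rows, labels, feature=1))  # 0.0
```

An auditor could run this against a deployed system without access to its internals, which is precisely the position a regulator is usually in.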





Not surprising…

https://www.vice.com/en/article/y3pnkw/us-military-bought-mass-monitoring-augury-team-cymru-browsing-email-data

Revealed: US Military Bought Mass Monitoring Tool That Includes Internet Browsing, Email Data

Multiple branches of the U.S. military have bought access to a powerful internet monitoring tool that claims to cover over 90 percent of the world’s internet traffic, and which in some cases provides access to people’s email data, browsing history, and other information such as their sensitive internet cookies, according to contracting data and other documents reviewed by Motherboard.

Additionally, Sen. Ron Wyden says that a whistleblower has contacted his office concerning the alleged warrantless use and purchase of this data by NCIS, a civilian law enforcement agency that’s part of the Navy, after filing a complaint through the official reporting process with the Department of Defense, according to a copy of the letter shared by Wyden’s office with Motherboard.





Perspective.

https://www.bloomberg.com/news/articles/2022-09-22/what-is-a-chief-metaverse-officer-and-do-you-need-one

Chief Metaverse Officers Are Getting Million-Dollar Paydays. So What Do They Do All Day?

Disney, P&G, LVMH and other big names have invested in chief metaverse officers to plot a course through the next chapter of the internet. Do companies really need them?





Perspective. I doubt any of these are valid in the long term.

https://www.makeuseof.com/reasons-artificial-intelligence-cant-replace-humans/

6 Reasons Why Artificial Intelligence Can’t Replace Humans at Work





Tools & Techniques.

https://www.makeuseof.com/how-to-run-old-software-on-a-modern-pc-laptop/

How to Run Old Software on a Modern PC or Laptop

Need to retrieve data from some old documents or spreadsheets but can’t open them in modern apps? Have some old applications or games you want to run, but your computer refuses to install them? Perhaps the media is old, or taking up unnecessary space; you want to back up the files before the disks are ditched.

It’s over 40 years since the first home computers were sold. Many of us have several decades’ worth of digital data, much of which seems inaccessible. But with the right tools and software, it is possible to rescue old data and open it on current operating systems.

In fact, now is probably the best time to retrieve that data, before it is too late.





For the kids…

https://www.makeuseof.com/legal-sites-watch-cartoons-free/

The 7 Best Places to Watch Cartoons Online for Free (Legally)



Wednesday, September 21, 2022

Note that even huge breaches don’t make much of a ripple on the evening news.

https://www.databreaches.net/ask-fm-user-database-with-350m-user-records-has-shown-up-for-sale/

Ask.FM user database with 350m user records has shown up for sale

“I think it’s probably one of the biggest breaches in a long time, can’t think of any bigger ones,” Pompompurin, the owner of Breached.to, wrote when asked about a new for-sale listing that appeared on his forum.

A seller called “Data,” who Pompompurin says he will “vouch all day and night for” listed user data from Ask.FM (ASKfm), the social networking site.

“I’m selling the users database of Ask.fm and ask.com,” Data wrote. “For connoisseurs, you can also get 607 repositories plus their Gitlab, Jira, Confluence databases.”

There are about 350 million records in the database, with about 45 million of them using Single Sign-On login.

The fields in the user database include: “user_id, username, mail, hash, salt, fbid, twitterid, vkid, fbuid, iguid” and the hashes are reportedly crackable.
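Why would salted hashes be “crackable”? If the hash function is fast (say MD5, an assumption here; the article doesn’t name the algorithm), a per-record salt only forces the attacker to run the wordlist once per record. A toy sketch:

```python
import hashlib

# Why per-user salts alone don't stop cracking when the hash is fast:
# the attacker just runs the wordlist once per record. Assumes, for
# illustration only, that the leaked hashes are MD5(salt + password).

def crack(record, wordlist):
    """Return the wordlist entry matching the record's salted hash, or None."""
    salt, target = record["salt"], record["hash"]
    for guess in wordlist:
        if hashlib.md5((salt + guess).encode()).hexdigest() == target:
            return guess
    return None

salt = "a1b2"
record = {"salt": salt,
          "hash": hashlib.md5((salt + "sunshine").encode()).hexdigest()}
print(crack(record, ["password", "123456", "sunshine"]))  # sunshine
```

The defense is a deliberately slow, memory-hard function (bcrypt, scrypt, or Argon2), which makes every one of those guesses expensive instead of nearly free.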

Data, who joined the forum in March, also provided a list of repositories, and sample git and sample user data.

DataBreaches reached out to Data to ask some questions about when the data were acquired and how. DataBreaches also reached out to Ask.FM last night to ask them some questions.

Ask.FM didn’t reply to either of two inquiries over a 24-hour period, but Data did respond to this site’s questions, with two prefatory remarks. The first was to berate yours truly for having a protonmail account. The second was a request to please add “Marine Le Pen is a racist fraudster.”

Having dealt with those remarks, let’s turn to the clarification Data provided on the Ask.FM incident.

In response to the first query about initial access, Data replied that there was a vulnerability in Safety Center: the server contained a WordPress site on their ASKFM-NET network.

As to when the hack occurred, Data replied that the server was first accessed in 2019 and the database was obtained on 2020-03-14. Data provided this site with users on the Safety Center and wrote insultingly about a certain ‘lazy’ administrator who allegedly used the same password everywhere.

[Note: Data provided specific and technical details that DataBreaches is not reproducing in this post out of concern that they might encourage or enable others to re-attack Ask.FM. According to Data, Ask.FM is still vulnerable due to a poor response to the 2020 incident.

“Specific parts were taken in 2021, although they assumed the aggressors were kicked off,” Data wrote. “The buyer will get specific details on how piss easy it is to compromise the morons.”]

How easy is “piss easy,” you wonder? “Just need to open 10 source files and spot either a vulnerability or peek at the heavy password re-use,” Data told DataBreaches.

Ask.FM Knew But Kept Quiet?

When asked whether Ask.FM knew about the breach in 2020, Data was unequivocal in stating that they knew. Ask.FM noticed the March 2020 breach circa June 2020, Data claims, but “was apparently too busy laying off employees to give Answers to the attempt to contact them.”

Data’s claim that Ask.FM knew was based, in part, on Ask.FM burning some specific access the hackers had played around with, like several production AWS credentials provided to DataBreaches.

DataBreaches could find no media coverage or other indication that Ask.FM ever disclosed the March 2020 breach or notified users of it. If anyone ever received a notification about it, please contact DataBreaches. If Ask.FM replies to inquiries, this post will be updated.

Because Data invited contacts by private message, it’s not clear how many purchase offers they have received at this point, but they tell DataBreaches that they are now looking more at a single (exclusive) sale.

Updated 9/21/2022: Because there has still been no reply by AskFM, DataBreaches sent an inquiry to the Irish DPC asking whether AskFM ever reported the March 2020 incident to them under the GDPR. This post will be updated when a reply to that inquiry is received.





Will lawyers be asked to arrange abortions and will that communication be as vulnerable?

https://www.theregister.com/2022/09/20/encryption_abortion_data/

Meta, Twitter, Apple, Google urged to up encryption game in post-Roe America

Now that America has entered its post-Roe era, in which more than a dozen states have banned abortion, digital rights advocacy group Fight for the Future has called on tech companies to implement strong on-by-default end-to-end encryption (E2EE) across their messaging services to secure users' communications, and prevent conversations from being shared with police and others.

Crucially, campaigners want to ensure that people's chats discussing procedures outlawed at the state level can't be obtained by the cops and used to build a criminal case against them.

"When our messages are protected from interlopers, we can communicate freely, without the fear of being watched," said Caitlin Seeley George, Fight for the Future's campaigns and managing director, in a statement.
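The point of end-to-end encryption is that the service only relays public values; the key itself never leaves the two endpoints. A toy Diffie-Hellman sketch of that idea (the parameters are not a vetted cryptographic group and the XOR “cipher” is not a real one; actual E2EE uses audited protocols like Signal’s):

```python
import hashlib
import secrets

# Toy Diffie-Hellman sketch of the idea behind end-to-end encryption.
# NOT real crypto: it only shows that the relay (the server) carries
# nothing it can use to decrypt the message.

P = 2**127 - 1   # a Mersenne prime; fine for illustration, not for security
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1          # private exponent
    return priv, pow(G, priv, P)                 # (private, public)

def shared_key(my_priv, their_pub):
    """Both sides derive the same 32-byte key from the DH shared secret."""
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

def xor(key, data):
    """Toy cipher: XOR the message with the key (message must fit the key)."""
    return bytes(k ^ b for k, b in zip(key, data))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# The server relays only alice_pub and bob_pub; it cannot derive the key.
ciphertext = xor(shared_key(alice_priv, bob_pub), b"meet at noon")
print(xor(shared_key(bob_priv, alice_pub), ciphertext))  # b'meet at noon'
```

A subpoena served on the relay gets only the two public values and the ciphertext, which is exactly the property the campaigners are asking for by default.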





We had an effect? You’re welcome?

https://fpf.org/blog/the-colorado-effect-status-check-on-colorados-privacy-rulemaking/

THE “COLORADO EFFECT?” STATUS CHECK ON COLORADO’S PRIVACY RULEMAKING

Colorado is set to formally enter a rulemaking process which may establish de facto interpretations for privacy protections across the United States. With the passage of the Colorado Privacy Act (CPA) in 2021, Colorado, along with Virginia, Utah, and Connecticut, became part of an emerging group of states adopting privacy laws that share a similar framework and many core definitions with a legislative model developed (though never enacted) in Washington State. However, while the general model of legislation seen in the CPA is similar to recently enacted state privacy laws, the CPA stands alone in providing authority to the state Attorney General to issue regulations.

Because no other similar state law has provided for this type of interpretative authority, regulations issued by the Colorado Attorney General could have far-reaching implications for how both businesses and regulators in other jurisdictions come to interpret key state privacy rights and protections. Colorado’s pre-rulemaking process recently concluded, revealing a range of possible directions that formal rulemaking could take. Below, we assess key priorities and areas of significant divergence that have been brought into focus both through public comments from stakeholders and questions posed by the Attorney General.





Agreed, but I’m not sure that’s the solution.

https://www.scientificamerican.com/article/artificial-intelligence-needs-both-pragmatists-and-blue-sky-visionaries/#

Artificial Intelligence Needs Both Pragmatists and Blue-Sky Visionaries

Artificial intelligence thinkers seem to emerge from two communities. One is what I call blue-sky visionaries who speculate about the future possibilities of the technology, invoking utopian fantasies to generate excitement. Blue-sky ideas are compelling but are often clouded over by unrealistic visions and the ethical challenges of what can and should be built.

In contrast, what I call muddy-boots pragmatists are problem- and solution-focused. They want to reduce the harms that widely used AI-infused systems can create. They focus on fixing biased and flawed systems, such as in facial recognition systems that often mistakenly identify people as criminals or violate privacy. The pragmatists want to reduce deadly medical mistakes that AI can make, and steer self-driving cars to be safe-driving cars. Their goal is also to improve AI-based decisions about mortgage loans, college admissions, job hiring and parole granting.





Do you need to read cursive, or are you willing to trust an AI app on your phone to read it for you? “For sure and several years ago our fathers bought this continent, a new station, conceited and liberally dominated by the preposition that owl men are created evil.”

https://www.bespacific.com/gen-z-never-learned-to-read-cursive/

Gen Z Never Learned to Read Cursive – How will they interpret the past?

The Atlantic: “In 2010, cursive was omitted from the new national Common Core standards for K–12 education. The students in my class, and their peers, were then somewhere in elementary school. Handwriting instruction had already been declining as laptops and tablets and lessons in “keyboarding” assumed an ever more prominent place in the classroom. Most of my students remembered getting no more than a year or so of somewhat desultory cursive training, which was often pushed aside by a growing emphasis on “teaching to the test.” Now in college, they represent the vanguard of a cursiveless world. Although I was unaware of it at the time, the 2010 Common Core policy on cursive had generated an uproar. Jeremiads about the impending decline of civilization appeared in The Atlantic, The New Yorker, The New York Times, and elsewhere. Defenders of script argued variously that knowledge of cursive was “a basic right,” a key connection between hand and brain, an essential form of self-discipline, and a fundamental expression of identity. Its disappearance would represent a craven submission to “the tyranny of ‘relevance.’ ” In the future, cursive will have to be taught to scholars the way Elizabethan secretary hand or paleography is today. Within a decade, cursive’s embattled advocates had succeeded in passing measures requiring some sort of cursive instruction in more than 20 states. At the same time, the struggle for cursive became part of a growing, politicized nostalgia for a lost past. In 2016, Louisiana’s state senators reminded their constituents that the Declaration of Independence had been written in cursive and cried out “America!” as they unanimously voted to restore handwriting instruction across the state…”





Perspective.

https://www.schneier.com/blog/archives/2022/09/automatic-cheating-detection-in-human-racing.html

Automatic Cheating Detection in Human Racing

This is a fascinating glimpse of the future of automatic cheating detection in sports:

Maybe you heard about the truly insane false-start controversy in track and field? Devon Allen—a wide receiver for the Philadelphia Eagles—was disqualified from the 110-meter hurdles at the World Athletics Championships a few weeks ago for a false start.
Here’s the problem: You can’t see the false start. Nobody can see the false start. By sight, Allen most definitely does not leave before the gun.
But here’s the thing: World Athletics has determined that it is not possible for someone to push off the block within a tenth of a second of the gun without false starting. They have science that shows it is beyond human capabilities to react that fast. Of course there are those (I’m among them) who would tell you that’s nonsense, that’s pseudoscience, there’s no way that they can limit human capabilities like that. There is science that shows it is humanly impossible to hit a fastball. There was once science that showed human beings could not run a four-minute mile.
Besides, do you know what Devon Allen’s reaction time was? It was 0.099 seconds. One thousandth of a second too fast, according to World Athletics’ science. They’re THAT sure that 0.1 seconds—and EXACTLY 0.1 seconds—is the limit of human possibilities that they will disqualify an athlete who has trained his whole life for this moment because he reacted one thousandth of a second faster than they think possible?

We in the computer world are used to this sort of thing. “The computer is always right,” even when it’s obviously wrong. But now computers are leaving the world of keyboards and screens, and this sort of thing will become more pervasive. In sports, computer systems are used to detect when a ball is out of bounds in tennis and other games and when a pitch is a strike in baseball. I’m sure there’s more—are computers detecting first downs in football?—but I’m not enough of a sports person to know them.
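The disputed rule reduces to a single hard threshold, which is exactly why a 0.099-second reaction gets treated as cheating:

```python
# The World Athletics rule described above is just a hard cutoff:
# a reaction under 0.100 seconds is ruled a false start. Times are
# seconds from the gun, as measured by the starting-block sensors.

FALSE_START_THRESHOLD = 0.100

def is_false_start(reaction_time):
    return reaction_time < FALSE_START_THRESHOLD

print(is_false_start(0.099))  # True  -- Allen's reported time: disqualified
print(is_false_start(0.100))  # False -- one thousandth slower: legal
```

All of the scientific uncertainty about human limits is hidden inside that one constant, and the sensor’s measurement error is almost certainly larger than the 0.001-second margin that decided Allen’s race.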



Tuesday, September 20, 2022

Simple solution? Remove and destroy all storage devices.

https://www.ft.com/content/9aed6933-1c96-402e-a194-069c8ed3306c

Sensitive Morgan Stanley devices were auctioned off online, finds SEC

US regulators have fined Morgan Stanley $35mn for an “astonishing” failure to protect customer data, which resulted in some computer hardware containing sensitive client data being auctioned off online.

The US Securities and Exchange Commission said on Tuesday that the Wall Street bank’s wealth management business failed to protect information identifying around 15mn customers over a five-year period.

From at least 2015, the bank, which agreed to settle the charges without admitting or denying the accusations, failed to properly dispose of devices storing clients’ personal data, according to the SEC.

Morgan Stanley hired a moving company that did not specialise in discarding data and tasked it with disabling thousands of servers and hard drives, the agency said.

The moving company subsequently sold thousands of the bank’s devices, some of which contained customer data, to a third party before they were eventually resold on an online auction site. The bank has recovered some but not most of the equipment, the SEC said.





Could be interesting if this applied to politicians as well as companies…

https://www.insideprivacy.com/dark-patterns/new-ftc-report-on-dark-patterns/

New FTC Report on Dark Patterns

Last week, the FTC announced its release of a staff report discussing key topics from the April 29, 2021 workshop addressing dark patterns. The report states that the FTC will take action when companies employ dark patterns that violate existing laws, including the FTC Act, ROSCA, the TSR, TILA, CAN-SPAM, COPPA, ECOA, or other statutes and regulations enforced by the FTC. The report highlights examples of cases in which the FTC used its authority under these laws and regulations to bring enforcement actions against companies that allegedly used dark patterns. Accordingly, the report builds upon the FTC’s historical approach of using its existing authority to bring enforcement actions in this context.

… The term ‘dark patterns’ deserves a few words of explanation. It certainly sounds ominous – but as the report explains, not all dark patterns are unlawful. … While the use of this term may be relatively new and attention grabbing, at its core the term describes practices that have long been the focus of FTC enforcement actions. For example, the agency has prosecuted companies that used ads deceptively formatted to look like news articles to drive sales;… sued websites and apps that obscured or hid fees;… and challenged efforts by companies that prevented customers from canceling memberships… Rules of thumb and decision-making shortcuts have value. And companies legally can capitalize on common heuristics in ways that increase profits.





Is this really the next big thing?

https://www.cpomagazine.com/cyber-security/financing-computings-next-great-disruption/

Financing Computing’s Next Great Disruption

The quantum computing industry is slated to have a huge impact on humanity. Quantum computing has a new architecture using subatomic properties that could provide unfathomable power when combined with the existing computer infrastructure that we use today. According to McKinsey, quantum computing now has the potential to capture nearly $700 billion in value as early as 2035. As a result, funding from both private and public sources is pouring into the quantum computing industry.



Monday, September 19, 2022

More than just the obvious? (Pretty long range for civilian drones.)

https://www.cnn.com/2022/09/17/asia/taiwan-drones-china-gray-zone-warfare-intl-hnk-dst/index.html

A new threat from China faces Taiwan's military: Trolls with drones

The 15-second video clip is among a number of videos that have popped up recently on the Chinese social media site Weibo and show what appear to be civilian-grade drones trolling Taiwan's military. The island's military later confirmed these mysterious menaces are indeed civilian drones from mainland China.

The videos show detailed, drones'-eye footage of military installations and personnel on Taiwan's outlying Kinmen islands. Accompanied by soundtracks ranging from ballads to dance music and plenty of emojis, the clips seem designed to highlight the unpreparedness of Taiwan's troops.

Taiwan President Tsai Ing-wen has claimed the drone incursions are the latest ratcheting up of this pressure; a new front in China's "gray-zone" warfare tactics to intimidate the island. On September 1, after warning it would exercise its rights to self-defense, Taiwan shot down a drone for the first time.

But, provocative though the footage is, it is difficult to be sure exactly who is behind the drone incursions.

Beijing has brushed off the drone incursions as "no big deal." Questioned about civilian-grade drones flying in the Kinmen area, a spokesperson for China's Ministry of Foreign Affairs recently responded: "Chinese drones flying over China's territory – what's there to be surprised at?"

Fueling suspicions, China hasn't removed the videos from its otherwise highly censored internet or prevented the drones from traveling through its own highly controlled airspace.





Trendy new term, same old problems.

https://www.zdnet.com/article/what-is-ambient-computing-everything-you-need-to-know-about-the-rise-of-invisible-tech/

What is ambient computing? Everything you need to know about the rise of invisible tech

Ambient computing, also commonly referred to as ubiquitous computing, is the concept of blending computing power into our everyday lives in a way that is embedded into our surroundings - invisible but useful.

The goal is to reduce the friction involved in using tech, making it easier for users to take full advantage of technology without having to worry about keyboards and screens. Instead of having to interact directly with different computing devices to get desired results – for example, using your phone to make a phone call and your remote to turn on a TV – ambient computing allows all of your devices to work together seamlessly to fulfill your needs.





Perspective.

https://futurism.com/the-byte/experts-90-online-content-ai-generated

EXPERTS: 90% OF ONLINE CONTENT WILL BE AI-GENERATED BY 2026

"Don't believe everything you see on the Internet" has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance.



Sunday, September 18, 2022

I would suggest that the police car ‘talk’ to the autonomous vehicle and they exchange information. Why would they need to stop?

https://link.springer.com/chapter/10.1007/978-3-031-16474-3_7

Traffic Stops in the Age of Autonomous Vehicles

Autonomous vehicles have profound implications for laws governing police, searches and seizures, and privacy. Complicating matters, manufacturers are developing these vehicles at varying rates. Each level of vehicle automation, in turn, poses unique issues for law enforcement. Semi-autonomous (Levels 2 and 3) vehicles make it extremely difficult for police to distinguish between dangerous distracted driving and safe use of a vehicle’s autonomous capabilities. [Ask the car! Bob] Fully autonomous (Level 4 and 5) vehicles solve this problem but create a new one: the ability of criminals to use these vehicles to break the law with a low risk of detection. How and whether we solve these legal and law enforcement issues depends on the willingness of nations to adapt legal doctrines. This article explores the implications of autonomous vehicle stops and six possible solutions including: (1) restrictions on visibility obstructions, (2) restrictions on the use and purchase of fully autonomous vehicles, (3) laws requiring that users provide implied consent for suspicion-less traffic stops and searches, (4) creation of government checkpoints or pull-offs requiring autonomous vehicles to submit to brief stops and dog sniffs, (5) surveillance of data generated by these vehicles, and (6) opting to do nothing and allowing the coming changes to recalibrate the existing balance between law enforcement and citizens.





Forgetting for privacy? Should machines forget just because humans do?

https://ui.adsabs.harvard.edu/abs/2022arXiv220902299N/abstract

A Survey of Machine Unlearning

Computer systems hold a large amount of personal data over decades. On the one hand, such data abundance allows breakthroughs in artificial intelligence (AI), especially machine learning (ML) models. On the other hand, it can threaten the privacy of users and weaken the trust between humans and AI. Recent regulations require that private information about a user can be removed from computer systems in general and from ML models in particular upon request (e.g. the "right to be forgotten"). While removing data from back-end databases should be straightforward, it is not sufficient in the AI context as ML models often "remember" the old data. Existing adversarial attacks proved that we can learn private membership or attributes of the training data from the trained models. This phenomenon calls for a new paradigm, namely machine unlearning, to make ML models forget about particular data. It turns out that recent works on machine unlearning have not been able to solve the problem completely due to the lack of common frameworks and resources. In this survey paper, we seek to provide a thorough investigation of machine unlearning in its definitions, scenarios, mechanisms, and applications. Specifically, as a categorical collection of state-of-the-art research, we hope to provide a broad reference for those seeking a primer on machine unlearning and its various formulations, design requirements, removal requests, algorithms, and uses in a variety of ML applications. Furthermore, we hope to outline key findings and trends in the paradigm as well as highlight new areas of research that have yet to see the application of machine unlearning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for ML researchers as well as those seeking to innovate privacy technologies. Our resources are at https://github.com/tamlhp/awesome-machine-unlearning.
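A minimal illustration of exact unlearning (my sketch, not from the survey): when a model’s parameters are simple aggregates, such as a per-class mean, a user’s data point can be removed exactly, without retraining from scratch:

```python
# Exact unlearning sketch for a model whose parameters are aggregates:
# a per-class mean (centroid). Because the mean is a running sum plus a
# count, a learned point can be subtracted back out exactly.

class CentroidModel:
    def __init__(self):
        self.sums = {}    # class label -> per-feature sums
        self.counts = {}  # class label -> number of training points

    def learn(self, label, x):
        s = self.sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        self.counts[label] = self.counts.get(label, 0) + 1

    def unlearn(self, label, x):
        """Exactly remove a previously learned point from the model."""
        s = self.sums[label]
        for i, v in enumerate(x):
            s[i] -= v
        self.counts[label] -= 1

    def centroid(self, label):
        n = self.counts[label]
        return [v / n for v in self.sums[label]]

m = CentroidModel()
m.learn("spam", [1.0, 0.0])
m.learn("spam", [0.0, 1.0])
m.unlearn("spam", [0.0, 1.0])
print(m.centroid("spam"))  # [1.0, 0.0] -- as if the point was never seen
```

Deep networks don’t decompose this cleanly, which is why the survey’s approximate-unlearning methods (and open frameworks) matter: for most models, “forgetting” one record exactly would otherwise mean retraining on everything else.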





Words have power…

https://onlinelibrary.wiley.com/doi/full/10.1111/beer.12479

Ethical implications of text generation in the age of artificial intelligence

We are at a turning point in the debate on the ethics of Artificial Intelligence (AI) because we are witnessing the rise of general-purpose AI text agents such as GPT-3 that can generate large-scale highly refined content that appears to have been written by a human. Yet, a discussion on the ethical issues related to the blurring of the roles between humans and machines in the production of content in the business arena is lacking. In this conceptual paper, drawing on agenda setting theory and stakeholder theory, we challenge the current debate on the ethics of AI and aim to stimulate studies that develop research around three new challenges of AI text agents: automated mass manipulation and disinformation (i.e., fake agenda problem), massive low-quality content production (i.e., lowest denominator problem) and the creation of a growing buffer in the communication between stakeholders (i.e., the mediation problem).





Both must be ethical?

https://link.springer.com/article/10.1007/s00146-022-01545-5

AI and society: a virtue ethics approach

Advances in artificial intelligence and robotics stand to change many aspects of our lives, including our values. If trends continue as expected, many industries will undergo automation in the near future, calling into question whether we can still value the sense of identity and security our occupations once provided us with. Likewise, the advent of social robots driven by AI appears to be shifting the meaning of numerous long-standing values associated with interpersonal relationships, like friendship. Furthermore, powerful actors' and institutions' increasing reliance on AI to make decisions that may affect how people live their lives may have a significant impact on privacy, while also raising issues about algorithmic transparency and human control. In this paper, building and expanding on previous works, we will look at how the deployment of Artificial Intelligence technology may lead to changes in identity, security, and other crucial values (such as friendship, fairness, and privacy). We will discuss what challenges we may face in the process, while critically reflecting on whether such changes may be desirable. Finally, drawing on a series of considerations underlying virtue ethics, we will formulate a set of preliminary suggestions which, we hope, can be used to more carefully guide the future roll-out of AI technologies for human flourishing; that is, for social and moral good.





The best we can do is a Code of Conduct?

https://link.springer.com/article/10.1007/s10506-022-09330-x

Policing based on automatic facial recognition

Advances in technology have transformed and expanded the ways in which policing is run. One new manifestation is the mass acquisition and processing of private facial images via automatic facial recognition by the police: what we conceptualise as AFR-based policing. However, there is still a lack of clarity on the manner and extent to which this largely unregulated technology is used by law enforcement agencies, and on its impact on fundamental rights. Social understanding and involvement are still insufficient in the context of AFR technologies, which in turn affects social trust in, and the legitimacy and effectiveness of, intelligent governance. This article delineates the function creep of this new concept, identifying the individual and collective harms it engenders. A technological, contextual perspective on the function creep of AFR in policing will evidence the comprehensive creep of training datasets and learning algorithms, which have bypassed an unwitting public. We thus argue that individual harms to dignity, privacy and autonomy combine to constitute a form of cultural harm, impacting directly on individuals and society as a whole. While recognising the limitations of what the law can achieve, we conclude by considering options for redress and the creation of an enhanced regulatory and oversight framework model, or Code of Conduct, as a means of encouraging cultural change from prevailing police indifference to enforcing respect for the human rights potentially engaged. The imperative will be to strengthen the top-level design and technical support of AFR policing, imbuing it with the values implicit in the rule of law, democratisation and scientisation, to enhance public confidence and trust in AFR social governance, and to promote civilised social governance in AFR policing.





Perspective. The field is much broader than facial recognition.

https://ieeexplore.ieee.org/abstract/document/9881926

Machine Vision

This chapter describes machine vision, with a focus on object recognition: how a machine can recognize objects and react differently depending on the object class. Object recognition is further divided into image classification, object localization, and object detection. Compared with the traditional computer vision algorithmic approach, a convolutional neural network does not require defining object features and performing one-to-one matching; it offers better feature extraction and matching than algorithmic strategies. A gesture-based interface allows the user to control different devices using hand or body motion. The chapter introduces several important machine vision applications in different areas, including medical diagnosis, retail, and airport security. Retail also benefits from machine vision, which teaches the machine to recognize the items in images and videos. Facial recognition is an important airport security application, especially for passenger processing.
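The "one-to-one matching" the chapter contrasts with CNN feature learning can be made concrete with the classic algorithmic baseline: template matching by sum of squared differences, sliding a small template over an image and picking the best-scoring position. A minimal sketch with tiny hand-made grayscale "images" (all names and values are illustrative, not from the chapter):

```python
# Traditional algorithmic matching: slide a template over an image and
# score each position by sum of squared differences (SSD); lowest wins.
def ssd(patch, template):
    return sum((p - t) ** 2
               for row_p, row_t in zip(patch, template)
               for p, t in zip(row_p, row_t))

def match(image, template):
    # Return the top-left (row, col) of the best-matching window.
    th, tw = len(template), len(template[0])
    best_score, best_pos = None, None
    for i in range(len(image) - th + 1):
        for j in range(len(image[0]) - tw + 1):
            patch = [row[j:j + tw] for row in image[i:i + th]]
            score = ssd(patch, template)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
tpl = [[9, 8],
       [7, 9]]
assert match(img, tpl) == (1, 1)  # template found at row 1, col 1
```

The limitation the chapter points at is visible here: the template must be hand-defined and matches only near-identical pixels, whereas a CNN learns features that generalize across pose, lighting, and scale.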





Only a matter of time?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4213543

Human as a Matter of Law: How Courts Can Define Humanness in the Age of Artificial Intelligence

This Essay considers the ability of AI machines to perform intellectual functions long associated with human higher mental faculties as a form of sapience, a notion that describes their abilities more fruitfully than either intelligence or sentience. Using a transdisciplinary methodology, including philosophy of mind, moral philosophy, linguistics and neuroscience, the essay aims to situate the difference in law between human and machine in a way that a court of law could operationalize. This is not a purely theoretical exercise. Courts have already started to make that distinction, and making it correctly will likely become gradually more important as humans become more like machines (cyborgs, cobots) and machines more like humans (neural networks, robots with biological material). The essay draws a line that separates human and machine using the way in which humans think, a way that machines may mimic and possibly emulate but are unlikely ever to make their own.