Saturday, September 17, 2022

I noticed something at the end of this article that I had not seen before…

https://www.yaktrinews.com/yakima-police-use-ai-powered-license-plate-readers-to-find-suspects-cars-in-real-time/

Yakima police use AI-powered license plate readers to find suspects’ cars in real time

Murray said if community members or local businesses want to help make the city safer, they can get together to purchase their own Flock cameras for their neighborhoods.

“As long as you share that data with us, it is exponentially more than what we’re able to afford on our own,” Murray said.

Flock cameras can be purchased and installed for a $350 fee and customers pay an annual fee of $2,500 per camera.
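The pricing quoted above implies the subscription, not the hardware, dominates the cost. A quick sketch of the arithmetic (the camera counts and time spans below are just illustrative):

```python
# Cost model using the figures quoted above: a one-time $350
# installation fee plus a $2,500 annual fee, per camera.
INSTALL_FEE = 350
ANNUAL_FEE = 2500

def total_cost(cameras: int, years: int) -> int:
    """Total cost for a neighborhood running `cameras` cameras for `years` years."""
    return cameras * (INSTALL_FEE + ANNUAL_FEE * years)

print(total_cost(1, 3))  # one camera over three years -> 7850
print(total_cost(4, 1))  # four cameras, first year -> 11400
```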





When you need your news right now!

https://www.makeuseof.com/tag/free-internet-tv-channels-watch-online/

15 Free Internet TV Channels You Can Watch Online

Here are the best internet TV channels to watch online, all of which are both free and legal.



Friday, September 16, 2022

I think the more likely scenario is that AI will ignore us.

https://bgr.com/science/new-paper-by-google-and-oxford-scientists-claims-ai-will-soon-destroy-mankind/

New paper by Google and Oxford scientists claims AI will soon destroy mankind

Researchers with the University of Oxford and Google DeepMind have shared a chilling warning in a new paper. The paper, which was published in AI Magazine last month, claims that the threat of AI is greater than previously believed. It’s so great, in fact, that artificial intelligence is likely to one day rise up and annihilate humankind.



(Related) Unfortunately, neither are the human supervisors.

https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions

AI Isn’t Ready to Make Unsupervised Decisions

AI has progressed to compete with the best of the human brain in many areas, often with stunning accuracy, quality, and speed. But can AI introduce the more subjective experiences, feelings, and empathy that make our world a better place to live and work, without cold, calculating judgment? Hopefully, but that remains to be seen. The bottom line is that AI is based on algorithms that respond to models and data, often misses the big picture, and most times can’t explain the reasoning behind a decision. It isn’t ready to assume human qualities that emphasize empathy, ethics, and morality.





Are things that different at the border?

https://www.pogowasright.org/customs-officials-have-copied-americans-phone-data-at-massive-scale/

Customs officials have copied Americans’ phone data at massive scale

Drew Harwell reports:

U.S. government officials are adding data from as many as 10,000 electronic devices each year to a massive database they’ve compiled from cellphones, iPads and computers seized from travelers at the country’s airports, seaports and border crossings, leaders of Customs and Border Protection told congressional staff in a briefing this summer.
The rapid expansion of the database and the ability of 2,700 CBP officers to access it without a warrant — two details not previously known about the database — have raised alarms in Congress about what use the government has made of the information, much of which is captured from people not suspected of any crime. CBP officials told congressional staff the data is maintained for 15 years.

Read more at The Washington Post.





Am I correct to say we no longer need actual harm, but rather proof that the risk of future harm is “sufficient”? (What does “sufficient” mean?)

https://www.insideprivacy.com/united-states/litigation/data-breach-and-the-dark-web-third-circuit-allows-class-action-standing-with-sufficient-risk-of-harm/

Data Breach and the Dark Web: Third Circuit Allows Class Action Standing With Sufficient Risk of Harm

In a new post on the Inside Class Actions blog, our colleagues discuss a recent Third Circuit decision reinstating the putative class action Clemens v. ExecuPharm Inc., concluding there was sufficient risk of imminent harm after a data breach to confer standing on the named plaintiff when the information had been posted on the Dark Web.





Will AI produce a rebuttal? Is everything they write an attempt to “fool” people? Can AI contribute nothing to our understanding of law?

https://www.bespacific.com/a-human-being-wrote-this-law-review-article/

A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law

Cyphert, Amy, A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law (November 1, 2021). UC Davis Law Review, Volume 55, Issue 1, WVU College of Law Research Paper No. 2022-02, Available at SSRN: https://ssrn.com/abstract=3973961

“Artificial intelligence tools can now “write” in such a sophisticated manner that they fool people into believing that a human wrote the text. None are better at writing than GPT-3, released in 2020 for beta testing and coming to commercial markets in 2021. GPT-3 was trained on a massive dataset that included scrapes of language from sources ranging from the NYTimes to Reddit boards. And so, it comes as no surprise that researchers have already documented incidences of bias where GPT-3 spews toxic language. But because GPT-3 is so good at “writing,” and can be easily trained to write in a specific voice — from classic Shakespeare to Taylor Swift — it is poised for wide adoption in the field of law. This Article explores the ethical considerations that will follow from GPT-3’s introduction into lawyers’ practices. GPT-3 is new, but the use of AI in the field of law is not. AI has already thoroughly suffused the practice of law. GPT-3 is likely to take hold as well, generating some early excitement that it and other AI tools could help close the access to justice gap. That excitement should nevertheless be tempered with a realistic assessment of GPT-3’s tendency to produce biased outputs. As amended, the Model Rules of Professional Conduct acknowledge the impact of technology on the profession and provide some guard rails for its use by lawyers. This Article is the first to apply the current guidance to GPT-3, concluding that it is inadequate. I examine three specific Model Rules — Rule 1.1 (Competence), Rule 5.3 (Supervision of Nonlawyer Assistance), and Rule 8.4(g) (Bias) — and propose amendments that focus lawyers on their duties and require them to regularly educate themselves about pros and cons of using AI to ensure the ethical use of this emerging technology.”





Perspective.

https://www.bespacific.com/blackboxing-law-by-algorithm/

Blackboxing Law by Algorithm

Grigoleit, Hans Christoph, Blackboxing Law by Algorithm (June 16, 2022). Speech delivered at Oxford Business Law Blog Annual Conference on June 16, 2022

This post is part of a special series including contributions to the OBLB Annual Conference 2022 on ‘Personalized Law—Law by Algorithm’, held in Oxford on 16 June 2022. This post comes from Hans Christoph Grigoleit, who participated on the panel on ‘Law by Algorithm’. “Adapting a line by the ingenious pop-lyricist Paul Simon, there are probably 50 ways to leave the traditional paths of legal problem solving by making use of algorithms. However, it seems that the law lags behind other fields of society in realizing synergies resulting from the use of algorithms. In their book ‘Law by Algorithm’, Horst Eidenmüller and Gerhard Wagner accentuate this hesitance in a paradigmatic way: while the chapter on ‘Arbitration’ is optimistic regarding the use of algorithms in law (‘… nothing that fundamentally requires human control …’), the authors’ view turns much more pessimistic when trying to specify the perspective of the ‘digital judge’. Following up on this ambivalence, I would like to share some observations on where and why it is not so simple to bring together algorithms and legal problem solving.”



Wednesday, September 14, 2022

I find little new in security as we migrate from mainframes to mini computers to personal computers and to smartphones. I also find that we forget the lessons learned in previous generations.

https://www.csoonline.com/article/3673313/one-third-of-enterprises-dont-encrypt-sensitive-data-in-the-cloud.html#tk.rss_all

One-third of enterprises don’t encrypt sensitive data in the cloud

While most organizations list cloud security as one of their top IT priorities, they continue to ignore basic security hygiene when it comes to data in the cloud, according to Orca’s latest public cloud security report. The report revealed that 36% of organizations have unencrypted sensitive data such as company secrets and personally identifiable information in their cloud assets.
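The finding above is about basic hygiene: knowing whether sensitive data is sitting in cloud assets as plaintext at all. A minimal sketch of one piece of that hygiene — scanning cloud-bound text for common PII patterns before upload. The regexes and sample data here are illustrative assumptions, not Orca’s methodology:

```python
import re

# Illustrative PII patterns; production scanners are far more thorough.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of PII patterns found in a blob of plaintext."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

sample = "Contact jane@example.com, SSN 123-45-6789."
print(scan_text(sample))  # ['ssn', 'email']
```

Anything the scan flags should be encrypted client-side (or tokenized) before it ever reaches a cloud bucket.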





Do you trust every text you receive?

https://www.nbcnews.com/tech/security/disinformation-text-message-problem-answers-rcna41997

Disinformation via text message is a problem with few answers

While there’s now a cottage industry and federal agencies that target election disinformation when it’s on social media, there’s no comparable effort for texts.

The biggest election disinformation event of the 2022 midterm primaries was not an elaborate Russian troll scheme that played out on Twitter or Facebook. It was some text messages.

The night before Kansans were set to vote on a historic statewide referendum last month, voters saw a lie about how to vote pop up on their phones. A blast of old-fashioned text messages falsely told them that a “yes” vote protected abortion access in their state, when the opposite was true — a yes vote would cut abortion protections from the state’s constitution.





Makes me wonder how large their budget is for the fines they must know will follow their business models.

https://www.reuters.com/technology/skorea-fines-google-meta-over-accusations-privacy-law-violations-yonhap-2022-09-14/

S.Korea fines Google, Meta billions of won for privacy violations





A new forensic tool and an aid to stalkers everywhere… You can find anything in a sufficiently large (i.e., comprehensive) database.

https://petapixel.com/2022/09/13/ai-searches-public-cameras-to-find-instagram-photos-as-they-are-taken/

AI Searches Public Cameras to Find When Instagram Photos Were Taken

Dries Depoorter has created an artificial intelligence (AI) software that searches public camera feeds against Instagram posts to find the moment that a photo was taken.

The Belgian artist has posted a video of his remarkable project that he calls The Follower [As in stalker? Bob] which he began by recording open cameras that are public and broadcast live on websites such as EarthCam.

After that, he scraped all Instagram photos tagged with the locations of the open cameras and then used AI software to cross-reference the Instagram photos with the recorded footage. He trained the software to scan through the footage and make matches with the Instagram photos he had scraped, and it worked amazingly well.
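The pipeline described — scrape geotagged posts, then cross-reference them against recorded feeds — can be sketched in miniature. This is only the metadata-matching step that narrows candidates by camera and time; the data layout and the one-hour window are assumptions, and Depoorter’s actual project then does image-level matching on the footage itself:

```python
from dataclasses import dataclass

@dataclass
class Post:
    user: str
    cam_id: str       # camera location tag scraped with the post
    posted_at: float  # epoch seconds

@dataclass
class Frame:
    cam_id: str
    captured_at: float

def match_posts_to_frames(posts, frames, window_s=3600):
    """Pair each post with frames from the same tagged camera captured within
    `window_s` seconds before the post went up. These candidate pairs would
    then be confirmed by comparing the photo against the recorded frame."""
    matches = []
    for post in posts:
        for frame in frames:
            if (frame.cam_id == post.cam_id
                    and 0 <= post.posted_at - frame.captured_at <= window_s):
                matches.append((post.user, frame.captured_at))
    return matches

posts = [Post("alice", "times_square", 1000.0)]
frames = [Frame("times_square", 400.0), Frame("dublin", 500.0)]
print(match_posts_to_frames(posts, frames))  # [('alice', 400.0)]
```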





Only 3?

https://venturebeat.com/ai/3-essential-abilities-ai-is-missing/

3 essential abilities AI is missing

Throughout the past decade, deep learning has come a long way from a promising field of artificial intelligence (AI) research to a mainstay of many applications. However, despite progress in deep learning, some of its problems have not gone away. Among them are three essential abilities:

to understand concepts,

to form abstractions, and

to draw analogies,

according to Melanie Mitchell, professor at the Santa Fe Institute and author of “Artificial Intelligence: A Guide for Thinking Humans.”

During a recent seminar at the Institute of Advanced Research in Artificial Intelligence, Mitchell explained why abstraction and analogy are the keys to creating robust AI systems. While the notion of abstraction has been around since the term “artificial intelligence” was coined in 1955, this area has largely remained understudied, Mitchell says.





Eventually an AI resource!

https://www.bespacific.com/free-law-project-collaborates-with-vlex-to-launch-complete-updated-and-audited-open-case-law-database/

Free Law Project Collaborates with vLex to Launch Complete, Updated, and Audited Open Case Law Database

Free Law Project: “In early 2010, we launched CourtListener.com. It wasn’t much, but it was a start. It could scrape decisions off court websites and whenever it found new decisions, it would send alerts by email and make the documents publicly searchable. Pretty soon, our ambition grew. We came to believe that an open database of legal opinions was an important part of the democratic experiment. Wikipedia said it wanted to provide access to “the sum of all human knowledge.” We believed we could provide the sum of all case law information. Over the last twelve years, we’ve worked on this problem. We expanded to state courts and set up scrapers so we had more data coming in each day. We added bulk data downloads, the first-ever legal opinion API, and database replication, so people could get the data out. We added historical data from the Library of Congress, Public.Resource.Org, Lawbox, Inc., Justia, Harvard Law Library, and other sources. Recently, we even added a system for courts to add their decisions directly to CourtListener. It’s been a long, expensive and Sisyphean effort, but, like others working to open this data, we’ve never lost sight of how important it is to the country, nor of the impact it would make on the legal sector if we could pull it off. Today, through our collaboration with vLex, we take another big step.

With their financial and technical support, we aim to finish collecting every precedential legal decision from both the federal courts and the state appellate courts. Once collected, we will clean up this data and audit our collection for completion. We will enhance our citation finder and our database of courts so they are complete. Finally, by collaborating with others in this effort, we aim to do the hard work of adding citations to our database as they are published in regional and federal reporters…”





Not unusual. I found the same thing in the equestrian world. There are still a few firms that make saddles and buggy whips, and make a lot of money doing it.

https://www.bespacific.com/we-spoke-with-the-last-person-standing-in-the-floppy-disk-business/

We Spoke With the Last Person Standing in the Floppy Disk Business

AIGA Eye on Design: “Turns out the obsolete floppy is way more in demand than you’d think. Tom Persky is the self-proclaimed “last man standing in the floppy disk business.” He is the time-honored founder of floppydisk.com, a US-based company dedicated to the selling and recycling of floppy disks. Other services include disk transfers, a recycling program, and selling used and/or broken floppy disks to artists around the world. All of this makes floppydisk.com a key player in the small yet profitable contemporary floppy scene. While putting together the manuscript for our new book, Floppy Disk Fever: The Curious Afterlives of a Flexible Medium, we met with Tom to discuss the current state of the floppy disk industry and the perks and challenges of running a business like his in the 2020s. What has changed in this era, and what remains the same?…

In the beginning, I figured we would do floppy disks, but never CDs. Eventually, we got into CDs and I said we’d never do DVDs. A couple of years went by and I started duplicating DVDs. Now I’m also duplicating USB drives. You can see from this conversation that I’m not exactly a person with great vision. I just follow what our customers want us to do. When people ask me: “Why are you into floppy disks today?” the answer is: “Because I forgot to get out of the business.” Everybody else in the world looked at the future and came to the conclusion that this was a dying industry. Because I’d already bought all my equipment and inventory, I thought I’d just keep this revenue stream. I stuck with it and didn’t try to expand. Over time, the total number of floppy users has gone down. However, the number of people who provided the product went down even faster. If you look at those two curves, you see that there is a growing market share for the last man standing in the business, and that man is me…”



Tuesday, September 13, 2022

I have never endorsed amateur ‘hack back’ schemes. It’s risky even for the professionals.

https://www.csoonline.com/article/3673090/u-s-government-offensive-cybersecurity-actions-tied-to-defensive-demands.html#tk.rss_all

U.S. government offensive cybersecurity actions tied to defensive demands

Offensive cyber operations are best known as acts of digital harm, mainly in the context of cyber “warfare,” with nation-states, particularly intelligence organizations, serving as the primary actors. But, as experts and officials speaking at the Billington Cybersecurity Summit this year attest, “offensive cyber” is also a term increasingly applied to the growing use of digital tools and methods deployed by various arms of the federal government, often in partnership with private sector parties, to snuff out threats or help victims of ransomware actors proactively.

These officials and experts say that, for the most part, offensive cyber, if done right and with collaboration among the necessary partners, can lay the groundwork for more robust public and private sector defense. The downside, however, is that a possible misfired offensive hack can cause collateral damage among innocent parties, possibly sparking dangerous real-world responses.

Although the U.S. National Security Agency (NSA) has long engaged in offensive cyber operations, U.S. Cyber Command, an arm of the U.S. military founded in 2010 that is closely linked to NSA, has only recently become a visible player in this arena. In 2018, the U.S. Department of Defense (DoD) published a Cyber Strategy summary introducing a new concept called “defense forward.” The summary states that DoD will “defend forward to disrupt or halt malicious cyber activity at its source, including activity that falls below the level of armed conflict.”

It marked a radical shift in the military’s strategic posture and signaled that the U.S. would not wait until a malicious cyber act occurred before taking action. As legal scholar Bobby Chesney put it, “Defense forward entails operations that are intended to have a disruptive or even destructive effect on an external network: either the adversary’s own system or, more likely, a midpoint system in a third country that the adversary has employed or is planning to employ for a hostile action.”



(Related)

https://thehackernews.com/2022/09/china-accuses-nsas-tao-unit-of-hacking.html

China Accuses NSA's TAO Unit of Hacking its Military Research University

China has accused the U.S. National Security Agency (NSA) of conducting a string of cyberattacks aimed at aeronautical and military research-oriented Northwestern Polytechnical University in the city of Xi'an in June 2022.

The National Computer Virus Emergency Response Centre (NCVERC) disclosed its findings last week, and accused the Office of Tailored Access Operations (TAO), a cyber-warfare intelligence-gathering unit of the National Security Agency (NSA), of orchestrating thousands of attacks against the entities located within the country.





Clearly we need an AI lawyer to make sense of this.

https://thenextweb.com/news/what-does-europes-approach-data-privacy-mean-for-gpt-and-dall-e

What does Europe’s approach to data privacy mean for GPT and DALL-E?

Let's examine the gray areas of data privacy and ownership

GDPR’s primary purpose is to protect European citizens from harmful actions and consequences related to the misuse, abuse, or exploitation of their private information. It’s not much use to citizens (or organizations) when it comes to protecting intellectual property (IP).

Unfortunately, the policies and regulations put in place to protect IP are, to the best of our knowledge, not equipped to cover data scraping and anonymization. That makes it difficult to understand exactly where the regulations apply when it comes to scraping the web for content.





We need an answer.

https://www.axios.com/2022/09/12/ai-images-ethics-dall-e-2-stable-diffusion

AI-generated images open multiple cans of worms

Machine-learning programs that can produce sometimes jaw-dropping images from brief text prompts have advanced in a matter of months from a "that's quite a trick" stage to a genuine cultural disruption.

These new AI capabilities confront the world with a mountain of questions over the rights to the images the programs learned from, the likelihood they will be used to spread falsehoods and hate, the ownership of their output and the nature of creativity itself.





Tools & Techniques.

https://www.bespacific.com/how-to-search-tweets-by-location/

How To Search Tweets By Location

Advos: “Ever wonder who the people are that are near your location and tweeting with a certain topic or hashtag? Maybe not, unless you are a Twitter nerd like me. But it can be interesting and if done correctly, improve your social marketing. Here is how to do it …”
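One concrete way to do this is Twitter’s classic geocode: search operator, which takes a latitude, longitude, and radius and can be combined with a keyword or hashtag in an ordinary search URL. A sketch of building such a query — the coordinates (roughly Yakima, WA) and hashtag are just examples, and the operator’s availability is subject to Twitter’s current search behavior:

```python
from urllib.parse import urlencode

def location_search_url(query: str, lat: float, lon: float, radius_km: float) -> str:
    """Build a Twitter search URL combining a keyword with the geocode: operator."""
    q = f"{query} geocode:{lat},{lon},{radius_km}km"
    return "https://twitter.com/search?" + urlencode({"q": q, "f": "live"})

url = location_search_url("#coffee", 46.6021, -120.5059, 10)
print(url)
```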



Monday, September 12, 2022

Any manufacturer of hardware that is not making a similar shift is doomed.

https://www.wsj.com/articles/deere-invests-billions-in-self-driving-tractors-smart-crop-sprayers-11662904802?mod=djemalertNEWS

Deere Invests Billions in Self-Driving Tractors, Smart Crop Sprayers

For decades, Deere & Co. has dominated the hardware that powers the American farm industry with tractors, harvesters and other machinery used to plant seeds and reap crops.

Now, Deere aims to extend its dominance to software to make those machines—and agriculture—more efficient and productive.

The company this year is rolling out self-driving tractors that can plow fields by themselves, and sprayers that distinguish weeds from crops. Deere, which helped make satellite-guided tractors ubiquitous in the U.S. Farm Belt over the past 20 years, is investing billions of dollars to develop smarter machines that the company said will make farming faster and more efficient than it ever could be with just farmers behind the wheel.

“It’s all about doing more with less,” said John May, Deere’s chief executive.

By the end of the decade, Mr. May projects that 10% of Deere’s annual revenue will come from fees for using software.





Just the facts…

https://www.pogowasright.org/fact-sheet-on-the-ftcs-commercial-surveillance-and-data-security-rulemaking/

Fact Sheet on the FTC’s Commercial Surveillance and Data Security Rulemaking

From the FTC:

Commercial surveillance is the business of collecting, analyzing, and profiting from information about people. Technologies essential to everyday life also enable near constant surveillance of people’s private lives. The volume of data collected exposes people to identity thieves and hackers. Mass surveillance has heightened the risks and stakes of errors, deception, manipulation, and other abuses. The Federal Trade Commission (FTC) is asking the public to weigh in on whether new rules are needed to protect people’s privacy and information in the commercial surveillance economy.

Access the full fact sheet here: https://www.ftc.gov/system/files/ftc_gov/pdf/Commercial%20Surveillance%20and%20Data%20Security%20Rulemaking%20Fact%20Sheet_1.pdf





Vulnerable to ‘tailgating’ like any access key?

https://www.theguardian.com/australia-news/2022/sep/06/sydney-schools-use-of-fingerprint-scanners-in-toilets-an-invasion-of-privacy-expert-says

Sydney school’s use of fingerprint scanners in toilets an invasion of privacy, expert says

A Sydney high school’s decision to install fingerprint scanners at the entrance to toilets to track student movements and prevent vandalism has been criticised as “unreasonable and disproportionate” by a privacy expert.

Moorebank high school moved to install the scanners in term three, with the school’s principal, Vally Grego, telling parents it was a measure intended to reduce vandalism.

… “Students should have the right to go to the bathroom without having their biometric information collected, and [their] movements constantly monitored,” she said.





Note the alternatives that allow you to be an authorized stalker.

https://www.makeuseof.com/free-phone-tracker-app-without-permission/

5 Reasons Not to Install a Free Phone Tracker App Without Permission

Looking for the best free phone tracker app to use without someone's permission? You might want to reconsider...



Sunday, September 11, 2022

Is this the path to AI personhood? (Could you sentence an AI to “Life?”)

https://cyberleninka.ru/article/n/criminal-liability-for-actions-of-artificial-intelligence-approach-of-russia-and-china

CRIMINAL LIABILITY FOR ACTIONS OF ARTIFICIAL INTELLIGENCE: APPROACH OF RUSSIA AND CHINA

In the era of artificial intelligence (AI) it is necessary not only to define precisely in national legislation the extent of protection of personal information and the limits of its rational use by others, to improve data algorithms, and to create ethics committees to control risks, but also to establish precise liability (including criminal liability) for violations related to AI agents. Under the existing criminal law of Russia and of the People’s Republic of China, AI crimes can be divided into three types: crimes that can be regulated by existing criminal law; crimes that are regulated inadequately by existing criminal law; and crimes that cannot be regulated by existing criminal law. The solution to the problem of criminal liability for AI crimes should depend on the AI agent’s capacity to influence a human’s ability to understand the public danger of an action and to govern his activity or omission. If a machine integrates with an individual but does not influence his ability to recognize or to make decisions, the individual is liable to prosecution. If a machine partially influences a human’s ability to recognize or to make decisions, the engineers, designers, and units of combination should be prosecuted according to a principle of relatively strict liability. In the case when an AI machine integrates with an individual and controls his ability to recognize or to make decisions, the individual should be released from criminal prosecution.





Has the pendulum swung too far?

https://www.wsj.com/articles/trial-of-former-uber-executive-has-security-officials-worried-about-liability-for-hacks-11662638107?mod=djemalertNEWS

Trial of Former Uber Executive Has Security Officials Worried About Liability for Hacks

Joe Sullivan, a former federal prosecutor, is accused of helping to cover up a security breach, a charge he denies

The federal trial of a former Uber Technologies Inc. executive over a 2016 hack has raised concerns among cybersecurity professionals about the liability they might face as they confront attackers or seek to negotiate with them.

Joseph Sullivan, the former executive, is facing criminal obstruction charges in a trial that began Wednesday in San Francisco for his role in paying hackers who claimed to have discovered a security vulnerability within Uber’s systems.

Federal prosecutors have charged Mr. Sullivan with criminal obstruction, alleging that he helped orchestrate a coverup of the security breach and sought to conceal it to avoid required disclosures.





AI will not be as friendly (or as smart) as Judge Judy!

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4206664

AI, Can You Hear Me? Promoting Procedural Due Process in Government Use of Artificial Intelligence Technologies

This Article explores the constitutional implications of algorithms, machine learning, and Artificial Intelligence (AI) in legal processes and decision-making, particularly under the Due Process Clause. Regarding Judge Henry J. Friendly’s procedural due process principles of the U.S. Constitution, decisions produced using AI appear to violate all but one or two of them. For instance, AI systems may provide the right to present evidence and notice of the proposed action, but do not provide any opportunity for meaningful cross-examination, knowledge of opposing evidence, or the true reasoning behind a decision. Notice can also be inadequate or even incomprehensible. This Article analyzes the challenges of complying with procedural due process when employing AI systems, explains constraints on computer-assisted legal decision-making, and evaluates policies for fair AI processes in other jurisdictions, including the European Union (EU) and the United Kingdom (UK). Building on existing literature, it explores the various stages in the AI development process, noting the different points at which bias may occur, thereby undermining procedural due process principles. Furthermore, it discusses the key variables at the heart of AI machine learning models and proposes a framework for responsible AI designs. Finally, this Article concludes with recommendations to promote the interests of justice in the United States as the technology develops.





People for the Ethical Treatment of Fish? Automating bias: determining outcomes by how the salmon looks?

https://www.wageningenacademic.com/doi/abs/10.3920/978-90-8686-939-8_73

Ethics through technology – individuation of farmed salmon by facial recognition

One fundamental element in our moral duties to sentient animals, according to some central ethical approaches, is to treat them as individuals that are morally significant for their own sake. This is impossible in large-scale industrial salmon aquaculture due to the number of animals and their inaccessibility under the surface. Reducing the numbers to ensure individual care would make salmon farming economically unfeasible. Technology may provide alternative solutions. FishNet is an emerging facial recognition technology which allows caretakers to monitor behaviour and health of individual fish. We argue that FishNet may be a solution for ensuring adequate animal welfare by overcoming current obstacles to monitoring and avoid stress caused by physical interaction with humans. This surveillance can also expand our knowledge of farmed fish behaviour, physical and social needs. Thus, we may learn to perceive them as fellow creatures deserving of individual care and respect, ultimately altering the industry practices. However, the technology may serve as a deflection, covering up how individual salmon are doomed to adverse and abnormal behaviour. This may strengthen a paradigm of salmon as biomass, preventing the compassion required for moral reform, where the understanding of fish welfare is restricted to the prevention of suffering as a means to ensure quality products. Whether FishNet will contribute to meet the moral duty to recognize and treat farmed fish as individuals or not, requires reflection upon the ethical dualities of this technology, simultaneously enabling and constraining our moral perceptions and freedom to act. We will discuss the conditions for realizing the ethical potential of this technology.





I wonder how easily this translates to humans listening to misinformation? (Should we test every AI?) Could we design ‘self testing’ into our AI?

https://www.the-sun.com/tech/6158380/psychopath-ai-scientists-content-dark-web/

‘Psychopath AI’ created by scientists who fed it content from ‘darkest corners of web’

PSYCHOPATHIC AI was created by scientists who fed it dark content from the web, a resurfaced study reveals.

In 2018, MIT scientists developed an AI dubbed 'Norman', after the character Norman Bates in Alfred Hitchcock’s cult classic Psycho, per BBC.

The aim of this experiment was to see how training AI on data from "the dark corners of the net" would alter its viewpoints.

'Norman' was pumped with continuous image captions from macabre Reddit groups that share death and gore content.

And this resulted in the AI meeting traditional ‘psychopath’ criteria, per psychiatrists.

This led to insight for the MIT scientists behind 'Norman', who said that if an AI displays bias, it's not the program that's at fault. [I think it is! Bob]

"The culprit is often not the algorithm itself but the biased data that was fed into it,” the team explained.
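The team’s point — that the skew comes from the data fed in, not the learning algorithm — shows up even with a trivial learner. A hedged toy sketch (not how Norman was actually built): the same majority-label “model” trained on two differently skewed caption datasets produces opposite behavior.

```python
from collections import Counter

def train_majority_label(captions: list[tuple[str, str]]) -> str:
    """A deliberately trivial 'model': learn whichever label dominates the data."""
    return Counter(label for _, label in captions).most_common(1)[0][0]

# The same algorithm, trained on two differently skewed datasets.
neutral_data = [("a bird on a branch", "benign")] * 9 + [("a crash scene", "macabre")]
dark_data = [("a crash scene", "macabre")] * 9 + [("a bird on a branch", "benign")]

print(train_majority_label(neutral_data))  # benign
print(train_majority_label(dark_data))     # macabre
```

Swap the data, and the “psychopathy” follows — which is the MIT team’s argument in one line of code.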





Some headlines just catch your eye.

https://www.washingtonexaminer.com/news/justice/how-trump-fbi-raid-may-have-exposed-lawyers

‘Make Attorneys Get Attorneys’: How Trump FBI raid may have exposed MAGA lawyers