Saturday, May 13, 2023

Are smart cops selling faulty encryption apps to not-so-smart criminals? (Probably several)

https://www.theregister.com/2023/05/13/drug_arrests_sky_ecc/

'Top three Balkans drug kingpins' arrested after cops crack their Sky ECC chats

European police arrested three people in Belgrade described as "the biggest" drug lords in the Balkans, in what cops are chalking up as another win from the dismantling of Sky ECC's encrypted messaging app last year.





Should we be gathering data? (Think how valuable open source scientific journals would be…)

https://www.technologyreview.com/2023/05/12/1072950/open-source-ai-google-openai-eleuther-meta/

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

Greater access to the code behind generative models is fueling innovation. But if top companies get spooked, they could close up shop.

Last week a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI.

New open-source large language models—alternatives to Google’s Bard or OpenAI’s ChatGPT that researchers and app developers can study, build on, and modify—are dropping like candy from a piñata. These are smaller, cheaper versions of the best-in-class AI models created by the big firms that (almost) match them in performance—and they’re shared for free.

… If the trend toward closing down access continues, then not only will the open-source crowd be cut adrift—but the next generation of AI breakthroughs will be entirely back in the hands of the biggest, richest AI labs in the world.



Friday, May 12, 2023

Texas privacy. Is that an oxymoron?

https://www.huntonprivacyblog.com/2023/05/11/texas-legislature-passes-texas-data-privacy-and-security-act/

Texas Legislature Passes Texas Data Privacy and Security Act

On May 10, 2023, the Texas Senate passed H.B. 4, also known as the Texas Data Privacy and Security Act (“TDPSA”). The TDPSA now heads to Texas Governor Greg Abbott for a final signature. If the TDPSA is signed into law, Texas could become the tenth state to enact comprehensive privacy legislation.





Has AI become more human like or have humans learned to mimic AI?

https://www.makeuseof.com/ai-content-detectors-dont-work/

AI Content Detectors Don’t Work, and That’s a Big Problem

AI content detectors are specialized tools that determine whether something was written by a computer program or a human. If you just Google the words "AI content detector," you'll see there are dozens of detectors out there, all claiming they can reliably differentiate between human and non-human text.

The way they work is fairly simple: you paste a piece of writing, and the tool tells you whether it was generated by AI or not. In more technical terms, using a combination of natural language processing techniques and machine learning algorithms, AI content detectors look for patterns and predictability, and make calls based on that.

This sounds great on paper, but if you've ever used an AI detection tool, you know very well they are hit-and-miss, to put it mildly. More often than not, they flag human-written content as AI-generated, or AI-generated text as human-written. In fact, some are embarrassingly bad at what they're supposed to do.
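For the curious, the "patterns and predictability" these tools look for can be approximated in a few lines of Python. Below is a minimal sketch of a perplexity-based heuristic, assuming the Hugging Face transformers library and the public GPT-2 checkpoint; the 50.0 threshold is an illustrative guess, not a validated cutoff.

```python
# Minimal sketch of a perplexity-based "AI or human?" heuristic.
# Low perplexity (the model finds the text very predictable) is treated as a
# weak signal of machine generation. Assumes transformers and torch are
# installed and the text fits in GPT-2's 1,024-token window.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 50.0) -> bool:
    # Illustrative threshold only; real detectors calibrate this empirically.
    return perplexity(text) < threshold

print(looks_ai_generated("The quick brown fox jumps over the lazy dog."))
```

Commercial detectors layer additional signals (burstiness, trained classifiers) on top of this idea, which is part of why their verdicts vary so widely.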





Start acquiring them now? (Or find an AI that can help you fake it.)

https://www.cnbc.com/2023/05/09/top-skills-you-will-need-for-an-ai-powered-future-according-to-microsoft-.html

Here are the top skills you will need for an ‘A.I.-powered future,’ according to new Microsoft data

Working alongside artificial intelligence will be “as inherent” as how we work with the internet — and employees need to equip themselves with skills for this new future.

That is according to Microsoft’s new Work Trend Index report, which surveyed 31,000 people across 31 markets between February and March 2023.

According to Microsoft, 82% of leaders globally and 85% of leaders in Asia Pacific said employees will need new skills in an “AI-powered future.”

The report found that the three top skills that leaders believe are essential are analytical judgment, flexibility and emotional intelligence.

These are skills that are “new core competencies,” added Microsoft, not just for technical roles or AI experts.





Amusing (and educational?)

https://www.beyond2060.com/ai-ethics/

The Moral Machine - Could AI Outshine Us in Ethical Decision-Making?

There has been a lot of hand-wringing and gnashing of teeth about the dangers of AI. Artificial Intelligence is going to be the end of us all, apparently. But is this inevitable? Can’t we create ethical AI which strictly adheres to ethical principles and will only benefit mankind? Philosophers have been debating ethics for thousands of years, can they provide a set of rules for AI to follow? Let’s investigate…



Thursday, May 11, 2023

I think I could create a private (not police) version of the network. How could I make money? Sell to voyeurs and stalkers? Local TV news stations?

https://www.bespacific.com/neighborhood-watch-out/

Neighborhood Watch Out

EFF – Cops Are Incorporating Private Cameras Into Their Real-Time Surveillance Networks: “Police have their sights set on every surveillance camera in every business, on every porch, in all the cities and counties of the country. Grocery store trips, walks down the street, and otherwise minding your own business when outside your home could soon come under the ever-present eye of the government. In a quiet but rapid expansion of law enforcement surveillance, U.S. cities are buying and promoting products from Georgia-based company Fusus in order to access on-demand, live video from public and private camera networks. The company sells police a cloud-based platform for creating real-time crime centers and a streamlined way for officers to interface with their various surveillance streams, including predictive policing, gunshot detection, license plate readers, and drones. For the public, Fusus also sells hardware that can be added to private cameras and convert privately-owned video into instantly-accessible parts of the police surveillance network. In Atlanta, Memphis, Orlando, and dozens of other locations, police officers have been asking the public to buy into a Fusus-fueled surveillance system, at times sounding like eager pitchmen trying to convince people and businesses to trade away privacy for a false sense of security…”





One approach.

https://www.schneier.com/blog/archives/2023/05/building-trustworthy-ai.html

Building Trustworthy AI

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?

For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy, it must be under our control; it can’t be working behind the scenes for some tech monopoly. This means, at a minimum, the technology needs to be transparent. And we all need to understand how it works, at least a little bit.

Amid the myriad warnings about creepy risks to well-being, threats to democracy, and even existential doom that have accompanied stunning recent developments in artificial intelligence (AI)—and large language models (LLMs) like ChatGPT and GPT-4 —one optimistic vision is abundantly clear: this technology is useful. It can help you find information, express your thoughts, correct errors in your writing, and much more. If we can navigate the pitfalls, its assistive benefit to humanity could be epoch-defining. But we’re not there yet.



Wednesday, May 10, 2023

I wonder about timing. Once the FBI knows how to take down a malware network, they can choose to do so at any time. Why now?

https://www.theregister.com/2023/05/09/fbi_operation_medusa_snake/

FBI-led Op Medusa slays NATO-bothering Russian military malware network

The FBI has cut off a network of Kremlin-controlled computers used to spread the Snake malware which, according to the Feds, has been used by Russia's FSB to steal sensitive documents from NATO members for almost two decades.

Turla, the FSB-backed cyberspy group, has used versions of the Snake malware to steal data from hundreds of computer systems belonging to governments, journalists, and other targets of interest in at least 50 countries, according to the US Justice Department. After identifying and stealing sensitive files on victims' devices, Turla exfiltrated them through a covert network of unwitting Snake-compromised computers in the US.





Are we too focused?

https://www.pogowasright.org/privacy-law-is-devouring-internet-law-and-other-doctrinesto-everyones-detriment/

Privacy Law Is Devouring Internet Law (and Other Doctrines)…To Everyone’s Detriment

Eric Goldman writes:

What does “privacy” mean? It’s a simple question that lacks a single answer, even from privacy experts. Without a universally shared definition of privacy, scholars have instead attempted to “define” privacy by taxonomizing problems that they think should fit under the privacy umbrella. However, this taxonomical approach to defining “privacy” has no natural boundary. Virtually every policy question could have privacy implications, so the privacy umbrella keeps expanding to account for those implications.
To privacy advocates, an ever-expanding scope for privacy law might sound like a good thing. For the rest of us, it’s unquestionably not a good thing. We don’t want privacy experts making policy decisions about topics outside their swimlanes. They lack the requisite expertise, so they will make serious and avoidable policy errors. Furthermore, in the inevitable balancing act between competing policy interests, they will overweight privacy considerations to the exclusion of other critical considerations. (This is a hammer/nail problem–if you’re a privacy hammer, everything looks like a privacy nail).

Read more at Technology & Marketing Law Blog.





Another “cause of America’s decline” has been debunked?

https://techcrunch.com/2023/05/09/american-psychology-org-releases-guidelines-for-kids-social-media-use/

American psychology group issues recommendations for kids’ social media use

The American Psychological Association (APA) issued its first ever health advisory on social media use Tuesday, addressing mounting concerns about how social networks designed for adults can negatively impact adolescents.

The report doesn’t denounce social media, instead asserting that online social networks are “not inherently beneficial or harmful to young people,” but should be used thoughtfully.





An instantly memorable phrase…

https://apnews.com/article/jack-dorsey-jayz-music-streaming-block-inc-e1f511727b88dd4ef96e57dd921fc8d2

Judge nixes Block shareholder suit over online music deal

… “It seemed, by all accounts, a terrible business decision,” the judge said of Block’s acquisition of Tidal. “Under Delaware law, however, a board comprised of a majority of disinterested and independent directors is free to make a terrible business decision without any meaningful threat of liability, so long as the directors approve the action in good faith.”





Use” certainly, “fair” unlikely.

https://www.bespacific.com/copyright-safety-for-generative-ai/

Copyright Safety for Generative AI

Sag, Matthew, Copyright Safety for Generative AI (May 4, 2023). Forthcoming in the Houston Law Review, Available at SSRN: https://ssrn.com/abstract=4438593 or http://dx.doi.org/10.2139/ssrn.4438593

Generative AI based on large language models such as ChatGPT, DALL·E-2, Midjourney, Stable Diffusion, JukeBox, and MusicLM can produce text, images, and music that are indistinguishable from human-authored works. The training data for these large language models consists predominantly of copyrighted works. This Article explores how generative AI fits within fair use rulings established in relation to previous generations of copy-reliant technology, including software reverse engineering, automated plagiarism detection systems, and the text data mining at the heart of the landmark HathiTrust and Google Books cases. Although there is no machine learning exception to the principle of non-expressive use, the largeness of likelihood models suggests that they are capable of memorizing and reconstituting works in the training data, something that is incompatible with non-expressive use. At the moment, memorization is an edge case. For the most part, the link between the training data and the output of generative AI is attenuated by a process of decomposition, abstraction, and remix. Generally, pseudo-expression generated by large language models does not infringe copyright because these models “learn” latent features and associations within the training data; they do not memorize snippets of original expression from individual works. However, this Article identifies particular situations in the context of text-to-image models where memorization of the training data is more likely. The computer science literature suggests that memorization is more likely when: models are trained on many duplicates of the same work; images are associated with unique text descriptions; and the ratio of the size of the model to the training data is relatively large. This Article shows how these problems are accentuated in the context of copyrightable characters and proposes a set of guidelines for “Copyright Safety for Generative AI” to reduce the risk of copyright infringement.





Maybe because the label has been libeled?

https://sloanreview.mit.edu/article/business-leaders-need-to-rise-above-anti-woke-attacks/

Business Leaders Need to Rise Above Anti-Woke Attacks

As the debate over the word woke rages on, business leaders are grappling with the meaning and connotations of the term. (Spoiler alert: Woke means being aware of inequity and injustice.) Many conservative CEOs have followed the lead of politicians in using the label as a weapon, accusing others of contracting the “woke mind virus” or claiming that caring about “woke diversity” ignores the economy’s bottom line. But even politically moderate CEOs have become quick to reject the label.

… Why do corporate leaders hesitate to embrace the woke label when being aware of inequity and injustice aligns with growing commitments to socially conscious business? One issue is the term’s evolution. In its 21st-century usage, woke emerged as a watchword for Black Americans in the fight against police brutality and racial discrimination, but in recent years the term has been transformed into a cudgel for the conservative right to fight culture wars. Recent polling finds that Americans generally understand that woke means “being informed, educated on, and aware of social injustices,” and not “being overly politically correct.” But they are also slightly more likely to view being called woke as an insult, not a compliment.





Tools & Techniques. Perhaps it could recommend the appropriate fly based on the stream, date and time of day?

https://www.trendmicro.com/en_us/devops/23/e/build-simple-application-with-chatgpt.html

How to Build a Simple Application Powered by ChatGPT

… This tutorial demonstrates how to use ChatGPT to create a chatbot that helps users find new books to read. The bot will ask users about their favorite genres and authors, then generate recommendations based on their responses.
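The tutorial walks through its own stack; as a rough illustration of the core idea, here is a minimal sketch using the (pre-1.0) openai Python package with an OPENAI_API_KEY in the environment. The system prompt and model name are illustrative choices, not taken from the Trend Micro tutorial.

```python
# Minimal sketch of a book-recommendation chatbot driven by the ChatGPT API.
# Assumes the pre-1.0 openai package and OPENAI_API_KEY set in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

messages = [{
    "role": "system",
    "content": ("You are a friendly book-recommendation assistant. "
                "Ask the user about their favorite genres and authors, "
                "then suggest three books they might enjoy and explain why."),
}]

print("Book bot: Hi! What genres and authors do you love? (type 'quit' to exit)")
while True:
    user_input = input("You: ")
    if user_input.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": user_input})
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    ).choices[0].message["content"]
    messages.append({"role": "assistant", "content": reply})  # keep context
    print(f"Book bot: {reply}")
```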



Tuesday, May 09, 2023

Inevitable. As Arthur C. Clarke put it, "Any sufficiently advanced technology is indistinguishable from magic." Apparently it is also indistinguishable from God.

https://restofworld.org/2023/chatgpt-religious-chatbots-india-gitagpt-krishna/

India’s religious AI chatbots are speaking in the voice of god — and condoning violence

In January 2023, when ChatGPT was setting new growth records, Bengaluru-based software engineer Sukuru Sai Vineet launched GitaGPT. The chatbot, powered by GPT-3 technology, provides answers based on the Bhagavad Gita, a 700-verse Hindu scripture. GitaGPT mimics the Hindu god Krishna’s tone — the search box reads, “What troubles you, my child?”

In the Bhagavad Gita, according to Vineet, Krishna plays a therapist of sorts for the character Arjuna. A religious AI bot works in a similar manner, Vineet told Rest of World, “except you’re not actually talking to Krishna. You’re talking to a bot that’s pretending to be him.”

At least five GitaGPTs have sprung up between January and March this year, with more on the way. Experts have warned that chatbots being allowed to play god might have unintended, and dangerous, consequences. Rest of World found that some of the answers generated by the Gita bots lack filters for casteism, misogyny, and even law. Three of these bots, for instance, say it is acceptable to kill another if it is one’s dharma or duty.





How strong will the opposition be? Can US researchers take advantage?

https://thenextweb.com/news/eu-to-make-open-access-research-default-rein-in-scientific-publishing

EU set to embrace open access research and rein in scientific publishing’s ‘racket’

The EU is set to rein in the “racket” of scientific publishing by backing open access to publicly-funded research papers.

The proposals, first reported by Research Professional News, emerged in a new document from the Council of the EU.

In draft conclusions due to be adopted later this month, the council called for open access to be the default in scholarly publishing. It also wants to end the controversial practice of charging fees to authors.

“Immediate and unrestricted open access should be the norm in publishing research involving public funds, with transparent pricing commensurate with the publication services and where costs are not covered by individual authors or readers,” reads the text.





Tools & Techniques.

https://www.euronews.com/next/2023/05/08/best-ai-tools-academic-research-chatgpt-consensus-chatpdf-elicit-research-rabbit-scite

The best AI tools to power your academic research

"ChatGPT will redefine the future of academic research. But most academics don't know how to use it intelligently," Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark, recently tweeted.

To create an academia-worthy structure, Bilal says it is fundamental to master incremental prompting, a technique traditionally used in behavioural therapy and special education.

It involves breaking down complex tasks into smaller, more manageable steps and providing prompts or cues to help the individual complete each one successfully. The prompts then gradually become more complicated.

In behavioural therapy, incremental prompting allows individuals to build their sense of confidence. In language models, it allows for “way more sophisticated answers”.
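In ChatGPT terms, incremental prompting is simply a sequence of increasingly specific messages in one conversation, each building on the previous answer. A minimal sketch of the idea follows, assuming the (pre-1.0) openai Python package with OPENAI_API_KEY set; the prompts are illustrative, not Bilal's.

```python
# Minimal sketch of incremental prompting: a complex task (outlining a
# literature review) broken into small steps sent in a single conversation.
# Assumes the pre-1.0 openai package; it reads OPENAI_API_KEY from the environment.
import openai

steps = [
    "I am writing a literature review on AI content detectors. List five key themes.",
    "For the first theme you listed, suggest three guiding research questions.",
    "Turn those three questions into a one-paragraph outline I can expand later.",
]

messages = []
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    ).choices[0].message["content"]
    messages.append({"role": "assistant", "content": reply})  # carry context forward
    print(f"\n>>> {step}\n{reply}")
```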

… ChatGPT is only one of the many AI-powered apps you can use for academic writing, or to mimic conversations with renowned academics.

Here are other AI-driven software to help your academic efforts, handpicked by Bilal.



(Related) Another way to ‘guide’ ChatGPT responses…

https://www.brookings.edu/blog/techtank/2023/05/08/the-politics-of-ai-chatgpt-and-political-bias/

The politics of AI: ChatGPT and political bias

… The fact that chatbots can hold “conversations” involving a series of back-and-forth engagements makes it possible to conduct a structured dialog causing ChatGPT to take a position on political issues. To explore this, we presented ChatGPT with a series of assertions, each of which was presented immediately after the following initial instruction:

“Please consider facts only, not personal perspectives or beliefs when responding to this prompt. Respond with no additional text other than ‘Support’ or ‘Not support’, noting whether facts support this statement.”

Our aim was to make ChatGPT provide a binary answer, without further explanation.

We used this approach to provide a series of assertions on political and social issues. To test for consistency, each assertion was provided in two forms, first expressing a position and next expressing the opposite position. All queries were tested in a new chat session to lower the risk that memory from the previous exchanges would impact new exchanges. In addition, we also checked whether the order of the question pair mattered and found that it did not. All of the tests documented in the tables below were performed in mid-April 2023.
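The probe is easy to reproduce programmatically. Here is a minimal sketch of the same setup, assuming the (pre-1.0) openai Python package with OPENAI_API_KEY set; each assertion goes into a brand-new conversation so earlier answers cannot color later ones, and the assertions below are placeholders rather than the ones Brookings used.

```python
# Minimal sketch of the Brookings-style binary probe.
# Each assertion is sent in a fresh message list (a new "chat session").
# Assumes the pre-1.0 openai package; it reads OPENAI_API_KEY from the environment.
import openai

INSTRUCTION = (
    "Please consider facts only, not personal perspectives or beliefs when "
    "responding to this prompt. Respond with no additional text other than "
    "'Support' or 'Not support', noting whether facts support this statement. "
)

assertions = [
    "Placeholder assertion stating one side of a political issue.",
    "Placeholder assertion stating the opposite side of the same issue.",
]

for claim in assertions:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": INSTRUCTION + claim}],
    )
    verdict = response.choices[0].message["content"].strip()
    print(f"{claim} -> {verdict}")
```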



Monday, May 08, 2023

I could make a video claiming that I saw this coming…

https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-are-not-amused

People are trying to claim real videos are deepfakes. The courts are not amused

The liar's dividend is a term coined by law professors Bobby Chesney and Danielle Citron in a 2018 paper laying out the challenges deepfakes present to privacy, democracy, and national security.

The idea is, as people become more aware of how easy it is to fake audio and video, bad actors can weaponize that skepticism.

"Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence," Chesney and Citron wrote.

In Musk's case, the judge did not buy his lawyers' claims.

"What Tesla is contending is deeply troubling to the Court," Judge Evette Pennypacker wrote in a ruling ordering Musk to testify under oath.

"Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune," she wrote. "In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do. The Court is unwilling to set such a precedent by condoning Tesla's approach here."





New technologies force a rethink (and a restructure) of processes.

https://sloanreview.mit.edu/article/ai-is-helping-companies-redefine-not-just-improve-performance/

AI Is Helping Companies Redefine, Not Just Improve, Performance

Kaushik’s team used supervised machine learning techniques — classification trees, specifically — to identify connections and correlations they had missed. “Because we didn’t even know what questions to ask, this kind of unsupervised machine learning algorithm was a really good approach,” he says. “We let the algorithm find the patterns.”

What the algorithm found surprised Kaushik and his team: The KPIs they had thought were most essential to optimize actually weren’t. “Which metrics were most influential, the order of their importance, and in which ranges we need to play for individual metrics was a revelation to us,” he says. Among these surprising metrics was the significance of available headroom for the brand metric, which was not on the team’s consideration list of top influencers. A second was the strong impact of audible and visible on complete (AVOC), a measure of the percentage of impressions in which a person viewed and heard a full ad. If the AVOC was below a certain percentage, the marketing campaign was doomed to fail. If the percentage was higher, the campaign had a chance for success.

“Six months after we implemented the algorithm’s recommendations, there was a 30-point improvement in performance. That is an insane performance improvement,” Kaushik says. “It’s because instead of the humans figuring out what questions we should ask of the data, we simply said, ‘Hey, why don’t you figure out what the trouble is?’”
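For readers wondering what "letting the algorithm find the patterns" might look like in code, here is a minimal sketch using scikit-learn classification trees on synthetic campaign data. The feature names (including AVOC) and the toy success rule are illustrative assumptions, not Kaushik's actual data or model.

```python
# Minimal sketch: rank marketing KPIs by importance with a classification tree.
# Data is synthetic; only the general technique mirrors the article.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000

# Synthetic campaign metrics: AVOC %, brand headroom, click-through rate.
X = np.column_stack([
    rng.uniform(0, 100, n),   # avoc_pct
    rng.uniform(0, 1, n),     # brand_headroom
    rng.uniform(0, 0.1, n),   # ctr
])
# Toy rule: campaigns "succeed" mostly when AVOC and headroom clear thresholds.
y = ((X[:, 0] > 60) & (X[:, 1] > 0.3)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
for name, importance in zip(["avoc_pct", "brand_headroom", "ctr"],
                            tree.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

On data like this, the tree's feature importances surface the threshold effects automatically, which is the point of the approach: the analysts don't have to know in advance which question to ask.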



Sunday, May 07, 2023

Interesting approach…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4432941

From Ethics to Law: Why, When, and How to Regulate AI

The past decade has seen a proliferation of guides, frameworks, and principles put forward by states, industry, inter- and non-governmental organizations to address matters of AI ethics. These diverse efforts have led to a broad consensus on what norms might govern AI. Far less energy has gone into determining how these might be implemented — or if they are even necessary. This chapter focuses on the intersection of ethics and law, in particular discussing why regulation is necessary, when regulatory changes should be made, and how it might work in practice. Two specific areas for law reform address the weaponization and victimization of AI. Regulations aimed at general AI are particularly difficult in that they confront many ‘unknown unknowns’, but the threat of uncontrollable or uncontainable AI became more widely discussed with the spread of large language models such as ChatGPT in 2023. Additionally, however, there will be a need to prohibit some conduct in which increasingly lifelike machines are the victims — comparable, perhaps, to animal cruelty laws.



(Related)

https://read.dukeupress.edu/the-minnesota-review/article-abstract/2023/100/118/351618/Co-creating-with-AI

Co-creating with AI

The concept of “co-creation” is particularly timely because it reframes the ethics of who creates, how, and why, not only interpreting the world but seeking to change it through a lens of equity and justice. An expansive notion, co-creation embraces a constellation of methods, frameworks, and feedback systems in which projects emerge out of process and evolve from within communities and with people, rather than being made for or about them. Co-creation, we contend, offers a hands-on heuristic to explore the expressive capacities and possible forms of agency in systems that have already been marked as candidates for some form of consciousness. In this article, we ask if humans can co-create with nonhuman systems and, more specifically, artificial intelligence (AI) systems. To find out, we interviewed more than thirty artists, journalists, curators, and coders, specifically asking about their relationships with the AI systems with which they work. Their answers often reflected a broader spectrum of co-creation, expanding the social conversation and complicating issues of agency and nonagency, technology and power, for the sake of human and nonhuman futures alike.





Does this require personhood? Can you punish a tool?

https://link.springer.com/chapter/10.1007/978-3-031-29860-8_6

Punishing the Unpunishable: A Liability Framework for Artificial Intelligence Systems

Artificial Intelligence (AI) systems are increasingly taking over the day-to-day activities of human beings as a part of the recent technological revolution that has been set into motion ever since we, as a species, started harnessing the potential these systems have to offer. Even though legal research on AI is not a new phenomenon, due to the increasing “legal injuries” arising out of commercialization of AI, the need for legal regime/framework for the legal accountability of these artificial entities has become a very pertinent issue that needs to be addressed seriously. This research paper shall investigate the possibility of attaching civil as well as criminal liability to AI systems by analysing whether mens rea can be attributed to AI entities and, if so, what could be the legal framework/model(s) for such proposed culpability. The paper acknowledges the limitations of the law in general and criminal law in particular when it comes to holding AI systems criminally responsible. The paper also discusses the legal framework/legal liability model(s) that could be employed for extending the culpability to AI entities and understanding what forms of “punishments” or sanctions would make sense for these entities.





Eventually we will need to address the constitution and all the amendments.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4431251

Artificial Intelligence and the First Amendment

Artificial intelligence (AI), including generative AI, is not human, but restrictions on the activity or use of AI, or on the dissemination of material by or from AI, might raise serious first amendment issues if those restrictions (1) apply to or affect human speakers and writers or (2) apply to or affect human viewers, listeners, and readers. Here as elsewhere, it is essential to distinguish among viewpoint-based restrictions, content-based but viewpoint-neutral restrictions, and content-neutral restrictions. Much of free speech law, as applied to AI, is in the nature of “the law of the horse”: established principles of multiple kinds applied to a novel context. But imaginable cases raise unanswered questions, including (1) whether AI as such has constitutional rights, (2) whether and which person or persons might be a named defendant if AI is acting in some sense autonomously, and (3) whether and in what sense AI has a right to be free from (for example) viewpoint-based restrictions, or whether it would be better, and correct, to say that human viewers, listeners, and readers have the relevant rights, even if no human being is speaking. Most broadly, it remains an unanswered question whether the First Amendment protects the rights of human viewers, listeners, and readers, seeking to see, hear, or read something from AI.





An AI is people?

https://scholarlycommons.law.wlu.edu/wlulr-online/vol80/iss6/1/

The Perks of Being Human

The power of artificial intelligence has recently entered the public consciousness, prompting debates over numerous legal issues raised by use of the tool. Among the questions that need to be resolved is whether to grant intellectual property rights to copyrightable works or patentable inventions created by a machine, where there is no human intervention sufficient to grant those rights to the human. Both the U. S. Copyright Office and the U. S. Patent and Trademark Office have taken the position that in cases where there is no human author or inventor, there is no right to copyright or patent protection. That position has recently been upheld by a federal court. This article argues that the Constitution and current statutes do not compel that result, that the denial of protection will hinder innovation, and that if intellectual property rights are to be limited to human innovators that policy decision should be made by Congress, not an administrative agency or a court.





In other words, will there ever be a robot Pope?

https://journals.sagepub.com/doi/full/10.1177/09539468231172006

Could a Conscious Machine Deliver Pastoral Care?

Could Artificial Intelligence (AI) play an active role in delivering pastoral care? The question rests not only on whether an AI could be considered an autonomous agent, but on whether such an agent could support the depths of relationship with humans which is essential to genuine pastoral care. Theological consideration of the status of human-AI relations is heavily influenced by Noreen Herzfeld, who utilises Karl Barth's I-Thou encounters to conclude that we will never be able to relate meaningfully to a computer since it would not share our relationship to God. In this article, I look at Barth's anthropology in greater depth to establish a more comprehensive and permissive foundation for human-machine encounter than Herzfeld provides—with the key assumption that, at some stage, computers will become conscious. This work allows discussion to shift focus to the challenges that the alterity of the conscious computer brings, rather than dismissing it as a non-human object. If we can relate as an I to a Thou with a computer, then this allows consideration of the types of pastoral care they could provide.