Saturday, April 22, 2023

I don’t own a smartwatch, smart ring, or any other device or app that tracks my health, so I had not really considered how much personal data they gather and share.

https://www.makeuseof.com/best-symptom-tracker-apps-medical-health/

The 5 Best Symptom Tracker Apps to Support Your Medical Health

Paying attention to unusual or persistent symptoms is important for your health. Curating a list of symptoms can help you determine if you need medical help—whether that’s self-treatment, advice from a pharmacist, or a doctor’s appointment.

As medical appointments can be brief, presenting your doctor with a list of clear and accurate symptoms can help progress your diagnosis when time is short. Here are the best symptom tracker apps you can use to identify medical issues and help support your doctor appointments.





Another law to examine...

https://fpf.org/blog/the-montana-consumer-data-privacy-act-reminds-us-that-privacy-is-bipartisan/

THE ‘MONTANA CONSUMER DATA PRIVACY ACT’ REMINDS US THAT PRIVACY IS BIPARTISAN

On Friday, April 21st, the Montana State Legislature approved the ‘Montana Consumer Data Privacy Act’ (MCDPA) and sent it to the Governor’s desk. If enacted by Governor Gianforte, Montana would join the six states that have adopted comprehensive privacy frameworks. Notably, at almost every stage of the legislative process, the MCDPA received unanimous bipartisan support and strengthening amendments.

The MCDPA includes what would be the strongest baseline consumer privacy rights and protections of any Republican-led U.S. state, comparable in substance and scope to leading privacy frameworks in Connecticut and Colorado. Furthermore, the MCDPA is unlikely to require significant modifications to the compliance programs of organizations that are already subject to either of these existing state laws.





All that is not forbidden is mandatory! (Not just in sports.)

https://www.schneier.com/blog/archives/2023/04/hacking-pickleball.html

Hacking Pickleball

My latest book, A Hacker’s Mind, has a lot of sports stories. Sports are filled with hacks, as players look for every possible advantage that doesn’t explicitly break the rules. Here’s an example from pickleball, which nicely explains the dilemma between hacking as a subversion and hacking as innovation:

Some might consider these actions cheating, while the acting player would argue that there was no rule that said the action couldn’t be performed. So, how do we address these situations, and close those loopholes? We make new rules that specifically address the loophole action. And the rules book gets longer, and the cycle continues with new loopholes identified, and new rules to prohibit that particular action in the future.
Alternatively, sometimes an action taken as a result of an identified loophole which is not deemed as harmful to the integrity of the game or sportsmanship, becomes part of the game. Ernie Perry found a loophole, and his shot, appropriately named the “Ernie shot,” became part of the game. He realized that by jumping completely over the corner of the NVZ, without breaking any of the NVZ rules, he could volley the ball, making contact closer to the net, usually surprising the opponent, and often winning the rally with an un-returnable shot. He found a loophole, and in this case, it became a very popular and exciting shot to execute and to watch!

I don’t understand pickleball at all, so that explanation doesn’t make a lot of sense to me. (I watched a video explaining the shot; that helped somewhat.) But it looks like an excellent example.

The blog post also links to a 2010 paper that I wish I’d known about when I was writing my book: “Loophole ethics in sports,” by Øyvind Kvalnes and Liv Birgitte Hemmestad:

Abstract: Ethical challenges in sports occur when the practitioners are caught between the will to win and the overall task of staying within the realm of acceptable values and virtues. One way to prepare for these challenges is to formulate comprehensive and specific rules of acceptable conduct. In this paper we will draw attention to one serious problem with such a rule-based approach. It may inadvertently encourage what we will call loophole ethics, an attitude where every action that is not explicitly defined as wrong, will be seen as a viable option. Detailed codes of conduct leave little room for personal judgement, and instead promote a loophole mentality. We argue that loophole ethics can be avoided by operating with only a limited set of general principles, thus leaving more space for personal judgement and wisdom.



Thursday, April 20, 2023

So do we raise the bar or find questions AI can’t answer?

https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/

GPT-4 Passes the Bar Exam: What That Means for Artificial Intelligence Tools in the Legal Industry

Codex–The Stanford Center for Legal Informatics and the legal technology company Casetext recently announced what they called “a watershed moment.” Research collaborators had deployed GPT-4, the latest generation Large Language Model (LLM), to take—and pass—the Uniform Bar Exam (UBE). GPT-4 didn’t just squeak by. It passed the multiple-choice portion of the exam and both components of the written portion, exceeding not only all prior LLMs’ scores, but also the average score of real-life bar exam takers, scoring in the 90th percentile.

Casetext’s Chief Innovation Officer and co-founder Pablo Arredondo, JD ’05, who is a Codex fellow, collaborated with Codex-affiliated faculty Daniel Katz and Michael Bommarito to study GPT-4’s performance on the UBE. In earlier work, Katz and Bommarito found that an LLM released in late 2022 was unable to pass the multiple-choice portion of the UBE. Their recently published paper, “GPT-4 Passes the Bar Exam,” quickly caught national attention. Even The Late Show with Stephen Colbert had a bit of comedic fun with the notion of robo-lawyers running late-night TV ads looking for slip-and-fall clients.

The rate of progress in this area is remarkable. Every day I see or hear about a new version or application. One of the most exciting areas is something called Agentic AI, where the LLMs (large language models) are set up so that they can “themselves” strategize about how to carry out a task, and then execute on that strategy, evaluating things along the way. For example, you could ask an Agent to arrange transportation for a conference and, without any specific prompting or engineering, it would handle getting a flight (checking multiple airlines if need be) and renting a car. You can imagine applying this to substantive legal tasks (e.g., first I will gather supporting testimony from a deposition, then look through the discovery responses to find further support, etc.).
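The plan-then-execute loop described above can be boiled down to a few lines. This is a purely illustrative Python sketch: the “planner” is a hard-coded stub standing in for an LLM call, and the tool names and travel task are hypothetical, not anything from the article.

```python
# Minimal illustration of the "agentic" loop: a planner decomposes a goal
# into tool invocations, and a controller executes each step in order.
# The planner here is a stub standing in for an LLM; tools are toy lambdas.

def plan(goal):
    """Stand-in for an LLM that breaks a goal into (tool, argument) steps."""
    if "conference" in goal:
        return [("search_flights", "DEN->SFO"), ("rent_car", "SFO")]
    return []

TOOLS = {
    "search_flights": lambda route: f"booked cheapest flight {route}",
    "rent_car": lambda city: f"reserved compact car in {city}",
}

def run_agent(goal):
    results = []
    for tool_name, arg in plan(goal):          # strategize...
        results.append(TOOLS[tool_name](arg))  # ...then execute each step
    return results

print(run_agent("arrange transportation for a conference"))
```

A real agent would replace `plan` with a model call and loop back after each step to re-evaluate, but the control flow is the same.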

Another area of growth is “multi-modal,” where you go beyond text and fold in things like vision. This should enable things like an AI that can comprehend/describe patent figures or compare written testimony with video evidence.



(Related)

https://www.bespacific.com/why-universities-should-return-to-oral-exams-in-the-ai-and-chatgpt-era/

Why universities should return to oral exams in the AI and ChatGPT era

The Conversation: “Imagine the following scenario. You are a student and enter a room or Zoom meeting. A panel of examiners, who have read your essay or viewed your performance, are waiting inside. You answer a series of questions as they probe your knowledge and skills. You leave. The examiners then consider the preliminary pre-oral exam grade and whether an adjustment up or down is required. You are called back to receive your final grade. This type of oral assessment – or viva voce as it was known in Latin – is a tried and tested form of educational assessment. No need to sit in an exam hall, no fear of plagiarism accusations or concerns with students submitting essays generated by an artificial intelligence (AI) chatbot. Integrity is 100% assured, in a fair, reliable and authentic manner that can also be easily used to assess multiple individual or group assignments. As services like ChatGPT continue to grow in terms of both their capabilities and usage – including in education and academia – is it high time for universities to revert to the time-tested oral exam?”





The more explaining the better?

https://www.economist.com/interactive/science-and-technology/2023/04/22/large-creative-ai-models-will-transform-how-we-live-and-work

Large, creative AI models will transform lives and labour markets

They bring enormous promise and peril. In the first of three special articles we explain how they work

ChatGPT embodies more knowledge than any human has ever known. It can converse cogently about mineral extraction in Papua New Guinea, or about TSMC, a Taiwanese semiconductor firm that finds itself in the geopolitical crosshairs. GPT-4, the artificial neural network which powers ChatGPT, has aced exams that serve as gateways for people to enter careers in law and medicine in America. It can generate songs, poems and essays. Other “generative AI” models can churn out digital photos, drawings and animations.

Running alongside this excitement is deep concern, inside the tech industry and beyond, that generative AI models are being developed too quickly.





A review or something new?

https://www.bespacific.com/modern-monetary-theory-an-explanation/

Modern monetary theory: an explanation

Modern monetary theory: an explanation. Professor Richard Murphy. April 2023. Funding the Future. Formerly Tax Research UK. “Modern monetary theory (hereafter, MMT) is an explanation of the way in which money works in an economy. It also explains the consequent impact that the best use of money, using this understanding, might have on behaviour in that economy. The core suggestion made by MMT is that a government is constrained by the real productive capacity of its economy and not by the availability of money, which it can always create. Secondary insights are that money is created by government spending and is destroyed by taxation. […] What I offer here is my interpretation of what I think to be core MMT understanding. I suspect there is much common ground in that. There will be much less common ground on my interpretation of what the understanding means and on the supposed (and very largely, in my opinion, unnecessary) theoretical justification for it. So be it: that is what political economy is about.”



Wednesday, April 19, 2023

A facial recognition tool for the masses? Would this be admissible in court?

https://nypost.com/2023/04/18/creepy-ai-site-can-find-every-photo-of-you-online/

‘Creepy’ AI site can find every photo of you online: ‘Stalker’s dream’

Let’s face it — there’s just no escaping AI.

But now, a facial recognition website that uses a specialized bot to locate every single picture of a person that’s ever been shared online is rearing its ugly head.

Deemed a “stalker’s dream,” and the “most disturbing AI website on the internet” on Twitter, the site, known as PimEyes, is an identity search engine that’s said to be similar to, yet a bit more sophisticated than, Google’s reverse image search tool.

On its homepage, users are prompted to upload their photo in order to find out where their image has been published.

But the facial recognition service isn’t free — nor is it cheap.





If I gather enough of these, I may create my own guide.

https://dataconomy.com/blog/2023/04/18/basics-of-artificial-intelligence/

AI 101: A beginner’s guide to the basics of artificial intelligence

… While we may use AI chatbots and other AI-powered tools every day, many of us may not be familiar with the underlying principles and techniques that make these technologies possible. In this article, we’ll explore some of the fundamental concepts in artificial intelligence, from supervised and unsupervised learning to bias and fairness in AI. By understanding these basics of artificial intelligence, we can gain a deeper appreciation for the power and potential of this rapidly evolving field.
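As a concrete taste of one fundamental from that list, “supervised learning,” here is a toy 1-nearest-neighbor classifier in plain Python. It is a sketch of the core idea (learn from labeled examples, then assign labels to new points), not any particular production technique; the data points and labels are invented for illustration.

```python
# Toy supervised learning: 1-nearest-neighbor classification.
# Training data is a list of (feature_vector, label) pairs; prediction
# returns the label of the closest training example. Data is made up.
import math

train = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

def predict(point):
    """Return the label of the nearest labeled example (Euclidean distance)."""
    nearest = min(train, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

print(predict((1.1, 0.9)))  # near the "cat" cluster
print(predict((5.1, 4.9)))  # near the "dog" cluster
```

Unsupervised learning, by contrast, would receive only the points, with no labels, and have to discover the two clusters on its own.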





Sounds like the Donald Trump model as well…

https://www.bespacific.com/the-russian-firehose-of-falsehood-propaganda-model/

The Russian “Firehose of Falsehood” Propaganda Model

Rand – The Russian “Firehose of Falsehood” Propaganda Model: Why It Might Work and Options to Counter It – “Since its 2008 incursion into Georgia (if not before), there has been a remarkable evolution in Russia’s approach to propaganda. This new approach was on full display during the country’s 2014 annexation of the Crimean peninsula. It continues to be demonstrated in support of ongoing conflicts in Ukraine and Syria and in pursuit of nefarious and long-term goals in Russia’s “near abroad” and against NATO allies. In some ways, the current Russian approach to propaganda builds on Soviet Cold War–era techniques, with an emphasis on obfuscation and on getting targets to act in the interests of the propagandist without realizing that they have done so. In other ways, it is completely new and driven by the characteristics of the contemporary information environment. Russia has taken advantage of technology and available media in ways that would have been inconceivable during the Cold War. Its tools and channels now include the Internet, social media, and the evolving landscape of professional and amateur journalism and media outlets.

    • High-volume and multichannel

    • Rapid, continuous, and repetitive

    • Lacks commitment to objective reality

    • Lacks commitment to consistency.

We characterize the contemporary Russian model for propaganda as “the firehose of falsehood” because of two of its distinctive features: high numbers of channels and messages and a shameless willingness to disseminate partial truths or outright fictions. In the words of one observer, “[N]ew Russian propaganda entertains, confuses and overwhelms the audience.” Contemporary Russian propaganda has at least two other distinctive features. It is also rapid, continuous, and repetitive, and it lacks commitment to consistency…”





Perspective.

https://www.wral.com/story/duke-law-grad-goes-viral-after-tweets-explaining-use-of-ai-for-law-scenarios/20817182/

Duke Law grad goes viral after tweets explaining use of AI for law scenarios

… "ChatGPT and these technologies are not a substitute for a lawyer," [Yet. Bob] Pacifici explained. "[If] you've got matters of significance, whether legal or otherwise, you need appropriate advisors. This is a privately held company, so anything you put into it can be held by the company."




Tuesday, April 18, 2023

Imagine uncorrected problems in your car’s computer. When you encounter the (rare?) situation that relies on that software to provide a solution, it does exactly the wrong thing.

https://www.theregister.com/2023/04/18/helicopter_crash_missing_software_patch/

Military helicopter crash blamed on failure to apply software patch

The patch in question prevents pilots of the MRH-90 Taipan from performing a “hot start” of the helo’s engine, a technique that sees the craft’s motor powered down and then restarted. The MRH-90 is not designed to do that, with safe procedure instead being to leave the engine idling until it is turned off at the end of a flight.

The ABC, quoting unnamed Army personnel, reported that a patch preventing hot starts has been available for years but has not been applied to all of the Australian Army’s Taipans.





How can I betray thee, let me count the ways.

https://www.makeuseof.com/shouldnt-trust-chatgpt-confidential-data/

Why You Shouldn't Trust ChatGPT With Confidential Information

ChatGPT has become a major security and privacy issue because too many of us are absentmindedly sharing our private information on it. ChatGPT logs every conversation you have with it, including any personal data you share. Still, you wouldn’t know this unless you’ve dug through OpenAI's privacy policy, terms of service, and FAQ page to piece it together.

According to Gizmodo, Samsung's employees mistakenly leaked confidential information via ChatGPT on three separate occasions in the span of 20 days. This is just one example of how easy it is for companies to compromise private information.

If employees use ChatGPT to look for bugs like they did with the Samsung leak, the code they type into the chat box will also be stored on OpenAI's servers. This could lead to breaches that have a massive impact on companies troubleshooting unreleased products and programs. We may even end up seeing information like unreleased business plans, future releases, and prototypes leaked, resulting in huge revenue losses.





Can you threaten by accident?

https://www.scotusblog.com/2023/04/supreme-court-first-amendment-counterman-whalen-colorado/

Colorado man’s First Amendment challenge will test the scope of protection for threatening speech

On Wednesday the Supreme Court will take up Counterman’s appeal to consider how courts should determine what constitutes “true threats,” which are statements not protected by the First Amendment. Should they use an objective test, that looks at whether a reasonable person would regard the statement as a threat of violence? Or should they instead use a subjective test, that requires prosecutors to show that the speaker intended to make a threat?

Both sides in Wednesday’s case agree that the issue is an important one. Counterman stresses that the “notion that a person can spend years in prison for a ‘speech crime’ committed by accident is chilling.” But the state of Colorado, which prosecuted Counterman, counters that Counterman’s messages frightened their recipient and disrupted her life. “This is precisely why threats of violence are not protected by the First Amendment,” the state says: to shield individuals from the fear of violence, which follows from the threats “no matter what the person making the threat intends.”

Colorado’s intermediate appeals court upheld Counterman’s conviction. It ruled that to determine whether Counterman’s statements qualified as a “true threat,” courts should apply an objective test that considers whether a reasonable person would regard the statement as a threat of violence. Because Counterman’s statements were true threats, the appeals court concluded, they were not protected by the First Amendment – and his conviction for stalking therefore did not violate the Constitution.





Does this mean AI has “arrived” or is merely ubiquitous?

https://www.freetech4teachers.com/2023/04/mla-and-apa-provide-guidance-for-citing.html

MLA and APA Provide Guidance for Citing Content Created by AI

It's a bit of an understatement to say that the rapid growth of AI-powered writing and drawing tools is raising many questions for teachers and students. One of those frequently asked questions is "how do you cite ChatGPT?"

Recently, the MLA and the APA have published guidance on how to cite content created through the use of AI tools like ChatGPT. You can read the MLA guide to citing content created by AI here. The APA's guide to citing content created with ChatGPT can be read here.

There are many similarities between the two guides. There is one difference that's worth noting. The APA's guide includes a template for citing ChatGPT as an author. The MLA guide says not to treat generative AI tools like ChatGPT as an author.





Tools & Techniques.

https://www.makeuseof.com/chrome-extensions-create-tutorials/

6 Chrome Extensions to Automatically Create Step-by-Step Tutorials

Whether you’re training the new recruits or writing guides for your team to follow, creating step-by-step guides can be time-consuming, especially if you want to add screenshots with instructions.

Luckily, you don’t necessarily need to do it all by yourself. There are multiple Chrome extensions that record your screen while you perform a particular task and then convert it into a written guide. Below, we list the top six of these extensions.





Tools & Techniques.

https://www.bespacific.com/mastering-chatgpt-how-to-craft-effective-prompts-full-guide/

Mastering ChatGPT: How to Craft Effective Prompts (Full Guide)

gptbot.com: “Welcome to the fascinating world of artificial intelligence and natural language processing. As you might already know, ChatGPT, powered by OpenAI’s GPT-3 and GPT-4 architectures, has become one of the most versatile and powerful AI language models. It can generate human-like text responses, answer questions, create content, and even engage in conversation. However, to truly harness the potential of ChatGPT, it’s essential to understand how to effectively communicate with the AI. This starts with crafting the perfect prompts to guide the model and obtain the desired output. In this ultimate guide, we will delve into the art of prompt creation, discuss techniques for optimizing communication with ChatGPT, and share valuable tips and hacks to make the most of this cutting-edge technology. By the end of this guide, you’ll be equipped with the knowledge and skills to craft effective prompts, unleash the AI’s creativity, and avoid common pitfalls. So, let’s get started on this journey to mastering ChatGPT…”





Tools & Techniques.

https://www.bespacific.com/chatgpt-gets-its-wolfram-superpowers/

ChatGPT Gets Its ‘Wolfram Superpowers’

Stephen Wolfram (March 2023), “ChatGPT Gets Its ‘Wolfram Superpowers’: Early in January I wrote about the possibility of connecting ChatGPT to Wolfram|Alpha. And today—just two and a half months later—I’m excited to announce that it’s happened! Thanks to some heroic software engineering by our team and by OpenAI, ChatGPT can now call on Wolfram|Alpha—and Wolfram Language as well—to give it what we might think of as “computational superpowers”. It’s still very early days for all of this, but it’s already very impressive—and one can begin to see how amazingly powerful (and perhaps even revolutionary) what we can call “ChatGPT + Wolfram” can be. Back in January, I made the point that, as an LLM neural net, ChatGPT—for all its remarkable prowess in textually generating material “like” what it’s read from the web, etc.—can’t itself be expected to do actual nontrivial computations, or to systematically produce correct (rather than just “looks roughly right”) data, etc. But when it’s connected to the Wolfram plugin it can do these things…”





Another useful guide…

https://www.makeuseof.com/beginners-guide-to-kaggle/

A Beginner’s Guide to Kaggle for Data Science

Are you interested in data science? Learn how to get started with Kaggle, the world's largest data science community, in this beginner's guide.

Kaggle is an online community for data science and machine learning (ML) enthusiasts. It is a top learning tool for novices and pros, with realistic practice problems to sharpen your data science skills.



Monday, April 17, 2023

Number 5 clearly has the most growth potential.

https://www.makeuseof.com/ways-ai-can-help-cybercriminals/

5 Ways AI Can Help Cybercriminals

Artificial intelligence can be used for good or for bad. Here are just a few ways it's helping hackers and scammers right now.





Resource.

https://www.bespacific.com/law-professor-makes-digital-copyright-book-open-for-all/

Law Professor Makes Digital Copyright Book Open for All

Internet Archive Blogs: “After spending years researching the history of U.S. copyright law, Jessica Litman says she wants to make it easy for others to find her work. The law professor’s book, Digital Copyright, first published in 2001 by Prometheus Books, is available free online (read now). After it went out of print in 2015, University of Michigan Press agreed to publish an open access edition of the book. Litman updated all the footnotes (some of which were broken links to web pages only available through preservation on Internet Archive) and made the updated book available under a CC-BY-ND license in 2017. “I wanted the book to continue to be useful,” Litman said. “Free copies on the web make it easy to read.” Geared for a general audience, the book chronicles how copyright laws were drafted, written, lobbied and enacted in Congress over time. Litman researched the legislative history of copyright law, including development of the 1976 Copyright Act, and spent two years in Washington, D.C., observing Congress leading up to the passage of the Digital Millennium Copyright Act in 1998…”



Sunday, April 16, 2023

Inserting AI into every aspect of the legal system?

http://sifisheriessciences.com/journal/index.php/journal/article/view/1357

ARTIFICIAL INTELLIGENCE AND FUTURE IN LAW

Artificial Intelligence is a technology that uses human-like intelligence to automate specific types of tasks. In the field of law, AI is being used to review documents and legal databases, saving time and effort. It is also being used to make legal research more efficient and effective. However, the idea of AI as a substitute for judges raises concerns related to legal reasoning, general propositions and universal principles, logical reasoning, and application of law to facts. The objective of this paper is to provide a comprehensive overview of the key issues and challenges in the field, as well as to present the latest research findings and developments. The paper begins by discussing the background and significance of the topic, followed by an analysis of the existing literature and research studies. The methodology used in this study is also described, including the data collection and analysis techniques. The paper then presents the main results and findings of the study, highlighting the most significant contributions to the field. The implications and applications of the findings are also discussed, along with their potential impact on future research and practice. The paper concludes by summarizing the key points and suggesting directions for future research. Overall, this paper provides valuable insights and knowledge for researchers, practitioners, and policymakers interested in this particular topic.



(Related)

http://lac.gmu.edu/publications/2023/How%20Intelligent%20is%20AI.pdf

How Intelligent is Artificial Intelligence?

Following the writing of my latest books on critical thinking in law and intelligence analysis (Schum and Tecuci, 2023; Tecuci and Schum, 2023; Tecuci, 2023), I was interviewed by Dr. Yvonne McDermott Rees (Professor of Law at Swansea University in the UK), from Evidence Dialogues (https://evidencedialogues.wordpress.com/). She asked me about ChatGPT and how this and other AI systems could be used in Law. These were, in essence, my answers.





Perhaps we need new rules for evidence. We definitely need tools to detect deep fakes.

https://www.salon.com/2023/04/15/deepfake-videos-are-so-convincing--and-so-easy-to-make--that-they-pose-a-political/

Deepfake videos are so convincing — and so easy to make — that they pose a political threat

No one wants to be falsely accused of saying or doing something that will destroy their reputation. Even more nightmarish is a scenario where, despite being innocent, the fabricated "evidence" against a person is so convincing that they are unable to save themselves. Yet thanks to a rapidly advancing type of artificial intelligence (AI) known as "deepfake" technology, our near-future society will be one where everyone is at great risk of having exactly that nightmare come true.

Deepfakes — or videos that have been altered to make a person's face or body appear to do something they did not in fact do — are increasingly used to spread misinformation and smear their targets. Political, religious and business leaders are already expressing alarm at the viral spread of deepfakes that maligned prominent figures like former US President Donald Trump, Pope Francis and Twitter CEO Elon Musk. Perhaps most ominously, a deepfake of Ukrainian President Volodymyr Zelenskyy attempted to dupe Ukrainians into believing their military had surrendered to Russia.





Update.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4394591

Global Information Technologies: Ethics and the Law

This is the 2023 edition of Global Information Technologies: Ethics and the Law (Second Edition West Academic Press), which uses the latest legal cases, statutory developments, and mass culture references to apply computer ethics in a global setting. Computer law and ethical dilemmas are presented in an applied format, using concrete legal disputes, regulatory actions, and court decisions to demonstrate that law is codified ethics. This thoroughly updated Second Edition addresses legal and ethical dilemmas created by recent advances in artificial intelligence, smart contracts, biometrics, drones, robotics, 3D printing, cryptocurrencies, the Internet of Things, and other evolving information technologies. Five leading ethical approaches: (1) Consequentialism, (2) Virtue and Duty Theory, (3) Conflict Perspective, (4) Social Contract Theory and (5) Libertarianism are operationalized in every chapter by applying them to recent legal developments and policy disputes. The five moral perspectives provide practical guidance on how to apply ethics and the law to diverse activities such as negotiating or litigating computer contracts, introducing software products into the marketplace, protecting website users from crimes and torts, and safeguarding online intellectual property rights. This is the first book to highlight the intersection between law and ethics in torts, cybercrimes, privacy, contracts, and all four branches of intellectual property law. Each substantive chapter ends with thoughtful review exercises to help the reader analyze the ethical and legal dilemmas posed by topics such as Internet monitoring, privacy, and intellectual property rights. Case studies are based upon legal opinions and regulations from the United States, the European Union, China, and the rest of the world.





The ancient Greeks had AI?

https://rednie.eco.unc.edu.ar/files/DT/232.pdf

Big Data, Algorithms, AI, Ethics, and the Economy: An Aristotelian Perspective

While a growing body of literature points to the advantages of using algorithms in big data processing, as well as applying them to artificial intelligence (AI), in order to achieve a desired output, it also warns about the pitfalls and perils in algorithm decision-making. Algorithms and AI are the machines and big data is the new oil. Criticisms come from different fields: legal, social, political, medical, and the economic. They argue that algorithms have the power to predict our wishes and behavior and, subsequently, to manage our life: they decide the music we listen to, the news we read, the information we obtain, the content we see online, the movies we watch, the health care we receive, the products we buy, and so on.

These achievements certainly represent an advancement in techniques that we must be willing to embrace. However, they confront us with the well-known technological ambivalence, that is, the fact that technology can be used for good or for bad. In this case, though, advances are global and radical. We are facing a new way of living with a profound anthropological impact, a new social order, the “algorithmic society”, which Balkin has described as, “a society organized around social and economic decisionmaking by algorithms, robots, and AI agents” (2018: 1151, nt. 1). Balkin asserts, “The Algorithmic Society features the collection of vast amounts of data about individuals and facilitates new forms of surveillance, control, discrimination and manipulation, both by governments and by private companies. Call this the problem of Big Data” (2018: 1153).