Saturday, January 21, 2023

Interesting failure of their Terms of Service. “You must allow us to ignore the GDPR if you want to use this service…” How many others will also fail?

https://thehackernews.com/2023/01/whatsapp-hit-with-55-million-fine-for.html

WhatsApp Hit with €5.5 Million Fine for Violating Data Protection Laws

At the heart of the ruling is an update to the messaging platform's Terms of Service that was imposed in the days leading to the enforcement of the General Data Protection Regulation (GDPR) in May 2018, requiring that users agree to the revised terms in order to continue using the service or risk losing access.

The complaint, filed by privacy non-profit NOYB, alleged that WhatsApp breached the regulation by compelling its users to "consent to the processing of their personal data for service improvement and security" by "making the accessibility of its services conditional on users accepting the updated Terms of Service."

"WhatsApp Ireland is not entitled to rely on the contract legal basis for the delivery of service improvement and security," the DPC said in a statement, adding the data collected so far amounts to a contravention of GDPR.





Are the cars smart enough to know what they’ve been asked to do is not right?

https://news.yahoo.com/self-driving-cars-create-opportunities-152405455.html

Self-driving cars create new opportunities for terrorist attacks, says FBI Director Chris Wray

The expanding use of self-driving cars opens up new ways for terrorists to harm Americans, FBI Director Christopher Wray warned Thursday at the World Economic Forum in Davos, Switzerland.

Speaking on a panel on national security, Wray said the FBI views autonomous vehicles as both a possible tool to cause physical harm and a potentially valuable source of personal data that could become a target.





Apparently, China wants to be the first “totally lawyer-free” country.

https://www.dw.com/en/how-chinas-ai-is-automating-the-legal-system/a-64465988

How China's AI is automating the legal system

… A court in the city of Hangzhou located south of Shanghai started employing AI in 2019. The judge's assistant program called Xiao Zhi 3.0, or "Little Wisdom," first assisted in a trial of 10 people who had failed to repay bank loans.

Previously, it would have taken 10 separate trials to settle the issue, but with Xiao Zhi 3.0, all the cases were resolved in one hearing with one judge and a decision was available in just 30 minutes.

At first, Xiao Zhi 3.0 took over repetitive tasks such as announcing court procedures during hearings.

Now, the technology is used to record testimony with voice recognition, analyze case materials, and verify information from databases in real time.





I thought we had concluded that AI would start reading works created by AI, thus reinforcing a pro-AI bias…

https://www.bespacific.com/what-happens-when-ai-has-read-everything/

What Happens When AI Has Read Everything?

The Atlantic – The dream of an artificial mind may never become a reality if AI runs out of quality prose to ingest—and there isn’t much left. What Happens When AI Has Read Everything? “Artificial intelligence has in recent years proved itself to be a quick study, although it is being educated in a manner that would shame the most brutal headmaster. Locked into airtight Borgesian libraries for months with no bathroom breaks or sleep, AIs are told not to emerge until they’ve finished a self-paced speed course in human culture. On the syllabus: a decent fraction of all the surviving text that we have ever produced.”

“The trouble is that, like other high-end human cultural products, good prose ranks among the most difficult things to produce in the known universe. It is not in infinite supply, and for AI, not any old text will do: Large language models trained on books are much better writers than those trained on huge batches of social-media posts. (It’s best not to think about one’s Twitter habit in this context.) When we calculate how many well-constructed sentences remain for AI to ingest, the numbers aren’t encouraging. A team of researchers led by Pablo Villalobos at Epoch AI recently predicted that programs such as the eerily impressive ChatGPT will run out of high-quality reading material by 2027. Without new text to train on, AI’s recent hot streak could come to a premature end…”





Tools & Techniques.

https://www.entrepreneur.com/science-technology/how-to-use-ai-tools-like-chatgpt-in-your-business/441616

Here's How Your Business Can Use 3 Popular AI Content Creation Tools

… Artificial intelligence is not only altering the course of the internet but also impacting the future of business. While some fear that it will have harmful economic repercussions by replacing people in jobs, AI can also serve as a game-changing tool to grow a business and increase its efficiency, helping with everything from lead generation to content creation.

Here are a few popular new platforms and how you can apply them:





Another way to use ChatGPT…

https://dilbert.com/strip/2023-01-17



Monday, January 16, 2023

I suspect this will further complicate an already complicated area.

https://www.insideprivacy.com/gdpr/gdpr-rights/court-of-justice-of-the-eu-decides-that-gdpr-right-of-access-allows-data-subjects-to-request-the-identity-of-each-data-recipient/

Court of Justice of the EU Decides that GDPR Right of Access Allows Data Subjects to Request the Identity of Each Data Recipient

On January 12, 2023, the Court of Justice of the EU (“Court”) decided that the GDPR’s right of access gives a data subject the choice between asking a controller for (i) the identity of each data recipient to whom the controller will or has disclosed the data subject’s personal data or (ii) only the categories of data recipients. The controller must comply with the data subject’s request, unless it is impossible to identify those recipients (e.g., because they are not yet known) or the controller demonstrates that the data subject’s access request is “manifestly unfounded or excessive.”

According to the Court, this interpretation of Article 15(1)(c) GDPR is in line with the GDPR’s principle of transparency and is necessary to enable data subjects to exercise their other GDPR rights (e.g., right of rectification, erasure and opposition). This interpretation is also confirmed by Article 19 GDPR, which expressly grants data subjects the right to receive from the controller the name of each data recipient, in the context of the controller’s obligation to inform all the recipients of the exercise of the data subject’s rights of rectification, erasure and opposition.





A failure to evolve or are we exposing weaknesses that have always been exploitable?

https://www.thebureauinvestigates.com/stories/2023-01-15/hacked-evidence-and-stolen-data-swamp-english-courts

Hacked evidence and stolen data swamp English courts

A multimillion-pound high court case between an authoritarian Gulf emirate and an Iranian-American businessman has revealed how hacked evidence is being used by leading law firms to advance their clients’ claims.

The case has included allegations that a former Metropolitan Police officer hired Indian hackers and that lawyers from a top City firm held a secret “perjury school” in the Swiss Alps to prepare false witness testimonies about how they got hold of illegally obtained information.

In November the Bureau of Investigative Journalism and the Sunday Times exposed the criminal activities of Aditya Jain, a 31-year-old computer security expert who set up a “hack-for-hire” operation from his apartment in Gurugram, India.

A striking feature of the English legal system is that a judge will accept hacked emails as evidence in court in the interests of justice unless persuaded to exclude them. Peter Ashford, a London solicitor and expert in the admissibility of evidence, claims the English system is “the most liberal”. He added: “Even if you’ve done the hacking, you’ve still got a pretty good chance of getting it in [to the court].”





Do you suppose they have a smarter version of ChatGPT than we do?

https://asia.nikkei.com/Business/China-tech/China-trounces-U.S.-in-AI-research-output-and-quality

China trounces U.S. in AI research output and quality

… Looking at quantity, the number of AI papers exploded from about 25,000 in 2012 to roughly 135,000 in 2021. This mirrors the AI boom that began around 2012, when deep learning came to the fore.

China has consistently stood atop the heap in terms of the volume of papers, the study shows. For 2021, it produced 43,000 papers – roughly twice as many as the U.S.



Sunday, January 15, 2023

Inextricably embedded ethics, I like it!

https://www.taylorfrancis.com/chapters/edit/10.4324/9781003189466-19/values-artificial-intelligence-systems-justin-biddle

Values in Artificial Intelligence Systems

This chapter argues that ethical values are inextricably embedded in Artificial Intelligence (AI) systems. AI systems assist in driving our cars and screen job applications for companies looking to hire new employees. In developing an AI system, researchers must make decisions about which problem is to be addressed and how this problem is to be framed. AI systems that involve the collection, use, and storage of data about human beings – including social media platforms, personal digital assistants, and contact tracing systems – have design features that significantly impact privacy. In creating an AI system that classifies individuals into categories, designers must decide which type of fairness they wish to encode into the system – which conceptions of fairness they wish to prioritize and which they are willing to violate.





Are lawyers beginning to sweat?

https://www.cnet.com/tech/computing/an-ai-lawyer-will-challenge-speeding-tickets-in-court-next-month/

AI as Lawyer: It's Starting as a Stunt, but There's a Real Need

Next month, AI will enter the courtroom, and the US legal system may never be the same.

An artificial intelligence chatbot, technology programmed to respond to questions and hold a conversation, is expected to advise two individuals fighting speeding tickets in courtrooms in undisclosed cities. The two will wear a wireless headphone, which will relay what the judge says to the chatbot being run by DoNotPay, a company that typically helps people fight traffic tickets through the mail. The headphone will then play the chatbot's suggested responses to the judge's questions, which the individuals can then choose to repeat in court.

It's a stunt. But it also has the potential to change how people interact with the law, and to bring many more changes over time. DoNotPay CEO Josh Browder says expensive legal fees have historically kept people from hiring traditional lawyers to fight for them in traffic court, which typically involves fines that can reach into the hundreds of dollars.



(Related)

https://www.bizjournals.com/buffalo/news/2023/01/15/can-artificial-intelligence-displace-attorneys.html

Can artificial intelligence displace attorneys? A ‘conversation’ about law with OpenAI

… As an attorney, I cannot help but wonder whether artificial intelligence has the ability to replace me. Can artificial intelligence formulate the same responses to client questions that I, an attorney with 25 years of experience, would?

When we compared OpenAI versus lawyers, lawyers seem to have the upper hand today, but how soon will that change?

Here is the request I entered into OpenAI (using the GPT-3 model): Describe the power of OpenAI and why that may scare a lawyer in six paragraphs.

OpenAI’s response: OpenAI’s research has had a profound impact on the AI field as a whole, as well as various industries. OpenAI’s breakthroughs in natural language processing (NLP) have enabled machines to communicate with humans in a more natural way. OpenAI has also made advances in reinforcement learning, which has enabled machines to learn from their environment and take action without explicit instructions. OpenAI has also developed bots that have been able to beat the world’s best players in games like Dota 2 and Go.

The power of OpenAI may scare a lawyer because of its potential to automate many aspects of the legal profession. For example, OpenAI has developed GPT-3, an NLP system that can perform legal research and write legal documents. This could potentially make the work of lawyers much easier, but could also lead to job losses in the legal profession. Additionally, OpenAI’s research could lead to the development of AI-powered systems that can make decisions on legal cases, potentially leading to a shift away from human-centric legal decision-making.

OpenAI’s advances could also lead to new challenges in the legal field. For example, OpenAI’s research on AGI could lead to the development of systems that can make decisions and take action autonomously. This could lead to difficult questions about the legal responsibility of these systems and how to regulate them. Additionally, OpenAI’s research could lead to new challenges in terms of data privacy, as OpenAI’s systems have access to large amounts of data that could be used for nefarious purposes.

The power of OpenAI could lead to a revolution in many fields, including the legal profession. While OpenAI’s research could lead to many benefits in the legal field, it could also lead to new challenges that lawyers will have to grapple with. For this reason, OpenAI’s power may be something that lawyers need to be aware of and prepare for.

Never mind that OpenAI’s response was four paragraphs, this is a pretty good answer, although it’s missing some fundamental issues.
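For readers curious how the author’s experiment translates into code, here is a minimal sketch of assembling a prompt like the one above for GPT-3’s (then-current) completions endpoint. The helper function, model name, and parameter values are my assumptions for illustration, not details taken from the article.

```python
# Hypothetical sketch: building a request payload for the legacy
# OpenAI /v1/completions endpoint (the GPT-3 API as of early 2023).

def build_completion_request(prompt: str,
                             model: str = "text-davinci-003",
                             max_tokens: int = 1024) -> dict:
    """Assemble the JSON payload for a legacy completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,   # room for several paragraphs of output
        "temperature": 0.7,         # moderate creativity for essay-style answers
    }

payload = build_completion_request(
    "Describe the power of OpenAI and why that may scare a lawyer "
    "in six paragraphs."
)
# With the openai library (v0.x) installed and an API key configured,
# this payload maps directly to: openai.Completion.create(**payload)
print(payload["model"])
```

Note that nothing in the payload enforces the “six paragraphs” instruction; as the author found, the model is free to return four.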





Which will go first, lawyers or judges?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4319969

The Rule of Algorithm and the Rule of Law

Can AI adjudicative tools in principle better enable us to achieve the rule of law by replacing judges? This article argues that answers to this question have been excessively focussed on 'output' dimensions of the rule of law – such as conformity of decisions with the applicable law – at the expense of vital 'process' considerations such as explainability, answerability, and reciprocity. These process considerations do not by themselves warrant the conclusion that AI adjudicative tools can never, in any context, properly replace human judges. But they help bring out the complexity of the issues – and the potential costs – that are involved in this domain.





An interesting perspective?

https://link.springer.com/chapter/10.1007/978-3-031-17992-1_6

The Future

In the twenty-first century, the pace of change has become exponential. This means that we are now in a period of exponential ethics that we might call “hyper-ethics.” Hence, a reliable ethical framework is important to ensure that our technological advances do not yield deleterious outcomes. This chapter describes the various technological innovations that will remake our physical beings and our internal selves and that will result in artificial intelligence (AI), that is many orders of magnitude more intelligent than humans—a “transhumanist” future. A host of pivotal questions arises from these developments including how to address challenges such as fewer jobs, less equality, body enhancements, altered family structures, and radical life extension. How should Judaism respond to these fundamental issues which in essence require the reconsideration of what it means to be human?