Saturday, December 10, 2022

First but probably not the last…

https://sfstandard.com/business/a-san-francisco-law-firm-is-leading-the-charge-against-ai/

A San Francisco Law Firm Is Leading the Charge Against AI

… Experts say lawyers will be playing catch up as concerns mount about the technology taking others’ work without their permission. One San Francisco law firm is leading the charge when it comes to holding AI companies accountable.

In November, Joseph Saveri Law Firm filed a class-action lawsuit in U.S. Federal Court against a slew of defendants that includes two San Francisco companies, OpenAI and GitHub, along with GitHub’s owner Microsoft.

“As far as we know, this is the first class-action case in the U.S. challenging the training and output of AI systems,” co-counsel Matthew Butterick wrote.





Something to ponder and prepare for?

https://www.politico.com/news/magazine/2022/12/09/revolutionary-conservative-legal-philosophy-courts-00069201

Critics Call It Theocratic and Authoritarian. Young Conservatives Call It an Exciting New Legal Theory.

At the center of this debate was Harvard law professor Adrian Vermeule, whose latest book served as the ostensible subject of the symposium. In conservative legal circles, Vermeule has become the most prominent proponent of “common good constitutionalism,” a controversial new theory that challenges many of the fundamental premises and principles of the conservative legal movement. The cornerstone of Vermeule’s theory is the claim that “the central aim of the constitutional order is to promote good rule, not to ‘protect liberty’ as an end in itself” — or, in layman’s terms, that the Constitution empowers the government to pursue conservative political ends, even when those ends conflict with individual rights as most Americans understand them. In practice, Vermeule’s theory lends support to an idiosyncratic but far-reaching set of far-right objectives: outright bans on abortion and same-sex marriage, sweeping limits on freedom of expression and expanded authorities for the government to do everything from protecting the natural environment to prohibiting the sale of porn.





Tools & Techniques. Something to experiment with?

https://www.jumpstartmag.com/top-5-ai-tools-for-content-writers/

Top 5 AI Tools for Content Writers

Content writing is one of the most in-demand jobs right now. However, even the best writers can struggle to create high-quality content regularly. With a bit of help from artificial intelligence (AI), you can quickly write engaging and informative content. While some tools generate content that is ready to go, others require a bit of work to meet your desired standards. All things considered, with AI content generator tools, the adage “teamwork makes the dream work” rings true. Together, you and the AI can produce quality content in no time.

However, it is worth noting that not all AI content-generation tools are created equal. Some require minimal input, while others need more direction. The key is to find the right tool for you and your team. This article will explore the top five AI content-generation tools (based on the author’s usage of each) to help you decide.



Friday, December 09, 2022

Interesting.

https://www.brookings.edu/research/the-geopolitics-of-ai-and-the-rise-of-digital-sovereignty/

The geopolitics of AI and the rise of digital sovereignty

On September 29, 2021, the United States and the European Union’s (EU) new Trade and Technology Council (TTC) held their first summit. It took place in the old industrial city of Pittsburgh, Pennsylvania, under the leadership of the European Commission’s Vice-President, Margrethe Vestager, and U.S. Secretary of State Antony Blinken. Following the meeting, the U.S. and the EU declared their opposition to artificial intelligence (AI) that does not respect human rights and referenced rights-infringing systems, such as social scoring systems.[1] During the meeting, the TTC clarified that “The United States and European Union have significant concerns that authoritarian governments are piloting social scoring systems with an aim to implement social control at scale. These systems pose threats to fundamental freedoms and the rule of law, including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems.”[2]

To understand the extent to which we are moving towards varying forms of technological decoupling, this article first describes the unique positions of the European Union, United States and China concerning the regulation of data and the governance of artificial intelligence. The article then discusses the implications of these different approaches for technological decoupling and for specific policies around AI, such as the U.S. Algorithmic Accountability Act, the EU’s AI Act, and China’s regulation of recommender engines.





Might be fun to try on laws and regulations…

https://martechseries.com/predictive-ai/ai-platforms-machine-learning/new-ai-service-summarizes-content-introducing-notedly/

New AI Service Summarizes Content: Introducing Notedly

A new artificial intelligence service by Syntak, LLC is making waves for being able to automatically condense documents into bullet points. Notedly prides itself on its ability to process – and understand – documents of any kind, cutting down read time by at least 50% and increasing content absorption capabilities.
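Notedly’s internals aren’t public, so here is only a generic extractive sketch of the bullet-point idea (my own toy, not the company’s method): score sentences by the frequency of the words they contain and keep the top few, in document order. The sample text is a made-up placeholder.

```python
# Toy extractive "bullet point" summarizer -- a generic sketch of the idea,
# not Notedly's actual method. Scores sentences by word frequency and keeps
# the top-scoring ones in their original order.

import re
from collections import Counter

def bullet_summary(text: str, n_bullets: int = 3) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:n_bullets])
    return ["• " + s for s in sentences if s in top]

# Placeholder text, just to show the shape of the output:
sample = (
    "The act applies to providers placing covered systems on the market. "
    "Providers must document a risk management process before deployment. "
    "Regulators may impose penalties calculated from annual turnover."
)
print("\n".join(bullet_summary(sample, n_bullets=2)))
```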



Thursday, December 08, 2022

Are we sure the FBI is not working against us? (Perhaps they are already tapping iCloud data?)

https://www.macrumors.com/2022/12/08/fbi-privacy-groups-icloud-encryption/

FBI Calls Apple's Enhanced iCloud Encryption 'Deeply Concerning' as Privacy Groups Hail It As a Victory for Users

Apple yesterday announced that end-to-end encryption is coming to even more sensitive types of iCloud data, including device backups, messages, photos, and more, meeting the longstanding demand of both users and privacy groups who have rallied for the company to take the significant step forward in user privacy.





When you can’t shoot ‘em, nuke ‘em?

https://www.theregister.com/2022/12/07/san_francisco_terminates_killer_robots/

San Francisco terminates explosive killer cop bots

San Francisco legislators this week changed course on their killer robot policy, banning the police from using remote-control bots fitted with explosives. For now.

On Tuesday, the city's Board of Supervisors voted unanimously to explicitly prohibit lethal force by police robots following a public backlash and worldwide media attention. Under a previously approved policy, SF police robots under human control could have used explosives to kill suspects. The droids were not allowed to use guns.





Prove your innocence?

https://www.politico.eu/article/google-delete-search-result-fake-eu-court-rule/

Google must delete search results about you if they’re fake, EU court rules

People in Europe can get Google to delete search results about them if they prove the information is "manifestly inaccurate," the EU's top court ruled Thursday.

The case kicked off when two investment managers requested Google to dereference results of a search made on the basis of their names, which provided links to certain articles criticising their group’s investment model. They say those articles contain inaccurate claims.

Google refused to comply, arguing that it was unaware whether the information contained in the articles was accurate or not.

"The right to freedom of expression and information cannot be taken into account where, at the very least, a part – which is not of minor importance – of the information found in the referenced content proves to be inaccurate," the court said in a press release accompanying the ruling.

People who want to scrub inaccurate results from search engines have to provide sufficient proof that what is said about them is false. But it doesn't have to come from a court case against a publisher, for instance. They have "to provide only evidence that can reasonably be required of [them] to try to find," the court said.





Dealing with a free press…

https://dilbert.com/strip/2022-12-08



Wednesday, December 07, 2022

Is this another demonstration of Ukraine’s ability to strike “deep” into Russia? I don’t think so. But if Russia thinks so, this could be the start of a Cyber War.

https://www.bleepingcomputer.com/news/security/massive-ddos-attack-takes-russia-s-second-largest-bank-vtb-offline/

Massive DDoS attack takes Russia’s second-largest bank VTB offline

Russia's second-largest financial institution VTB Bank says it is facing the worst cyberattack in its history after its website and mobile apps were taken offline due to an ongoing DDoS (distributed denial of service) attack.

"At present, the VTB technological infrastructure is under unprecedented cyberattack from abroad," stated a VTB spokesperson to TASS (translated).

The pro-Ukraine hacktivist group, 'IT Army of Ukraine,' has claimed responsibility for the DDoS attacks against VTB, announcing the campaign on Telegram at the end of November.





Selling your personal data.

https://www.bespacific.com/amazon-offering-users-2-dollars-month-for-track-phone-data-2022-12/

Amazon is offering customers $2 per month for letting the company monitor the traffic on their phones

Insider:

  • Amazon’s Ad Verification program offers select users $2 per month for sharing their traffic data.

  • It is part of Amazon’s Shopper Panel, an invite-only program that offers users financial rewards.

  • The voluntary program could raise privacy concerns over how Amazon handles customer data…

Under the company’s new invite-only Ad Verification program, Amazon is tracking what ads participants saw, where they saw them, and the time of day they were viewed. This includes Amazon’s own ads and third-party ads on the platform. Through the program, Amazon hopes to offer more personalized-ad experiences to customers that reflect what they have previously purchased, according to Amazon. “Your participation will help brands offer better products and make ads from Amazon more relevant,” Amazon wrote in its Shopper Panel FAQ…





Agree this is interesting. It could be very difficult to implement.

https://www.schneier.com/blog/archives/2022/12/the-decoupling-principle.html

The Decoupling Principle

This is a really interesting paper that discusses what the authors call the Decoupling Principle:

The idea is simple, yet previously not clearly articulated: to ensure privacy, information should be divided architecturally and institutionally such that each entity has only the information they need to perform their relevant function. Architectural decoupling entails splitting functionality for different fundamental actions in a system, such as decoupling authentication (proving who is allowed to use the network) from connectivity (establishing session state for communicating). Institutional decoupling entails splitting what information remains between non-colluding entities, such as distinct companies or network operators, or between a user and network peers. This decoupling makes service providers individually breach-proof, as they each have little or no sensitive data that can be lost to hackers. Put simply, the Decoupling Principle suggests always separating who you are from what you do.

Lots of interesting details in the paper.
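A minimal sketch of the architectural half of the idea (my own illustration, not code from the paper): one service authenticates and issues an opaque token, a separate service establishes the session and never sees a name. A real design would use blind signatures (Privacy Pass-style tokens) so even the token value can’t be used to link the two services’ records.

```python
# Toy sketch of the Decoupling Principle (illustration only, not from the paper):
# the auth service learns WHO you are but stores nothing about what you do;
# the connectivity service learns WHAT you do but never sees an identity.

import secrets

class AuthService:
    """Proves who may use the network; keeps no activity data."""
    def __init__(self, subscribers: dict[str, str]):
        self.subscribers = subscribers      # username -> password
        self.valid_tokens = set()           # issued tokens, not tied to sessions

    def issue_token(self, user: str, password: str) -> str:
        if self.subscribers.get(user) != password:
            raise PermissionError("unknown subscriber")
        token = secrets.token_hex(16)       # random capability, no identity inside
        self.valid_tokens.add(token)
        return token


class ConnectivityService:
    """Establishes sessions; sees tokens and traffic, never names."""
    def __init__(self, auth: AuthService):
        self.auth = auth
        self.sessions: dict[str, dict] = {}  # token -> session state

    def connect(self, token: str) -> dict:
        # In practice the token would carry a verifiable (blind) signature so
        # no lookup against the auth service is needed at all.
        if token not in self.auth.valid_tokens:
            raise PermissionError("invalid token")
        self.sessions[token] = {"bytes_sent": 0}
        return self.sessions[token]


auth = AuthService({"alice": "hunter2"})
net = ConnectivityService(auth)
session = net.connect(auth.issue_token("alice", "hunter2"))
# Breach either service alone and you get half a picture:
# names without activity, or activity without names.
```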





Is this common?

https://www.bespacific.com/legal-astroturfing/

Legal Astroturfing

Cheung, Alvin, Legal Astroturfing (November 17, 2022). Available at SSRN: https://ssrn.com/abstract=4279133 or http://dx.doi.org/10.2139/ssrn.4279133

“This Article identifies the phenomenon of legal astroturfing and considers it in relation to other aspects of “democratic backsliding” and contemporary authoritarianism. In particular, I argue that legal astroturfing is especially pernicious not because it is illiberal or anti-democratic (although it is certainly both of these), but because it involves two layers of deception by the regime: legal astroturfing can obfuscate both what is being done, as well as who is doing it. I further argue that this deceptive element of legal astroturfing makes the tactic not merely illiberal or anti-democratic, but anti-legal.”





Things to come.

https://ourworldindata.org/brief-history-of-ai

The brief history of artificial intelligence: The world has changed fast – what might be next?

Despite their brief history, computers and AI have fundamentally changed what we see, what we know, and what we do. Little is as important for the future of the world, and our own lives, as how this history continues.





An interesting article, but one thing caught my eye...

https://blogs.microsoft.com/ai/a-conversation-with-kevin-scott-whats-next-in-ai/

A conversation with Kevin Scott: What’s next in AI

For example, I’ve been playing around with an experimental system I built for myself using GPT-3 designed to help me write a science fiction book, which is something that I’ve wanted to do since I was a teenager. I have notebooks full of synopses I’ve created for theoretical books, describing what the books are about and the universes where they take place. With this experimental tool, I have been able to get the logjam broken. When I wrote a book the old-fashioned way, if I got 2,000 words out of a day, I’d feel really good about myself. With this tool, I’ve had days where I can write 6,000 words in a day, which for me feels like a lot. It feels like a qualitatively more energizing process than what I was doing before.

This is the “copilot for everything” dream—that you would have a copilot that could sit alongside you as you’re doing any kind of cognitive work, helping you not just get more done, but also enhancing your creativity in new and exciting ways.





Tools & Techniques. (From CU Boulder)

https://phet.colorado.edu/

Interactive Simulations for Science and Math



Tuesday, December 06, 2022

If this is Ukraine, things could get very interesting soon.

https://www.schneier.com/blog/archives/2022/12/crywiper-data-wiper-targeting-russian-sites.html

CryWiper Data Wiper Targeting Russian Sites

Kaspersky is reporting on a data wiper masquerading as ransomware that is targeting local Russian government networks.

The Trojan corrupts any data that’s not vital for the functioning of the operating system. It doesn’t affect files with extensions .exe, .dll, .lnk, .sys or .msi, and ignores several system folders in the C:\Windows directory. The malware focuses on databases, archives, and user documents.
So far, our experts have seen only pinpoint attacks on targets in the Russian Federation. However, as usual, no one can guarantee that the same code won’t be used against other targets.

Nothing leading to an attribution.

News article.

Slashdot thread.





It’s year-end wrap up time again.

https://www.makeuseof.com/biggest-data-breaches-2022/

The 5 Biggest Data Breaches of 2022

Here are some of the most notable data hacks of the past 12 months, taking into account the number of people affected and the type of info leaked.





Evidence evidence, everywhere you look there’s evidence.

https://www.pogowasright.org/law-enforcement-is-extracting-tons-of-data-from-vehicle-infotainment-systems/

Law Enforcement Is Extracting Tons Of Data From Vehicle Infotainment Systems

Tim Cushing writes:

For years, cars have collected massive amounts of data. And for years, this data has been extraordinarily leaky. Manufacturers don’t like to discuss how much data gets phoned home from vehicle systems. They also don’t like to discuss the attack vectors these systems create, either for malicious hackers or slightly less malicious law enforcement investigators.
The golden age of surveillance definitely covers cars and their infotainment systems. A murder investigation had dead-ended until cops decided to access the on-board computers in the victim’s truck, which led investigators to the suspect nearly two years after the investigation began.
And whatever investigators can’t access themselves will be sold to them.

Read more at TechDirt.

See also Cops Can Extract Data From 10,000 Different Car Models’ Infotainment Systems, via Joe Cadillic





Can you trust an electronic lawyer?

https://www.bespacific.com/the-supply-and-demand-of-legal-help-on-the-internet/

The Supply and Demand of Legal Help on the Internet

Hagan, Margaret, The Supply and Demand of Legal Help on the Internet (October 17, 2022). Margaret D. Hagan, “The Supply and Demand of Legal Help on the Internet,” Legal Tech and the Future of Civil Justice, edited by David Freeman Engstrom. Cambridge University Press, Forthcoming. Available at SSRN: https://ssrn.com/abstract=4250390

“Faith in technology as a way to narrow the civil justice gap has steadily grown alongside an expanding menu of websites offering legal guides, document assembly tools, and case management systems. Yet little is known about the supply and demand of legal help on the internet. This chapter mounts a first-of-its-kind effort to fill that gap by measuring website traffic across the mix of commercial, court-linked, and public interest websites that vie for eyeballs online. Commercial sites, it turns out, dominate over the more limited ecosystem of court-linked and public interest online resources, and yet commercial sites often engage in questionable practices, including the baiting of users with incomplete information and then charging for more. Search engine algorithms likely bolster that dominance. Policy implications abound for a new generation of A2J technologies focused on making people’s legal journeys less burdensome and more effective. What role should search engines play to promote access to quality legal information? Could they, or should they, privilege trustworthy sources? Might there be scope for public-private partnerships, or even a regulatory role, to ensure that online searches return trustworthy and actionable legal information?”





What should the court accept?

https://www.computerworld.com/article/3682149/biometrics-are-even-less-accurate-than-we-thought.html

Biometrics are even less accurate than we thought

… “Any biometric vendor or algorithm creator can submit their algorithm for review. NIST received 733 submissions for its fingerprint review and more than 450 submissions for its facial recognition reviews. NIST accuracy goals depend on the review and scenario being tested, but NIST is looking for an accuracy goal around 1:100,000, meaning one error per 100,000 tests.

"So far, none of the submitted candidates come anywhere close,” Grimes wrote, summarizing the NIST findings. “The best solutions have an error rate of 1.9%, meaning almost two mistakes for every 100 tests. That is a far cry from 1:100,000 and certainly nowhere close to the figures touted by most vendors. I have been involved in many biometric deployments at scale and we see far higher rates of errors — false positives or false negatives — than even what NIST is seeing in their best-case scenario lab condition testing. I routinely see errors at 1:500 or lower.”





Does this make it seem more like a scam?

https://www.makeuseof.com/platforms-pay-users-learn-crypto/

10 Platforms That Pay Users to Learn About Crypto

Learn-to-earn platforms have become a highly popular way of promoting various protocols, and more platforms are beginning to offer crypto rewards for online learners.

In many cases, all users need to do is register an account and open a wallet with learn-to-earn platforms to start receiving cryptocurrency. As a result, over time, it’s even possible to build a fair portfolio without having to make a deposit.





Perspective. AI as the author of well-argued, documented lies...

https://stratechery.com/

AI Homework





Resource.

https://www.bespacific.com/futurepedia/

FUTUREPEDIA

“Futurepedia is the largest AI tools directory on the internet. It is updated with 5+ new AI tools daily. Sign up to save and share your favourite AI tools.” Users may browse to identify AI tools similar to ones already submitted.



Monday, December 05, 2022

Fortunately, I never set off a false alarm while sitting in the bar… (Skiing is just too much like exercise for an old retired guy.)

https://www.pogowasright.org/911-dispatchers-say-skiers-are-accidentally-setting-off-apples-new-crash-detection-technology-without-realizing-triggering-emergency-calls/

911 dispatchers say skiers are accidentally setting off Apple’s new crash-detection technology without realizing, triggering emergency calls

Brittney Nguyen reports:

Emergency dispatchers in a county in Utah told a local news outlet that they’re seeing a rise in accidental 911 calls from skiers who have new Apple products with its crash-detection technology.
Suzie Butterfield, a Summit County Dispatch Center supervisor, told KSL.com that dispatchers have been getting phone calls alerting them to “a severe crash or they’ve been involved in a car accident.”
Apple’s crash-detection technology sends users a message with an alarm sound if it detects a crash. The message can be dismissed, but if it’s not within 20 seconds, the technology sends an automated message to the closest emergency call center with the caller’s GPS coordinates and their number to call back.

Read more at Business Insider.
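The escalation logic described above is easy to sketch (a hypothetical illustration of the reported behaviour, not Apple’s code): sound the alert, wait up to 20 seconds for a dismissal, otherwise send the automated message with coordinates and a callback number.

```python
# Hypothetical sketch of the 20-second escalation window described above;
# this is NOT Apple's implementation, only the reported behaviour.

import threading

DISMISS_WINDOW_SECONDS = 20

def on_crash_detected(gps_coords: tuple, callback_number: str,
                      dismissed: threading.Event) -> None:
    print("Severe crash detected. Sounding alarm; swipe to dismiss.")
    # Block for up to 20 seconds waiting for the user to dismiss the alert.
    if dismissed.wait(timeout=DISMISS_WINDOW_SECONDS):
        print("Alert dismissed in time; no call placed.")
    else:
        # No dismissal: notify the nearest emergency call center with the
        # location and a number to call back (example values only).
        print(f"Auto-message to dispatch: possible crash at {gps_coords}, "
              f"call back {callback_number}.")

# A skier who never notices the alert simply never sets the event,
# so after 20 seconds the automated call goes out:
on_crash_detected((40.65, -111.50), "+1-555-0100", threading.Event())
```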





Undue reliance? Using technology in ways never intended.

https://news.yahoo.com/colorado-grandmother-sues-police-detective-054334297.html

Colorado grandmother sues police detective following SWAT raid based on false 'Find my iPhone' ping

An elderly Colorado woman is suing a Denver police detective who ordered a SWAT raid on her house after it was falsely pinged by Apple's "Find my" app as the location of several stolen items — including six firearms and an old iPhone — according to a lawsuit filed Wednesday.





This might work on the east coast, but probably not in most of the US.

https://thenextweb.com/news/europe-take-note-france-bans-short-haul-flights

Europe, take note: France bans short-haul flights

France has been given the green light to ban short-haul domestic flights, specifically on routes where there is a train alternative that takes less than 2.5 hours.

Unlike many parts of the world, Europe already has thousands of kilometers of dedicated high-speed railway, with the leading countries in this regard including France, Germany, Spain, Italy, the Netherlands, Belgium, Denmark, Sweden, Poland, and the UK.





Useful?

https://www.makeuseof.com/dont-get-crypto-quick-explainer/

Don't Get Crypto? The Quick Jargon-Busting Crypto Explainer

There's no denying that the cryptocurrency realm is complex. There are so many cogs in this machine that understanding how it works can be nothing short of overwhelming. So, if you don't really get crypto but want to learn the basics, check out the guide below to get a solid grip on the dynamics of the cryptocurrency world.





Too many laws?

https://dilbert.com/strip/2022-12-05



Sunday, December 04, 2022

Intimate, as we normally define intimate. Is that different enough from ‘personal’ that we need a new law?

https://www.pogowasright.org/the-right-to-intimate-privacy-an-interview-with-danielle-citron/

The Right to Intimate Privacy: An Interview with Danielle Citron

Julia Angwin’s newsletter has a great interview with Danielle Citron, privacy law scholar and advocate for privacy rights. She starts by providing a brief recap of some of Citron’s credentials and accomplishments in the field:

In her new book, “The Fight for Privacy: Protecting Dignity, Identity and Love in the Digital Age,” Danielle Citron calls for a new civil right to be established protecting intimate privacy. This is my second newsletter interviewing Danielle, who is the leading legal scholar in the emerging field of cyber civil rights. Two years ago, I interviewed her about efforts to reform Section 230 of the Communications Decency Act—a law sometimes referred to as the Magna Carta of the internet.
Citron is the Jefferson Scholars Foundation Schenck Distinguished Professor in Law and Caddell and Chapman Professor of Law at the University of Virginia, where she is the director of the school’s LawTech Center. In 2019, Citron was named a MacArthur Fellow based on her work on cyberstalking and intimate privacy.

Here’s a snippet from the interview:

Angwin: You call for a civil right to intimate privacy. What does that mean?
Citron: Modern civil rights laws protect against invidious discrimination and rightly so. I want us also to conceive of civil rights as both a commitment for all to enjoy and something that provides special protection against discrimination. Because who is most affected and harmed by the sharing of intimate information? Women, non-White people, and LGBTQ+ individuals, many of whom often have more than one vulnerable identity.
Currently, the law woefully underprotects intimate privacy.

Go read the whole interview at TheMarkup.





First responder tools have to get it right! (The police tool belt?)

https://bulletin.cepol.europa.eu/index.php/bulletin/article/view/540

Mobile Forensics and Digital Solutions

Mobile devices have become an indispensable part of modern society and are used throughout the world on a daily basis. The proliferation of such devices has rendered them a crucial part of criminal investigations and has led to the rapid advancement of the scientific field of Mobile Forensics. The forensic examination of mobile devices provides essential information for authorities in the investigation of cases and their relative importance advances as more evidence and traces of criminal activity can be acquired through the analysis of the corresponding forensic artifacts. Data related to the device user, call logs, text messages, contacts, image and video files, notes, communication records, networking activity and application related data, among others, with correct technical interpretation and correlation through expert analysis, can significantly contribute to the successful completion of digital criminal investigations. The above underline the necessity for advanced forensic tools that will utilize the most prominent achievements in Data Science. In this paper, the current status of Mobile Forensics as a branch of Digital Forensics is examined by exploring the most important challenges that digital forensic examiners face and investigating whether Artificial Intelligence and Machine Learning solutions can revolutionize the daily practice with respect to digital forensics investigations. The utilization of these emerging technologies provides crucial tools and enhances the professional expertise of digital forensic scientists, paving the way to overcome the critical challenges of digital criminal investigations.





Another overview…

https://link.springer.com/chapter/10.1007/978-3-031-19039-1_5

AI Ethics and Policy

With rapid growth and adoption of emerging technologies like AI, ethical use of such technologies become paramount. Many such ethical principles also get formally codified into law and policy. We begin this chapter by differentiating between AI and digital ethics, the key difference being that the latter tends to have broader scope than the former. We then dive into the philosophy of ethics, followed by a discussion of how AI ethics is being incorporated into policy. Several countries are used as real-world examples, but to illustrate such policies in depth, we provide two case studies. The first of these is on the influential and much-discussed General Data Protection Regulation (GDPR) enacted in the European Union in the last decade. Although it is still controversial, and perhaps too early to say, whether enforcement of GDPR has been sufficiently strong or effective, the regulation has already been used to administer a number of fines and penalties on large corporations like British Airways and Marriott. The second case study is on the US-based National Defense Authorization Act (NDAA). We close the chapter with a discussion of AI ethics in higher education and research.





Over-monitoring, or over-reaction to a little monitoring?

http://global-workplace-law-and-policy.kluwerlawonline.com/2022/11/23/governing-data-governing-technology-complementary-approaches-for-the-future-of-work/

Governing data, governing technology? Complementary approaches for the future of work

An employer today can learn about interactions among employees or with customers via sensors and a vast variety of softwares. Is the tone of voice friendly enough with customers? How much time is spent on emailing or away from the assigned desk? Scores, ‘idle’ or silent buttons, are making the workplace a place where data is constantly accumulated and processed through Artificial Intelligence (AI) and the Internet of Things (IoT). Breaks can lead to penalties, from reduced bonuses to more serious sanctions [1]. These are just examples that represent strong evidence in the labour law debate: the recourse to data is changing organisational models and increasing employers’ capability to monitor the workforce [2]. Thus, the self-determination and purpose limitation principles offered by the current General Data Protection Regulation (EU Reg 2016/679) are now standing under the magnifying glass: can they preserve the order of powers in subordinate employment that datafication is disrupting? Or does guaranteeing individual rights against a vast and complex surveillance society risk creating an unequal David-and-Goliath conflict? [3]

This contribution suggests that data protection law at work is and will be crucial in ensuring labour protection in datafied workplaces. The present focus, however, dominated by AI and IoT, needs to be complemented with the governance of technologies (thus not only of data flows) that place structural limitations on employees’ fundamental freedoms. Such a complementary approach can already be recognised in the European Commission’s Industry 5.0 strategy, with the proposal for a regulation on artificial intelligence as one of the main (yet problematic) developments [4].





Do we need AI cops?

https://www.researchgate.net/profile/Ariadna_Ochnio/publication/365470169_What_are_the_main_problems_facing_EU_criminal_law_today/links/637641a354eb5f547cde7753/What-are-the-main-problems-facing-EU-criminal-law-today.pdf

What are the main problems facing EU criminal law today?

EU criminal law has to confront a number of issues affecting almost every area of social life, starting with the use of cutting-edge technology based on artificial intelligence, through accelerating climate change, environmental degradation and refugee flows, and ending with the recurring crises of the rule of law. The policy of solving these problems will influence the direction of the development of EU criminal law in the near future. However, a wide array of problems makes it difficult to discuss them all in one collective study. It is rather easier to identify some thematic circles around which the EU is now focusing its criminal justice strategies. For this reason, this book deals with a selected number of issues of EU criminal law.





Only six?

https://ora.ox.ac.uk/objects/uuid:9ed3716e-8aba-44fc-a70c-e6f0488cf130

Six human-centered artificial intelligence grand challenges

Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhance the human condition. These grand challenges are the result of an international collaboration across academia, industry and government and represent the consensus views of a group of 27 experts in the field of human-centered artificial intelligence. In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting human’s cognitive capacities. We hope that these challenges and their associated research directions serve as a call for action to conduct research and development in AI that serves as a force multiplier towards more fair, equitable and sustainable societies.