Saturday, December 17, 2022

Is it possible that an attack like this one could start a war? Imagine the pressure a government would be under to reply to a ransom demand!

https://www.makeuseof.com/fubotv-states-world-cup-outage-caused-by-cyberattack/

FuboTV States World Cup Outage Was Caused by Cyberattack

On December 15th, 2022, sports-focused streaming service FuboTV released a statement via Business Wire regarding the outage customers experienced during the France vs. Morocco World Cup soccer game on December 14th. The game was being streamed on FuboTV's platform, but customers reported at the time that they were having trouble accessing their accounts.

At the time of writing, FuboTV has not discussed the nature of the attack suffered on December 14th. We don't know whether this was a denial-of-service attack, a zero-day exploit, or some other method.

We also don't know if the cybercriminals responsible for this attack managed to get their hands on any valuable data, such as personal customer information.

It seems the investigation into this cybercrime is in its early stages. FuboTV has assured readers that it will provide more information as things progress.





Elaborate, but technically trivial.

https://www.msn.com/en-us/news/world/now-car-thieves-are-using-wildlife-cameras/ar-AA15879I

Now Car Thieves Are Using Wildlife Cameras

The scheme often starts with spotters checking out cars at local shows. When they find the make and model they need to steal, one of them will craftily hide a magnetic tracking device on the classic car. That allows them to find where the vehicle is being stored so they can then stake out that location.

That’s when these criminals will put up wildlife cameras around your property. It might sound bizarre, but these crafty individuals use the cameras to figure out when you usually come and go from your house, determining when it’s likely you’re not home but your classic car is.





Probably more good than evil…

https://www.freetech4teachers.com/2022/12/some-thoughts-about-ai-in-education.html

Some Thoughts About AI in Education

On Tuesday I published a short overview of ChatGPT, a free artificial intelligence writing tool. I followed that up with a post on Wednesday morning about Canva’s new artificial intelligence writing tool called Magic Write. In both instances I mentioned that I think there are both good and bad things that could come from these kinds of AI tools. Let’s take a look at some of each.

I’m old enough to remember teachers telling students that they couldn’t use internet sources in their research papers. And I remember many raging debates about whether or not students should look at Wikipedia. Hopefully, I’ll live long enough to remember the current debates about the use of AI in education.



Thursday, December 15, 2022

At what point should you be locked out of your car? Should we include a mental health check?

https://fpf.org/blog/driver-impairment-and-privacy-what-lies-ahead-for-driver-impairment-detection/

DRIVER IMPAIRMENT AND PRIVACY: WHAT LIES AHEAD FOR DRIVER IMPAIRMENT DETECTION?

The 2021 Infrastructure Act mandates that the US Department of Transportation issue a rule requiring the creation and implementation of monitoring systems to deter drivers impaired by alcohol, inattention, or drowsiness. The Department of Transportation (DOT) must establish a Federal mandatory motor vehicle safety standard to “passively monitor a motor vehicle driver’s performance to accurately detect if the driver may be impaired.” (“Advanced Drunk and Impaired Driving Prevention Technology,” Sec. 24220(b)(1)(A)(i)). Details in the statute are sparse; the DOT’s rule will likely establish many practical and technical details that the statute does not address.

Among the actions mandated under the 2021 law, the DOT must set a safety standard for the use of blood alcohol detection technology within three years, after which vehicle manufacturers will have between two and three years to install the systems in all new passenger motor vehicles manufactured after the effective date. In practice, such systems will be required for all new vehicles beginning in November 2026, although they could be rolled out sooner. DOT’s National Highway Traffic Safety Administration (NHTSA) will lead the rulemaking.





I thought we would have seen many more ‘end of year’ summaries by now. Perhaps everyone is waiting for their AI to write the articles?

https://www.forbes.com/sites/lanceeliot/2022/12/15/ai-year-in-review-roundup-and-analysis-along-with-pearls-and-perils-entailing-ai-ethics-and-ai-law/?sh=6d2e99d42be4

AI Year-In-Review Roundup And Analysis Along With Pearls And Perils Entailing AI Ethics And AI Law

Here for your edification are the topmost AI trends and breakthroughs along with an especially honed look at the progress and perils regarding AI Ethics and AI Law. I’ll be walking you through my Forbes column coverage for all of 2022 and highlighting the big-time headline-grabbing AI proclamations and consternations.





Will this become a minimum standard for lawyers and doctors?

https://www.bespacific.com/riana-pfefferkorn-on-end-to-end-encryption-for-iphone-backups-to-icloud/

Riana Pfefferkorn on End-to-End Encryption for iPhone Backups to iCloud

LawFare Podcast: “Last week, Apple made an announcement about some new security features it would be offering to users. One of those features involves users’ ability to opt in to encryption for iPhone backups to iCloud. While this new feature will enhance data privacy and security for those users who choose to opt in, it may create additional challenges for law enforcement to obtain evidence in criminal investigations. To discuss the implications and potential impact of this new security feature, Lawfare senior editor Stephanie Pell sat down with Riana Pfefferkorn, research scholar at the Stanford Internet Observatory. They discussed the costs and benefits to users who may choose to opt in to this feature, how Apple’s choice to offer this feature plays into a broader conflict known as the Crypto Wars, and how this feature relates to another part of Apple’s announcement where it indicated that it would not be scanning all iPhones for child sexual abuse material before images were backed up to iCloud.”



Wednesday, December 14, 2022

Interesting, and another reason for humans in the loop. They can deliver the good news and let the AI deliver the bad.

https://knowledge.wharton.upenn.edu/article/how-do-customers-feel-about-algorithms/

How Do Customers Feel About Algorithms?

Customers feel good about a company when its representatives make decisions in their favor, such as approving their loan application or gold member status. But when an algorithm reaches the same favorable conclusion, those warm and fuzzy feelings tend to fade.

This surprising contradiction is revealed in a new paper that examines how customers react differently depending on whether a computer or a fellow human being decides their fate.

In the study, Wharton marketing professor Stefano Puntoni and his colleagues found that customers are happiest when they receive a positive decision from a person, less happy when the positive decision is made by an algorithm, and equally unhappy with both man and machine when the news is bad.





Are global technologies making it easier for laws (like GDPR) to have a global reach?

https://www.huntonprivacyblog.com/2022/11/30/italian-supreme-court-grants-global-delisting-order-under-national-law/

Italian Supreme Court Grants Global Delisting Order Under National Law

On November 15, 2022, the Italian Supreme Court held that an Italian court or competent data protection authority has jurisdiction to issue a global delisting order. A delisting order requires a search engine to remove certain search results about individuals if the data subject’s privacy interests prevail over the general right to expression and information, and the economic interest of the search engine. The case was brought by an Italian individual who requested a worldwide delisting order covering all versions of the search engine, due to potential damage to the applicant’s professional interests outside of the European Union.





Consult, yes. Collaborate even. But keep AI control in the hands of the AI experts.

https://hbr.org/2022/12/the-risks-of-empowering-citizen-data-scientists

The Risks of Empowering “Citizen Data Scientists”

New tools are enabling organizations to invite and leverage non-data scientists — say, domain data experts, team members very familiar with the business processes, or heads of various business units — to propel their AI efforts. There are advantages to empowering these internal “citizen data scientists,” but also risks. Organizations considering implementing these tools should take five steps: 1) provide ongoing education, 2) provide visibility into similar use cases throughout the organization, 3) create an expert mentor program, 4) have all projects verified by AI experts, and 5) provide resources for inspiration outside your organization.





Trying to help Congress understand. (A truly hopeless effort.) Should be useful for normal people…

https://www.bespacific.com/crs-video-seminars-on-disruptive-technologies/

CRS Video Seminars on Disruptive Technologies

CRS Seminars on Disruptive Technologies: Videos – Updated December 8, 2022: “New technologies, and those that represent an evolutionary improvement of an existing tool or process, that exhibit the potential to have large-scale effects on social and economic activity are often referred to as “disruptive” technologies. They can disrupt existing markets, practices, and processes by displacing and replacing incumbent technologies and actors. The emergence of smartphones through the convergence of mobile phone and computing technologies, for example, profoundly affected the telecommunications sector—including its relevant market actors, service offerings, and hardware and software infrastructures. It has also impacted how individuals and groups communicate through voice, text, images, and video; consume and create media; access and disseminate information; and engage in leisure activities. The positive and negative short-, medium-, and long-term effects emerging technologies may have are difficult to predict and present a range of issues for Congress. Since the development trajectories and potential outcomes of emerging technologies are uncertain—some that show great promise may ultimately fail to develop as expected and others may have unintended yet profound impacts—systematic data to help guide policy development and legislation is sparse. To support Congress in examining these opportunities and issues, CRS has held a series of seminars for Congress designed to provide an opportunity for congressional staff to better understand the possible impacts of disruptive technologies of interest. In the seminars held to date, over 40 government and private-sector experts discussed technical, economic, policy, and legal aspects of 10 disruptive technology topics: advanced battery energy storage, artificial intelligence, autonomous vehicles, automation technologies and the future of work, blockchain, commercial spaceflight, cybersecurity, gene editing, mRNA technologies, and quantum information science. This report describes each of the seminars in the series and provides links to videos of them that are available on the CRS website.”



Tuesday, December 13, 2022

Imagine hackers taking control of a police robot…

https://www.csoonline.com/article/3682852/are-robots-too-insecure-for-lethal-use-by-law-enforcement.html#tk.rss_all

Are robots too insecure for lethal use by law enforcement?

In late November, the San Francisco Board of Supervisors voted 8-3 to give the police the option to launch potentially lethal, remote-controlled robots in emergencies, creating an international outcry over law enforcement use of “killer robots.” The San Francisco Police Department (SFPD), which was behind the proposal, said they would deploy robots equipped with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspects” only when lives are at stake.

Missing from the mounds of media coverage is any mention of how digitally secure the lethal robots would be or whether an unpatched vulnerability or malicious threat actor could intervene in the digital machine’s functioning, no matter how skilled the robot operator, with tragic consequences. Experts caution that robots are frequently insecure and subject to exploitation and, for those reasons alone, should not be used with the intent to harm human beings.





Any reason to suspect that the ‘bad guys’ will comply?

https://arstechnica.com/information-technology/2022/12/china-bans-ai-generated-media-without-watermarks/

China bans AI-generated media without watermarks

China's Cyberspace Administration recently issued regulations prohibiting the creation of AI-generated media without clear labels, such as watermarks—among other policies—reports The Register. The new rules come as part of China's evolving response to the generative AI trend that has swept the tech world in 2022, and they will take effect on January 10, 2023.



(Related)

https://techcrunch.com/2022/12/10/openais-attempts-to-watermark-ai-text-hit-limits/

OpenAI’s attempts to watermark AI text hit limits

Did a human write that, or ChatGPT? It can be hard to tell — perhaps too hard, its creator OpenAI thinks, which is why it is working on a way to “watermark” AI-generated content.

In a lecture at the University of Texas at Austin, computer science professor Scott Aaronson, currently a guest researcher at OpenAI, revealed that OpenAI is developing a tool for “statistically watermarking the outputs of a text [AI system].” Whenever a system — say, ChatGPT — generates text, the tool would embed an “unnoticeable secret signal” indicating where the text came from.
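
The article doesn't say how the signal is embedded, but the general idea of statistical watermarking can be sketched in a few lines. The toy Python below is only an illustration of the concept, not OpenAI's actual scheme: a secret-keyed hash nudges the choice among next tokens the model already considers plausible, and anyone holding the key can test whether the scores run suspiciously high. Every name here (the key, the functions) is invented for the sketch.

import hashlib

SECRET_KEY = b"demo-key"  # hypothetical; a real deployment would keep this private

def keyed_score(context: str, candidate: str) -> float:
    """Deterministic pseudorandom score in [0, 1) derived from the key, context, and candidate token."""
    digest = hashlib.sha256(SECRET_KEY + context.encode() + candidate.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_token(context: str, plausible_tokens: list[str]) -> str:
    # Among tokens the model already rates as likely, prefer the highest keyed score.
    return max(plausible_tokens, key=lambda t: keyed_score(context, t))

def looks_watermarked(tokens: list[str], threshold: float = 0.8) -> bool:
    # Unwatermarked text averages near 0.5; watermarked text averages well above it.
    scores = [keyed_score(prev, cur) for prev, cur in zip(tokens, tokens[1:])]
    return sum(scores) / len(scores) > threshold

Because the bias only breaks ties among tokens the model already deems likely, fluency is largely preserved. The obvious limit, which the headline hints at, is that paraphrasing the output with another tool would scramble the token sequence and wash the signal out.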





More “we don’t need lawyers” tech.

https://techcrunch.com/2022/12/12/digip/

Digip digitizes the process of applying for trademarks

For businesses, protecting trademarks is often a lengthy and expensive process, especially if they have multiple brands. Digip digitizes much of the process, helping its customers file trademarks by themselves instead of going to law firms.

To file trademarks, businesses usually ask a lawyer to conduct trademark searches. They are billed per search, which adds up quickly if a business has multiple brands they need to trademark. Then they have to pay for a lawyer to file trademark applications. But the process doesn’t end there. Businesses also have to monitor their trademarks in markets where they own them, and that is another charge.

Digip combines all these steps into one online workflow. Instead of charging for different parts of the process, its customers pay a flat monthly or yearly subscription fee, plus application fees charged by trademark offices.





Tools & Techniques. It’s not just for teachers…

https://www.freetech4teachers.com/2022/12/get-your-free-copy-of-2022-23-practical.html

Get Your Free Copy of The 2022-23 Practical Ed Tech Handbook

If you didn't get your copy earlier this school year, The Practical Ed Tech Handbook is now available for free to anyone who is subscribed to The Practical Ed Tech Newsletter or who registers for it here.



Monday, December 12, 2022

I don’t think we could do this here. Needs a strong ‘top down’ advocate.

https://www.zdnet.com/article/china-wants-legal-sector-to-be-ai-powered-by-2025/

China wants legal sector to be AI-powered by 2025

China wants its judicial sector to be supported by an artificial intelligence (AI) infrastructure that must be in place by 2025. The directive aims to drive integration of AI with judicial work and enhance legal services.

The country's highest court said all courts were required to implement a "competent" AI system in three years, according to a report by state-owned newspaper China Daily, pointing to guidelines released by the Supreme People's Court.





So how did we let it get out of control?

https://www.bespacific.com/social-media-seen-as-mostly-good-for-democracy-across-many-nations-but-u-s-is-a-major-outlier/

Social Media Seen as Mostly Good for Democracy Across Many Nations, But U.S. is a Major Outlier

“As people across the globe have increasingly turned to Facebook, Twitter, WhatsApp and other platforms to get their news and express their opinions, the sphere of social media has become a new public space for discussing – and often arguing bitterly – about political and social issues. And in the minds of many analysts, social media is one of the major reasons for the declining health of democracy in nations around the world. However, as a new Pew Research Center survey of 19 advanced economies shows, ordinary citizens see social media as both a constructive and destructive component of political life, and overall most believe it has actually had a positive impact on democracy. Across the countries polled, a median of 57% say social media has been more of a good thing for their democracy, with 35% saying it has been a bad thing. There are substantial cross-national differences on this question, however, and the United States is a clear outlier: Just 34% of U.S. adults think social media has been good for democracy, while 64% say it has had a bad impact. In fact, the U.S. is an outlier on a number of measures, with larger shares of Americans seeing social media as divisive…”



(Related) Good for democracy does not mean popular with all governments.

https://www.bespacific.com/tracking-social-media-bans/

Tracking Social Media Bans

Center for Data Innovation: “Researchers at Surfshark, a cybersecurity company based in the Netherlands, have created a dataset tracking governments that have imposed restrictions on Internet service or social media companies from 2015 to the present. For each restriction, the dataset contains the dates, duration, affected population, available context, and notes on the restricted platforms such as Facebook, Twitter, YouTube, Instagram, Telegram, or WhatsApp. The dataset also lists local restrictions in India and the disputed territory of Jammu and Kashmir, as well as instances of miscellaneous outages or restrictions, such as telecommunications disruptions in Ukraine.”
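
For readers who want to poke at the data, here is a minimal sketch of how one might filter such a dataset; the file name and column names are guesses based on the fields described above, not Surfshark's actual schema.

import pandas as pd

# Hypothetical file and column names inferred from the description above.
df = pd.read_csv("surfshark_restrictions.csv")

# Restrictions that touched Facebook, largest affected populations first.
fb = df[df["platforms"].str.contains("Facebook", case=False, na=False)]
print(fb.sort_values("affected_population", ascending=False)[
    ["country", "start_date", "duration_days", "affected_population"]])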





Always interesting to see what changes. (What happened to Amazon?)

https://www.wsj.com/articles/microsoft-best-managed-companies-2022-11670630632?mod=djemalertNEWS

Microsoft Tops the Best-Managed Companies of 2022

The technology sector’s grip on the top spots in the annual Management Top 250 ranking slipped this year.

Microsoft Corp. held its ground, ranking No. 1 in this measure of the best-run companies in the U.S. for the third straight year. But unlike last year, when tech companies took the first five spots in the ranking, this year’s top five include General Motors Co. at No. 4 and Whirlpool Corp. at No. 5.

Amazon stayed in the top 10, but slipped to eighth from second and recorded the biggest decline in overall score of any company in this year’s Top 250. Meta posted the fourth-largest decline in overall score in the group and dropped from No. 31 in last year’s ranking to No. 130 this year.



Sunday, December 11, 2022

I see it as a mere swing of the pendulum. I trust it will swing back.

https://re.public.polimi.it/handle/11311/1225493

Is ethics evaporating in the cyber era? Part 2: Feeling Framed

Continuing from Part 1, published in this volume, this part discusses the oversupply of information and examines the rights we are giving up in order to enjoy digital technology. Instead of being used as a space for the free exchange of ideas, the Internet is being used as a tool for supervision, management, and control. Artificial intelligence and machine learning are increasingly merged into every sector for analysing, optimizing, and even framing humans. Our digital “buddies” take note of our everyday life, our itinerary, our health parameters, our messages, and our content. Big data centres and computer farms are the new “caveau” (bank vaults), full of “our” data.





Do we ask the AI if it meant to commit a crime? Can we believe its answer?

https://repository.uchastings.edu/hastings_science_technology_law_journal/vol14/iss1/2/

The Artificially Intelligent Trolley Problem: Understanding Our Criminal Law Gaps in a Robot Driven World

Not only is Artificial Intelligence (AI) present everywhere in people’s lives, but the technology is also now capable of making unpredictable decisions in novel situations. AI poses issues for the United States’ traditional criminal law system because this system emphasizes mens rea’s importance in determining criminal liability. When AI makes unpredictable decisions that lead to crimes, it will be impractical to determine what mens rea to ascribe to the human agents associated with the technology, such as AI’s creators, owners, and users. To solve this issue, the United States’ legal system must hold AI’s creators, owners, and users strictly liable for their AI’s actions and also create standards that can provide these agents immunity from strict liability. Although other legal scholars have proposed solutions that fit within the United States’ traditional criminal law system, these proposals fail to strike the right balance between encouraging AI’s development and holding someone criminally liable when AI causes harm.

This Note illuminates this issue by exploring an artificially intelligent trolley problem. In this problem, an AI-powered self-driving car must decide between running over and killing five pedestrians or swerving out of the way and killing its one passenger; ultimately, the AI decides to kill the five pedestrians. This Note explains why the United States’ traditional criminal law system would struggle to hold the self-driving car’s owner, programmers, and creator liable for the AI’s decision, because of the numerous human agents this problem brings into the criminal liability equation, the impracticality of determining these agents’ mens rea, and the difficulty in satisfying the purposes of criminal punishment. Looking past the artificially intelligent trolley problem, these issues can be extended to most criminal laws that require a mens rea element. Criminal law serves as a powerful method of regulating new technologies, and it is essential that the United States’ criminal law system adapts to solve the issues that AI poses.





Good technology used poorly.

https://www.vice.com/en/article/5d3edx/apple-airtag-stalking-police-family-court

The Legal System Is Completely Unprepared for Apple AirTag Stalking

Apple has been under fire over the stalking capabilities of its AirTag tracking devices for almost the entire lifetime of the product, and this week two women brought a lawsuit against Apple, claiming that the devices make it easy for stalkers to track victims. One of the women claims that her ex-boyfriend placed an AirTag in the wheel well of her car to track her. The other’s story is similar: her estranged husband, she claimed, placed an AirTag in their child’s backpack in order to follow her.

Cynthia Godsoe, a professor of law at Brooklyn Law School, told me that the role of technology in family law is becoming more and more prevalent. Where someone used to have to hire a private investigator to follow someone around to build evidence against them in a custody or divorce case, she said, they can now use something like a tracking device—or even just Facebook posts to make a case against their ex.





Will there be liability for failure to speak?

https://ir.lawnet.fordham.edu/flr/vol91/iss3/5/

Let's Get Real: Weak Artificial Intelligence Has Free Speech Rights

The right to free speech is a strongly protected constitutional right under the First Amendment to the U.S. Constitution. In 2010, the U.S. Supreme Court significantly expanded free speech protections for corporations in Citizens United v. FEC. This case prompted the question: could other nonhuman actors also be eligible for free speech protection under the First Amendment? This inquiry is no longer a mere intellectual exercise: sophisticated artificial intelligence (AI) may soon be capable of producing speech. As such, there are novel and complex questions surrounding the application of the First Amendment to AI. Some commentators argue that AI should be granted free speech rights because AI speech may soon be sufficiently comparable to human speech. Others disagree and argue that First Amendment rights should not be extended to AI because there are traits in human speech that AI speech could not replicate.

This Note explores the application of First Amendment jurisprudence to AI. Introducing relevant philosophical literature, this Note examines theories of human intelligence and decision-making in order to better understand the process that humans use to produce speech, and whether AI produces speech in a similar manner. In light of the legal and philosophical literature, as well as the Supreme Court’s current First Amendment jurisprudence, this Note proposes that some types of AI are eligible for free speech protection under the First Amendment.





Not yet ready to replace all those judges…

https://lawresearchmagazine.sbu.ac.ir/article_102915.html?lang=en

The Challenges in Employing of AI Judge in Civil Proceedings

Artificial intelligence (AI), one of the most important human achievements of the 21st century, is expanding its dominance in science, technology, industry, art, and other fields, and the technology is spreading its shadow over various jobs in those fields. The field of law, and specifically proceedings and courtrooms, is reluctantly being influenced by this technology. This article aims to explain the challenges of employing this modern technology as a substitute for civil court judges. Despite all of AI’s achievements and the opportunities it can bring to the judiciary, the technology appears to face severe challenges in matters such as legal reasoning, impartiality, and public acceptance. This research, using a descriptive-analytical method, explains the shortcomings of AI in the field of judgment and concludes that AI, with its current capabilities, cannot be considered a complete substitute for a human judge. It would be more effective to use AI as a tool in the service of judges, helping them handle and resolve disputes faster and more accurately. These challenges are compounded in Iranian law, which is shaped by Feqh with regard to the qualifications of judges, and by the obstacles the Iranian legal system faces, compared with other legal systems, in employing new technologies such as AI.





A sure-fire conversation starter?

https://www.tandfonline.com/doi/abs/10.1080/13600834.2022.2154050

Artificially intelligent sex bots and female slavery: social science and Jewish legal and ethical perspectives

In this paper, we shed light on the question of whether it is morally permissible to enslave artificially intelligent entities, drawing on up-to-date research from the social sciences as well as the ancient lessons of Jewish law. The first part of the article examines general ethical questions surrounding AI and slavery through contemporary social science research and the moral status of ‘sex bots’ – AI entities built for the purpose of satisfying human sexual desires. The second part presents a Jewish perspective on the obligation to protect artificially intelligent entities from abuse and raises the issue of the use of such entities in the context of sex therapy. This is followed by a review of slavery, and in particular female slavery, in Jewish law and ethics. In the conclusions, we argue that both perspectives provide justification for the ‘Tragedy of the Master’: that in enslaving AI we risk doing great harm to ourselves. This has significant and negative consequences for us as individuals, in our relationships, and as a society that strives to value the dignity, autonomy, and moral worth of all sentient beings.