Saturday, February 04, 2023

No need to think it through, it’s for the children!

https://www.techdirt.com/2023/02/03/utah-lawmakers-rushing-through-bills-to-destroy-the-internet-for-kids/

Utah Lawmakers Rushing Through Bills To Destroy The Internet… ‘For The Children’

The evidence-free moral panic over social media keeps getting stupider, and when things get particularly stupid about the internet, you can pretty much rely on Utah politicians being there to proudly embrace the terrible ideas. The latest are a pair of bills that seem to be on the fast track, even in Utah’s short legislative session. The bills are HB311 from Rep. Jordan Teuscher and SB152 from Senator Michael McKell (author of a number of previous bad bills about the internet).

Both of these bills continue the unfortunate (and bipartisan) trend of taking away the autonomy of teenagers, treating them as if they’re babies who need to be watched over at every moment. It’s part of the typical moral panic that suggests that rather than teaching kids how to handle the internet and how to be prepared for real life, kids should effectively only be allowed to access a Disneyfied version of the internet.





Confusing. No matter how these questions are answered, there is no benefit or penalty. So why ask them? (Suppose everyone, male and female, answered exactly the same way…)

https://www.pogowasright.org/florida-athletes-may-soon-be-required-to-submit-their-menstrual-history-to-schools/

Florida athletes may soon be required to submit their menstrual history to schools

Sommer Brugal and Andre Fernandez report:

A proposed draft of a physical education form in Florida could require all high school student athletes to disclose information regarding their menstrual history — a move that’s already drawing pushback from opponents who say the measure would harm students. The draft — published last month by the Florida High School Athletic Association, a group that oversees interscholastic athletic programs across the state — proposes making currently optional questions regarding a student’s menstrual cycle mandatory, as reported by the Palm Beach Post.

Read more at Miami Herald.





Judges, lawyers, soon a little software will replace them all.

https://www.vice.com/en/article/k7bdmv/judge-used-chatgpt-to-make-court-decision

A Judge Just Used ChatGPT to Make a Court Decision

A judge in Colombia used ChatGPT to make a court ruling, in what is apparently the first time a legal decision has been made with the help of an AI text generator—or at least, the first time we know about it.

Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document dated January 30, 2023.



Friday, February 03, 2023

The story keeps changing. Perhaps Taylor Swift should buy the company and clean house?

https://www.cpomagazine.com/cyber-security/ticketmaster-says-bot-attack-is-to-blame-for-the-misfortunes-of-taylor-swift-fans/

Ticketmaster Says Bot Attack Is To Blame for the Misfortunes of Taylor Swift Fans

Taylor Swift fans were naturally excited for the announcement of her first concert tour since 2018, and Ticketmaster initially blamed a ticket website crash on the “historical” enthusiasm of millions showing up at the time of release. That story appears to have changed now that Live Nation executives have been brought in front of a Senate Judiciary Committee, with the claim now that a small army of bots was both purchasing tickets and attempting to breach the servers simultaneously.

The claim was not well-received by the committee members, who questioned why Ticketmaster had not been prepared for this possibility and noted that competitor SeatGeek did not experience similar issues in selling their share of the tickets.



Thursday, February 02, 2023

Doom, doom, all is doom.

https://www.csoonline.com/article/3687028/nation-state-threats-and-the-rise-of-cyber-mercenaries-exploring-the-microsoft-digital-defense-repo.html#tk.rss_all

Nation-State Threats and the Rise of Cyber Mercenaries: Exploring the Microsoft Digital Defense Report

This article explores the top trends in nation-state threats as identified in the Microsoft Digital Defense Report. These trends may be alarming, but the good news is that companies have a number of tools at their disposal.

To illuminate the evolving digital threat landscape and help the cyber community understand today’s most pressing threats, we released our annual Microsoft Digital Defense Report. This year’s report focuses on five key topics: cybercrime, nation-state threats, devices and infrastructure, cyber-influence operations, and cyber resiliency. With intelligence from 43 trillion daily security signals, organizations can leverage the findings presented in this report to strengthen their cyber defenses.

Today, we’re breaking down the report with an overview of the top three trends covered in section three on nation-state threats. Keep reading to learn more about this topic, and for more information, download the full Microsoft Digital Defense Report.





A navigation guide.

https://www.insideprivacy.com/artificial-intelligence/nist-releases-new-artificial-intelligence-risk-management-framework/

NIST Releases New Artificial Intelligence Risk Management Framework

On January 26, 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (the “Framework”) guidance document, alongside a companion AI RMF Playbook that suggests ways to navigate and use the Framework.





A sign that we should start paying attention…

https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

ChatGPT sets record for fastest-growing user base - analyst note

ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study on Wednesday.

The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December.





I think we will have to accept ChatGPT as a tool, like a calculator. The rules seem to be emerging.

https://www.ksl.com/article/50570045/university-of-utah-outlines-how-students-should-utilize-chatgpts-ai-technology

University of Utah outlines how students should utilize ChatGPT's AI technology

"Recently, a new set of tools that utilize artificial intelligence to generate computer code, math problem results, written and visual content from a series of prompts have become more widely available. There are many possibilities for using and examining these tools related to education efforts and inquiry," said a letter to the U. student body from Lori McDonald, vice president of student affairs, and T. Chase Hagood, dean of undergraduate studies.

"Also, these tools may be used to complete assignments, exams and other academic efforts without proper attribution or an indication that they have been used," the letter continues.

The U.’s letter explained how ChatGPT should be used for student work, defined as any activities, assignments or practices that are evaluated, graded, given feedback on, or receive college credit in some way.

"Students should seek guidance from their instructors before utilizing AI generative tools for assignments," the letter said. "Faculty members will provide specific policies relating to using such tools for their courses as it makes sense for how and what learning looks like in those courses."

It also outlined that if a student uses ChatGPT for creative work, the student will need to "make evident any portion of the work generated by the AI tool and which AI tool they used."

"The U anticipates new waves of remarkable creativity and curiosity among faculty, staff, and students — there has never been a more exciting time to be at an R1 university!" the letter read.



(Related) Any tool can be misapplied.

https://www.csoonline.com/article/3687089/foreign-states-already-using-chatgpt-maliciously-uk-it-leaders-believe.html#tk.rss_all

Foreign states already using ChatGPT maliciously, UK IT leaders believe

Most UK IT leaders are concerned about malicious use of ChatGPT as research shows how its capabilities can significantly enhance phishing and BEC scams.

Most UK IT leaders believe that foreign states are already using the ChatGPT chatbot for malicious purposes against other nations. That’s according to a new study from BlackBerry, which surveyed 500 UK IT decision makers and revealed that, while 60% of respondents see ChatGPT as generally being used for “good” purposes, 72% are concerned by its potential to be used for malicious purposes when it comes to cybersecurity. In fact, almost half (48%) predicted that a successful cyberattack will be credited to the technology within the next 12 months. The findings follow recent research which showed how attackers can use ChatGPT to significantly enhance phishing and business email compromise (BEC) scams.





Integrating AI into the organization.

https://dilbert.com/strip/2023-02-02



Wednesday, February 01, 2023

Imagine monitoring in real time with the ability to provide feedback like, “Please stop beating the old lady. It won’t look good on the evening news.”

https://www.axios.com/local/chicago/2023/02/01/police-body-camera-video-chicago-truleo

Chicago company Truleo uses AI to scan police bodycam footage





Good news for my Computer Security students!

https://gizmodo.com/kaspersky-dark-web-big-tech-layoffs-cybercrime-1850055032

Want a Six-Figure Job with Paid Time Off? Check Out These Dark Web Cybercrime Positions

Since times are tough in the tech industry, there's never been a better time to turn to cybercrime!





Tools & Techniques.

https://www.axios.com/2023/01/31/openai-chatgpt-detector-tool-machine-written-text

OpenAI releases tool to detect machine-written text

ChatGPT creator OpenAI today released a free web-based tool designed to help educators and others figure out if a particular chunk of text was written by a human or a machine.

OpenAI cautions the tool is imperfect and performance varies based on how similar the text being analyzed is to the types of writing OpenAI’s tool was trained on.
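OpenAI has not published the classifier’s internals, and the sketch below is not its method. As a rough illustration of one common family of detection heuristics, this hypothetical snippet scores a passage by its perplexity under a small open language model (GPT-2 via the Hugging Face transformers library), on the assumption that machine-generated text tends to be more statistically predictable than human prose. The model choice, threshold, and helper names are assumptions made for the example.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative heuristic only -- NOT OpenAI's classifier.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Average per-token perplexity of the text under GPT-2.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def looks_machine_written(text: str, threshold: float = 25.0) -> bool:
    # Unusually low perplexity is weak evidence of machine generation.
    # The threshold is arbitrary; real detectors are far more sophisticated.
    return perplexity(text) < threshold

Even this toy version shows why OpenAI’s caveat matters: text that resembles the model’s training data will score as “predictable” regardless of who wrote it, so false positives on formulaic human writing are unavoidable.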



Tuesday, January 31, 2023

We’re all doomed!

https://www.ft.com/content/0cca6054-6fc9-4a94-b2e2-890c50d956d5

Shoshana Zuboff: ‘Privacy has been extinguished. It is now a zombie’

… “We have fantastic scholars, researchers, advocates who are focused on privacy, others who are focused on disinformation, others who are focused on the nexus with democracy,” she says, when we meet in London. This “Balkanisation” reduces the ability to pinpoint the “actual source of harm”: people’s data is treated as a costless resource, just as forests and other parts of nature were in centuries past.

Zuboff cites data that, in the US, which has no federal privacy law, people have their location exposed 747 times a day. In the EU, which she says has the “best regulation”, it’s 376. “It’s better, but it’s not nearly better enough.”





So you can honestly say you did not use ChatGPT...

https://venturebeat.com/ai/who-will-compete-with-chatgpt-meet-the-contenders-the-ai-beat/

Who will compete with ChatGPT? Meet the contenders

Here are four top players potentially making moves to challenge ChatGPT:



Monday, January 30, 2023

I see you searched for ChatGPT. That means your homework was written by an AI.

https://www.pogowasright.org/epic-urges-colorado-supreme-court-to-rule-reverse-keyword-warrants-are-unconstitutional-points-toward-effects-on-abortion-rights/

EPIC Urges Colorado Supreme Court to Rule Reverse Keyword Warrants are Unconstitutional, Points Toward Effects on Abortion Rights

From the folks at EPIC.org:

EPIC submitted an amicus brief in the case Colorado v. Seymour, urging the Colorado Supreme Court to rule that reverse keyword warrants are unconstitutional in the first case in the country to evaluate such warrants. Reverse keyword warrants are a dangerous new technique the police use that forces technology companies like Google to search through millions or billions of their users’ search histories in order to identify suspects in criminal cases. EPIC argued that the Colorado Supreme Court’s ruling will affect people across the country, especially people searching for abortion-related topics who may feel unwilling to do so after the U.S. Supreme Court’s Dobbs decision enabled many states to criminalize abortion. If reverse keyword warrants become commonplace, people will be increasingly afraid to use the internet to search important topics, knowing that such searches could expose them to law enforcement scrutiny. This is especially true for reproductive-health-related searches. EPIC regularly submits amicus briefs in cases involving police surveillance and the Fourth Amendment.





Another Fourth Amendment article.

https://www.pogowasright.org/article-orin-s-kerr-terms-of-service-and-fourth-amendment-rights/

Article: Orin S. Kerr: Terms of Service and Fourth Amendment Rights

Citation: Kerr, Orin S., Terms of Service and Fourth Amendment Rights (January 29, 2023). Available at SSRN: https://ssrn.com/abstract=

Abstract:

Almost everything you do on the Internet is governed by Terms of Service. The language in Terms of Service typically gives Internet providers broad rights to address potential account misuse. But do these terms alter Fourth Amendment rights, either diminishing or even eliminating constitutional rights in Internet accounts? In the last five years, many courts have ruled that they do. These courts treat Terms of Service like a rights contract: By agreeing to use an Internet account subject to broad Terms of Service, you give up your Fourth Amendment rights.
This Article argues that the courts are wrong. Terms of Service have little or no effect on Fourth Amendment rights. Fourth Amendment rights are rights against the government, not private parties. Terms of Service can define relationships between private parties, but private contracts cannot define Fourth Amendment rights. This is true across the range of Fourth Amendment doctrines, including the “reasonable expectation of privacy” test, consent, abandonment, third-party consent, and the private search doctrine. Courts that have linked Terms of Service and Fourth Amendment rights are mistaken, and their reasoning should be rejected.





As a non-lawyer, I expect to run into legal issues that I don’t understand. I hope that my lawyer friends can explain this one!

https://www.pogowasright.org/court-walmart-had-duty-to-track-down-man-who-used-false-name-to-obtain-prescription-and-left-without-getting-it/

Court: Walmart had duty to track down man who used false name to obtain prescription and left without getting it

Daniel Fisher reports:

AKRON, Ohio – A man who was prescribed antibiotics after crushing his finger but never picked them up from Walmart can sue the retailer for failing to track him down and make sure he received the drugs before the damaged finger led to a devastating infection and the loss of both legs, an Ohio appeals court ruled.
The court rejected Walmart’s argument it had no duty to the worker, who obtained the prescription under a false name and wasn’t in the company’s customer database.

Read more at Legal Newsline.



Sunday, January 29, 2023

Very Frankenstein.  Grab your torches and pitchforks. 

https://www.cambridge.org/core/journals/cambridge-quarterly-of-healthcare-ethics/article/what-do-chimeras-think-about/D09DD8677F262F7E26A89E3455F113BF

What Do Chimeras Think About?

Non-human animal chimeras, containing human neurological cells, have been created in the laboratory.  Despite a great deal of debate, the status of such beings has not been resolved.  Under normal definitions, such a being could either be unconventionally human or abnormally animal.  Practical investigations in animal sentience, artificial intelligence, and now chimera research, suggest that such beings may be assumed to have no legal rights, so philosophy could provide a different answer.  In this vein, therefore, we can ask: What would a chimera, if it could think, think about?  Thinking is used to capture the phenomena of a novel, chimeric being perceiving its terrible predicament as no more than a laboratory experiment.  The creation of a thinking chimera therefore forces us to reconsider our assumptions about what makes human beings (potentially) unique (and other sentient animals different), because, as such, a chimera’s existence bridges our social and legal expectations about definitions of human and animal.  Society has often evolved new social norms based on different kinds of (ir)rational contrivances; the imperative of non-contradiction, which is defended here, therefore requires a specific philosophical response to the rights of a thinking chimeric being.



Change the question…

https://link.springer.com/article/10.1007/s43681-023-00260-1

What would qualify an artificial intelligence for moral standing?

What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing?  My starting point is that sentient AIs should qualify for moral standing.  But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience.  This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient.  After reviewing nine criteria that have been proposed in the literature, I suggest that there is a strong case for thinking that some non-sentient AIs, such as those that are conscious and have non-valenced preferences and goals, and those that are non-conscious and have sufficiently cognitively complex preferences and goals, should qualify for moral standing.  After responding to some challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy to qualify for moral standing, and strategic considerations such as how such decisions will affect humans and other sentient entities, further supports granting moral standing to some non-sentient AIs.  I highlight three implications: that the issue of AI moral standing may be more important, in terms of scale and urgency, than if either sentience or consciousness is necessary; that researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and even those who think AIs cannot be sentient or conscious should take the issue seriously.  However, much uncertainty about these considerations remains, making this an important topic for future research.



Making Big Brother smaller?  

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331633 

Does Big Brother exist? Face Recognition Technology in the United Kingdom

Face recognition technology (FRT) has achieved remarkable progress in the last decade due to the improvement of deep convolutional neural networks.  The United Kingdom (UK) law enforcement sector has been remarkably à l’avant-garde in employing this technology.  Smart CCTV cameras were allegedly first used in the UK, where the London Metropolitan Police Service has operated them since 1998.  More recently, it was reported that businesses in the UK have been using an FRT system known as ‘Facewatch’ to share CCTV images with the police and identify suspected shoplifters entering their store.

The massive deployment of FRT has unsurprisingly tested the limits of the UK’s democracy: where should the line be drawn between acceptable uses of this technology for collective or private purposes, and the protection of individual entitlements that are compressed by the employment of FRT?  The Bridges v. South Wales Police case offered guidance on this issue.  After lengthy litigation, the Court of Appeal of England and Wales ruled in favour of the applicant, a civil rights campaigner who claimed that the active FRT deployed by the police at public gatherings infringed his rights.  The outcome of this case suggests that the use of FRT for law enforcement should be strictly regulated.

Although the Bridges case offered crucial directives on the balancing between individual rights and the lawful use of FRT for law enforcement purposes under the current UK rules, several ethical and legal questions still remain unsolved.  This chapter sheds light on the UK approach to FRT regulation and offers a threefold contribution to existing literature.  First, it provides an overview of sociological and regulatory attitudes towards this technology in the UK.  Second, the chapter discusses the Bridges saga and its implications.  Third, it offers reflections on the future of FRT regulation in the UK.
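For readers unfamiliar with the matching step behind FRT: systems of this kind typically reduce each face image to a fixed-length embedding vector produced by a convolutional network and then compare embeddings with a similarity measure. The minimal numpy sketch below illustrates only that comparison step; the embeddings, watchlist names, and threshold are invented for the example and do not describe Facewatch or any police deployment.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in "watchlist" of 128-dimensional face embeddings (random for illustration;
# a real system would obtain these from a deep convolutional network).
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}

def best_match(probe: np.ndarray, threshold: float = 0.6):
    # Return the closest watchlist entry if it clears the (arbitrary) threshold.
    name, score = max(((n, cosine_similarity(probe, e)) for n, e in watchlist.items()),
                      key=lambda item: item[1])
    return (name, score) if score >= threshold else (None, score)

probe = rng.normal(size=128)  # would be the embedding of a face captured on CCTV
print(best_match(probe))

The legal questions in the Bridges litigation sit directly on top of this mechanism: whose images populate the watchlist, and how the match threshold trades false identifications against missed ones.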



AI may pass the bar, but doesn’t seem like much real competition.  (Yet.) 

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4335905

ChatGPT Goes to Law School

How well can AI models write law school exams without human assistance?  To find out, we used the widely publicized AI model ChatGPT to generate answers on four real exams at the University of Minnesota Law School.  We then blindly graded these exams as part of our regular grading processes for each class.  Over 95 multiple choice questions and 12 essay questions, ChatGPT performed on average at the level of a C+ student, achieving a low but passing grade in all four courses.  After detailing these results, we discuss their implications for legal education and lawyering.  We also provide example prompts and advice on how ChatGPT can assist with legal writing.