Saturday, January 28, 2023

Fast recovery by fast food? Maybe the ‘fast’ part translates?

https://www.cpomagazine.com/cyber-security/kfc-pizza-hut-and-taco-bell-ransomware-attack-shuts-down-300-restaurants-in-the-uk/

KFC, Pizza Hut, and Taco Bell Ransomware Attack Shuts Down 300 Restaurants in the UK

KFC, Pizza Hut, and Taco Bell parent company Yum! Brands confirmed a ransomware attack that leaked company data and shut down restaurants in the United Kingdom.

Nearly 300 restaurants in the United Kingdom closed for one day after a ransomware attack affected “certain information technology systems.”

However, Yum! quickly mitigated the ransomware attack, and all outlets resumed operations within 24 hours.

“With the ransomware being contained to a third of Yum! Brands’ UK outlets and the downtime being limited to one day, Yum! Brands have done relatively well recovering,” said Morten Gammelgard, EVP, EMEA at BullWall. “The average amount of downtime for organizations when hit by ransomware is approximately 24 days.”





A possible definition of Cyber War tools, but is it enough to generate an immediate response?

https://thehackernews.com/2023/01/ukraine-hit-with-new-golang-based.html

Ukraine Hit with New Golang-based 'SwiftSlicer' Wiper Malware in Latest Cyber Attack

"When you think about it, the growth in wiper malware during a conflict is hardly a surprise," Fortinet FortiGuard Labs researcher Geri Revay said in a report published this week. "It can scarcely be monetized. The only viable use case is destruction, sabotage, and cyberwar."





Makes you wonder what other ‘trigger words’ are in their list (and what words they ignore).

https://www.the74million.org/article/gaggle-drops-lgbtq-keywords-from-student-surveillance-tool-following-bias-concerns/

Gaggle Drops LGBTQ Keywords from Student Surveillance Tool Following Bias Concerns

Digital monitoring company Gaggle says it will no longer flag students who use words like “gay” and “lesbian” in school assignments and chat messages, a significant policy shift that follows accusations its software facilitated discrimination of LGBTQ teens in a quest to keep them safe.

A spokesperson for the company, which describes itself as supporting student safety and well-being, cited a societal shift toward greater acceptance of LGBTQ youth — rather than criticism of its product — as the impetus for the change as part of a “continuous evaluation and updating process.”

Though Gaggle’s software is generally limited to monitoring school-issued accounts, including those by Google and Microsoft, the company recently acknowledged it can scan through photos on students’ personal cell phones if they plug them into district laptops.





I wonder if there are tools that would let me know if my writing (my blog?) has been used to train an AI.

https://www.makeuseof.com/how-to-know-images-trained-ai-art-generator/

How to Know if Your Images Trained an AI Model (and How to Opt Out)

To many people's disbelief, living artists are discovering that their art has been used to train AI models without their consent. Using a web tool called "Have I Been Trained?", you can know in a matter of minutes if your images were fed to Midjourney, NightCafe, and other popular AI image generators.

If you find your image in one of the datasets used to train these AI systems, don't despair. Some organizations have developed ways to opt out of this practice, keeping your images from being scraped from the internet and passed on to AI companies.



Friday, January 27, 2023

I love it when we get all philosophical.

https://theconversation.com/philosophers-have-studied-counterfactuals-for-decades-will-they-help-us-unlock-the-mysteries-of-ai-196392

Philosophers have studied ‘counterfactuals’ for decades. Will they help us unlock the mysteries of AI?

Suppose a person named Sara applies for a loan. The bank asks her to provide information including her marital status, debt level, income, savings, home address and age.

The bank then feeds this information into an AI system, which returns a credit score. The score is low and is used to disqualify Sara for the loan, but neither Sara nor the bank employees know why the system scored Sara so low.

Counterfactuals are claims about what would happen if things had played out differently. In an AI context, this means considering how the output from an AI system might be different if it receives different inputs. We can then supposedly use this to explain why the system produced the result it did.

Suppose the bank feeds its AI system different (manipulated) information about Sara. From this, the bank works out that the smallest change Sara would need to get a positive outcome would be to increase her income.

The bank can then apparently use this as an explanation: Sara’s loan was denied because her income was too low. Had her income been higher, she would have been granted a loan.

Such counterfactual explanations are being seriously considered as a way of satisfying the demand for explainable AI, including in cases of loan applications and using AI to make scientific discoveries.

However, as researchers have argued, the counterfactual approach is inadequate.
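The smallest-change search the article describes is easy to sketch in a few lines. The scoring function and approval threshold below are invented for illustration; the article says nothing about the bank's actual model.

```python
# A minimal sketch of a counterfactual explanation search, assuming a toy
# credit-scoring function (the article does not describe the bank's model).

def credit_score(income, debt):
    """Hypothetical scoring rule: higher income and lower debt raise the score."""
    return 0.5 * income - 0.8 * debt

def smallest_income_change(income, debt, threshold=50.0, step=1.0, max_steps=1000):
    """Search for the smallest income increase that flips the decision to approval."""
    for i in range(max_steps + 1):
        candidate = income + i * step
        if credit_score(candidate, debt) >= threshold:
            # The counterfactual: "had her income been this much higher,
            # she would have been granted a loan."
            return candidate - income
    return None  # no counterfactual found within the search range

# Sara's application is denied; the search reports how much more income
# would have produced approval.
delta = smallest_income_change(income=60.0, debt=25.0)
```

Searching only one feature keeps the example readable; a real counterfactual generator would search over combinations of features and weigh which changes are plausible or actionable, which is exactly where the critics say the approach runs into trouble.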





When extinction seems imminent, the extinctees hire lawyers?

https://mashable.com/article/donotpay-artificial-intelligence-lawyer-experiment

DoNotPay's AI lawyer stunt cancelled after multiple state bar associations object

The robot lawyer was swiftly deactivated by real lawyers.

Last week DoNotPay CEO Joshua Browder announced that the company's AI chatbot would represent a defendant in a U.S. court, marking the first use of artificial intelligence for this purpose. Now the experiment has been cancelled, with Browder stating he's received objections from multiple state bar associations.

"Bad news: after receiving threats from State Bar prosecutors, it seems likely they will put me in jail for 6 months if I follow through with bringing a robot lawyer into a physical courtroom," Browder tweeted on Thursday. "DoNotPay is postponing our court case and sticking to consumer rights."





Do I need to know when ChatGPT helped with an article?

https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers

Science journals ban listing of ChatGPT as co-author on papers

Some publishers also banning use of bot in preparation of submissions but others see its adoption as inevitable



(Related)

https://blog.medium.com/how-were-approaching-ai-generated-writing-on-medium-16ee8cb3bc89

How we’re approaching AI-generated writing on Medium

Transparency, disclosure, and publication-level guidelines



Thursday, January 26, 2023

Keeping score.

https://www.cpomagazine.com/data-protection/dla-piper-annual-gdpr-and-data-breach-report-2022-a-record-year-for-gdpr-fines-despite-drop-in-breach-count/

DLA Piper Annual GDPR and Data Breach Report: 2022 a Record Year for GDPR Fines Despite Drop in Breach Count

DLA Piper’s annual report covering EU data breaches and GDPR fines reports a record year in penalties, with a total of €2.92 billion levied throughout the bloc in 2022. This is in spite of a small drop in the overall breach count, but it is important to remember that fines are often assessed for complaints and cases that were initiated years before.

The report also indicates that the bloc’s regulators are making AI more of a priority, as concerns run rampant about everything from facial recognition tools to ChatGPT.





Guidelines or serious thought?

https://www.defenseone.com/policy/2023/01/when-may-robot-kill-new-dod-policy-tries-clarify/382215/

When May a Robot Kill? New DOD Policy Tries to Clarify

Did you think the Pentagon had a hard rule against using lethal autonomous weapons? It doesn’t. But it does have hoops to jump through before such a weapon might be deployed—and, as of Wednesday, a revised policy intended to clear up confusion.

The biggest change in the Defense Department’s new version of its 2012 doctrine on lethal autonomous weapons is a clearer statement that it is possible to build and deploy them safely and ethically but not without a lot of oversight.

That’s meant to clear up the popular perception that there’s some kind of a ban on such weapons. “No such requirement appears in [the 2012 policy] DODD 3000.09, nor any other DOD policy,” wrote Greg Allen, the director of the Artificial Intelligence Governance Project and a senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies.

What the 2012 doctrine actually says is that the military may make such weapons but only after a “senior level review process,” which no weapon has gone through yet, according to a 2019 Congressional Research Service report on the subject.





Tools & Techniques.

https://www.bespacific.com/nonprofits-release-free-tool-to-detect-ai-written-student-work/

Nonprofits release free tool to detect AI-written student work

Fast Company: “As concerns rise about students’ use of generative artificial intelligence like ChatGPT to complete schoolwork, a pair of education nonprofits have created a free system to help teachers detect AI-assisted essays. The tool, called AI Writing Check, was developed by the writing nonprofits Quill and CommonLit using an open-source AI model designed to detect the output of ChatGPT and related systems. It enables teachers (or anyone else) to copy and paste text and within a few seconds receive a determination on whether the work in question was written by ChatGPT. AI Writing Check, which the nonprofits began to develop in December, comes as surveys indicate growing concern among teachers over machine-generated essays. Other tools, including one called GPTZero, have also been released recently to detect automated writing…”
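As a toy illustration of one signal such detectors are said to use (GPTZero, for instance, reportedly looks at "burstiness," the variance in sentence length, on the theory that human prose varies more than machine prose), here is a simplified heuristic. It is not the model behind AI Writing Check; the function names and thresholds are invented.

```python
# A toy "burstiness" heuristic, for illustration only. Real detectors such as
# AI Writing Check use trained models, not this simple statistic.
import re
import statistics

def sentence_lengths(text):
    """Split text into sentences and return the word count of each."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    """Standard deviation of sentence length; human prose tends to vary more."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)
```

A passage of uniformly sized sentences scores near zero, while prose that mixes short and long sentences scores higher; a detector built on this alone would be trivially fooled, which is one reason the article's accuracy caveats matter.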



Wednesday, January 25, 2023

Well, maybe not everything.

https://www.tomsguide.com/news/what-is-chatgpt

What is ChatGPT? Everything you need to know

The ChatGPT chatbot is so smart that The New York Times reports that it represents a "red alert" for Google's search business. And Google subsidiary DeepMind is reportedly releasing its own chatbot in beta, dubbed Sparrow, sometime in 2023.

Don’t worry, this article was still written by a human — though if you want to see how ChatGPT writes, check out the interview we conducted with the AI about what it is and what it can do. We know that lots of people are trying to figure out how to use this new technology and what its limitations are.

If you want to know how to use the chatbot AI check out our guide on how to use ChatGPT, but here we answer all your top questions about ChatGPT.





That seems a bit harsh.

https://www.bespacific.com/should-using-an-ai-text-generator-to-produce-academic-writing-be-plagiarism/

Should Using an AI Text Generator to Produce Academic Writing Be Plagiarism?

Frye, Brian L., Should Using an AI Text Generator to Produce Academic Writing Be Plagiarism? (December 3, 2022). Fordham Intellectual Property, Media & Entertainment Law Journal, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4292283

“AI text generators are becoming increasingly sophisticated. In particular, the OpenAI ChatGPT chatbot is capable of responding to a prompt with text that appears remarkably sophisticated. Many people are concerned that AI text generators like ChatGPT will present a huge problem for educators, because it will soon become impossible for them to determine whether a text was produced by a student or an AI. Should we be worried? Is using AI to generate academic writing a form of plagiarism? Who knows. Why don’t we ask the AI? I “wrote” this article by asking ChatGPT the questions in bold and copying its responses. My conclusion is that we have little to worry about. If students can provide satisfactory answers to your questions by using an AI text generator, then you are asking superficial questions. And if an AI text generator can compete with your scholarship, then you are a superficial thinker.”



Tuesday, January 24, 2023

This is the first time I’ve heard of a hack. I hope they have evidence that it actually occurred and had a significant impact.

https://www.politico.com/news/2023/01/23/ticketmaster-cyberattack-taylor-swift-tickets-00079119

Ticketmaster says cyberattack disrupted Taylor Swift ticket sales

The disclosure comes ahead of grilling by lawmakers over antitrust concerns in the ticketing industry.

Ticketmaster was hit by a cyberattack in November that led to the problems with ticket sales for Taylor Swift’s upcoming U.S. tour, the president of its parent company plans to tell a congressional committee Tuesday.

A massive influx of traffic on the Ticketmaster website caused the slowdown in ticket sales, and part of that was due to a cyberattack, Joe Berchtold, president of Ticketmaster parent company Live Nation, will tell the Senate Judiciary Committee, according to prepared remarks.

During the Swift concert sales, Ticketmaster was “hit with three times the amount of bot traffic than we had ever experienced, and for the first time in 400 Verified Fan on-sales, they came after our Verified Fan access code servers,” Berchtold plans to say.

… “While the bots failed to penetrate our systems or acquire any tickets, the attack required us to slow down and even pause our sales,” Berchtold will say. In his testimony Berchtold describes an “arms race” between companies like Ticketmaster and the scalpers and cyber criminals looking to illegally obtain tickets for resale, and apologized to Swift and fans alike for the consumer experience.

Two people familiar with the cyberattack, granted anonymity to speak about the incident ahead of the hearing, said that a culprit for the attack — which took several hours for the company to address — has not yet been identified. They said Ticketmaster reported the attempted attack to the Federal Trade Commission and to the FBI, which are looking into the incident.





An overreaction to the use of AI? How much is too much?

https://ipwatchdog.com/2023/01/24/copyright-office-officially-cancels-registration-ai-graphic-novel/id=155686/

Copyright Office Officially Cancels Registration for AI Graphic Novel

“If there is no ability to register a generative work with the USCO, or understanding of the points at which —or circumstance around when— the use of AI tools may qualify, it remains unclear as to where this leaves such works for purposes of ownership, exploitation…and protection.”

On Monday, January 23, the U.S. Copyright Office (USCO) officially cancelled the registration for a graphic novel that was made using the AI text-to-image tool, Midjourney. The USCO previously registered the work in September 2022. However, a month later, and following significant press attention, the Office issued a notice indicating that the registration may be cancelled. With Monday’s development, the cancellation is now final.

As part of the September 2022 Notice, the Office asked Kashtanova to provide details “to show that there was substantial human involvement in the process of creation of this graphic novel.”

Van Lindberg, Partner at Taylor English, served as Kashtanova’s counsel in drafting the response to the USCO’s cancellation notice. The response letter argued that there was sufficient creativity in the prompts and inputs used which, when combined with the artist’s use of and control over the tool, should have been sufficient for protection.





Perhaps these are earth-shaking, but I doubt it. Still, they might be worth exploring…

https://www.inc.com/nick-hobson/if-youre-not-already-doing-these-10-productivity-hacks-in-chatgpt-youre-definitely-missing-out.html

If You're Not Already Doing These 10 Productivity Hacks in ChatGPT, You're Definitely Missing Out

In case you've missed the buzz, OpenAI just publicly launched its latest language generation robot, ChatGPT. ChatGPT is a powerful language model that uses deep learning techniques to generate human-like text. In other words, the robot is capable of answering all of your questions and prompts like an intelligent human would.

The model can be fine-tuned for specific tasks, such as language translation, summarization, and question answering. Here are 10 different ways you can use ChatGPT to do work for you and increase your productivity.





Seems like ChatGPT is all the rage.

https://www.deseret.com/2023/1/23/23562681/chatgpt-artificial-intelligence-critical-thinking-dark-ages

Perspective: ChatGPT and the dawn of the new Dark Ages

In a post-literate era, those who continue to read and to practice critical thinking skills will increasingly be unable to effectively communicate with those who don’t

There are some larger questions here, though. With the advent of ChatGPT, there has been increasing discussion about whether Western civilization is moving into a “post-literate” era. We’re at a 40-year low in the U.S. in terms of young people reading for pleasure, according to Pew Research. Bosses complain their younger employees boast that they don’t read emails — even work emails — at all. Universities are dropping requirements for standardized test scores and even personal statements from applicants. Short TikTok videos and 280-character tweets are the limited and limiting daily fare of the rising generation. One high school student told a writer that “I should be on TikTok, because Andrew Tate is, and because it’s neither here nor there if I write books because his generation doesn’t read.”



Monday, January 23, 2023

Of course there’s an App for that.

https://hackaday.com/2023/01/21/all-your-keys-are-belong-to-keydecoder/

ALL YOUR KEYS ARE BELONG TO KEYDECODER

Physical security is often considered simpler than digital security, since safes are heavy and physical keys take more effort to duplicate than those of the digital persuasion. [Maxime Beasse and Quentin Clement] have developed a smartphone app that can duplicate a key from a photo, making key copying much easier.

KeyDecoder is an open source Android app that can generate all the necessary bitting info to duplicate a key from just an image. Luckily for the paranoid among us, the image must be taken with the key lying flat, without a keyring, on an ISO/CEI 7810 ID-1 card (such as an ID or credit card). A passerby can’t just snap a photo of your keys across the room and go liberate your home furnishings, but it still would be wise to keep a closer eye on your keys now that this particular cat hack is out of the bag.





Ensuring that no one ever sees an alternative point of view?

https://www.bespacific.com/florida-teachers-told-to-remove-books-from-classroom-libraries-or-risk-felony-prosecution/

Florida teachers told to remove books from classroom libraries or risk felony prosecution

My Sun Coast: “Manatee County Schools Spokesperson Michael Barber confirms that communication has been sent to principals of schools to vet books teachers have in their classrooms. In December, House Bill 1467, covering School Library and Instructional Materials, required school districts to adopt procedures for determining and reviewing content for library media centers. This has been extended to books in the classroom. Educators who reached out to ABC7 say that their books are being inspected Friday and books that don’t meet the guidelines will be removed. All books in teacher classrooms must be vetted to determine they have been approved. The Department of Education must publish and update a list of materials that were removed or discontinued by district school boards as a result of an objection and disseminate the list to school districts for consideration in their instructional materials selection… You can read the guidance in its entirety below: Per the new statutory changes to House Bill 1467 – Section 1006.40 (3) (d), F.S., all material in school and classroom libraries or included on a reading list must be:

  1. Free of Pornography and material prohibited under S. 847.012, F.S.

  2. Suited to student needs and their ability to comprehend the material presented.

  3. Appropriate for the grade level and age group for which the materials are used and made available.

Each elementary school must publish on its website, in a searchable format prescribed by the department, a list of all materials maintained in the school library media center or required as part of a school or grade-level reading list. Penalty for Violating Section 847.012, F.S. Any person violating any provision of this section commits a felony of the third degree, punishable as provided in S. 775.082, S. 775.083, or s. 775.084.”



(Related)

https://www.bespacific.com/students-want-new-books-thanks-to-restrictions-librarians-cant-buy-them/

Students want new books. Thanks to restrictions, librarians can’t buy them.

Washington Post: Schools are struggling to keep their shelves stocked as oversight by parents and school boards intensifies – “States and districts nationwide have begun to constrain what librarians can order. At least 10 states have passed laws giving parents more power over which books appear in libraries or limiting students’ access to books, a Washington Post analysis found. At the same time, school districts are passing policies that bar certain kinds of texts (most often, those focused on issues of gender and sexuality) while increasing administrative or parental oversight of acquisitions…”





My English teacher friend strongly disagrees…

https://www.vice.com/en/article/xgyjm4/ai-writing-tools-like-chatgpt-are-the-future-of-learning-and-no-its-not-cheating

AI Writing Tools Like ChatGPT Are the Future of Learning & No, It’s Not Cheating

ChatGPT is the most advanced technology of its kind and its popularity is growing fast. Especially among students.

Already, in the US, some schools have “banned” the URL to mitigate fears of negative effects on students. ChatGPT has quickly become synonymous with “cheating.”

But computer science experts, and even the universities themselves, say this technology is only the beginning of a new era of learning.

“I think it’s an increase in human capability moment that we’re looking at right now,” co-director at Deakin University’s Centre for Research in Assessment and Digital Learning, Phillip Dawson, told VICE.

“I think a student that graduates in five years’ time is going to be able to do so much more than what we are capable to do now because they’ll be using these sorts of tools.”

Dawson described ChatGPT as a writing tool and compared students using it to help them write essays to a pilot learning how to fly a modern plane.

“Yeah, you need to be able to use all the instruments and you need to know how all those work, but you also need to be able to do it when all those instruments fail. You still need to be able to land that plane.”





Perspective.

https://thenextweb.com/news/how-will-chatgpt-dall-e-and-other-ai-tools-impact-the-future-of-work-we-asked-5-experts

How will ChatGPT, DALL-E and other AI tools impact the future of work? We asked 5 experts

From steam power and electricity to computers and the internet, technological advancements have always disrupted labor markets, pushing out some careers while creating others. Artificial intelligence remains something of a misnomer — the smartest computer systems still don’t actually know anything — but the technology has reached an inflection point where it’s poised to affect new classes of jobs: artists and knowledge workers.

To jump ahead to each response, here’s a list of each:

Creativity for all – but loss of skills?
Potential inaccuracies, biases and plagiarism
With humans surpassed, niche and ‘handmade’ jobs will remain
Old jobs will go, new jobs will emerge
Leaps in technology lead to new skills



Sunday, January 22, 2023

If I never use social media, am I flagged as ‘suspicious’?

https://www.igi-global.com/chapter/social-media-intelligence/317161

Social Media Intelligence: AI Applications for Criminal Investigation and National Security

This chapter aims at discussing how social media intelligence (SOCMINT) can be and has been applied to the field of criminal justice. SOCMINT is composed of a set of computer forensic techniques used for intelligence gathering on social media platforms. Through this chapter, readers will be able to better understand what SOCMINT is and how it may be helpful for criminal investigation and national security. Different aspects of SOCMINT are addressed, including application in criminal justice, intelligence gathering, monitoring, metadata, cyber profiling, social network analysis, tools, and privacy concerns. Further, the challenges and future research directions are discussed as well. This chapter is not meant to serve as a technical tutorial as the focus is on the concepts rather than the techniques.





I find the debate interesting, but suspect the answer will always be “AI = person.”

https://link.springer.com/article/10.1007/s12369-022-00958-y

Can Robots have Personal Identity?

This article attempts to answer the question of whether robots can have personal identity. In recent years, and due to the numerous and rapid technological advances, the discussion around the ethical implications of Artificial Intelligence, Artificial Agents or simply Robots, has gained great importance. However, this reflection has almost always focused on problems such as the moral status of these robots, their rights, their capabilities or the qualities that these robots should have to support such status or rights. In this paper I want to address a question that has been much less analyzed but which I consider crucial to this discussion on robot ethics: the possibility, or not, that robots have or will one day have personal identity. The importance of this question has to do with the role we normally assign to personal identity as central to morality. After posing the problem and exposing this relationship between identity and morality, I will engage in a discussion with the recent literature on personal identity by showing in what sense one could speak of personal identity in beings such as robots. This is followed by a discussion of some key texts in robot ethics that have touched on this problem, finally addressing some implications and possible objections. I finally give the tentative answer that robots could potentially have personal identity, given other cases and what we empirically know about robots and their foreseeable future.





Wrong by design?

https://scholarship.richmond.edu/pilr/vol26/iss1/8/

From Ban to Approval: What Virginia's Facial Recognition Technology Law Gets Wrong

Face recognition technology (FRT), in the context of law enforcement, is a complex investigative technique that includes a delicate interplay between machine and human. Compared to other biometric and investigative tools, it poses unique risks to privacy, civil rights, and civil liberties. At the same time, its use is generally unregulated and opaque. Recently, state lawmakers have introduced legislation to regulate face recognition technology, but this legislation often fails to account for the complexities of the technology, or to address the unique risks it poses. Using Virginia’s recently passed face recognition law and the legislative history behind it as an example, we show how legislation can fail to properly account for the harms of this technology.





AI as slave?

https://www.cambridge.org/core/journals/legal-studies/article/bridging-the-accountability-gap-of-artificial-intelligence-what-can-be-learned-from-roman-law/8B2B88D50E0A795F358C2F53958BDB43

Bridging the accountability gap of artificial intelligence – what can be learned from Roman law?

This paper discusses the accountability gap problem posed by artificial intelligence. After sketching out the accountability gap problem we turn to ancient Roman law and scrutinise how slave-run businesses dealt with the accountability gap through an indirect agency of slaves. Our analysis shows that Roman law developed a heterogeneous framework in which multiple legal remedies coexist to accommodate the various competing interests of owners and contracting third parties. Moreover, Roman law shows that addressing the various emerging interests had been a continuous and gradual process of allocating risks among different stakeholders. The paper concludes that these two findings are key for contemporary discussions on how to regulate artificial intelligence.