Saturday, June 10, 2023

If this is all you want…

https://fpf.org/blog/connecticut-shows-you-can-have-it-all/

CONNECTICUT SHOWS YOU CAN HAVE IT ALL

On June 3rd, Connecticut Senate Bill 3 (SB 3), an “Act Concerning Online Privacy, Data and Safety Protections,” cleared the state legislature following unanimous votes in the House and Senate. If enacted by Governor Lamont, SB 3 will amend the Connecticut Data Privacy Act (CTDPA) to create new rights and protections for consumer health data and minors under the age of 18, and also make small-but-impactful amendments to existing provisions of the CTDPA. The bill also contains some standalone sections, such as a section requiring the operators of online dating services within the state to implement new safety features, including a mechanism to report “harmful or unwanted” behavior.





Dodge the US visa hassle, move the AI experts closer, win points with Canada?

https://www.aroged.com/2023/06/10/microsoft-moves-ai-research-from-china-to-canada/

Microsoft moves AI research from China to Canada

Microsoft is reportedly transferring its leading artificial intelligence researchers from China to Canada, a move that threatens to hollow out one of the most important training centers for talented engineers in China. Beijing-based Microsoft Research Asia (MSRA) has begun seeking visas to transfer top AI professionals to its research institute in Vancouver, the Financial Times reported, citing informed sources.

The report said the software giant plans to relocate between 20 and 40 MSRA employees to a new lab staffed with experts from around the world. The so-called “Vancouver plan” is reportedly a response to increased political tensions between the US and China, as well as a maneuver to keep the best specialists from being poached by local technology companies. Two MSRA employees confirmed that they recently received job offers from Chinese Internet companies but turned them down and applied for visas to relocate to Canada.





Anything that makes life easier…

https://www.theregister.com/2023/06/09/fbi_fisa_section_702_absolutely/

FBI: FISA Section 702 'absolutely critical' to spy on, err, protect Americans

FISA is the federal law enacted in 1978 that allows the Feds to collect foreign intelligence domestically, and Section 702 permits the targeted surveillance of communications belonging to people outside the US, ideally to prevent criminal and terrorist acts.

Section 702 is set to expire at the end of the year unless Congress renews it. This pending deadline has seen law enforcement putting the full court press on lawmakers to ensure it stays intact, even as some of them — including US Senator Ron Wyden (D-OR) — have called for reform.

Abbate was the keynote speaker at Wednesday's Boston Conference on Cyber Security. During his talk, he told attendees that the FBI "cannot afford to lose" Section 702.





A job opening for my Security students? (Will the Broncos follow suit?)

https://www.databreaches.net/49ers-agree-to-settle-data-breach-class-action-lawsuit-must-create-new-it-positions/

49ers agree to settle data breach class action lawsuit, must create new IT positions

This site cannot keep up with all the class action litigation settlements, but when we do report on one, we try to see what the settlement requires in terms of improving infosecurity and cybersecurity. Here’s one with a requirement, as reported by The Athletic:

The San Francisco 49ers agreed to settle a class action lawsuit stemming from a February 2022 ransomware attack on the team’s data servers that exposed personal information of over 20,000 employees, officials and fans. The plaintiffs filed settlement papers Thursday in California federal court.
The proposed settlement, which covers 20,930 individuals, requires the team to create a new position — executive vice president of technology — to oversee IT operations, and hire a dedicated cyber-security IT professional.

The ransomware attack was previously reported in February 2022.



Friday, June 09, 2023

Irresponsible government? I’m shocked! Shocked I tell you.

https://www.databreaches.net/high-court-sides-with-medicaid-fraudster-in-identity-theft-case/

High court sides with Medicaid fraudster in identity theft case

Alexandra Jones reports:

The Supreme Court unanimously shot down the government’s broad reading of identity theft law Thursday in a decision that will shorten the prison sentence of an Austin psychologist who defrauded Medicaid.
“While the Government represents that prosecutors will act responsibly in charging defendants under its sweeping reading, this Court ‘cannot construe a criminal statute on the assumption that the Government will use it responsibly,’” Justice Sonia Sotomayor wrote for the court in the 21-page ruling.

Read more at Courthouse News.

So I hope some criminal defense attorneys blog about this decision and explain its implications for all the federal prosecutions of hackers where we often see aggravated identity theft charges tacked on, with mandatory two-year sentences to be served *consecutively* after other sentences.

And what will happen to all those defendants previously convicted and serving those sentences? Will this ruling make any difference or become grounds for a lot of appeals?





How refreshing! Quoting an AI to caution against AI.

https://reason.com/volokh/2023/06/08/another-judicial-order-related-to-lawyer-use-of-generative-ai/

Another Judicial Order Related to Lawyer Use of Generative AI

An order governing filings before Magistrate Judge Gabriel A. Fuentes (N.D. Ill.), adopted May 31 (paragraph breaks added):

The Court has adopted a new requirement in the fast-growing and fast-changing area of generative artificial intelligence ("AI") and its use in the practice of law. The requirement is as follows:
Any party using any generative AI tool in the preparation or drafting of documents for filing with the Court must disclose in the filing that AI was used and the specific AI tool that was used to conduct legal research and/or to draft the document.
Further, Rule 11 of the Federal Rules of Civil Procedure continues to apply, and the Court will continue to construe all filings as a certification, by the person signing the filed document and after reasonable inquiry, of the matters set forth in the rule, including but not limited to those in Rule 11(b)(2). Parties should not assume that mere reliance on an AI tool will be presumed to constitute reasonable inquiry, because, to quote a phrase, "I'm sorry, Dave, I'm afraid I can't do that …. This mission is too important for me to allow you to jeopardize it." 2001: A SPACE ODYSSEY (Metro-Goldwyn-Mayer 1968).





I did not see this coming…

https://techcrunch.com/2023/06/08/mercedes-first-to-sell-vehicles-in-california-with-hands-free-eyes-off-automated-driving/

Mercedes first to sell vehicles in California with hands-free, eyes-off automated driving

Mercedes-Benz received a permit from California regulators that will allow the German automaker to sell or lease vehicles in the state equipped with a conditional automated driving system that allows for hands-off, eyes-off driving on certain highways.

The California Department of Motor Vehicles said Thursday it issued an autonomous vehicle deployment permit to Mercedes-Benz for its branded Drive Pilot system. The hands-off, eyes-off system can be used on designated California highways, including Interstate 15, under certain conditions without the active control of a human driver. This means drivers can watch videos, text or talk to a passenger (or even mess around with any number of third-party apps coming to new Mercedes models) without watching the road ahead or having their hands on the wheel.





Tools & Techniques.

https://www.bespacific.com/a-map-of-ai-for-education/

A Map of AI for Education

We introduce a new map of the current state-of-the-art: “One morning shortly after Thanksgiving, 2022, we woke up to discover that technological capability had advanced by five years while we were sleeping. It took another week or two for us to realize it, but that event, the launch of ChatGPT, may have a more far-reaching effect on K-12 education than on any other sector of life. In normal times, technology advances in step with its application, with the user experience, the interactions that unfold in and out of the classroom. Whiteboards become smart boards. But 2022–23 feels more like a dislocation. How will these remarkable advances emerge into the experience of students and teachers? We want to map that landscape in its earliest stage and watch how it evolves. Many of the possibilities we describe in more detail below are unexplored, while others have been substantially investigated by startups, researchers, and — as Ethan Mollick, a professor at Wharton, has emphasized — individual students and educators experimenting. It’s not easy to predict, but two paths seem possible. The first is what has almost always happened to new technology in the classroom: it rearranges the furniture. Laptops become expensive slide projectors. Personalized instruction winds up meaning worksheets with garish dashboards added. It was recently estimated that the average teacher uses 86 such tools regularly. The second path is that the inefficiency and dullness of the industrial way of schooling begin to disappear. Many of the teaching practices that learning science has shown to be most effective — such as active learning and frequent feedback — and most engaging for students — such as role play and project work — require significant time most teachers just don’t have. 
Could that change if every teacher had an assistant, a sort of copilot in the work of taking a class of students (with varying backgrounds, levels of engagement, and readiness-to-learn) from wherever they start to highly skilled, competent, and motivated young people? We will see.…”



Thursday, June 08, 2023

Clearly they calculate the risk to be less than their profits. Am I missing something?

https://www.reuters.com/technology/adobe-pushes-firefly-ai-into-big-business-with-financial-cover-2023-06-08/

Adobe pushes Firefly AI into big business, with financial cover

Adobe Inc said on Thursday it will offer Firefly, its artificial intelligence tool for generating images, to its large business customers, with financial indemnity for copyright challenges involving content made with the tools.

The move to include compensation comes amid a rise in lawsuits around the image data used in AI services from companies such as Stability AI and Midjourney that can generate imagery from just a few words of text.

Adobe earlier this year released a test version of Firefly, its own service which it says was created with legally safe image data.



(Related)

https://news.bloomberglaw.com/artificial-intelligence/openai-hit-with-first-defamation-suit-over-chatgpt-hallucination

OpenAI Hit With First Defamation Suit Over ChatGPT Hallucination

OpenAI LLC is facing a defamation lawsuit from a Georgia radio host who claimed the viral artificial intelligence program ChatGPT generated a false legal complaint accusing him of embezzling money.

The first-of-its-kind case comes as generative AI programs face heightened scrutiny over their ability to spread misinformation and “hallucinate” false outputs, including fake legal precedent.





Don’t worry, Officer Fox will protect your chickens…

https://www.politico.com/news/2023/06/07/ai-google-executive-healthcare-00100817

Use AI to regulate AI, Google executive says

Most of the focus in Washington on AI centers on how agencies should regulate its use by the private sector, with the FDA planning rules for its use in health care.

Regulators in charge of ensuring that artificial intelligence helps — and doesn’t harm — patients could use AI to do it, a former FDA official said Wednesday at POLITICO’s Health Care Summit.

Bakul Patel, who for years worked on digital health initiatives at the FDA before becoming head of digital health regulatory strategy at Google, said regulators need to think differently about how they set rules for the nascent technology.

“We need to start thinking: How do we use technology to … make technology a partner in the regulation?” he said.



Wednesday, June 07, 2023

It’s not ‘fear of AI,’ it’s ‘distrust of lazy lawyers.’ I hope this spreads to other professions.

https://www.huntonprivacyblog.com/2023/06/06/will-mandatory-generative-ai-use-certifications-become-the-norm-in-legal-filings/

Will Mandatory Generative AI Use Certifications Become the Norm in Legal Filings?

On June 2, 2023, Judge Brantley Starr of the U.S. District Court for the Northern District of Texas released what appears to be the first standing order regulating use of generative artificial intelligence (“AI”)—which has recently emerged as a powerful tool on many fronts—in court filings. Generative AI provides capabilities for ease of research, drafting, image creation and more. But along with this new technology comes the opportunity for abuse, and the legal system is taking notice.

Judge Starr’s new order requires the following:

All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.

Read the full Client Alert.





Perspective.

https://www.databreaches.net/the-2023-verizon-dbir-is-out-get-your-copy-now/

The 2023 Verizon DBIR is out — get your copy now

Verizon’s top-notch annual Data Breach Investigations Report (DBIR) is out.

You can jump to the Executive Summary of the report, download the entire report, or view it online.

Here is its seven key insights infographic, below. Of the seven key insights, the figure that stands out the most to me is 74%:

74% of all breaches include the human element through Error, Privilege Misuse, Use of stolen credentials or Social Engineering





Perspective.

https://a16z.com/2023/06/06/ai-will-save-the-world/

Why AI Will Save the World

Why AI Can Make Everything We Care About Better

The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better. Smarter people have better outcomes in almost every domain of activity …

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.





Tools & Techniques.

https://cointelegraph.com/news/google-cloud-launches-free-courses-to-help-users-build-their-own-gpt-style-ai

Google Cloud launches free courses to help users build their own GPT-style AI

The new corpus includes nine courses and a set of labs that students can complete to earn a Google Cloud skills badge.



Tuesday, June 06, 2023

I suspect that if everything AI gets the same label, it will be ignored. There must be a better way...

https://thenextweb.com/news/eu-wants-tech-platforms-label-ai-generated-content-immediately

Label AI-generated content ‘immediately,’ EU urges big tech

The EU is pushing big tech to apply a new method for tackling AI disinformation: labels.

The bloc wants online platforms to mark any AI-generated photos, videos, and text, a top official announced on Monday.





Resource.

https://www.bespacific.com/awesome-privacy-guides/

Awesome Privacy Guides

“Awesome Privacy – List of free, open source, and privacy-respecting services and alternatives to private services, such as those provided by Google/Alphabet. Anonymity, Privacy, and Security are often used interchangeably, but they actually represent distinct concepts. It is important to understand the differences between them. Read more in this section below. The primary focus of this list is to provide alternatives that prioritize privacy. These alternatives give you control over your data and do not collect or sell it.”

  • Privacy Tools – If you’re looking for a specific solution to something, these are the hardware and software tools we recommend in a variety of categories. Our recommended privacy tools are primarily chosen based on security features, with additional emphasis on decentralized and open-source tools. They are applicable to a variety of threat models ranging from protection against global mass surveillance programs and avoiding big tech companies to mitigating attacks, but only you can determine what will work best for your needs.





Tools & Techniques.

https://www.makeuseof.com/chatgpt-apps-to-analyze-chat-with-documents-pdfs/

6 ChatGPT Apps to Analyze and Chat With Your Documents and PDFs

ChatGPT has already wowed the world with how it takes information from the internet and condenses it into succinct answers for your queries. Not many people know that you can also ask ChatGPT to read your PDFs and chat about their contents. But if that's your objective, then these apps offer better options, from increased database sizes to creating chatbots from multiple documents.



Monday, June 05, 2023

Articles for anyone concerned about AI.

https://venturebeat.com/ai/ai-experts-challenge-doomer-narrative-including-extinction-risk-claims/

AI experts challenge ‘doomer’ narrative, including ‘extinction risk’ claims

Top AI researchers are pushing back on the current ‘doomer’ narrative focused on existential future risk from runaway artificial general intelligence (AGI). These include yesterday’s Statement on AI Risk, signed by hundreds of experts including the CEOs of OpenAI, DeepMind and Anthropic, which warned of a “risk of extinction” from advanced AI if its development is not properly managed.

Many say this ‘doomsday’ take, with its focus on existential risk from AI, or x-risk, is happening to the detriment of a necessary focus on current, measurable AI risks — including bias, misinformation, high-risk applications and cybersecurity. The truth is, they emphasize, that most AI researchers are not focused on or highly concerned about x-risk.

“It’s almost a topsy-turvy world,” Sara Hooker, head of the nonprofit Cohere for AI and former research scientist at Google Brain, told VentureBeat. “In the public discourse, [x-risk] is being treated as if it’s the dominant view of this technology.” But, she explained, at machine learning (ML) conferences such as the recent International Conference on Learning Representations (ICLR) in early May that attracts researchers from all over the world, x-risk was a “fringe topic.”



(Related)

https://www-ft-com.ezp.lib.cam.ac.uk/content/c1f6d948-3dde-405f-924c-09cc0dcf8c84

Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’

… “The machines we have now, they’re not conscious,” he says. “When one person teaches another person, that is an interaction between consciousnesses.” Meanwhile, AI models are trained by toggling so-called “weights” or the strength of connections between different variables in the model, in order to get a desired output. “It would be a real mistake to think that when you’re teaching a child, all you are doing is adjusting the weights in a network.”

Chiang’s main objection, a writerly one, is with the words we choose to describe all this. Anthropomorphic language such as “learn”, “understand”, “know” and personal pronouns such as “I” that AI engineers and journalists project on to chatbots such as ChatGPT create an illusion. This hasty shorthand pushes all of us, he says — even those intimately familiar with how these systems work — towards seeing sparks of sentience in AI tools, where there are none.

“There was an exchange on Twitter a while back where someone said, ‘What is artificial intelligence?’ And someone else said, ‘A poor choice of words in 1954’,” he says. “And, you know, they’re right. I think that if we had chosen a different phrase for it, back in the ’50s, we might have avoided a lot of the confusion that we’re having now.”

So if he had to invent a term, what would it be? His answer is instant: applied statistics.





Tools & Techniques.

https://www.bespacific.com/the-best-ways-to-scan-a-document-using-your-phone-or-tablet/

The Best Ways to Scan a Document Using Your Phone or Tablet

How-To Geek: “Scanners had their moment, but nowadays it’s not as necessary to own one. However, that doesn’t mean you never need to scan a document or photo. Thankfully, you probably have some tools to do it without a scanner. If you find yourself scanning a lot of documents and photos, it’s a good idea to invest in an actual scanner. Most people only need to scan a few things a year, so we’ll show you some good alternatives.”



Sunday, June 04, 2023

Interesting if you are tracking this stuff. I can’t see training officers on every tool, but they should have an idea of the capabilities of each.

https://www.nbcsandiego.com/news/local/san-diego-police-reveal-surveillance-technology-tools/3239207/

San Diego Police Reveals List of What Surveillance Technology Tools it Uses

If you've ever wondered what spy technology the San Diego Police Department uses to help solve crimes, now is your chance to find out.

The department published a list on Thursday of surveillance technologies it already uses or wishes to use this year.



Concern: If you go beyond the legal requirements, are you handicapping your AI?

https://dl.acm.org/doi/abs/10.1145/3593434.3593453 

Implementing AI Ethics: Making Sense of the Ethical Requirements

Society’s increasing dependence on Artificial Intelligence (AI) and AI-enabled systems requires a more practical approach from software engineering (SE) executives in middle and higher-level management to improve their involvement in implementing AI ethics by making ethical requirements part of their management practices. However, research indicates that most work on implementing ethical requirements in SE management primarily focuses on technical development, with scarce findings for middle and higher-level management. We investigate this by interviewing ten Finnish SE executives in middle and higher-level management to examine how they consider and implement ethical requirements. We use the ethical requirements from the European Union (EU) Ethics Guidelines for Trustworthy AI as our reference and an Agile portfolio management framework to analyze implementation. Our findings reveal a general consideration of the privacy and data governance ethical requirements as legal requirements, with no other consideration for ethical requirements identified. The findings also show practicable consideration of the technical robustness and safety ethical requirements for implementation as risk requirements, and of societal and environmental well-being for implementation as sustainability requirements. We examine a practical approach to implementing ethical requirements using the ethical risk requirements stack employing the Agile portfolio management framework.



Clean data, clean answers? 

https://www.intechopen.com/online-first/1121510

Ethics in Scientific Research - New Perspectives [Working Title]

Artificial Intelligence (AI) equips machines with the capacity to learn. AI frameworks employing machine learning can discern patterns within vast data sets and construct intricate, interconnected systems that yield results that enhance the effectiveness of decision-making processes. AI, in particular machine learning, has been positioned as an important element in contributing to as well as providing decisions in a multitude of industries. The use of machine learning in delivering decisions is based on the data that is used to train the machine learning algorithms. It is imperative that when machine learning applications are being considered, the data being used to train the machine learning algorithms is without bias and is ethically used. This chapter focuses on the ethical use of data in developing machine learning algorithms. Specifically, this chapter will include the examination of AI bias and ethical use of AI, data ethics principles, selecting ethical data for AI applications, AI and data governance, and putting ethical AI applications into practice.



Warms the cockles of my auditor’s heart…

https://digitalcommons.law.scu.edu/chtlj/vol39/iss3/1/ 

ALGORITHMIC AUDITING: CHASING AI ACCOUNTABILITY

Calls for audits to expose and mitigate harms related to algorithmic decision systems are proliferating, and audit provisions are coming into force—notably in the E.U. Digital Services Act. In response to these growing concerns, research organizations working on technology accountability have called for ethics and/or human rights auditing of algorithms, and an Artificial Intelligence (AI) audit industry is rapidly developing, signified by the consulting giants KPMG and Deloitte marketing their services. Algorithmic audits are a way to increase accountability for social media companies and to improve the governance of AI systems more generally. They can be elements of industry codes, prerequisites for liability immunity, or new regulatory requirements. Even when not expressly prescribed, audits may be predicates for enforcing data-related consumer protection law, or what U.S. Federal Trade Commissioner Rebecca Slaughter calls “algorithmic justice.” The desire for audits reflects a growing sense that algorithms play an important, yet opaque, role in the decisions that shape people’s life chances—as well as a recognition that audits have been uniquely helpful in advancing our understanding of the concrete consequences of algorithms in the wild and in assessing their likely impacts.



A topic of interest.

https://link.springer.com/article/10.1007/s43681-023-00299-0

Navigating the legal landscape of AI copyright: a comparative analysis of EU, US, and Chinese approaches

This paper compares AI copyright approaches in the EU, US, and China, evaluating their effectiveness and challenges. It examines the recognition of AI-generated works as copyrightable and the exclusive rights of copyright owners to reproduce, distribute, publicly display, and perform such works. Differences in approaches, such as recognizing AI as a sui generis right holder in the EU and the broad fair use doctrine in the US, are highlighted. This paper evaluates strengths and weaknesses of each approach, including enforcement and ownership of copyright in AI-generated works, and clarifies issues related to AI and copyright. While the EU and US have more developed legal frameworks for AI copyright than China, all three approaches face challenges that need addressing. This paper concludes by providing insight into the legal landscape of AI copyright and steps necessary for effective protection and use of AI-generated works.



This was not the first impression of educators. (Panic)

https://journals.sfu.ca/jalt/index.php/jalt/article/view/797

The use of ChatGPT in the digital era: Perspectives on chatbot implementation

The rapid advancement of technology has led to the integration of ChatGPT, an artificial intelligence (AI)-powered chatbot, in various sectors, including education. This research aims to explore the perceptions of educators and students on the use of ChatGPT in education during the digital era. This study adopted a qualitative research approach, using in-depth interviews to gather data. A purposive sampling technique was used to select ten educators and 15 students from different academic institutions in Krabi, Thailand. The data collected was analysed using content analysis and NVivo. The findings revealed that educators and students generally have a positive perception of using ChatGPT in education. The chatbot was perceived to be a helpful tool for providing immediate feedback, answering questions, and providing support to students. Educators noted that ChatGPT could reduce their workload by answering routine questions and enabling them to focus on higher-order tasks. However, the findings also showed some concerns regarding the use of ChatGPT in education. Participants were worried about the accuracy of information provided by the chatbot and the potential loss of personal interaction with teachers. The need for privacy and data security was also raised as a significant concern. The results of this study could help educators and policymakers make informed decisions about using ChatGPT in education.



Tools & Techniques.

https://www.makeuseof.com/the-6-best-ai-tools-for-researchers-and-teachers/

The 6 Best AI Tools for Researchers and Teachers

Artificial intelligence can, when used correctly, offer several benefits for researchers and teachers. Here are some tools to help with your efforts.