Saturday, September 16, 2023

An outbreak of sanity? Not quite.

https://www.cpomagazine.com/data-privacy/privacy-advocates-celebrate-death-of-uk-online-safety-bill-clause-as-government-admits-encrypted-messaging-cant-be-scanned-without-breaking-it/

Privacy Advocates Celebrate Death of UK Online Safety Bill Clause as Government Admits Encrypted Messaging Can’t Be Scanned Without Breaking It

The most controversial portion of the United Kingdom’s Online Safety Bill appears to be dead in the water, as Ofcom has publicly admitted that the technology to create backdoors into encrypted messaging without breaking it does not exist and that the “spy clause” will not be enforced when the bill becomes law.

The Online Safety Bill remains otherwise intact, however, and the ministers involved with the issue appear to have not given up on the idea entirely. Minister Paul Scully said that companies will be directed to make their best efforts to develop technology to comply with the bill’s requirements for the monitoring and removal of child sexual abuse material from encrypted messaging platforms. The bill has not yet become law but is widely expected to before 2023 is out, with enforcement going into effect in mid-2024.





What would Clausewitz do?

https://www.newyorker.com/news/news-desk/ai-and-the-next-generation-of-drone-warfare

A.I. and the Next Generation of Drone Warfare

On August 28th, the Deputy Secretary of Defense, Kathleen Hicks, announced what she called the Replicator initiative—an all-hands-on-deck effort to modernize the American arsenal by adding fleets of artificially intelligent, unmanned, relatively cheap weapons and equipment. She described these machines as “attritable,” meaning that they can suffer attrition without compromising a mission. Imagine a swarm of hundreds or even thousands of unmanned aerial drones, communicating with each other as they collect intelligence on enemy-troop movements, and you will begin to understand the Deputy Secretary’s vision for Replicator. Even if a sizable number of the drones were shot down, the information they’d gathered would have already been recorded and sent back to human operators on the ground.





Useful thoughts?

https://www.thepublicdiscourse.com/2023/09/90834/

Artificial Mediocrity: The Hazard of AI in Education

AI-generated text provokes soaring hopes for limitless potential. Programs like ChatGPT and GrammarlyGO seem like wonderworkers. They “empower,” “assist,” and “inspire” their users. But after the first full semester of ChatGPT’s ubiquitous appearance in American classrooms, there is good reason to think that, far from helping students, chatbots imperil the very possibility of serious education. Chatbots replace disciplined learning with unthinking suggestibility and encourage students to avoid exercising practical judgment. Teachers must convey the hard reality that using a chatbot to skip the stages of an assignment that require organizing one’s own thoughts and research is not just dishonest, it is stultifying. It precludes excellence, and it encourages mediocrity.



Friday, September 15, 2023

Interact with some of the old AI systems.

https://www.nature.com/immersive/d41586-023-02822-z/index.html

A test of artificial intelligence

As debate rages over the abilities of modern AI systems, scientists are still struggling to effectively assess machine intelligence.





No expectation of privacy, except when viewed by a drone? (And we’re not even talking AI!)

https://www.pogowasright.org/eff-to-michigan-court-governments-shouldnt-be-allowed-to-use-a-drone-to-spy-on-you-without-a-warrant/

EFF to Michigan Court: Governments Shouldn’t Be Allowed to Use a Drone to Spy on You Without a Warrant

Hannah Zhao of EFF writes:

Should the government have to get a warrant before using a drone to spy on your home and backyard? We think so, and in an amicus brief filed last Friday in Long Lake Township v. Maxon, we urged the Michigan Supreme Court to find that warrantless drone surveillance of a home violates the Fourth Amendment.

In this case, Long Lake Township hired private operators to repeatedly fly drones over Todd and Heather Maxon’s home to take aerial photos and videos of their property in a zoning investigation. The Township did this without a warrant and then sought to use this documentation in a court case against them. In our brief, we argue that the township’s conduct was governed by and violated the Fourth Amendment and the equivalent section of the Michigan Constitution.

The Township argued that the Maxons had no reasonable expectation of privacy based on a series of cases from the U.S. Supreme Court in the 1980s. In those cases, law enforcement used helicopters or small planes to photograph and observe private backyards that were thought to be growing cannabis. The Court found there was no reasonable expectation of privacy—and therefore no Fourth Amendment issue—from aerial surveillance conducted by manned aircraft.

But, as we pointed out in our brief, drones are fundamentally different from helicopters or airplanes. Drones can silently and unobtrusively gather an immense amount of data at only a tiny fraction of the cost of traditional aircraft. In other words, the government can buy thousands of drones for the price of one helicopter and its hired pilot. Drones are also smaller and easier to operate. They can fly at much lower altitudes, and they can get into spaces—such as under eaves or between buildings—that planes and helicopters can never enter. And the noise created by manned airplanes and helicopters functions as notice to those who are being watched—it’s unlikely you’ll miss a helicopter circling overhead when you’re sunbathing in your yard, but you may not notice a drone.

Drone prevalence has soared in recent years, fueled by both private and governmental use. We have documented more than 1,471 law enforcement agencies across the United States that operate drones. In some cities, police have begun implementing drone-as-first-responder programs, in which drones are constantly flying over communities in response to routine calls for service. It’s important to remember that communities of color are more likely to be the targets of governmental surveillance. And authorities have routinely used aerial surveillance technologies against individuals participating in racial justice movements. Against this backdrop, states like Florida, Maine, Minnesota, Nevada, North Dakota, and Virginia have enacted statutes requiring warrants for police use of drones.

Warrantless drone surveillance represents a formidable threat to privacy and it’s imperative for courts to recognize the danger that governmental drone use poses to our Fourth Amendment rights.

This article originally appeared at EFF.





Perspective.

https://www.bespacific.com/google-data-commons-ai/

Data Commons is using AI to make the world’s public data more accessible and helpful

Google Paper on Data Commons, September 12, 2023: “Publicly available data from open sources (e.g., United States Census Bureau (Census) [1], World Health Organization (WHO) [2], Intergovernmental Panel on Climate Change (IPCC) [3]) are vital resources for policy makers, students and researchers across different disciplines. Combining data from different sources requires the user to reconcile the differences in schemas, formats, assumptions, and more. This data wrangling is time consuming, tedious and needs to be repeated by every user of the data. Our goal with Data Commons (DC) is to help make public data accessible and useful to those who want to understand this data and use it to solve societal challenges and opportunities. We do the data processing and make the processed data widely available via standard schemas and Cloud APIs. Data Commons is a distributed network of sites that publish data in a common schema and interoperate using the Data Commons APIs. Data from different Data Commons can be ‘joined’ easily. The aggregate of these Data Commons can be viewed as a single Knowledge Graph. This Knowledge Graph can then be searched over using Natural Language questions utilizing advances in Large Language Models. This paper describes the architecture of Data Commons, some of the major deployments and highlights directions for future work.”

Data Sources

Data in the Data Commons Graph comes from a variety of sources, each of which often includes multiple surveys. Some sources/surveys include a very large number of variables, some of which might not yet have been imported into Data Commons. The sources have been grouped by category and are listed alphabetically within each category.

    1. Agriculture

    2. Biomedical

    3. Crime

    4. Demographics

    5. Economy

    6. Education

    7. Energy

    8. Environment

    9. Health

    10. Housing

We also maintain a list of upcoming data imports.
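The “easy join” the paper describes rests on every publisher using the same schema, i.e. observations keyed by shared entity identifiers and statistical variables. A toy sketch of that idea in Python (the source names, entity IDs, and values here are illustrative, not the real Data Commons API or its data):

```python
# Two hypothetical publishers expose observations in a common schema:
# keys are (entity dcid, statistical variable) pairs. Because the schema
# is shared, no per-source wrangling is needed before joining.

census = {  # hypothetical source A
    ("geoId/06", "Count_Person"): 39_500_000,
    ("geoId/48", "Count_Person"): 29_100_000,
}
who = {     # hypothetical source B, same schema, different variable
    ("geoId/06", "LifeExpectancy"): 80.9,
    ("geoId/48", "LifeExpectancy"): 78.5,
}

def join_commons(*sources):
    """Merge observations from several Data Commons-style sources into
    a single graph view: {entity: {variable: value}}."""
    graph = {}
    for source in sources:
        for (entity, variable), value in source.items():
            graph.setdefault(entity, {})[variable] = value
    return graph

graph = join_commons(census, who)
print(graph["geoId/06"])
# {'Count_Person': 39500000, 'LifeExpectancy': 80.9}
```

In the real system the aggregate of such joined sources is what the paper calls the single Knowledge Graph, which can then be queried through the Cloud APIs or in natural language.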

Thursday, September 14, 2023

A carrot to go with the stick? Would a larger stick have been better?

https://www.databreaches.net/disclose-data-breaches-to-us-proactively-and-well-lower-any-fines-ico/

Disclose data breaches to us proactively, and we’ll lower any fines — ICO

Emma Woollacott reports:

British businesses could face lower fines if they proactively report data breaches, thanks to an agreement between the UK’s data protection regulator and cybersecurity agency.
The Information Commissioner’s Office (ICO) and National Cyber Security Centre (NCSC) say they plan to encourage engagement with the NCSC in the event of a breach, and allow meaningful engagement with the NCSC to lead to reduced regulatory penalties.

Read more at Forbes.

Woollacott cites the ICO’s report last year indicating that, since 2019, fewer than a third of breaches involving personal data were reported within GDPR’s 72-hour deadline. Offering the possibility of reduced fines for compliance, if coupled with the ICO actually imposing fines for noncompliance, may work well.





Would you allow consultants to train their AI on your data? Would you trust AI trained on someone else’s data?

https://www.businessinsider.com/ey-ernst-young-consulting-invests-ai-strategy-training-model-tools-2023-9

EY has created its own large-language model — and says it will train all 400,000 employees to use it as part of a $1.4 billion investment

Ernst & Young is betting big on AI.

On Wednesday, the consulting and strategy giant announced it had completed a $1.4 billion investment into AI and that, over the last 18 months, it had developed a series of in-house artificial intelligence tools.

As part of its investment, the firm developed its own large language model, EY.ai EYQ, which will be used as an in-house chat interface. The model is currently trained on information publicly available on the internet, but the company hopes to train it on internal data, like more than a century's worth of tax figures, The Wall Street Journal reported.





What if it becomes mandatory? (Check the images)

https://www.science.org/doi/10.1126/sciadv.adi6492

3D-printed epifluidic electronic skin for machine learning–powered multimodal health surveillance

The amalgamation of wearable technologies with physiochemical sensing capabilities promises to create powerful interpretive and predictive platforms for real-time health surveillance. However, the construction of such multimodal devices is difficult to be implemented wholly by traditional manufacturing techniques for at-home personalized applications. Here, we present a universal semisolid extrusion–based three-dimensional printing technology to fabricate an epifluidic elastic electronic skin (e3-skin) with high-performance multimodal physiochemical sensing capabilities. We demonstrate that the e3-skin can serve as a sustainable surveillance platform to capture the real-time physiological state of individuals during regular daily activities. We also show that by coupling the information collected from the e3-skin with machine learning, we were able to predict an individual’s degree of behavior impairments (i.e., reaction time and inhibitory control) after alcohol consumption. The e3-skin paves the path for future autonomous manufacturing of customizable wearable systems that will enable widespread utility for regular health monitoring and clinical applications.





Why should you use AI?

https://www.bespacific.com/large-language-models-and-the-future-of-law/

Large Language Models and the Future of Law

Charlotin, Damien, Large Language Models and the Future of Law (August 22, 2023). Available at SSRN: https://ssrn.com/abstract=4548258 or http://dx.doi.org/10.2139/ssrn.4548258

“Large Language Models (LLMs) have crashed into the scene in late 2022, with ChatGPT in particular bringing to the mainstream what has before this remained within the domain of the initiates. This paper introduces the main features of LLMs and related Artificial Intelligence (AI) to the legal community, while reviewing their potential application in a legal context, as well as the main questions and issues raised by their increasing presence in a jurist’s life. Adopting a structural approach, the analysis highlights the areas of legal activity that stand to gain – or lose – from the generalisation of LLMs in our workflow. The radical innovation represented by LLMs will force jurists to rethink their approach to the law, their own role in it, and the future of legal education and training.”



(Related)

https://www.bespacific.com/how-to-use-large-language-models-for-empirical-legal-research-2/

How to Use Large Language Models for Empirical Legal Research

Choi, Jonathan H., How to Use Large Language Models for Empirical Legal Research (August 9, 2023). Journal of Institutional and Theoretical Economics (Forthcoming), Minnesota Legal Studies Research Paper No. 23-23, Available at SSRN: https://ssrn.com/abstract=4536852

“Legal scholars have long annotated cases by hand to summarize and learn about developments in jurisprudence. Dramatic recent improvements in the performance of large language models (LLMs) now provide a potential alternative. This Article demonstrates how to use LLMs to analyze legal documents. It evaluates best practices and suggests both the uses and potential limitations of LLMs in empirical legal research. In a simple classification task involving Supreme Court opinions, it finds that GPT-4 performs approximately as well as human coders and significantly better than a variety of prior-generation NLP classifiers, with no improvement from supervised training, fine-tuning, or specialized prompting.”





Resource.

https://www.bespacific.com/introducing-state-court-report/

Introducing State Court Report

“We’re excited to introduce you to State Court Report, a nonpartisan source for news, resources, and commentary focused on state courts and state constitutional development. All too often, popular commentary has treated the U.S. Supreme Court as the only word that matters on constitutional rights. But recent federal court rulings limiting or eliminating rights under the U.S. Constitution have brought increased attention to state courts and constitutions as important, independent sources of rights as well. What’s been missing is a forum dedicated to covering legal news, trends, and cutting-edge scholarship related to state constitutional law, and a hub where noteworthy state supreme court cases and case materials from across the 50 states are easy to find and access. Enter State Court Report, a project of the Brennan Center. State Court Report features insights and analysis from academics, journalists, judges, and practitioners with diverse perspectives and expertise. We hope you’ll take some time to explore State Court Report. You can read commentary, analysis, and explainers across more than a dozen issue areas or learn more about a particular state. Our State Case Database highlights notable state constitutional decisions and pending cases to watch in state high courts nationwide. And our free newsletter will deliver the latest State Court Report articles straight to your inbox. We are also honored to feature guest essays from former U.S. Attorney General Eric H. Holder Jr. and former Michigan Chief Justice Bridget Mary McCormack. Nearly 50 years ago, Justice William J. Brennan Jr. wrote, “State courts no less than federal are and ought to be the guardians of our liberties.” In courthouses across the country, state constitutional questions are regularly being considered. We hope that State Court Report will help foster greater understanding and awareness of these legal developments and their significance.”



Wednesday, September 13, 2023

Always interesting.

https://www.gartner.com/en/newsroom/press-releases/2023-09-13-gartner-identifies-five-technologies-that-will-transform-the-digital-future-of-enterprises

Gartner Identifies Five Technologies That Will Transform the Digital Future of Enterprises

Gartner, Inc. today highlighted five technologies that will transform the digital future of organizations. They include digital humans, satellite communications, tiny ambient IoT, secure computation and autonomic robots.





Perspective.

https://fpf.org/blog/how-data-protection-authorities-are-de-facto-regulating-generative-ai/

How Data Protection Authorities Are De Facto Regulating Generative AI

Generative AI took the world by storm in the past year, with services like ChatGPT becoming “the fastest growing consumer application in history.” For generative AI applications to be trained and to function, immense amounts of data, including personal data, are necessary. It should be no surprise that Data Protection Authorities (‘DPAs’) were the first regulators around the world to take action, from opening investigations to actually issuing orders imposing suspension of the services where they found breaches of data protection law.

… Defined broadly, DPAs are supervisory authorities vested with the power to enforce comprehensive data protection law in their jurisdictions. In the past six months, as the popularity of generative AI was growing among consumers and businesses around the world, DPAs started opening investigations into how the providers of such services are complying with legal obligations related to how personal data are collected and used, as provided in their respective national data protection law. Their efforts are focusing currently on OpenAI as the provider of ChatGPT. So far, only two of the investigations, in Italy and South Korea, have resulted in official enforcement action, albeit preliminary. Here is a list of known open investigations, their timeline, and key concerns:





Useful?

https://www.bespacific.com/how-to-use-google-lens-on-the-iphone/

How to Use Google Lens on the iPhone

How-To Geek: “What is Google Lens? – Google Lens is a neat little feature that can identify real-world objects, like signs, buildings, books, plants, and more using your phone’s camera. And it gives you more information about the object. It’s available in the Google Photos app, but if you don’t use Google Photos, you can now access Google Lens in the regular Google Search app. We’ll show you how to use Google Lens in both apps.”



Tuesday, September 12, 2023

It’s important if The Atlantic says so...

https://www.bespacific.com/the-atlantics-guide-to-privacy/

The Atlantic’s Guide to Privacy

The Atlantic’s Guide to Privacy [read free]: “In 2023, digital privacy is, in many ways, a fiction: Knowingly or not, we are all constantly streaming, beaming, being surveilled, scattering data wherever we go. Companies, governments, and our fellow citizens know more than we could ever imagine about our body, our shopping habits, even our kids. The question now isn’t how to protect your privacy altogether—it’s how to make choices that help you draw boundaries around what you most care about. Read on for our simple rules for managing your privacy, and get a list of personalized recommendations.”





Worth reading. What’s really going on?

https://aiguide.substack.com/p/can-large-language-models-reason

Can Large Language Models Reason?

What should we believe about the reasoning abilities of today’s large language models? As the headlines above illustrate, there’s a debate raging over whether these enormous pre-trained neural networks have achieved humanlike reasoning abilities, or whether their skills are in fact “a mirage.”

Reasoning is a central aspect of human intelligence, and robust domain-independent reasoning abilities have long been a key goal for AI systems. While large language models (LLMs) are not explicitly trained to reason, they have exhibited “emergent” behaviors that sometimes look like reasoning. But are these behaviors actually driven by true abstract reasoning abilities, or by some other less robust and generalizable mechanism—for example, by memorizing their training data and later matching patterns in a given problem to those found in training data?





Tools & Techniques. Perhaps a translation of Shakespeare?

https://www.zdnet.com/article/how-to-create-your-own-comic-books-with-ai/

How to create your own comic books with AI

You've dreamed of becoming a comic book artist, but you lack one important skill, namely the ability to draw. Well, now AI can fulfill those dreams for you. Available as a Space on Hugging Face, the AI Comic Factory will design comic book pages for you based on your descriptions.

Describe the scenario you envision, choose a style, and then select a layout. You can even opt to add captions. In response, the AI will create the necessary panels to form an entire page. You can then produce one page after another and save or print each page. Here's how it works.





Interesting business model.

https://www.fastcompany.com/90951343/ai-essays-advertising-on-meta-and-tiktok

Companies that use AI to help you cheat at school are thriving on TikTok and Meta

This is the first full academic year where students have access to AI-powered chatbots like ChatGPT. While students around the world may be tempted to deploy such assistants, the tech is still far from perfect: So-called hallucinations remain commonplace in chatbots’ responses, with research suggesting GPT-4 makes up one in five citations.

As a result of chatbots’ unreliability, many essay mills, which produce content for a fee, are touting that they combine both AI and human labor to create an end product that is undetectable by software designed to catch cheating. And, according to a new analysis published in open-access repository arXiv, such mills are soliciting clients on TikTok and Meta platforms—despite the fact that the practice is illegal in a number of countries, including England, Wales, Australia, and New Zealand.



Monday, September 11, 2023

I like it! No matter how unlikely it is.

https://www.pogowasright.org/a-radical-proposal-for-protecting-privacy-halt-industrys-use-of-non-content/

A Radical Proposal for Protecting Privacy: Halt Industry’s Use of ‘Non-Content’

Law professor and privacy scholar Susan Landau writes:

Following the spirit of consumer protection laws such as those requiring that cars must have seatbelts, we urge that, with narrow exceptions, regulations or legislation limit the uses of metadata and telemetry information to the purposes for which they were designed: delivery of content and better user experience on the device (or, in the case of augmented reality or virtual reality, for only those purposes off the device). We recommend allowing use for investigating fraud, ensuring security, including device and user identification (for security purposes only), and modeling to understand future business needs; these purposes are analogous to the business purposes to which AT&T put metadata in the pre-1990s age. Then allow two more purposes. First, for a limited period during a public health emergency, we recommend the use of data to provide information on public movement in aggregate. We also recommend allowing such information to be used for public or peer-reviewed research projects in the public interest such as for urban planning, including appropriate de-identification methods so that personal information is not exposed.

Read the entire piece on Lawfare. An expanded version of this article is now available in the Colorado Technology Law Journal as “Reversing Privacy Risks: Strict Limitations on the Use of Communications Metadata and Telemetry Information.”





What could possibly go wrong?

https://www.ft.com/content/783a9d91-cce3-4177-bfe0-5438aa3b892a

UK researchers start using AI for air traffic control

UK researchers have produced a computer model of air traffic control in which all flight movements are directed by artificial intelligence rather than human beings.

Their “digital twin” representation of airspace over England is the initial output of a £15mn project to determine the role that AI could play in advising and eventually replacing human air traffic controllers.

Dubbed Project Bluebird, the research is a partnership between National Air Traffic Services, the company responsible for UK air traffic control, the Alan Turing Institute, a national body for data science and AI, and Exeter university, with government funding through UK Research and Innovation, a state agency. Its first results were presented at the British Science Festival in Exeter.

Reasons for involving AI in air traffic control include the prospect of directing aircraft along more fuel-efficient routes to reduce the environmental impact of aviation, as well as cutting delays and congestion, particularly at busy airports such as London’s Heathrow.





Will they get it right? How could you tell if they failed?

https://techcrunch.com/2023/09/10/lexisnexis-generative-ai/?guccounter=1&guce_referrer=aHR0cHM6Ly9uZXdzLmdvb2dsZS5jb20v&guce_referrer_sig=AQAAAHdIHSgLXDh_utW1p1WxJxlHvCk__xdoeCVL6X8WES0SgRd1YH67EbFqECuF1ahzyO_Hc_aFIK_orPAco5uh9_Kktx0vbs2tCbEeIKN7gocM2txyv5DZG0WkcQo724UWF_6GevDrpi71NGs-fiKtxwO0H7hvJt_pCiwaPmCIofYA

LexisNexis is embracing generative AI to ease legal writing and research

Last June, just months after the release of ChatGPT from OpenAI, a couple of New York City lawyers infamously used the tool to write a very poor brief. The AI cited fake cases, leading to an uproar, an angry judge and two very embarrassed attorneys. It was proof that while bots like ChatGPT can be helpful, you really have to check their work carefully, especially in a legal context.

The case did not escape the folks at LexisNexis, a legal software company that offers tooling to help lawyers find the right case law to make their legal arguments. The company sees the potential of AI in helping reduce much of the mundane legal work that every lawyer undertakes, but it also recognizes these very real issues as it begins its generative AI journey.



Sunday, September 10, 2023

Imagine what you could talk your AI into doing… Imagine what your AI could talk you into doing…

https://www.makeuseof.com/how-to-speak-to-chatgpt/

Did You Know You Can Speak to ChatGPT?

Have you ever imagined conversing with an AI that understands and can respond to you with your voice? OpenAI's open-source speech recognition system called "Whisper" allows you to speak to ChatGPT and get answers to your questions.





Perhaps the change is too great to wrap your mind around?

https://www.psychologytoday.com/intl/blog/the-digital-self/202309/the-unfathomable-cognitive-landscape-of-ai

The Unfathomable Cognitive Landscape of AI

The age of disruptive innovation, which once captured our collective imagination and drove the engines of industry, seems almost nostalgic now. What we're witnessing today is a phenomenon far more intricate: stacked or compounded innovation.

Exponential advancements are layered on top of each other, each epoch-making in its own right. This goes beyond simple technological development; it challenges the very limits of human comprehension. Imagine not just machines that can imitate human behavior—this is old news.

The real conundrum is whether human minds can grasp the profound complexity these machines are starting to unfold. It's an inverse Turing Test of sorts, where the subject of examination is no longer the machine but humanity itself. The stakes are high; the consequences tear at the very fabric of humanity.