Saturday, April 29, 2023

Access is free.

https://www.databreaches.net/bakerhostetlers-9th-annual-data-security-incident-response-report/

BakerHostetler’s 9th annual Data Security Incident Response Report

BakerHostetler’s annual report is out, and as always, it is a great read because it provides statistics and analysis of the more than 1,100 data security incidents the law firm handled in 2022. Ted Kobus provides a bit of the history of the firm’s Digital Assets and Data Management Practice Group.

Go request access to the report.





Did OpenAI make substantial changes or merely promise to think about considering changes?

https://www.pogowasright.org/italy-restores-chatgpt-after-openai-responds-to-regulator/

Italy restores ChatGPT after OpenAI responds to regulator

Reuters reports:

The ChatGPT chatbot was reactivated in Italy after its maker OpenAI addressed issues raised by Italy’s data protection authority, the agency and the company confirmed on Friday.
Microsoft Corp-backed OpenAI took ChatGPT offline in Italy last month after the country’s data protection authority, also known as Garante, temporarily banned the chatbot and launched a probe over the artificial intelligence application’s suspected breach of privacy rules.
Garante had given OpenAI until Sunday to address its concerns before allowing the chatbot to start operating again in the country.
Last month, Garante said ChatGPT had an “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot.

Read more at The Express Tribune.

For the chronology and more details, see the Garante’s website. (EN)



(Related) Meanwhile, in the UK…

https://www.theregister.com/2023/04/28/online_safety_bill_age_checks/

Online Safety Bill age checks? We won't do 'em, says Wikipedia

Wikipedia won't be age-gating its services no matter what final form the UK's Online Safety Bill takes, two senior folks from nonprofit steward the Wikimedia Foundation said this morning.

The bill, for those who need a reminder, styles itself as world-leading legislation which aims to make the UK "the safest place in the world to be online" and has come under fire not only for its calls for age verification but also for wording that implies breaking encryption, asking providers to make content available for perusal by law enforcement, either before encryption or somehow, magically, during.

In a statement to national UK broadcaster the BBC this morning, Rebecca MacKinnon, vice president of Global Advocacy at Wikimedia, said that to perform such verification would "violate our commitment to collect minimal data about readers and contributors."





The privacy violations seem to have done nothing to slow the lawsuits.

https://www.pogowasright.org/the-first-wrongful-death-case-for-helping-a-friend-get-an-abortion/

The first “wrongful death” case for helping a friend get an abortion

Mary Tuma reports

“Your help means the world to me,” a grateful Brittni Silva texted her best friends, Jackie Noyola and Amy Carpenter, last July. “I’m so lucky to have y’all. Really.”
A month after the U.S. Supreme Court overturned Roe v. Wade, the Houston mother of two experienced an unplanned pregnancy with her now ex-husband and allegedly sought abortion care with the help of her friends.
[…]
Not only did Marcus Silva access the private conversations his ex-wife had with her friends, he also filed an unprecedented lawsuit in March accusing Carpenter, Noyola, and Texas abortion rights activist Aracely Garcia of wrongful death, alleging the trio “conspired” to help his ex-wife obtain medication to terminate her pregnancy with a self-managed abortion. Attorneys for Silva also hope to sue “into oblivion” the manufacturer of the abortion pills procured. The complaint, filed in state court in Galveston County, Texas, seeks a stunning $1 million in damages from each woman.

Read more at The Intercept.





Too good to be true?

https://www.newscientist.com/article/2371097-fluent-answers-from-ai-search-engines-are-more-likely-to-be-wrong/

Fluent answers from AI search engines are more likely to be wrong

Search engines powered by artificial intelligence, such as Microsoft’s Bing Chat, often provide answers that sound useful, and the more useful-sounding an answer is, the more likely it is to be wrong, researchers have found.

“In these current systems, accuracy is inversely correlated with perceived utility,” says Nelson Liu at Stanford University. “The things that look better end up being worse.”





Are we ready to toss out an anchor? An agency that governs ‘things Congress doesn’t understand?’

https://www.brookings.edu/blog/techtank/2023/04/28/artificial-intelligence-is-another-reason-for-a-new-digital-agency/

Artificial intelligence is another reason for a new digital agency

The torrid pace of artificial intelligence (AI) developments contrasts with the torpid processes for protecting the public interest impacted by the technology. Private and government oversight systems that were developed to deal with the industrial revolution are no match for the AI revolution.

AI oversight requires a methodology that is as revolutionary as the technology itself.

When confronted with the challenges of industrial technology, the American people responded with new concepts such as antitrust enforcement and regulatory oversight. Thus far, policymakers have failed to address the new realities of the digital revolution. Those realities only become more daunting with AI. The response to intelligent technology cannot repeat the regulatory cruise control we have experienced to date regarding digital platforms. Consumer-facing digital services, whether platforms such as Google, Facebook, Microsoft, Apple, and Amazon, or AI services (being led by many of the same companies), require a specialized and focused federal agency staffed by appropriately compensated experts.





Too silly to share?

https://www.makeuseof.com/cow-encryptor-turn-secret-documents-into-series-of-moos/

Cow-encryptor Turns Your Secret Documents Into a Series of Moos

The app accepts any file with valid UTF-8 contents, and transforms the text into a series of "moos". A new file is created with the ".cow" extension.

The encryption appears to be a simple substitution cipher, with capitalization variations in each "mooooo" corresponding to a different plaintext character: "moooooo" corresponds to "a", for example, and "moooooO" is "b".
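
If that reading is right, the scheme is a binary encoding rather than real encryption. Below is a toy sketch of the idea in Python; the exact mapping is my guess at how the capitalization could carry the bits (lowercase o = 0, uppercase O = 1, one "moo" per UTF-8 byte), not the app's confirmed format.

def moo_encode(text: str) -> str:
    # One "moo" per UTF-8 byte; the trailing o's spell the byte in binary.
    return " ".join(
        "m" + "".join("O" if bit == "1" else "o" for bit in f"{byte:08b}")
        for byte in text.encode("utf-8")
    )

def moo_decode(moos: str) -> str:
    # Reverse the mapping: o -> 0, O -> 1, then reassemble the bytes.
    return bytes(
        int(token[1:].replace("o", "0").replace("O", "1"), 2)
        for token in moos.split()
    ).decode("utf-8")

print(moo_decode(moo_encode("secret")))  # round-trips back to "secret"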



Friday, April 28, 2023

Perspective.

https://www.brookings.edu/blog/techtank/2023/04/25/weird-ai-understanding-what-nations-include-in-their-artificial-intelligence-plans/

WEIRD AI: Understanding what nations include in their artificial intelligence plans

In 2021 and 2022, the authors published a series of articles on how different countries are implementing their national artificial intelligence (AI) strategies. In these articles, we examined how different countries view AI and looked at their plans for evidence to support their goals. In the later articles, we examined who was winning and who was losing the race to national AI governance, weighed the importance of people skills versus technology skills, and concluded with what the U.S. needs to do to become competitive in this domain.

Since these publications, several key developments have occurred in national AI governance and international collaborations. First, one of our key recommendations was that the U.S. and India create a partnership to work together on a joint national AI initiative. Our argument was as follows: “…India produces far more STEM graduates than the U.S., and the U.S. invests far more in technology infrastructure than India does. A U.S.-India partnership eclipses China in both dimensions and a successful partnership could allow the U.S. to quickly leapfrog China in all meaningful aspects of A.I.” In early 2023, U.S. President Biden announced a formal partnership with India to do exactly what we recommended to counter the growing threat of China and its AI supremacy.

Second, as we observed in our prior paper, the U.S. federal government has invested in AI, but largely in a decentralized approach. We warned that this approach, while it may ultimately develop the best AI solution, requires a long ramp up and hence may not achieve all its priorities.

Finally, we warned that China is already in the lead on the achievement of its national AI goals and predicted that it would continue to surpass the U.S. and other countries. News has now come that China is planning on doubling its investment in AI by 2026, and that the majority of the investment will be in new hardware solutions. The U.S. State Department also is now reporting that China leads the U.S. in 37 out of 44 key areas of AI. In short, China has expanded its lead in most AI areas, while the U.S. is falling further and further behind.





How LLMs work.

https://www.nytimes.com/interactive/2023/04/26/upshot/gpt-from-scratch.html

Watch an A.I. Learn to Write by Reading Nothing but Jane Austen

The core of an A.I. program like ChatGPT is something called a large language model: an algorithm that mimics the form of written language.

While the inner workings of these algorithms are notoriously opaque, the basic idea behind them is surprisingly simple. They are trained by going through mountains of internet text, repeatedly guessing the next few letters and then grading themselves against the real thing.

To show you what this process looks like, we trained six tiny language models starting from scratch. We’ve picked one trained on the complete works of Jane Austen, but you can choose a different path by selecting an option below. (And you can change your mind later.)
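
As a toy illustration of that training loop, here is a bigram character model in Python: it tallies which character actually followed which (the "grading against the real thing" step), then generates text by repeatedly guessing the next character from those tallies. It is far cruder than the tiny neural models the Times trained, and the corpus filename is a placeholder.

import random
from collections import Counter, defaultdict

text = open("austen.txt", encoding="utf-8").read()  # placeholder corpus file

# Training: for every character, count what actually came next.
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def generate(seed: str = "T", length: int = 200) -> str:
    out = seed
    for _ in range(length):
        nexts = counts[out[-1]] or {" ": 1}  # fall back if character never seen
        chars, weights = zip(*nexts.items())
        out += random.choices(chars, weights=weights)[0]  # guess the next character
    return out

print(generate())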



Thursday, April 27, 2023

Interesting, but everyone involved needs to know what ‘filters’ are in place.

https://www.engadget.com/palantir-shows-off-an-ai-that-can-go-to-war-180513781.html

Palantir shows off an AI that can go to war

… “LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way,” the video begins. To do so, AIP's operation is based on three "key pillars," the first being that AIP will deploy across a classified system, able to parse in real time both classified and non-classified data, ethically and legally. The company did not elaborate on how that would work. The second pillar is that users will be able to toggle the scope and actions of every LLM and asset on the network. The AIP itself will generate a secure digital record of the entire operation, "crucial for mitigating significant legal, regulatory, and ethical risks in sensitive and classified settings," according to the demo. The third pillar is AIP's "industry-leading guardrails" to prevent the system from taking unauthorized actions.

A "human in the loop" to prevent such actions does exist in Palantir's scenario, though from the video, the "operator" appears to do little more than nod along with whatever AIP suggests. The demo also did not elaborate on what steps are being taken to prevent the LLMs that the system relies on from "hallucinating" pertinent facts and details.





A new can of worms! Is this something Trump will use?

https://www.reuters.com/legal/elon-or-deepfake-musk-must-face-questions-autopilot-statements-2023-04-26/

Elon, or deepfake? Musk must face questions on Autopilot statements

A California judge on Wednesday ordered Tesla CEO Elon Musk to be interviewed under oath about whether he made certain statements regarding the safety and capabilities of the carmaker’s Autopilot features.

… Musk will likely be asked about a 2016 statement cited by plaintiffs, in which he allegedly said: "A Model S and Model X, at this point, can drive autonomously with greater safety than a person. Right now.”

Tesla opposed the request in court filings, arguing that Musk cannot recall details about statements.

In addition Musk, “like many public figures, is the subject of many ‘deepfake’ videos and audio recordings that purport to show him saying and doing things he never actually said or did,” Tesla said.

… “Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune,” Pennypacker wrote, adding that such arguments would allow Musk and other famous people “to avoid taking ownership of what they did actually say and do.”





What better authority?

https://www.bespacific.com/role-of-chatgpt-in-law-according-to-chatgpt/

Role of chatGPT in Law: According to chatGPT

Biswas, Som, Role of chatGPT in Law: According to chatGPT (March 30, 2023). Available at SSRN: https://ssrn.com/abstract=4405398 or http://dx.doi.org/10.2139/ssrn.4405398

“ChatGPT is a language model developed by OpenAI that can provide support to paralegals and legal assistants in various tasks. Some of the uses of ChatGPT in the legal field include legal research, document generation, case management, document review, and client communication. However, ChatGPT also has limitations that must be taken into consideration, such as limited expertise, a lack of understanding of context, the risk of bias in its responses, the potential for errors, and the fact that it cannot provide legal advice. While ChatGPT can be a valuable tool for paralegals and legal assistants, it is important to understand its limitations and use it in conjunction with the expertise and judgment of licensed legal professionals. The author acknowledges asking chatGPT questions regarding its uses for law. Some of the uses that it states are possible now and some are potentials for the future. The author has analyzed and edited the replies of chatGPT.”





Outsider trading?

https://www.bloomberg.com/news/articles/2023-04-26/jpmorgan-s-ai-puts-25-years-of-federal-reserve-talk-into-a-hawk-dove-score?leadSource=uverify%20wall

JPMorgan Creates AI Model to Analyze 25 Years of Fed Speeches

A week before the Federal Reserve’s next meeting, JPMorgan Chase & Co. unveiled an artificial intelligence-powered model that aims to decipher the central bank’s messaging and uncover potential trading signals.

Based off of Fed statements and central-banker speeches going back 25 years, the firm’s economists including Joseph Lupton employed a ChatGPT-based language model to detect the tenor of policy signals, effectively rating them on a scale from easy to restrictive in what JPMorgan is calling the Hawk-Dove Score.

Plotting the index against a range of asset performances, the economists found that the AI tool can be useful in potentially predicting changes in policy — and give off tradeable signals. For instance, they discovered that when the model shows a rise in hawkishness among Fed speakers between meetings, [Do they keep the Fed speakers under surveillance? Bob] the next policy statement has gotten more hawkish and yields on one-year government bonds advanced.
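
JPMorgan has not published the model, but the basic recipe is easy to sketch: rate each sentence of a speech on a dovish-to-hawkish scale and average the ratings. A minimal sketch using the openai Python client follows; the model choice and prompt wording are my assumptions, not JPMorgan's.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def hawk_dove_score(sentences: list[str]) -> float:
    # Average per-sentence ratings into one score for the speech.
    scores = []
    for s in sentences:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative choice, not JPMorgan's model
            messages=[{
                "role": "user",
                "content": (
                    "Rate the following Federal Reserve statement on a scale "
                    "from -1 (dovish/easy) to 1 (hawkish/restrictive). "
                    "Reply with a number only.\n\n" + s
                ),
            }],
        )
        scores.append(float(reply.choices[0].message.content.strip()))
    return sum(scores) / len(scores)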





A report from Stanford and Georgetown.

https://fsi9-prod.s3.us-west-1.amazonaws.com/s3fs-public/2023-04/adversarial_machine_learning_and_cybersecurity_v7_pdf_1.pdf

Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications

… This report is meant to accomplish two things. First, it provides a high-level discussion of AI vulnerabilities, including the ways in which they are disanalogous to other types of vulnerabilities, and the current state of affairs regarding information sharing and legal oversight of AI vulnerabilities. Second, it attempts to articulate broad recommendations as endorsed by the majority of participants at the workshop.



Wednesday, April 26, 2023

I don’t think “Move fast and break things” was meant to include breaking the law. (I might be wrong.)

https://www.technologyreview.com/2023/04/25/1072177/a-cambridge-analytica-style-scandal-for-ai-is-coming/

A Cambridge Analytica-style scandal for AI is coming

Can you imagine a car company putting a new vehicle on the market without built-in safety features? Unlikely, isn’t it? But what AI companies are doing is a bit like releasing race cars without seatbelts or fully working brakes, and figuring things out as they go.

This approach is now getting them in trouble. For example, OpenAI is facing investigations by European and Canadian data protection authorities for the way it collects personal data and uses it in its popular chatbot ChatGPT. Italy has temporarily banned ChatGPT, and OpenAI has until the end of this week to comply with Europe’s strict data protection regime, the GDPR. But in my story last week, experts told me it will likely be impossible for the company to comply, because of the way data for AI is collected: by hoovering up content off the internet.





Will we copy this or ignore it?

https://www.computerworld.com/article/3694571/amazon-facebook-twitter-on-eu-list-of-companies-facing-dsa-content-rules.html#tk.rss_all

Amazon, Facebook, Twitter on EU list of companies facing DSA content rules

The EU Commission has announced 19 large online platforms and search engines that will face new content moderation rules under the Digital Services Act.

The legislation, passed last year, introduced a specific regime for Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), all of which have more than 45 million users in the EU.

Amazon Store, Facebook, Instagram, LinkedIn, Snapchat, TikTok, Twitter, Wikipedia and YouTube are just some of the 17 companies the EU Commission designated as VLOPs in its announcement Tuesday. The VLOSEs are Bing and Google Search.

The companies listed by the Commission will be required to comply with the full set of new obligations under the DSA by August 25. Those obligations include: various features meant to empower users, such as the right to opt-out from recommendation systems based on profiling; strengthening protection of minors; more diligent content moderation policies to help reduce disinformation; and greater transparency and accountability.

By the August deadline, the designated platforms and search engines will need to show the EU Commission that they have successfully adapted their systems, resources, and processes to become compliant, set up an independent system of compliance, and have carried out and reported their first annual risk assessment to the Commission.

Failure to comply with the DSA will result in fines of up to 6% of a company’s global turnover.





If your clients are using AI, you have to be able to audit AI. (This seems to be a late start and perhaps less effort than I expected.)

https://www.wsj.com/articles/pricewaterhousecoopers-to-pour-1-billion-into-generative-ai-cac2cedd

PricewaterhouseCoopers to Pour $1 Billion Into Generative AI

PricewaterhouseCoopers LLP plans to invest $1 billion in generative artificial intelligence technology in its U.S. operations over the next three years, working with Microsoft Corp. and ChatGPT-maker OpenAI to automate aspects of its tax, audit and consulting services.

… For PwC, the goal isn’t only to develop and embed generative AI into its own technology stack and client-services platforms, but also to advise other companies on how best to use generative AI while helping them build those tools, said Mohamed Kande, PwC’s vice chair and co-leader of U.S. consulting solutions and global advisory leader.





With links to several of the presentations…

https://fpf.org/blog/fpf-at-the-2023-iapp-global-privacy-summit/

FPF AT THE 2023 IAPP GLOBAL PRIVACY SUMMIT

Earlier this month, IAPP held its annual Global Privacy Summit (GPS) in Washington, DC. FPF played a major role in bringing together a team of seven renowned privacy experts on 11 panel discussions and varying peer-to-peer roundtables ranging from U.S. privacy law to AI tech and regulation to regional contractual frameworks for data transfers.





Change. Ready or not, change.

https://www.ft.com/content/dc556ab8-9661-4d93-8211-65a44204f358

The rapid rise of generative AI threatens to upend US patent system

Intellectual property laws cannot handle the possibility that artificial intelligence could invent things on its own

When members of the US supreme court refused this week to hear a groundbreaking case that sought to have an artificial intelligence system named as the inventor on a patent, it appeared to lay to rest a controversial idea that could have transformed the intellectual property field.

The justices’ decision, in the case of Thaler vs Vidal, leaves in place two lower court rulings that only “natural persons” can be awarded patents. The decision dealt a blow to claims that intelligent machines are already matching human creativity in important areas of the economy and deserve similar protections for their ideas.

But while the court’s decision blocked a potentially radical extension of patent rights, it has done nothing to calm growing worries that AI is threatening to upend other aspects of intellectual property law.

The US Patent and Trademark Office opened hearings on the issue this week, drawing warnings that AI-fuelled inventions might stretch existing understandings of how the patent system works and lead to a barrage of litigation.



(Related)

https://patentlyo.com/patent/2023/04/patenting-inventions-contributions.html

Guidance on Patenting Inventions with AI Contributions

The following are my remarks given on April 25, 2023 to the USPTO as part of their AI listening session:





Tools & Techniques. (Because we might need them…)

https://searchengineland.com/ai-chatgpt-content-detectors-395957

16 of the best AI and ChatGPT content detectors compared

We tested the top detection tools for AI-generated content. Here's what they are good and bad at, plus what to expect when using them.



(Related) Something useful?

https://www.freetech4teachers.com/2023/04/a-round-up-of-15-ai-resources-created.html

A Round-up of 15 AI Resources Created Without Using AI

… I've manually assembled the following collection of AI tools for teachers and related AI resources.





Tools & Techniques. Be careful about revealing sensitive information...

https://www.zdnet.com/article/this-ai-chatbot-can-sum-up-any-pdf-and-any-question-you-have-about-it/

This AI chatbot can sum up any PDF and answer any question you have about it

Regardless of whether it is a 90-page slide deck or a lengthy research paper, PDFs in the classroom or workplace are often tedious to wade through. ChatPDF is here to help.

As the name implies, ChatPDF allows you to chat with your PDF.

ChatPDF runs on OpenAI's GPT-3.5 large language model and can answer any question you have about the PDF you upload. The chatbot can even give you a full summary of the PDF without you having to read it.

Free plan users are limited to three PDF uploads of 120 pages or less a day. However, if you need more access, you can upgrade to a plus plan for $5 per month.
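
The article does not describe ChatPDF's internals, but a bare-bones version of the same idea is straightforward: extract the PDF's text and hand it to a chat model together with the user's question. A sketch assuming the pypdf and openai packages; real tools chunk the document and retrieve the relevant passages instead of truncating.

from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()

def ask_pdf(path: str, question: str) -> str:
    # Pull the raw text out of every page of the PDF.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            # Naive truncation to fit the context window; production tools
            # split the document into chunks and retrieve the relevant ones.
            {"role": "system", "content": "Answer using only this document:\n" + text[:12000]},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(ask_pdf("paper.pdf", "Summarize this document in three sentences."))  # hypothetical file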



(Related)

https://techcrunch.com/2023/04/25/news-app-artifact-can-now-summarize-stories-using-ai-including-in-fun-styles/?guccounter=1

News app Artifact can now summarize stories using AI, including in fun styles

Artifact, the personalized news aggregator from Instagram’s founders, is further embracing AI with the launch of a new feature that will now summarize news articles for you. The company announced today it’s introducing a tool that generates article summaries with a tap of a button, in order to give readers the ability to understand the “high-level points” of an article before they read. For a little extra fun, the feature can also be used to summarize news in a certain style — like “explain like I’m five,” [Should be popular with Congress. Bob] in the style of Gen Z speech, or using only emojis, for example.

These styles aren’t really meant to be useful; they’re just there to add a little whimsy to the feature and potentially encourage users to try the new feature.

To use the AI summaries feature, tap on the “Aa” button found on the menu above an individual news article; then tap the new “Summarize” option. The company confirmed it’s leveraging OpenAI’s technologies via its API to generate text summaries.
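
TechCrunch does not say what prompts Artifact uses; presumably each style simply swaps a different instruction in front of the article text before it is sent to the API. A guess at what that mapping might look like, with all wording invented for illustration:

STYLES = {
    "default": "Summarize the key points of this article in three sentences.",
    "eli5": "Summarize this article as if explaining it to a five-year-old.",
    "genz": "Summarize this article in casual Gen Z slang.",
    "emoji": "Summarize this article using only emojis.",
}

def summary_prompt(article_text: str, style: str = "default") -> str:
    # The chosen style becomes the instruction; the article is appended.
    return STYLES[style] + "\n\n" + article_text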

However, the company cautions users that the feature should not replace actually reading the news, as AI isn’t perfect.



Tuesday, April 25, 2023

Imagine AI regulations written by an AI...

https://fedscoop.com/congress-gets-40-chatgpt-plus-licenses/

Congress gets 40 ChatGPT Plus licenses to start experimenting with generative AI

Congressional offices have begun using OpenAI’s popular and controversial generative AI tool ChatGPT to experiment with the technology internally, a senior official within the Office of the Chief Administrative Officer’s House Digital Services said Friday.

The House recently created a new AI working group for staff to test and share new AI tools in the congressional office environment and now the House of Representatives‘ digital service has obtained 40 licenses for ChatGPT Plus, which were distributed earlier this month.

The purchase of the licenses comes amid widespread debate over how artificial intelligence technology should be used and regulated across the private sector and within government. This represents one of the earliest examples of ChatGPT being used as part of the policymaking process.





They are coming faster and faster. When will Congress notice?

https://fpf.org/blog/tenn-makes-nine-tennessee-information-protection-act-set-to-become-newest-comprehensive-state-privacy-law/

TENN. MAKES NINE? ‘TENNESSEE INFORMATION PROTECTION ACT’ SET TO BECOME NEWEST COMPREHENSIVE STATE PRIVACY LAW

On Friday April 21, Nashville lawmakers approved the Tennessee Information Protection Act (TIPA) following unanimous votes. Tennessee now joins Iowa, Indiana, and Montana as the fourth state in 2023 to advance baseline privacy legislation governing the collection, use, and transfer of consumer data.

TIPA is closely modeled on the Virginia Consumer Data Protection Act (VCDPA) that was enacted in March 2021 and went into effect on January 1 of this year.





Welcome to my Social Media (as long as you love me and my agenda)

https://www.reuters.com/legal/us-supreme-court-decide-if-public-officials-can-block-critics-social-media-2023-04-24/

U.S. Supreme Court to decide if public officials can block critics on social media

The U.S. Supreme Court, exploring free speech rights in the social media era, on Monday agreed to consider whether the Constitution's First Amendment bars government officials from blocking their critics on platforms like Facebook and Twitter.





Not intuitive and probably not the right approach.

https://www.makeuseof.com/tag/machine-learning-algorithms/

What Is Machine Learning? Intelligent Algorithms Explained

… Machine learning is a branch of computer science that focuses on giving AI the ability to learn tasks in a way that mimics human learning. This includes developing abilities, such as image recognition, without programmers explicitly coding AI to do these things. Instead, the AI is able to use training data to identify patterns and make predictions.
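
As a concrete instance of that last sentence: rather than hand-coding rules for recognizing handwritten digits, you give a model labeled examples and let it find the patterns itself. A minimal sketch using scikit-learn's bundled digits dataset:

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 pixel images and their labels, 0-9

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)  # no digit-specific rules coded anywhere
model.fit(X_train, y_train)                # the model learns patterns from examples
print(model.score(X_test, y_test))         # accuracy on images it has never seen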





I need all the guidance I can get.

https://www.bespacific.com/how-to-use-ai-to-do-practical-stuff-a-new-guide-2/

How to use AI to do practical stuff: A new guide

One Useful Thing – Ethan Mollick. People often ask me how to use AI. Here’s an overview with lots of links. “We live in an era of practical AI, but many people haven’t yet experienced it, or, if they have, they might have wondered what the big deal is. Thus, this guide. It is a modified version of one I put out for my students earlier in the year, but a lot has changed. It is an overview of ways to get AI to do practical things.

Why people keep missing what AI can do. Large Language Models like ChatGPT are extremely powerful, but are built in a way that encourages people to use them in the wrong way. When I talk to people who tried ChatGPT but didn’t find it useful, I tend to hear a similar story. The first thing people try to do with AI is what it is worst at: using it like Google. Tell me about my company, look up my name, and so on. These answers are terrible. Many of the models are not connected to the internet, and even the ones that are make up facts. AI is not Google. So people leave disappointed.

Second, they may try something speculative, using it like Alexa, and asking a question, often about the AI itself. Will AI take my job? What do you like to eat? These answers are also terrible. With one exception, most of the AI systems have no personality, are not programmed to be fun like Alexa, and are not an oracle for the future. So people leave disappointed.

If people still stick around, they start to ask more interesting questions, either for fun or based on half-remembered college essay prompts: Write an article on why ducks are the best bird. Why is Catcher in the Rye a good novel? These are better. As a result, people see blocks of text on a topic they don’t care about very much, and it is fine. Or they see text on something they are an expert in, and notice gaps. But it is not that useful, or incredibly well-written. They usually quit around now, convinced that everyone is going to use this to cheat at school, but not much else.

All of these uses are not what AI is actually good at, and how it can be helpful. They can blind you to the real power of these tools. I want to try to show you some of why AI is powerful, in ways both exciting and anxiety-producing.”



Monday, April 24, 2023

We respect your privacy except when it interferes with our surveillance.

https://www.schneier.com/blog/archives/2023/04/uk-threatens-end-to-end-encryption.html

UK Threatens End-to-End Encryption

In an open letter, seven secure messaging apps—including Signal and WhatsApp—point out that the UK’s Online Safety Bill could destroy end-to-end encryption:

As currently drafted, the Bill could break end-to-end encryption, opening the door to routine, general and indiscriminate surveillance of personal messages of friends, family members, employees, executives, journalists, human rights activists and even politicians themselves, which would fundamentally undermine everyone’s ability to communicate securely.
The Bill provides no explicit protection for encryption, and if implemented as written, could empower OFCOM to try to force the proactive scanning of private messages on end-to-end encrypted communication services – nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users.
In short, the Bill poses an unprecedented threat to the privacy, safety and security of every UK citizen and the people with whom they communicate around the world, while emboldening hostile governments who may seek to draft copy-cat laws.

Both Signal and WhatsApp have said that they will cease services in the UK rather than compromise the security of their users world-wide.





Mañana?

https://theconversation.com/ai-has-social-consequences-but-who-pays-the-price-tech-companies-problem-with-ethical-debt-203375

AI has social consequences, but who pays the price? Tech companies’ problem with ‘ethical debt’

As public concern about the ethical and social implications of artificial intelligence keeps growing, it might seem like it’s time to slow down. But inside tech companies themselves, the sentiment is quite the opposite. As Big Tech’s AI race heats up, it would be an “absolutely fatal error in this moment to worry about things that can be fixed later,” a Microsoft executive wrote in an internal email about generative AI, as The New York Times reported.

In other words, it’s time to “move fast and break things,” to quote Mark Zuckerberg’s old motto. Of course, when you break things, you might have to fix them later – at a cost.

In software development, the term “technical debt” refers to the implied cost of making future fixes as a consequence of choosing faster, less careful solutions now. Rushing to market can mean releasing software that isn’t ready, knowing that once it does hit the market, you’ll find out what the bugs are and can hopefully fix them then.

However, negative news stories about generative AI tend not to be about these kinds of bugs. Instead, much of the concern is about AI systems amplifying harmful biases and stereotypes and students using AI deceptively. We hear about privacy concerns, people being fooled by misinformation, labor exploitation and fears about how quickly human jobs may be replaced, to name a few. These problems are not software glitches. Realizing that a technology reinforces oppression or bias is very different from learning that a button on a website doesn’t work.

As a technology ethics educator and researcher, I have thought a lot about these kinds of “bugs.” What’s accruing here is not just technical debt, but ethical debt.





Tools & Techniques. This type of app could be applied to anything you wish to identify. Think bugs, rare books, etc.

https://www.cbsnews.com/news/merlin-bird-id-app-cornell-lab-of-ornithology/

What's that bird? Merlin Bird ID app works magic

The app allows users to identify birds by a picture, or by songs and calls, and can create a digital scrapbook of the birds you discover.

Originally launched in 2014, early versions of the app could ID 400 North American species. Today, the app (available free for both iPhone and Android) can identify more than 6,000 bird species across six continents.

Apple: Merlin Bird ID runs on iPhones and iPads with iOS 15 or newer, and M1/M2-equipped Apple computers. Download app here.

Android: Merlin Bird ID runs on devices with Android 6 or newer. Download app here.