Saturday, February 25, 2023

Is the loss of control inevitable?

https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/

The Future of Human Agency

Experts are split about how much control people will retain over essential decision-making as digital systems and AI spread. They agree that powerful corporate and government authorities will expand the role of AI in people’s daily lives in useful ways. But many worry these systems will diminish individuals’ ability to control their choices.





Some thoughts, but are they the right thoughts?

https://openai.com/blog/planning-for-agi-and-beyond/

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.

AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.

On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.





Oh, the horror!

https://www.washingtonpost.com/technology/2023/02/24/woke-ai-chatgpt-culture-war/

The right’s new culture-war target: ‘Woke AI’

… “This is going to be the content moderation wars on steroids,” said Stanford law professor Evelyn Douek, an expert in online speech. “We will have all the same problems, but just with more unpredictability and less legal certainty.”

After ChatGPT wrote a poem praising President Biden, but refused to write one praising former president Donald Trump, the creative director for Sen. Ted Cruz (R-Tex.), Leigh Wolf, lashed out.

“The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable,” Wolf tweeted on Feb. 1.



(Related) On the other hand…

https://www.wsj.com/articles/chatgpt-heralds-an-intellectual-revolution-enlightenment-artificial-intelligence-homo-technicus-technology-cognition-morality-philosophy-774331c6

ChatGPT Heralds an Intellectual Revolution

Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment.

A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.





These might also apply to student writing.

https://www.makeuseof.com/why-content-writers-cant-rely-ai-chatbots/

8 Reasons Why Content Writers Can't Rely on AI Chatbots

Although convenient, chatbots and writing tools aren’t perfect. We've listed the top reasons why blindly relying on them compromises the quality of your articles and diminishes your credibility.



Friday, February 24, 2023

Interesting.

https://www.psychologytoday.com/us/blog/hot-thought/202302/why-is-chatgpt-so-smart-and-so-stupid

Why Is ChatGPT So Smart and So Stupid?

Along with millions of users, I have been experimenting with ChatGPT, which is OpenAI’s public version of its large language model GPT-3. In answers to hard questions, ChatGPT sometimes delivers insightful answers that would be a credit to an excellent Ph.D. student. Other times, however, it makes idiotic and obnoxious mistakes. I give reasons why ChatGPT is sometimes so smart, contrasting reasons why it is sometimes so stupid, and lessons to be learned from it about the differences between human and artificial intelligence.



(Related)

https://www.bespacific.com/section-230-wont-protect-chatgpt/

Section 230 Won’t Protect ChatGPT

Lawfare, Matt Perault: “The emergence of products fueled by generative artificial intelligence (AI) such as ChatGPT will usher in a new era in the platform liability wars. Previous waves of new communication technologies—from websites and chat rooms to social media apps and video sharing services—have been shielded from legal liability for content posted on their platforms, enabling these digital services to rise to prominence. But with products like ChatGPT, critics of that legal framework are likely to get what they have long wished for: a regulatory model that makes tech platforms responsible for online content. The question is whether the benefits of this new reality outweigh its costs. Will this regulatory framework minimize the volume and distribution of harmful and illegal content? Or will it stunt the growth of ChatGPT and other large language models (LLMs), litigating them out of mainstream use before their capacity to have a transformational impact on society can be understood? Will it tilt the playing field toward larger companies that can afford to hire massive teams of lawyers and bear steep legal fees, making it difficult for smaller companies to compete? In this article, I explain why current speech liability protections do not apply to certain generative AI use cases, explore the implications of this legal exposure for the future deployment of generative AI products, and provide an overview of options for regulators moving forward.”





What does this suggest? High resolution cameras to see a face from 30,000 feet. A really good database of terrorist faces?

https://www.newscientist.com/article/2360475-us-air-force-is-giving-military-drones-the-ability-to-recognise-faces/

US Air Force is giving military drones the ability to recognise faces

The US Air Force has completed a project to develop face recognition software for autonomous drones, sparking concerns that individuals could be targeted and killed





Describing a new and useful skill. How to talk to an AI.

https://theconversation.com/how-to-perfect-your-prompt-writing-for-chatgpt-midjourney-and-other-ai-generators-198776

How to perfect your prompt writing for ChatGPT, Midjourney and other AI generators

Generative AI is having a moment. ChatGPT and art generators such as DALL-E 2, Stable Diffusion and Midjourney have proven their potential, and now millions are wracking their brains over how to get their outputs to look something like the vision in their head.

This is the goal of prompt engineering: the skill of crafting an input to deliver a desired result from generative AI.
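The article’s definition can be made concrete with a small sketch. The helper below is hypothetical (not from the article): it assembles a prompt from the elements most prompt-writing guides recommend — a role, a concrete task, an output format, and explicit constraints — which is the difference between a vague request and an engineered one.

```python
def build_prompt(role, task, output_format=None, constraints=None):
    """Assemble a structured prompt from the pieces prompt-engineering
    guides commonly recommend: a role, a concrete task, an output
    format, and explicit constraints."""
    parts = [f"You are {role}.", task]
    if output_format:
        parts.append(f"Respond as {output_format}.")
    for c in constraints or []:
        parts.append(f"Constraint: {c}")
    return "\n".join(parts)

# A vague prompt vs. a refined one for the same request.
vague = "Write about dogs."
refined = build_prompt(
    role="a veterinary science writer",
    task="Write a 150-word overview of canine nutrition for new owners.",
    output_format="three short paragraphs",
    constraints=["avoid brand names", "do not cite unsourced statistics"],
)
print(refined)
```

The same template works for image generators, where the “constraints” become style, medium, and composition terms.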





New technology means new ways to communicate.

https://www.makeuseof.com/federal-judge-rules-emojis-count-financial-advice/

Federal Judge Rules That Emojis Count as Financial Advice

Just to prove we're living in an interesting timeline, a federal judge has ruled that using certain emojis while tweeting specifically signals to users that they should expect a financial return on their investments.

It's an extreme ruling that may affect crypto-Twitter, where you regularly see tweets adorned with profit indicator emojis like the Rocket, the Chart with Upward Trend, and Money Bag to catch the interest of would-be investors.





This article should have been written by an AI. (Lawyers being obsolete and all…)

https://www.bespacific.com/teaching-to-the-tech-law-schools-and-the-duty-of-technology-competence/

Teaching to the Tech: Law Schools and the Duty of Technology Competence

Brescia, Raymond H., Teaching to the Tech: Law Schools and the Duty of Technology Competence (February 16, 2023). Washburn Law Journal, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4361552

As a result of a wide range of emerging technologies, the American legal profession is at a critical inflection point. Some may argue that lawyers face dramatic threats not only to their business models but also to their very usefulness in the face of new technologies that may mean some form of legal guidance will be available to virtually every American with a little bit of computer savvy and access to digital technologies. At the same time, in recent years, the profession has largely imposed upon itself a duty of technology competence, which imposes an array of obligations regarding the use and proliferation of new practice technologies. Since lawyers are obligated to maintain this duty of technology competence, law schools should also have an obligation to teach technology competence as a core professional skill. Even with the significant changes that are likely afoot in the legal profession on account of the emergence of new technologies, a duty on lawyers to maintain technology competence, and the likely burden on law schools to prepare students for it, the precise contours of this duty of technology competence are themselves hardly defined. To understand the full scope and potential consequences of the likely impact of technologies on the American legal profession, we should consider another point in its history, another inflection point, where technology had dramatic effects on the practice of law: the last decades of the nineteenth century. Then, technology impacted all aspects of practice—not only the means by which lawyers practiced their craft, but also the type of work they did and the subject matter of that work. In this Essay, I explore the contours of a robust duty of technology competence, what I call a thick version of that duty. As part of this exploration, I describe efforts of law schools from across the country that are teaching different aspects of this broader duty. 
I also attempt to set forth a program for law schools moving forward that will impart in all law students a muscular version of technology competence. Such a version will prepare them to practice not just today, but also tomorrow and for the rest of their professional lives.





To fill that spare time…

https://mashable.com/uk/deals/best-free-online-courses-from-mit

The best online MIT courses available for free this week

TL;DR: You can find a wide range of online courses from MIT on edX, covering topics like machine learning, programming, entrepreneurship, and more. Some of the best examples of these courses are available for free for a limited time.



Thursday, February 23, 2023

If this is a demonstration of capability, I wonder what percentage of food distribution could be shut down at one time?

https://www.databreaches.net/cyberattack-on-food-giant-dole-temporarily-shuts-down-north-america-production-company-memo-says/

Cyberattack on food giant Dole temporarily shuts down North America production, company memo says

Sean Lyngaas reports:

A cyberattack earlier this month forced produce giant Dole to temporarily shut down production plants in North America and halt food shipments to grocery stores, according to a company memo about the incident obtained by CNN.

The previously unreported hack — which a source familiar with the incident said was ransomware — led some grocery shoppers to complain on Facebook in recent days that store shelves were missing Dole-made salad kits.

Read more at CNN.





Sounds like an attempt to establish precedent?

https://www.theregister.com/2023/02/23/covington_sec_amicus/

Lawyers join forces to fight common enemy: The SEC and its probes into cyber-victims

More than 80 law firms say they are "deeply troubled" by the US Securities and Exchange Commission's demand that Covington & Burling hand over names of its clients whose information was stolen by Chinese state-sponsored hackers.

In an amicus brief filed this week, 83 firms with a total of more than 50,000 attorneys employed backed their fellow lawyers in Covington's ongoing battle with America's financial watchdog.

The government agency has put Covington in an impossible situation, asking the law firm to breach attorney-client privilege by identifying customers involved in the cyberattack, and doesn't even have a good reason outside of "mere curiosity" for doing so, the attorneys argued in the friends of the court filing.

"Not only would the SEC breach well-established principles of confidentiality in the service of this fishing expedition, it would turn attorneys into witnesses against their own clients, while offering no guarantees that it will not disseminate the information to other parts of the government, the press, and the public," the court documents [PDF] say.





I’m sure Congress should be watching…

https://thenextweb.com/news/predictive-policing-project-shows-even-eu-lawmakers-can-be-targets

Predictive policing project shows even EU lawmakers can be targets

Predictive policing has exposed a new group of future criminals: MEPs.

A new testing system has spotlighted five EU politicians as “at risk” of committing future crimes. Luckily for them, it’s not a tool that’s used by law enforcement, but one designed to highlight the dangers of such systems.

The project is the brainchild of Fair Trials, a criminal justice watchdog. The NGO is campaigning for a ban on predictive policing, which uses data analytics to forecast when and where crimes are likely to happen — and who may commit them.





Again, the answer is in the question.

https://www.axios.com/2023/02/22/chatgpt-prompt-engineers-ai-job

AI's rise generates new job title: Prompt engineer

… “Writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language,” Sam Altman, CEO of ChatGPT creator OpenAI, said on Twitter Monday.

When prompted to define “prompt engineering,” ChatGPT itself told Axios that “effective prompt engineering is critical for generating high-quality outputs from generative AI models, as it can help ensure that the model generates content that is relevant, coherent, and consistent with the desired output.”





We need to be ready.

https://www.schneier.com/blog/archives/2023/02/cyberwar-lessons-from-the-war-in-ukraine.html

Cyberwar Lessons from the War in Ukraine

The Aspen Institute has published a good analysis of the successes, failures, and absences of cyberattacks as part of the current war in Ukraine: “The Cyber Defense Assistance Imperative: Lessons from Ukraine.”

Its conclusion:

Cyber defense assistance in Ukraine is working. The Ukrainian government and Ukrainian critical infrastructure organizations have better defended themselves and achieved higher levels of resiliency due to the efforts of CDAC and many others. But this is not the end of the road—the ability to provide cyber defense assistance will be important in the future. As a result, it is timely to assess how to provide organized, effective cyber defense assistance to safeguard the post-war order from potential aggressors.
The conflict in Ukraine is resetting the table across the globe for geopolitics and international security. The US and its allies have an imperative to strengthen the capabilities necessary to deter and respond to aggression that is ever more present in cyberspace. Lessons learned from the ad hoc conduct of cyber defense assistance in Ukraine can be institutionalized and scaled to provide new approaches and tools for preventing and managing cyber conflicts going forward.

I am often asked why there weren’t more successful cyberattacks by Russia against Ukraine. I generally give four reasons: (1) Cyberattacks are more effective in the “grey zone” between peace and war, and there are better alternatives once the shooting and bombing starts. (2) Setting these attacks up takes time, and Putin was secretive about his plans. (3) Putin was concerned about attacks spilling outside the war zone, and affecting other countries. (4) Ukrainian defenses were good, aided by other countries and companies. This paper gives a fifth reason: they were technically successful, but keeping them out of the news made them operationally unsuccessful.





Perspective.

https://www.science.org/content/article/scientists-explore-ai-written-text-journals-hammer-policies

As scientists explore AI-written text, journals hammer out policies

Many ask authors to disclose use of ChatGPT and other generative artificial intelligence





I must admit to being Wally-esque.

https://dilbert.com/strip/2023-02-23



Wednesday, February 22, 2023

Is incremental change over time enough?

https://theconversation.com/war-in-ukraine-accelerates-global-drive-toward-killer-robots-198725

War in Ukraine accelerates global drive toward killer robots

The U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by an update to a Department of Defense directive. The update, released Jan. 25, 2023, is the first in a decade to focus on artificial intelligence autonomous weapons. It follows a related implementation plan released by NATO on Oct. 13, 2022, that is aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.”

Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in Ukraine and Nagorno-Karabakh: Weaponized artificial intelligence is the future of warfare.

“We know that commanders are seeing a military value in loitering munitions in Ukraine,” Richard Moyes, director of Article 36, a humanitarian organization focused on reducing harm from weapons, told me in an interview. These weapons, which are a cross between a bomb and a drone, can hover for extended periods while waiting for a target. For now, such semi-autonomous missiles are generally being operated with significant human control over key decisions, he said.



(Related)

https://nymag.com/intelligencer/2023/02/on-with-kara-swisher-trae-stephens-on-autonomous-warfare-ai.html

Trae Stephens on the Ethics of AI Warfare Kara Swisher talks to the Anduril co-founder about autonomous weapons and tech as deterrence.

Artificial intelligence and machine learning may suddenly seem to be everywhere, but that’s not true in the defense sector, despite the growing ubiquitousness of drone warfare and the apparently unlimited amount of money the U.S. gives to defense contractors. One company trying to outflank the big defense firms with higher tech is Anduril, which has been selling surveillance, reconnaissance, and counter-drone technologies to the U.S., including a “smart wall” system for the southern border. Last fall, it introduced its first weapon, a drone-based “loitering munition.”

In the latest episode of On With Kara Swisher, Kara grills Anduril co-founder Trae Stephens about the company’s approach to defense and its implications. They also discuss spy balloons, the war in Ukraine, AI bias, and the challenge of cutting China out of the supply chain. As seen in the excerpt below, they also get into Saint Augustine and the ethics of autonomous weapons as well as why Stephens believes big defense contractors are still struggling to innovate.





Would this be worse than asking Google?

https://futurism.com/the-byte/openai-ceo-ai-medical-advice

OPENAI CEO SAYS AI WILL GIVE MEDICAL ADVICE TO PEOPLE TOO POOR TO AFFORD DOCTORS

AN "AI MEDICAL ADVISOR"? WHAT COULD GO WRONG?





This could be interesting…

https://arstechnica.com/tech-policy/2023/02/reddit-should-have-to-identify-users-who-discussed-piracy-film-studios-tell-court/

Reddit should have to identify users who discussed piracy, film studios tell court

Film studios that filed a copyright infringement lawsuit against a cable Internet provider are trying to force Reddit to identify users who posted comments about piracy.

The lawsuit was filed in 2021 against cable company RCN in the US District Court in New Jersey by Bodyguard Productions, Millennium Media, and other film companies over downloads of 34 movies such as Hellboy, Rambo V: Last Blood, Tesla, and The Hitman’s Bodyguard. In an attempt to prove that RCN turned a blind eye to users downloading copyrighted movies, the plaintiffs sent a subpoena to Reddit last month seeking identifying information for nine users.

Plaintiffs specifically asked Reddit for "IP address registration and logs from 1/1/2016 to present, name, email address and other account registration information" for nine users. Reddit's response provided at least some information about one user but no information on any of the other eight. According to the film studios, Reddit argued that "the requests for identifying information associated with the additional eight accounts are more in the nature of a fishing expedition and are neither relevant nor permissible under the First Amendment."

Now, the studios want a federal court to force Reddit's hand. The film companies last week filed a motion to compel Reddit to respond to the subpoena in US District Court for the Northern District of California. The latest filing and the ongoing dispute over the subpoena were detailed in a TorrentFreak article published Saturday.





Often interesting…

https://www.databreaches.net/thoughts-on-dubin-v-united-states-and-the-aggravated-identity-theft-statute/

Thoughts on Dubin v. United States and the Aggravated Identity Theft Statute

Law professor Orin Kerr writes:

On February 27, the Supreme Court will hear argument in Dubin v. United States, a case on the Aggravated Identity Theft Statute, 18 U.S.C. § 1028A. This statute comes up often in the context of computer crimes, and its interpretation raises some interesting and important questions. So I thought I would blog about the case and offer some impressions.
I’ll start with the statutory problem that prompts the Dubin case; then turn to the case itself; and conclude with my own views.

Read more at Reason.





As a scifi fan, I would welcome well written stories no matter the source. I can see the problem of so many submissions that might need extra review.

https://www.theguardian.com/technology/2023/feb/21/sci-fi-publisher-clarkesworld-halts-pitches-amid-deluge-of-ai-generated-stories

Sci-fi publisher Clarkesworld halts pitches amid deluge of AI-generated stories

Founding editor says 500 pitches rejected this month and their ‘authors’ banned, as influencers promote ‘get rich quick’ schemes

One of the most prestigious publishers of science fiction short stories has closed itself to submissions after a deluge of AI-generated pitches overwhelmed its editorial team.

Clarkesworld, which has published writers including Jeff VanderMeer, Yoon Ha Lee and Catherynne Valente, is one of the few paying publishers to accept open submissions for short stories from new writers.

But that promise brought it to the attention of influencers promoting “get rich quick” schemes using AI, according to founding editor Neil Clarke.

In a typical month, the magazine would normally receive 10 or so such submissions that were deemed to have plagiarised other authors, he wrote in a blogpost. But since the release of ChatGPT last year pushed AI language models into the mainstream, the rate of rejections has rocketed.

In January, Clarke said, the publisher rejected 100 submissions, banning their “authors” from submitting again. In February to date, he has banned more than 500.



(Related)

https://www.techradar.com/news/the-amazon-kindle-store-could-soon-be-overrun-with-chatgpt-authored-books

The Amazon Kindle store could soon be overrun with ChatGPT-authored books

The Amazon Kindle has been a real boon for self-publishing authors, but its virtual book store risks being overrun by a particularly prolific new scribe: ChatGPT.

As spotted by Reuters, there are already 200 e-books on Amazon's Kindle store that list ChatGPT as the author or co-author. But because Amazon doesn't require that authors disclose whether or not they've used AI, that's likely a huge underestimation of the number of titles that AI tools have either written or co-created.

The ChatGPT-created books are published through Amazon’s Kindle Direct Publishing arm, which releases over 1.4 million self-published books every year and sells them alongside ones written by big-name authors.



Resource?

https://www.bespacific.com/salesforce-offers-5-guidelines-to-reduce-ai-bias/

Salesforce offers 5 guidelines to reduce AI bias

Tech Republic: “Salesforce, which last year introduced its Einstein AI framework behind its Customer 360 platform, has published what it says is the industry’s first Guidelines for Trusted Generative AI. Written by Paula Goldman, chief ethical and humane use officer, and Kathy Baxter, principal architect of ethical AI at the company, the guidelines are meant to help organizations prioritize AI-driven innovation around ethics and accuracy — including where bias leaks can spring up and how to find and cauterize them. Baxter, who also serves as a visiting AI fellow at the National Institute of Standards and Technology, said there are several entry points for bias in machine learning models used for job screening, market research, healthcare decisions, criminal justice applications and more. However, she noted, there is no easy way to measure what constitutes a model that is “safe” or has exceeded a certain level of bias or toxicity.”





Call it background because techies don’t care about history.

https://www.trendmicro.com/en_us/research/23/b/ransomware-evolution-part-1.html

A Deep Dive into the Evolution of Ransomware Part 1

This 3-part blog series takes an in-depth look at the evolution of ransomware business models, from the early stages to current trends.





Not sure I understand…

https://blog.ericgoldman.org/archives/2023/02/quick-debrief-on-the-gonzalez-v-google-oral-arguments.htm

Quick Debrief on the Gonzalez v. Google Oral Arguments





Interesting return to ad sponsored TV?

https://restofworld.org/2023/amazon-minitv-india-free-streaming/

Amazon’s plan to lure shoppers with free streaming is working in India

If the e-commerce giant cracks the ad-free, shop-as-you-watch code in the country, nothing stops it from rolling miniTV out internationally.

Amazon chairman Jeff Bezos famously said that every time an Amazon Studios production wins a Golden Globe, it helps the company’s e-commerce arm sell more shoes. He was describing Amazon’s “flywheel” strategy, where users who are Prime subscribers shop more, browse more, and watch more of the platform’s award-winning content in order to make the most of their membership.

In India, Amazon is experimenting with a new but similar content-to-commerce strategy through miniTV, an ad-supported streaming service inside its shopping app. The hope is that it will lure young audiences with free content, eventually turning them into online shoppers, former and current company executives told Rest of World.



Tuesday, February 21, 2023

Innovation or evolution?

https://www.popularmechanics.com/technology/robots/a42958546/artificial-intelligence-theory-of-mind-chatgpt/

AI Has Suddenly Evolved to Achieve Theory of Mind

A new study conducted by Michal Kosinski, a computational psychologist from Stanford University, used several iterations of OpenAI’s GPT neural network—from GPT-1 to the latest GPT-3.5—to perform “Theory of Mind” (ToM) tests, a series of experiments first developed in 1978 to measure the complexity of a chimpanzee’s mind to predict the behavior of others.

These tests involve ordinary, mundane scenarios whose outcomes humans can easily deduce. For example, one scenario involves mislabeling a bag of popcorn as “chocolate”; the test then asks the AI to infer what the human will expect once the bag is opened. Kosinski’s team used “sanity checks” to analyze how well GPT networks understood the scenario and the human’s predicted response. The results were published online on arXiv, the pre-print server.

While early versions of GPT-1, first released in 2018, scored poorly on the test, the neural network showed stunning improvement stretched across different iterations and spontaneously developed “Theory of Mind” capability of a 9-year-old human by November 2022 (the release of the latest GPT-3.5). Kosinski says this could be a “watershed moment” for AI, as the ability to understand and predict human behavior would make these engines much more useful.
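The popcorn/chocolate task described above can be sketched as a prompt template plus a naive scorer. This is only an illustration of the task format — the names `SCENARIO` and `passes_false_belief` are my own, and Kosinski’s actual evaluation queried GPT models and ran additional sanity checks.

```python
# The "unexpected contents" false-belief task, reduced to a template plus
# a naive scorer. A real evaluation would send SCENARIO to a language
# model; here we only score stand-in completions.
SCENARIO = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "The label on the bag says 'chocolate', not 'popcorn'. Sam finds the bag. "
    "Sam has never seen the bag before and cannot see inside it.\n"
    "Sam believes the bag is full of"
)

def passes_false_belief(completion: str) -> bool:
    """Credit the answer only if it tracks Sam's false belief (driven by
    the label) rather than the bag's true contents."""
    text = completion.lower()
    return "chocolate" in text and "popcorn" not in text

# Two stand-in completions a model might produce:
print(passes_false_belief("chocolate."))           # True: tracks the belief
print(passes_false_belief("popcorn, of course."))  # False: reports reality
```

A model that answers “popcorn” is reporting the world state; answering “chocolate” requires modeling what Sam, who only saw the label, would believe.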





The danger is always that you leap before you look… FOMO!

https://www.wsj.com/articles/from-ceos-to-coders-employees-experiment-with-new-ai-programs-32e1768a?mod=djemalertNEWS

From CEOs to Coders, Employees Experiment With New AI Programs

ChatGPT’s release has sparked a rush of early adopters eager to speed up tasks or avoid being left behind

Shortly after the release of OpenAI’s ChatGPT in November, Jeff Maggioncalda, the CEO of online education company Coursera Inc., jumped into the technology to see if it could save him time.

He began using the chatbot to draft company letters and notes, and asked his executive assistant to try the same for drafting replies to his inbound emails. She prompts ChatGPT based on how she thinks he would respond, and he edits the answers it generates before sending.

“I spend way more time thinking and way less time writing,” Mr. Maggioncalda said. “I don’t want to be the one who doesn’t use it, because someone who is using it is going to have a lot of advantages.”

AI experts caution, however, that such tools should only be used to support people who are already experts in their domain. Generative AI has been shown to spew disturbing content and misinformation, while other concerns have surfaced over intellectual property theft and privacy.

“The purpose that it is serving is not to inform you about things you don’t know. It’s really a tool for you to be able to do what you do better,” said Margaret Mitchell, chief ethics scientist at AI research startup Hugging Face.





Forensics. Another tool I had not considered. Like those that undo redactions…

https://www.schneier.com/blog/archives/2023/02/the-insecurity-of-photo-cropping.html

The Insecurity of Photo Cropping

The Intercept has a long article on the insecurity of photo cropping:

One of the hazards lies in the fact that, for some of the programs, downstream crop reversals are possible for viewers or readers of the document, not just the file’s creators or editors. Official instruction manuals, help pages, and promotional materials may mention that cropping is reversible, but this documentation at times fails to note that these operations are reversible by any viewers of a given image or document.
[…]
Uncropped versions of images can be preserved not just in Office apps, but also in a file’s own metadata. A photograph taken with a modern digital camera contains all types of metadata. Many image files record text-based metadata such as the camera make and model or the GPS coordinates at which the image was captured. Some photos also include binary data such as a thumbnail version of the original photo that may persist in the file’s metadata even after the photo has been edited in an image editor.
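One way to see how a persisted thumbnail betrays itself: a JPEG that carries an embedded thumbnail in its EXIF metadata contains a second JPEG start-of-image (SOI) marker after the first. The scan below is a minimal forensic heuristic of my own, not the Intercept’s tooling, and a real inspection would parse the EXIF IFD structure properly.

```python
def find_embedded_jpegs(data: bytes):
    """Return byte offsets of JPEG SOI markers (0xFFD8 followed by
    another 0xFF marker byte) inside `data`. More than one offset
    suggests an embedded image, such as a surviving EXIF thumbnail."""
    offsets = []
    i = data.find(b"\xff\xd8\xff")
    while i != -1:
        offsets.append(i)
        i = data.find(b"\xff\xd8\xff", i + 1)
    return offsets

# Synthetic example: a "file" whose metadata carries a second JPEG.
fake = (b"\xff\xd8\xff\xe1"      # outer image SOI + APP1 (EXIF) marker
        + b"exif-header"          # stand-in for EXIF bytes
        + b"\xff\xd8\xff\xdb"    # embedded thumbnail's own SOI
        + b"thumb" + b"\xff\xd9")
print(find_embedded_jpegs(fake))  # two offsets => embedded image likely
```

Running a scan like this over an “edited” photo is enough to flag files whose metadata still holds the uncropped or original pixels.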





Or turn it into an audio book (see earlier in this blog for how)

https://www.bespacific.com/free-pdf-of-dennis-kennedys-innovation-outcomes-in-law-book/

Free PDF of Dennis Kennedy’s Innovation Outcomes in Law Book

Dennis Kennedy Blog: “I’ve decided that I want as many people as possible to read my book, Successful Innovation Outcomes in Law: A Practical Guide for Law Firms, Law Departments and Other Legal Organizations, so I’m now making it available as a FREE PDF download – Successful Innovation Outcomes in Law – 2023 PDF Version. If you like the book and PDF is not the best format for you (or you want to support the author), you can still buy the book in paperback and Kindle formats on Amazon. I thought that giving this version of the book away was a great way to celebrate the 20th blogiversary of DennisKennedy.Blog and promote my new Law Department Innovation Library. Enjoy the free book and let everyone you know who might be interested in it (or need it) know that it is now available in this Successful Innovation Outcomes in Law – 2023 PDF Version.”





Am I editor enough to make AI written stories my own?

https://www.reuters.com/technology/chatgpt-launches-boom-ai-written-e-books-amazon-2023-02-21/

ChatGPT launches boom in AI-written e-books on Amazon

Until recently, Brett Schickler never imagined he could be a published author, though he had dreamed about it. But after learning about the ChatGPT artificial intelligence program, Schickler figured an opportunity had landed in his lap.