Saturday, November 25, 2023

Should we think of this as a more elaborate prompt?

https://hbr.org/2023/11/can-genai-do-strategy

Can GenAI Do Strategy?

To many people, the idea that AI can be a source of new ideas sounds counterintuitive. After all, ChatGPT essentially just collates and processes the sum total of all the answers to a question that people have already come up with. It seems almost inevitable that ChatGPT’s strategic advice would simply parrot the most common solutions to a problem in a sort of reversion to the mean.

The reason that isn’t inevitable became apparent when Wolfram linked ChatGPT to its Mathematica software. When people first tried to use ChatGPT to solve math problems, they quickly found that it wasn’t very good at them, because it primarily relies on language recognition. The AI could perhaps write a good university motivation letter, but it certainly couldn’t come up with an original proof of Pythagoras’s theorem.

But that changed as soon as people cross-connected GPT to Wolfram’s software, and the AI proved able to solve complex math problems, showing its work step by step. Inspired by that, we wondered what would happen if we cross-connected ChatGPT to a strategic framework. What we’ve discovered is that the virtual strategist turns out to be quite original, and certainly deserves a place on your company’s team.
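A partial answer to the question above (is this just a more elaborate prompt?): a minimal sketch, in Python, of what “cross-connecting” an LLM to a strategic framework can look like in practice. The framework becomes a structured prompt the model has to work through step by step; the framework steps, company, and context below are hypothetical placeholders, not the HBR authors’ actual setup.

    # Minimal sketch: embed a strategic framework as a structured prompt for a chat LLM.
    # The framework steps below are hypothetical placeholders, not the authors' framework.

    FRAMEWORK_STEPS = [
        "List the customer segments the business serves today.",
        "For each segment, identify the unmet need the product addresses.",
        "Propose one unconventional way to serve that need.",
        "State the key assumption each proposal depends on and how to test it cheaply.",
    ]

    def build_strategy_prompt(company: str, context: str) -> str:
        """Turn the framework into a single structured prompt."""
        steps = "\n".join(f"{i}. {step}" for i, step in enumerate(FRAMEWORK_STEPS, start=1))
        return (
            f"You are acting as a strategy consultant for {company}.\n"
            f"Business context: {context}\n\n"
            "Work through the following framework step by step, labelling each step:\n"
            f"{steps}"
        )

    if __name__ == "__main__":
        prompt = build_strategy_prompt(
            company="Acme Coffee Roasters",
            context="A regional roaster facing flat retail sales and rising bean costs.",
        )
        print(prompt)  # paste or send this to any chat-based LLM

The constraint is the point: the model has to reason through each step rather than fall back on the most common answer, which is arguably why the authors’ virtual strategist produced more than a reversion to the mean.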





Friday, November 24, 2023

This should be easy to figure out. Apparently it is not.

https://www.nbcnews.com/news/us-news/little-recourse-teens-girls-victimized-ai-deepfake-nudes-rcna126399

For teen girls victimized by ‘deepfake’ nude photos, there are few, if any, pathways to recourse in most states

Since the 2023 school year kicked into session, cases involving teen girls victimized by the fake nude photos, also known as deepfakes, have proliferated worldwide, including at high schools in New Jersey and Washington state.

Local police departments are investigating the incidents, lawmakers are racing to enact new measures that would enforce punishments against the photos’ creators, and affected families are pushing for answers and solutions.

Apps that purport to “undress” clothed photos have also been identified as possible tools used in some cases and have been found available for free on app stores. These modern deepfakes can be more realistic-looking and harder to immediately identify as fake.

New Jersey State Sen. Jon Bramnick said law enforcement expressed concerns to him that the incident would only rise to a “cyber-type harassment claim, even though it really should reach the level of a more serious crime.”

“If you attach a nude body to a child’s face, that to me is child pornography,” he said.





Tools & Techniques. Anything you can learn should help.

https://www.databreaches.net/how-to-calculate-the-cost-of-a-data-breach/

How to Calculate the Cost of a Data Breach

Matt Kelly, CEO of RadicalCompliance.com, notes that knowing statistics about the average cost of a data breach isn’t really much help to organizations. Organizations need to know how to calculate the potential costs at their own organization, he writes, adding, “Only then — when you have a solid sense of how a breach might affect your business — can you develop sensible, risk-based compliance measures to push those costs down.”

So how should an organization go about estimating those costs? It’s complicated, but Kelly breaks things down into groups of costs and explains how to begin estimating them.

Read more at Hyperproof.io.
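A back-of-the-envelope sketch of the kind of calculation Kelly is describing: group the costs, estimate each one, and sum them. The categories and dollar figures below are hypothetical placeholders, not Kelly’s actual breakdown.

    # Back-of-the-envelope breach-cost estimate.
    # Categories and dollar figures are hypothetical placeholders for illustration only.

    breach_costs = {
        "detection_and_forensics": 120_000,  # incident response, investigation
        "notification": 45_000,              # letters, call center, credit monitoring
        "regulatory_fines": 200_000,         # depends on jurisdiction and records exposed
        "legal_fees": 80_000,
        "lost_business": 350_000,            # churn, downtime, reputational damage
    }

    records_exposed = 50_000

    total = sum(breach_costs.values())
    print(f"Estimated total cost: ${total:,}")
    print(f"Estimated cost per record: ${total / records_exposed:,.2f}")
    for category, cost in breach_costs.items():
        print(f"  {category}: ${cost:,} ({cost / total:.0%} of total)")

Even a crude model like this shows which category dominates the total, which is the kind of insight Kelly says you need before you can target risk-based compliance spending.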



Thursday, November 23, 2023

I can see where this might grab the Board’s attention.

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.





Automating law.

https://www.lawgazette.co.uk/commentary-and-opinion/ai-risks-and-rewards/5117968.article

AI risks and rewards

In a recent paper, a number of academics who specialise in lawtech from the UK, the US, Canada, Australia and Singapore jointly warned of the dangers to the rule of law posed by the use of artificial intelligence by the courts, tribunals and judiciary.

We conclude that the introduction of new technology needs to be controlled by the judiciary, to maintain public confidence in the legal system and rule of law. There is clear scope for the judiciary to use emerging technology to support their decision making and to create efficiency savings, which in turn can promote access to justice. Claims that algorithmic decision-making is ‘better’ in terms of reduced bias and increased transparency risk erosion of the principle that legal decisions should be made by humans.

Our paper breaks down the trial process into a number of parts: litigation advice, trial preparation, judicial guidance, pretrial negotiations, digital courts/tribunals and judicial algorithms. It explains the core technology and shows how the main risks centre on the provision of a validated and accurate dataset, transparency and bias.

The paper can be downloaded here.





Perspective.

https://searchengineland.com/ai-content-creation-beginners-guide-434932

AI content creation: A beginner’s guide

In a world where even your toaster wants to discuss AI over breakfast, SEO professionals are currently mastering how to integrate it into their strategies – especially their content strategies.

For those who are still trying to grasp the concept of how AI and SEO can work synergistically for content, this article is for you. I’ll discuss:





Resources. (I suggest the Stephen Wolfram book.)

https://www.kdnuggets.com/a-comprehensive-list-of-resources-to-master-large-language-models

A Comprehensive List of Resources to Master Large Language Models

Large Language Models (LLMs) have now become an integral part of various applications. This article provides an extensive list of resources for anyone interested in diving into the world of LLMs.



Wednesday, November 22, 2023

Which US political party has the best AI techies?

https://www.japantimes.co.jp/news/2023/11/22/world/politics/ai-javier-milei-argentina-presidency/

How AI shaped Milei's path to Argentina presidency

In the final weeks of campaigning, Argentine President-elect Javier Milei published a fabricated image depicting his Peronist rival Sergio Massa as an old-fashioned communist in military garb, his hand raised aloft in salute.

The apparently AI-generated image drew some 3 million views when Milei posted it on a social media account, highlighting how the rival campaign teams used artificial intelligence technology to catch voters' attention in a bid to sway the race.

The use of increasingly accessible AI tech in political campaigning is a global trend, tech and rights specialists say, raising concerns about the potential implications for important upcoming elections in countries including the United States, Indonesia, and India next year.

A slew of new "generative AI" tools such as Midjourney are making it cheap and easy to create fabricated pictures and videos.





Perspective.

https://scitechdaily.com/the-limits-of-ai-why-chatgpt-isnt-truly-intelligent/

The Limits of AI: Why ChatGPT Isn’t Truly “Intelligent”

The intent of Chemero’s paper is to stress that LLMs are not intelligent in the way humans are intelligent, because humans are embodied: living beings who are always surrounded by other humans and by material and cultural environments.

“This makes us care about our own survival and the world we live in,” he says, noting that LLMs aren’t really in the world and don’t care about anything.

Reference: “LLMs differ from human cognition because they are not embodied” by Anthony Chemero, 20 November 2023, Nature Human Behaviour. DOI: 10.1038/s41562-023-01723-5





Perspective.

https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/

What the data says about Americans’ views of artificial intelligence

Pew Research Center surveys show that Americans are increasingly cautious about the growing role of AI in their lives generally. Today, 52% of Americans are more concerned than excited about AI in daily life, compared with just 10% who say they are more excited than concerned; 36% feel a mix of excitement and concern.





Probably not everything, but a start. (And something to hang on the wall?)

https://www.visualcapitalist.com/sp/9-problems-with-generative-ai-in-one-chart/

9 Problems with Generative AI, in One Chart

In the rapidly evolving landscape of artificial intelligence, generative AI tools are demonstrating incredible potential. However, their potential for harm is also becoming more and more apparent.

Together with our partner VERSES, we have visualized some concerns regarding generative AI tools using data from a variety of different sources.





Resources.

https://www.makeuseof.com/best-ethical-hacking-courses/

The Best Online Ethical Hacking Courses for Beginners



Monday, November 20, 2023

Interested in AI Lawyering?

https://www.bespacific.com/genai-for-law-cases-and-policies/

GenAI for Law – Cases and Policies

On this episode of law.MIT.edu’s IdeaFlow, we’ll explore and discuss two examples of how generative AI can be applied to legal use cases, namely: judicial caselaw research and handling privacy policies.





Maybe we can sell it as a ‘Right to Choose?’

https://www.cpomagazine.com/data-protection/the-power-of-preference-in-the-wake-of-privacy-regulations/

The Power of Preference in the Wake of Privacy Regulations

In the ever-evolving privacy landscape, the average individual is becoming increasingly concerned with the security of their personal data. To prevent the mishandling of sensitive information, new regulations addressing data privacy are cropping up all over the globe.

One recent example of this is the updates to Quebec’s Act to Modernize Legislative Provisions Respecting the Protection of Personal Information, more commonly known as “Law 25.” First introduced in September 2022, Law 25 initially tasked businesses with implementing a handful of data security measures, including (but not limited to) designating a staff member in charge of protecting personal information and taking reasonable measures to protect the victims of confidentiality incidents.

As of September 2023, more robust guidelines have been introduced under Law 25. Private businesses operating in Quebec must now:

  • Develop a policy governing the business’s practices for the protection of personal information.

  • Obtain an individual’s free and informed consent to collect, communicate, and use their personal information and comply with these new consent rules.

  • Respect individuals’ rights to de-indexation and cessation of dissemination – meaning individuals can revoke a company’s right to collect, index, and share their data at any time.

  • Conduct a privacy impact assessment before disclosing personal information outside of Quebec.

To summarize, respecting consumer preferences and increasing transparency surrounding the collection and use of their personal data is now codified by Quebecois law, and private businesses must take note.





Perspective.

https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/

OpenAI’s Misalignment and Microsoft’s Gain

I have, as you might expect, authored several versions of this Article, both in my head and on the page, as the most extraordinary weekend of my career has unfolded. To briefly summarize:

  • On Friday, then-CEO Sam Altman was fired from OpenAI by the board that governs the non-profit; then-President Greg Brockman was removed from the board and subsequently resigned.

  • Over the weekend rumors surged that Altman was negotiating his return, only for OpenAI to hire former Twitch CEO Emmett Shear as CEO.

  • Finally, late Sunday night, Satya Nadella announced via tweet that Altman and Brockman, “together with colleagues”, would be joining Microsoft.

This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed it will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.

Microsoft’s gain, meanwhile, is OpenAI’s loss, which is dependent on the Redmond-based company for both money and compute: the work its employees will do on AI will either be Microsoft’s by virtue of that perpetual license, or Microsoft’s directly because said employees joined Altman’s team. OpenAI’s trump card is ChatGPT, which is well on its way to achieving the holy grail of tech — an at-scale consumer platform — but if the reporting this weekend is to be believed, OpenAI’s board may have already had second thoughts about the incentives ChatGPT placed on the company (more on this below).

The biggest loss of all, though, is a necessary one: the myth that anything but a for-profit corporation is the right way to organize a company.





Resource.

https://www.bespacific.com/amazon-announces-ai-ready/

Amazon announces “AI Ready”

Amazon announces “AI Ready,” a new commitment designed to provide free AI skills training to 2 million people globally by 2025. To achieve this goal, we’re launching new initiatives for adults and young learners, and scaling our existing free AI training programs—removing cost as a barrier to accessing these critical skills. Hiring AI-skilled talent is a priority among 73% of employers—but three out of four who consider it a priority can’t find the AI talent they need. The three new initiatives are:

    • Eight new and free AI and generative AI courses

    • Amazon Web Services (AWS) Generative AI Scholarship, providing more than 50,000 high school and university students globally with access to a new generative AI course on Udacity

    • New collaboration with Code.org designed to help students learn about generative AI



Sunday, November 19, 2023

Is this inevitable? When (not if) one side starts using AI, can the other side fail to retaliate?

https://tech.hindustantimes.com/tech/news/in-a-first-in-argentina-candidates-roll-out-ai-powered-deepfakes-political-campaigns-5-things-to-know-71700209821620.html

In a first in Argentina, candidates roll out AI-powered deepfakes, political campaigns: 5 things to know

1. According to a report by the New York Times, the ongoing electoral campaigns in Argentina are witnessing both the main presidential candidates, Javier Milei of the Liberty Advances party, and Sergio Massa of the Union for the Homeland, using AI to build themselves up, while simultaneously putting the opposition down. To garner public attention, both candidates are using AI-created posters which can be seen on the streets of Buenos Aires.

2. One of the posters depicts Mr. Massa adorned with medals pointing to the sky while surrounded by older people looking up at him in hope. The other candidate, Mr. Milei, responded to it by depicting himself as a cartoon lion, while also putting out an AI-created image of Massa as a communist leader in a post.

3. In another use of AI, Mr. Massa’s team has created deepfakes with his face put on famous scenes from films like A Clockwork Orange and Fear and Loathing in Las Vegas, which show the lead character as somewhat unhinged.

4. Talking about the potential of AI, Mr. Massa told the New York Times, “I didn’t have my mind prepared for the world that I’m going to live in. It’s a huge challenge. We’re on a horse that we have to ride but we still don’t know its tricks.”

5. Mr. Massa was also shown a deepfake created by his campaign in which Mr. Milei talks about how the human organ market would function. In response, Mr. Massa said, “I don’t agree with that use.” A spokesperson for him later clarified that such posts were clearly labeled as AI-generated and were only meant to make political points and to entertain.



(Related) Imagine this tool used by individuals rather than a formal campaign. Asynchronous politics?

https://arstechnica.com/information-technology/2023/11/from-toy-to-tool-dall-e-3-is-a-wake-up-call-for-visual-artists-and-the-rest-of-us/

From toy to tool: DALL-E 3 is a wake-up call for visual artists—and the rest of us

In October, OpenAI launched its newest AI image generator—DALL-E 3—into wide release for ChatGPT subscribers. DALL-E can pull off media generation tasks that would have seemed absurd just two years ago—and although it can inspire delight with its unexpectedly detailed creations, it also brings trepidation for some. Science fiction forecast tech like this long ago, but seeing machines upend the creative order feels different when it's actually happening before our eyes.

"It’s impossible to dismiss the power of AI when it comes to image generation," says Aurich Lawson, Ars Technica's creative director. "With the rapid increase in visual acuity and ability to get a usable result, there’s no question it’s beyond being a gimmick or toy and is a legit tool."





Useful.

https://www.pogowasright.org/state-landscape-privacy/

State Landscape Privacy

Some helpful information on state privacy law changes in 2023 from the Computers & Communication Industry Association. Download your free copy of the 6-page file from ccianet.org.





Let them run wild. We can scold the little devils later.

https://www.reuters.com/technology/germany-france-italy-reach-agreement-future-ai-regulation-2023-11-18/

Exclusive: Germany, France and Italy reach agreement on future AI regulation

France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level.

The three governments support commitments that are voluntary, but binding on small and large AI providers in the European Union that sign up to them.

… Initially, no sanctions should be imposed, according to the paper.

… Germany's Economy Ministry, which is in charge of the topic together with the Ministry of Digital Affairs, said laws and state control should not regulate AI itself, but rather its application.