Saturday, October 21, 2023

Implications for the 2024 elections? (Even without the pornography.)

https://www.euronews.com/next/2023/10/20/generative-ai-fueling-spread-of-deepfake-pornography-across-the-internet

Generative AI fueling spread of deepfake pornography across the internet

Deepfake content is created via machine learning algorithms that can produce hyper-realistic content. Bad actors can use this to target victims, blackmail people, or engage in criminal or political manipulation.

A comprehensive report into deepfakes in 2023 found that deepfake pornography makes up 98 per cent of all deepfake videos found online, while 99 per cent of the victims targeted by deepfake pornography are women.

Analysts for homesecurityheroes.com, a website aiming to protect people from online identity fraud, studied 95,820 deepfake videos, 85 dedicated online channels, and more than 100 websites linked to the deepfake ecosystem to produce the site's 2023 State of Deepfakes report.

One of the main findings of the report is that it can now take less than 25 minutes, and cost nothing, to create a minute-long deepfake pornographic video of anyone using a single clear face image of the victim.





This should not be. I suspect it is because no AI yet has a “check your answer” function. (Not just a problem in math.)

https://garymarcus.substack.com/p/math-is-hard-if-you-are-an-llm-and

“Math is hard” — if you are an LLM — and why that matters

The paper alleges “GPT Can Solve Mathematical Problems Without a Calculator.” But it doesn’t really show that, except in the sense that I can shoot free throws in the NBA. Sure, I can toss the ball in the air, and sometimes I might even sink a shot, the more so with practice; but I am probably going to miss a lot, too. And 70% would be great for free throws; for multiplication it sucks. 47323 * 19223 = 909690029 and it shall always be; no partial credit for coming close.
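A minimal sketch of what a “check your answer” hook might look like for arithmetic: pull any multiplication claim out of a model's reply and verify it with exact integer math, rather than trusting the token stream. (Illustrative only; the regex and the sample output string are assumptions, not any real model's API or response format.)

```python
# Sketch of a "check your answer" step: extract arithmetic claims from a
# model's output and verify them exactly. The claim format "a * b = c" is
# an illustrative assumption, not a real model's output contract.
import re

def verify_multiplications(text: str) -> list[tuple[str, bool]]:
    """Find 'a * b = c' claims and check them with exact integer math."""
    results = []
    for a, b, c in re.findall(r"(\d+)\s*\*\s*(\d+)\s*=\s*(\d+)", text):
        claim = f"{a} * {b} = {c}"
        results.append((claim, int(a) * int(b) == int(c)))
    return results

model_output = "47323 * 19223 = 909690029"  # the example from the post
for claim, ok in verify_multiplications(model_output):
    print(claim, "->", "correct" if ok else "wrong")
```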





Tools & Techniques. (Imagine creating your own backdated evidence!)

https://www.makeuseof.com/change-date-created-modified-other-windows/

How to Change the Date Created, Date Modified, and Other File Attributes on Windows

Windows keeps a record of when a file was created, who authored it, and when it was last modified. This information is known as file attributes and can be used to sort files by date, author name, and other parameters.

… If you don't want the receiver to know the actual file attributes, here's how to remove or modify them.
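For the curious, here is a minimal sketch of the timestamp side of this, done from Python's standard library rather than the Windows tools the article walks through. (Assumptions: os.utime() rewrites access and modification times only; changing the Windows "date created" attribute requires platform-specific APIs such as pywin32, not shown here.)

```python
# Back-date a file's access and modification timestamps using only the
# standard library. The demonstration file is created on the spot so the
# script is self-contained.
import os
import time
from datetime import datetime
from pathlib import Path

path = "example.txt"    # demonstration file (created below)
Path(path).touch()

# Back-date the access and modification times to January 1, 2020.
backdate = datetime(2020, 1, 1).timestamp()
os.utime(path, (backdate, backdate))

print("New modified time:", time.ctime(os.path.getmtime(path)))
```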



Friday, October 20, 2023

Toward creation of a Large Legal Language Model (LLLM)?

https://www.bespacific.com/the-necessary-and-proper-stewardship-of-judicial-data/

The Necessary and Proper Stewardship of Judicial Data

Huq, Aziz Z. and Clopton, Zachary D., The Necessary and Proper Stewardship of Judicial Data (September 20, 2023). Stanford Law Review, Vol. 76, Forthcoming, Northwestern Public Law Research Paper No. 23-55, Available at SSRN: https://ssrn.com/abstract=4578337 – “Governments and commercial firms create profits and social gain by exploiting large pools of data. One source of valuable data, however, lies in public hands yet remains largely untapped. While the deep reservoirs of data produced by Congress and federal agencies have long been available for public use, the data produced by the federal judiciary is only loosely regulated, imperfectly used (except by a small number of well-resourced private data cartels), and largely ignored by scholars. But the ordinary process of litigation in federal courts generates an enormous volume of data. Especially after recent developments in large language models, this data holds immense potential for private gain and public good. It can be used to predict case outcomes or clarify the law in ways that advance legality and judicial access. It can reveal shortfalls in judicial practice and enable the provision of cheaper, better access to justice. It can make legible many otherwise invisible social facts that, if brought to light, can help improve public policy. Or else it can serve as a private profit center, its benefits accruing to a small coterie of data brokering firms capable of monopolizing its commercial use.

This Article is the first to address the complex empirical, legal, and normative questions raised by the untapped public asset of judicial data. It first develops a positive, descriptive account of how federal courts produce, dissipate, preserve, or disclose information. This includes a map of the known sources of Article III data (e.g., opinions, orders, briefs), but it also extends to a massive volume of ‘dark data’ produced but either lost or buried by the courts. This positive analysis further uncovers a complex administrative framework by which manifold walls and hurdles—some categorical, and some individuated—are thrown up to slow down or stop public access.

With this positive understanding in hand, we offer a careful analysis of the constitutional questions implicated in decisions to disclose, or to render opaque, judicial data. Drawing attention to the key question of who controls judicial data flows, we demonstrate the existence of sweeping congressional power to regulate judicial data outside of a small zone of inherent judicial authority and a handful of instances in which privacy or safety are in play. Congressional authority, therefore, is the rule and not the exception.

With these empirical and legal foundations in hand, the Article offers a normative evaluation of how Congress should regulate the production and dissemination of judicial data, in light of the capabilities and incentives of relevant actors. The information produced by the federal courts should not be exclusively a source of private profit for the data-centered firms presently monopolizing access. It is a public asset that should be elicited and disseminated in ways that advance the federal courts’ mission of equal justice under law.”





Worth reading all. (Will we elect the most creative liar?)

https://www.schneier.com/blog/archives/2023/10/ai-and-us-election-rules.html

AI and US Election Rules

If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns (even more) off the rails.

At issue is whether candidates using AI to create deepfaked media for political advertisements should be considered fraud or legitimate electioneering. That is, is it allowable to use AI image generators to create photorealistic images depicting Trump hugging Anthony Fauci? And is it allowable to use dystopic images generated by AI in political attack ads?

For now, the answer to these questions is probably “yes.” These are fairly innocuous uses of AI, not any different than the old-school approach of hiring actors and staging a photoshoot, or using video editing software. Even in cases where AI tools will be put to scurrilous purposes, that’s probably legal in the US system. Political ads are, after all, a medium in which you are explicitly permitted to lie.

The concern over AI is a distraction, but one that can help draw focus to the real issue. What matters isn’t how political content is generated; what matters is the content itself and how it is distributed.



(Related?)

https://www.bespacific.com/social-medias-frictionless-experience-for-terrorists/

Social Media’s ‘Frictionless Experience’ for Terrorists

The Atlantic [read free]: “These platforms were already imperfect. Now extremist groups are making sophisticated use of their vulnerabilities. The incentives of social media have long been perverse. But in recent weeks, platforms have become virtually unusable for people seeking accurate information…. Social media has long encouraged the sharing of outrageous content. Posts that stoke strong reactions are rewarded with reach and amplification. But, my colleague Charlie Warzel told me, the Israel-Hamas war is also “an awful conflict that has deep roots … I am not sure that anything that’s happened in the last two weeks requires an algorithm to boost outrage.” He reminded me that social-media platforms have never been the best places to look if one’s goal is genuine understanding: “Over the past 15 years, certain people (myself included) have grown addicted to getting news live from the feed, but it’s a remarkably inefficient process if your end goal is to make sure you have a balanced and comprehensive understanding of a specific event.”

See also Washington Post: Pro-Palestinian creators evade social media suppression by using ‘algospeak’ and Hamas turns to social media to get its message out — and to spread fear. Unmoderated messaging services and gruesome video from a deadly Gaza hospital strike have helped Hamas prosecute its ‘video jihad.’



Thursday, October 19, 2023

It’s gonna happen. We ain’t ready.

https://fpf.org/blog/fpf-submits-comments-to-the-fec-on-the-use-of-artificial-intelligence-in-campaign-ads/

FPF Submits Comments to the FEC on the Use of Artificial Intelligence in Campaign Ads

On October 16, 2023, the Future of Privacy Forum submitted comments to the Federal Election Commission (FEC) on the use of artificial intelligence in campaign ads. The FEC is seeking comments in response to a petition that asked the Agency to initiate a rulemaking to clarify that its regulation on “fraudulent misrepresentation” applies to deliberately deceptive AI-generated campaign ads.

FPF’s comments follow an op-ed FPF’s Vice President of U.S. Policy Amie Stepanovich and AI Policy Counsel Amber Ezzell published in The Hill on how generative AI can be used to manipulate voters and election outcomes, and the benefits to voters and candidates when generative AI tools are deployed ethically and responsibly.

Read the comments here.





So, who has jurisdiction?

https://techcrunch.com/2023/10/18/clearview-wins-ico-appeal/

Selfie-scraper, Clearview AI, wins appeal against UK privacy sanction

Controversial US facial recognition company, Clearview AI, has won an appeal against a privacy sanction issued by the U.K. last year.

In May 2022, the Information Commissioner’s Office (ICO) issued a formal enforcement notice on Clearview — which included a fine of around £7.5 million (~$10 million) — after concluding the selfie-scraping AI firm had committed a string of breaches of local privacy laws. It also ordered the company, which uses the scraped personal data to sell an identity-matching service to law enforcement and national security bodies, to delete information it held on U.K. citizens.

Clearview filed an appeal against the decision. And in a ruling issued yesterday its legal challenge to the ICO prevailed on jurisdiction grounds after the tribunal ruled the company’s activities fall outside the jurisdiction of U.K. data protection law owing to an exemption related to foreign law enforcement.





Why? Are they not watching what their competitors (and the hacking community) are doing?

https://sloanreview.mit.edu/article/is-your-organization-investing-enough-in-responsible-ai-probably-not-says-our-data/

Is Your Organization Investing Enough in Responsible AI? ‘Probably Not,’ Says Our Data

For the second year in a row, MIT Sloan Management Review and Boston Consulting Group have assembled an international panel of AI experts to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. For our final question in this year’s research cycle, we asked our academic and practitioner panelists to respond to this provocation: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.

While their reasons vary, most panelists recognize that RAI investments are falling short of what’s needed: Eleven out of 13 were reluctant to agree that organizations’ investments in responsible AI are “adequate.” The panelists largely affirmed findings from our 2023 RAI global survey, in which less than half of respondents said they believe their company is prepared to make adequate investments in RAI. This is a pressing leadership challenge for companies that are prioritizing AI and must manage AI-related risks.



Wednesday, October 18, 2023

I’m not sure I have a problem with this. Isn’t it like asking neighbors if they saw anyone new in the neighborhood?

https://www.bloomberg.com/news/articles/2023-10-16/colorado-court-approves-use-of-google-search-data-in-murder-case

Colorado Court OKs Use of Google Search Data in Murder Case

The Colorado Supreme Court ruled on Monday that evidence gleaned from a warrant for Google’s search data could be used in a murder case, sparking concerns the decision may encourage more police to embrace the controversial technique.

After a 2020 fire that killed five people in the Denver area, police were scrambling to identify suspects. They asked Alphabet Inc.’s Google to provide information about people who searched for the address of the house that went up in flames, using a novel approach known as a keyword search warrant. After some initial objections, Google shared data that enabled detectives to zero in on five accounts, leading to the arrest of three teens.



(Related)

https://fourthamendment.com/?p=56080

CO: REP in Google search history which also implicates freedom of expression

[*P4] In reaching these conclusions, we make no broad proclamation about the propriety of reverse-keyword warrants. As is often true when we examine what is reasonable under the search-and-seizure provisions of the federal and state constitutions, much is fact-dependent. Our finding of good faith today neither condones nor condemns all such warrants in the future. If dystopian problems emerge, as some fear, the courts stand ready to hear argument regarding how we should rein in law enforcement’s use of rapidly advancing technology. Today, we proceed incrementally based on the facts before us.





What makes AI so seductive that lawyers don’t bother with a reality check? (You don’t suppose that AI wrote the new brief?)

https://www.nbcnews.com/news/us-news/convicted-fugees-rapper-pras-michels-lawyer-used-ai-draft-bungled-clos-rcna120992

Convicted Fugees rapper Pras Michel's lawyer used AI to draft bungled closing argument

The lead defense lawyer for convicted Fugees hip hop star Prakazrel “Pras” Michel improperly relied on an experimental generative AI program to draft his closing argument in Michel’s high-profile criminal trial last spring, according to a newly-filed brief demanding a retrial for Michel.

Michel’s new counsel from ArentFox Schiff said that the AI-generated closing argument by Michel’s previous lawyer, David Kenner, was a resounding flop: “Kenner’s closing argument made frivolous arguments, misapprehended the required elements, conflated the schemes and ignored critical weaknesses in the government’s case,” the brief said.

… Kenner did not immediately respond to two email queries on the new brief. His co-counsel Alon Israely [A pseudonym? Bob] did not immediately respond to a query sent via LinkedIn.





New tech tools of war.

https://news.yahoo.com/hamas-hijacked-victims-social-media-181636480.html

Hamas Hijacked Victims’ Social Media Accounts to Spread Terror

… In a new war tactic, Hamas has seized the social media accounts of kidnapped Israelis and used them to broadcast violent messages and wage psychological warfare, according to interviews with 13 Israeli families and their friends, as well as social media experts who have studied extremist groups.

In at least four cases, Hamas members logged into the personal social media accounts of their hostages to livestream the Oct. 7 attacks. In the days since, Hamas also appeared to infiltrate their hostages’ Facebook groups, Instagram accounts and WhatsApp chats to issue death threats and calls for violence. Hamas members also took hostages’ cellphones to make calls to taunt friends and relatives, according to the Israeli families and their friends. Israel’s military has said at least 199 people have been taken hostage by Hamas.





New tech tools of politics.

https://www.thecity.nyc/2023/10/16/adams-taps-ai-robocalls-languages-he-doesnt-speak/

Tongue Twisted: Adams Taps AI to Make City Robocalls in Languages He Doesn’t Speak

Mayor Eric Adams is using artificial intelligence to turn himself into a polyglot: sending out robocalls with his voice to New Yorkers in a slew of languages he does not speak — and spooking ethics and privacy advocates.

… At the news conference, Adams described himself as a “techie” and former computer programmer, then later said he used the controversial tech on the city’s robocall system — sending out messages in many languages using his voice.

“Conversational AI is amazing, once you put the script in you can put it in any language you want with my voice,” he said.

… When pressed on any ethics concerns about his voice pretending to know many languages, the mayor noted the importance of speaking to all New Yorkers.

… A spokesperson for the mayor said they’ve reached more than 4 million New Yorkers through robocalls and sent thousands of calls in Spanish, more than 250 in Yiddish, more than 160 in Mandarin, 89 calls in Cantonese and 23 in Haitian Creole. They were mostly used for hiring halls but also promoted the city’s Rise Up NYC concerts.



Tuesday, October 17, 2023

Perspective. (Rah Rah Tech!)

https://a16z.com/the-techno-optimist-manifesto/

The Techno-Optimist Manifesto





Resources.

https://www.kdnuggets.com/5-free-books-to-master-data-science

5 Free Books to Master Data Science

Want to break into data science? Check this list of free books for learning Python, statistics, linear algebra, machine learning and deep learning.



Monday, October 16, 2023

My AI says it’s already too late.

https://www.bespacific.com/ai-skeptic-why-generative-ai-is-currently-doomed/

AI Skeptic – Why Generative AI is Currently Doomed

@The_AI_Skeptic – Long Post: Why Generative AI is Currently Doomed. “We’re so used to technology getting better. Every year there’s a new iPhone with a faster processor. It’s the way of the world… or so it seems. Sometimes, bigger doesn’t mean better. Take, for example, LLMs like ChatGPT. If you keep scaling them up, they eventually become worse. This inverse scaling leads to them becoming actively bad (https://youtube.com/watch?v=viJt_DXTfwA&t=1705s).

(When @OpenAI were developing the highly anticipated ChatGPT-4, it seems they may have hit this problem already. Because instead of scaling up their training sets, like previous iterations of their model, leaks indicate that ChatGPT-4 may actually be 8 x ChatGPT3 models tethered together (https://thealgorithmicbridge.substack.com/p/gpt-4s-secret-has-been-revealed), explaining why it was delayed and why the dataset size was not revealed.)

But even if you believe they’ll find a way around that problem, there’s still plenty of others waiting for us. What if I told you that language model technology isn’t actually new? What if I told you it’s largely the same as it was in the 1980s, but the only thing that’s changed is the transformer technology, allowing for more efficient training, and the sheer size of the training data: the public internet. Yes, the thing that gives ChatGPT (and other LLMs) their “magic” is the fact that the internet exists now and can be scraped. (After it’s been manually catalogued by hundreds of thousands of foreign workers, of course (https://theglobeandmail.com/business/article-ai-data-gig-workers/).)

300 billion words from the internet were used to train ChatGPT-4. It’s the scale of this training dataset that allows it to sound so human and knowledgeable. There’s nothing else like it. Not only are major companies preventing their content from being used in future AI training datasets (https://deadline.com/2023/10/bbc-will-block-chatgpt-from-scraping-its-content-1235566868/) but there’s a lingering question on whether or not it was even legal for them to use their original datasets in the first place (https://theverge.com/2023/9/11/23869145/writers-sue-openai-chatgpt-copyright-claims). But worse than that, the internet is increasingly being polluted with error-ridden AI-generated content. So much so that it’s infecting search results (https://wired.com/story/fast-forward-chatbot-hallucinations-are-poisoning-web-search/)…”





Of course they do. We keep coming back to the same ‘government wants’ at every opportunity. Would I be required to own a smartphone in order to drive a car?

https://www.pogowasright.org/the-tsa-wants-to-put-a-government-tracking-app-on-your-smartphone/

The TSA wants to put a government tracking app on your smartphone

Edward Hasbrouck writes:

Today the Identity Project submitted our comments to the Transportation Security Administration (TSA) on the TSA’s proposed rules for “mobile driver’s licenses”.
The term “mobile driver’s license” is highly misleading. The model Electronic Credential Act drafted by the American Association of Motor Vehicle Administrators (AAMVA) to authorize the issuance of these digital credentials and installation (“provisioning”) of government-provided identification and tracking apps on individuals’ smartphones provides that, “The Electronic Credential Holder shall be required to have their Physical Credential on their person while operating a motor vehicle.”
So the purpose of “mobile driver’s licenses” isn’t actually licensing of motor vehicle operators, as one might naively assume from the name. Rather, the purpose of the “mobile drivers license” scheme is to create a national digital ID, according to standards controlled by the TSA, AAMVA, and other private parties, to be issued by state motor vehicle agencies but intended for use as an all-purpose government identifier linked to a smartphone and used for purposes unrelated to motor vehicles.

Read more at Papers, Please!





Because I’m interested in statistics…

https://www.schneier.com/blog/archives/2023/10/coin-flips-are-biased.html

Coin Flips Are Biased

Experimental result:

Many people have flipped coins but few have stopped to ponder the statistical and physical intricacies of the process. In a preregistered study we collected 350,757 coin flips to test the counterintuitive prediction from a physics model of human coin tossing developed by Persi Diaconis. The model asserts that when people flip an ordinary coin, it tends to land on the same side it started—Diaconis estimated the probability of a same-side outcome to be about 51%.

And the final paragraph:

Could future coin tossers use the same-side bias to their advantage? The magnitude of the observed bias can be illustrated using a betting scenario. If you bet a dollar on the outcome of a coin toss (i.e., paying 1 dollar to enter, and winning either 0 or 2 dollars depending on the outcome) and repeat the bet 1,000 times, knowing the starting position of the coin toss would earn you 19 dollars on average. This is more than the casino advantage for 6 deck blackjack against an optimal-strategy player, where the casino would make 5 dollars on a comparable bet, but less than the casino advantage for single-zero roulette, where the casino would make 27 dollars on average. These considerations lead us to suggest that when coin flips are used for high-stakes decision-making, the starting position of the coin is best concealed.
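A quick sanity check of the betting arithmetic: the expected profit on n one-dollar bets is n × (2p − 1), so the quoted 19 dollars per 1,000 bets implies a same-side probability of roughly 0.5095 (a value inferred from the quote, not stated above). A minimal sketch:

```python
# Expected profit from exploiting the same-side bias described above.
# p_same = 0.5095 is inferred from the quoted $19-per-1,000-bets figure;
# 0.51 is Diaconis's original estimate.

def expected_profit(p_same: float, n_bets: int, stake: float = 1.0) -> float:
    """Pay `stake` to enter; win 2*stake with probability p_same, else 0.
    Expected profit per bet is stake * (2*p_same - 1)."""
    return n_bets * stake * (2 * p_same - 1)

print(expected_profit(0.5095, 1000))  # ~19 dollars, matching the quote
print(expected_profit(0.51, 1000))    # 20 dollars at the 51% estimate
```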

Boing Boing post.





Perspective. Can you remember what you spent 4.8 hours per day doing as a teen?

https://news.gallup.com/poll/512576/teens-spend-average-hours-social-media-per-day.aspx

Teens Spend Average of 4.8 Hours on Social Media Per Day

Just over half of U.S. teenagers (51%) report spending at least four hours per day using a variety of social media apps such as YouTube, TikTok, Instagram, Facebook and X (formerly Twitter), a Gallup survey of more than 1,500 adolescents finds. This use amounts to 4.8 hours per day for the average U.S. teen across seven social media platforms tested in the survey.

Across age groups, the average time spent on social media ranges from as low as 4.1 hours per day for 13-year-olds to as high as 5.8 hours per day for 17-year-olds. Girls spend nearly an hour more on social media than boys (5.3 vs. 4.4 hours, respectively).





Perspective.

https://www.technologyreview.com/2023/10/16/1081149/ai-consciousness-conundrum/

Minds of machines: The great AI consciousness conundrum

At the breakneck pace of AI development, however, things can shift suddenly. For his mathematically minded audience, Chalmers got concrete: the chances of developing any conscious AI in the next 10 years were, he estimated, above one in five.

AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make. “Consciousness poses a unique challenge in our attempts to study it, because it’s hard to define,” says Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s. “It’s inherently subjective.”





Tools & Techniques.

https://dataconomy.com/2023/10/16/best-ai-tools-for-teachers/

12 game-changing AI tools for teachers (free)

AI tools for teachers have surged in prominence, redefining the educational landscape. Thanks to artificial intelligence, educators can now amplify the impact of their pedagogical methods, ensuring that every moment in the classroom is optimized.

Commencing our exploration, we spotlight some standout AI tools for teachers that promise to be game-changers in 2023. We believe these innovations don’t just augment the teaching process; they also empower students, fostering an environment ripe for enriched learning experiences.



Sunday, October 15, 2023

Any actor can play…

https://www.washingtonpost.com/technology/2023/10/14/propaganda-misinformation-israel-hamas-war-social-media/

A flood of misinformation shapes views of Israel-Gaza conflict

A WhatsApp voice memo purporting to have insider information ricocheted across hundreds of group chats in Israel early on Monday. The Israeli army was planning for another “battle like we’ve never experienced before,” the anonymous woman said in Hebrew, warning that people should prepare to lose access to food, water and internet service for a week.

Across the country, Israelis raced to the banks and to the grocery stores, anticipating another attack. But the message, the army clarified hours later on X, turned out to be a falsehood.

One week into the war between Israel and Gaza, social media is inducing a fog of war surpassing previous clashes in the region — one that’s shaping how panicked citizens and a global public view the conflict.



(Related) Perhaps there was insufficient data in the AI’s library to match the photo?

https://www.404media.co/ai-images-detectors-are-being-used-to-discredit-the-real-horrors-of-war/

AI Image Detectors Are Being Used to Discredit the Real Horrors of War

A free AI image detector that's been covered in the New York Times and the Wall Street Journal is currently identifying a photograph of what Israel says is a burnt corpse of a baby killed in Hamas’s recent attack on Israel as being generated by AI.

However, the image does not show any signs it was created by AI, according to Hany Farid, a professor at UC Berkeley and one of the world’s leading experts on digitally manipulated images.





Perspective.

https://www.zbw.eu/econis-archiv/handle/11159/605045

Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence

Very recently, due largely to breakthroughs in deep learning technologies, AI has begun stepping into the shoes of human content generators and making valuable creative works at scale. Before the end of the decade, a significant amount of art, literature, music, software, and web content will likely be created by AI rather than traditional human authors. Yet the law, as it has so often historically, lags this technological evolution by prohibiting copyright protection for AI-generated works. The predominant narrative holds that even if AI can automate creativity, this activity is not the right sort of thing to protect, and that protection could even harm human artists.

AI-generated works challenge beliefs about human exceptionalism and the normative foundations of copyright law, which until now have offered something for everyone. Copyright can be about ethics and authors and protecting the sweat of a brow and personality rights. Copyright can also be about the public interest and offering incentives to create and disseminate content. But copyright cannot have it all with AI authors—there is valuable output being generated, but by authors with no interests to protect.

This Article argues that American copyright law is, and has been traditionally, primarily about benefiting the public interest rather than benefiting authors directly. As a result, AI-generated works are precisely the sort of thing the system was designed to protect. Protection will encourage people to develop and use creative AI, which will result in the production and dissemination of new works. Taken further, attributing authorship to AI when an AI has functionally done the work of a traditional author will promote transparency, efficient allocations of rights, and even, counterintuitively, protect human authors.

AI-generated works also promise to radically impact other fundamental tenets of copyright law such as infringement, protection of style, and fair use. How the law should respond to AI activity has lessons more broadly for thinking about what rules should apply to people, machines, and other sorts of artificial authors.





Perspective.

https://www.geekwire.com/2023/robots-ai-and-the-future-of-labor-an-economic-opportunity-way-bigger-than-the-steam-engine/

Robots, AI, and the future of labor: An economic opportunity ‘way bigger than the steam engine’

The global conversation about robots and the workforce has shifted substantially in recent years, from widespread concerns about robots taking jobs to growing questions about how quickly they can fill gaps in the labor market.

One of the ventures at the forefront of this quest is Sanctuary AI. It’s a Vancouver, B.C.-based company that has raised more than 100 million Canadian dollars to pursue its vision for labor as a service. Sanctuary makes a 5-foot, 7-inch general-purpose humanoid robot called Phoenix, powered by an AI system called Carbon.





Perspective. Granted, AI will be faster than humans to see patterns in data. AI will likely find all the patterns in the data. Who selects the data?

https://www.psychologytoday.com/gb/blog/tech-happy-life/202310/the-ai-domino-effect-how-ai-will-soon-outsmart-us-all

The Domino Effect: How AI Will Soon Outsmart Us All

Artificial intelligence (AI) is a civilization-altering technology that is already changing our world in profound ways.

One thing capitalism is good at is making things better. We merely have to look back to our history of various technologies to see proof of how we improve them—rockets, televisions, video games, laptop computers, phones, etc. There is a powerful, profit-based incentive within our capitalist system to overcome any technical hurdles that stifle technological innovation and evolution. Since there are profits to be made, it's 100 percent guaranteed that capitalism will make AI much better than it is now. Importantly, "better" does not necessarily mean "good."