In case you find this stuff interesting…
https://www.theverge.com/24214574/google-antitrust-search-apple-microsoft-bing-ruling-breakdown
‘There’s no price’ Microsoft could pay Apple to use Bing: all the spiciest parts of the Google antitrust ruling
… The ruling in United States v. Google is a lot to take in. Some of it was previously reported in the press over the course of the weekslong trial; but here, the judge has inadvertently compiled the trial’s greatest hits: catty quotes from executives, embarrassing internal studies, and a bunch of surprising deets about that multibillion-dollar contract that keeps Google the default search engine in Safari.
…and AI isn’t even a lawyer! (Yet)
https://www.nationalreview.com/corner/ai-could-make-the-google-court-decision-moot/
AI Could Make the Google Court Decision Moot
… What this means is a tale as old as the U.S. antitrust system: by the time the court gets around to imposing its remedy, the market has already solved the problem itself. This is one of the central problems with relying on antitrust, as Bob Crandall of the Brookings Institution wrote back in 2000, when Microsoft, the likely winner in today’s case, was the target. As Crandall notes,
In 1969 International Business Machines was charged with monopolizing the computer industry. But whatever IBM had done to incur the Justice Department’s wrath became irrelevant in a world in which personal computers and minicomputers were making deep inroads into the “mainframe” computer business. Thirteen years later, the government simply dropped the case.
It may well be that AI does the government’s job for it, not just making the default search engine in a Web browser irrelevant but making search engines as we know them today irrelevant altogether. And we’ll likely all be better off for it.
A law to stop the tides?
Can AI chatbots be reined in by a legal duty to tell the truth?
… LLM chatbots, such as ChatGPT, generate human-like responses to users’ questions, based on statistical analysis of vast amounts of text. But although their answers usually appear convincing, they are also prone to errors – a flaw referred to as “hallucination”.
“We have these really, really impressive generative AI systems, but they get things wrong very frequently, and as far as we can understand the basic functioning of the systems, there’s no fundamental way to fix that,” says Mittelstadt.
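
That “statistical analysis of vast amounts of text” is easier to grasp in miniature. Below is a toy Python sketch of my own (nothing like a real LLM, which uses a neural network rather than raw word counts): a bigram model that picks each next word purely by how often it followed the previous word in its training text. Because it tracks statistics rather than truth, it can splice two true training sentences into a fluent false one, the same failure mode, writ small, as hallucination.

    import random
    from collections import Counter, defaultdict

    # Two true "training" sentences. A real LLM trains on vastly more
    # text with a neural network; this toy uses raw bigram counts.
    corpus = ("google is the default search engine in safari . "
              "bing is the default search engine in edge .").split()

    # Count how often each word follows each other word.
    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def generate(word, length=7):
        # Repeatedly sample the next word in proportion to how often it
        # followed the current word in training. No step consults any
        # notion of truth; only the counts matter.
        out = [word]
        for _ in range(length):
            options = followers.get(word)
            if not options:
                break
            word = random.choices(list(options),
                                  weights=list(options.values()))[0]
            out.append(word)
        return " ".join(out)

    print(generate("google"))
    # Half the time: "google is the default search engine in safari" (true).
    # Half the time: "google is the default search engine in edge" --
    # fluent, never seen in training, and false: a tiny "hallucination".

Half the time this prints a sentence that appears nowhere in its training text and happens to be false. Scale that mechanism up by many orders of magnitude and you get answers that usually appear convincing yet remain prone to errors, exactly as the article says.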