Could you deliberately create exculpatory evidence in your chats?
Major law firms are warning clients: anything you type into an AI chatbot can be used against you in court…
Reuters: “As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities charges against him.
In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. “We are telling our clients: You should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim. People’s discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private…”
I see pros and cons.
What Happens If America Nationalizes AI?
AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.
Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.
Clearly a system design failure, compounded by the absence of any process for fixing it.
He didn’t commit a crime, but Flock cam alerts keep getting him pulled over
Kyle Dausman was just driving through Cherry Hills Village when officers pulled him over without warning. Officers thought he had a warrant attached to his vehicle. He didn't. They released him.
A few days later, he was pulled over again by one of the same Cherry Hills Village police officers. Same thing. The officer quickly recognized him and let him go.
… Lyons said the warrant traces back to a Gilpin County case and a court data entry error that confused Dausman's plate with the similar plate of a wanted man.
Lyons believes the root cause is a data entry issue involving Colorado license plates, which use both the letter O and the numeral zero.
"In Colorado data entry, we use both zeros and O's in license plates," Lyons said. "Sometimes the data entry will be for both."
He said the warrant returned hits when Dausman's plate was searched either way.
"They entered it for both," Lyons said. "It wasn't a mistake, one or the other. They just entered it for both an O and a zero, because we've run it both ways and the warrant pops up both ways."
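The failure mode Lyons describes can be sketched in a few lines. Because the warrant was entered under both the letter-O and digit-zero spellings of the suspect's plate, an exact-match lookup hits either way, including on an innocent driver whose plate legitimately uses the other glyph. (The plate values and warrant ID below are hypothetical, not the actual plates in the case.)

```python
# Hypothetical sketch of the failure mode: a warrant entered under both
# the digit-zero and letter-O variants of a plate will match an innocent
# driver whose plate legitimately contains the other character.

# Suspect's actual plate (illustrative value only)
suspect_plate = "ABC0123"        # digit zero
ambiguous_variant = "ABCO123"    # letter O, entered "just in case"

# Warrant records keyed by plate string, with both variants entered
warrants = {
    suspect_plate: "WARRANT-4567",
    ambiguous_variant: "WARRANT-4567",
}

def check_plate(plate: str):
    """Exact-match lookup, as an automated plate-reader alert might do."""
    return warrants.get(plate)

# Innocent driver whose plate genuinely uses the letter O
print(check_plate("ABCO123"))  # hits the warrant entered for the O variant
print(check_plate("ABC0123"))  # hits the warrant entered for the zero variant
```

A safer design would store one canonical record and flag the O/0 ambiguity as metadata for human review, rather than entering two records that both trigger automated stops.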
Dausman said he tried to resolve the problem by contacting Gilpin County courts and the sheriff's office dispatch, and was told he needed to provide the name of the suspect tied to the warrant — information no one would give him because it involves an ongoing criminal investigation.