Thursday, April 27, 2023

Interesting, but everyone involved needs to know what ‘filters’ are in place.

https://www.engadget.com/palantir-shows-off-an-ai-that-can-go-to-war-180513781.html

Palantir shows off an AI that can go to war

… “LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way,” the video begins. To do so, AIP’s operation is based on three “key pillars,” the first being that AIP will deploy across a classified system, able to parse both classified and non-classified data in real time, ethically and legally. The company did not elaborate on how that would work. The second pillar is that users will be able to toggle the scope and actions of every LLM and asset on the network. The AIP itself will generate a secure digital record of the entire operation, “crucial for mitigating significant legal, regulatory, and ethical risks in sensitive and classified settings,” according to the demo. The third pillar is AIP’s “industry-leading guardrails” to prevent the system from taking unauthorized actions.

A "human in the loop" to prevent such actions does exist in Palantir's scenario, though from the video, the "operator" appears to do little more than nod along with whatever AIP suggests. The demo also did not elaborate on what steps are being taken to prevent the LLMs that the system relies on from "hallucinating" pertinent facts and details.





A new can of worms! Is this something Trump will use?

https://www.reuters.com/legal/elon-or-deepfake-musk-must-face-questions-autopilot-statements-2023-04-26/

Elon, or deepfake? Musk must face questions on Autopilot statements

A California judge on Wednesday ordered Tesla CEO Elon Musk to be interviewed under oath about whether he made certain statements regarding the safety and capabilities of the carmaker’s Autopilot features.

… Musk will likely be asked about a 2016 statement cited by plaintiffs, in which he allegedly said: “A Model S and Model X, at this point, can drive autonomously with greater safety than a person. Right now.”

Tesla opposed the request in court filings, arguing that Musk cannot recall details about the statements.

In addition Musk, “like many public figures, is the subject of many ‘deepfake’ videos and audio recordings that purport to show him saying and doing things he never actually said or did,” Tesla said.

… “Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune,” Pennypacker wrote, adding that such arguments would allow Musk and other famous people “to avoid taking ownership of what they did actually say and do.”





What better authority?

https://www.bespacific.com/role-of-chatgpt-in-law-according-to-chatgpt/

Role of chatGPT in Law: According to chatGPT

Biswas, Som, Role of chatGPT in Law: According to chatGPT (March 30, 2023). Available at SSRN: https://ssrn.com/abstract=4405398 or http://dx.doi.org/10.2139/ssrn.4405398

ChatGPT is a language model developed by OpenAI that can provide support to paralegals and legal assistants in various tasks. Some of the uses of ChatGPT in the legal field include legal research, document generation, case management, document review, and client communication. However, ChatGPT also has limitations that must be taken into consideration, such as limited expertise, a lack of contextual understanding, the risk of bias in its responses, the potential for errors, and the fact that it cannot provide legal advice. While ChatGPT can be a valuable tool for paralegals and legal assistants, it is important to understand its limitations and use it in conjunction with the expertise and judgment of licensed legal professionals. The author acknowledges asking ChatGPT questions regarding its uses for law. Some of the uses it describes are possible now; others are potentials for the future. The author has analyzed and edited ChatGPT’s replies.





Outsider trading?

https://www.bloomberg.com/news/articles/2023-04-26/jpmorgan-s-ai-puts-25-years-of-federal-reserve-talk-into-a-hawk-dove-score

JPMorgan Creates AI Model to Analyze 25 Years of Fed Speeches

A week before the Federal Reserve’s next meeting, JPMorgan Chase & Co. unveiled an artificial intelligence-powered model that aims to decipher the central bank’s messaging and uncover potential trading signals.

Based on Fed statements and central-banker speeches going back 25 years, the firm’s economists, including Joseph Lupton, employed a ChatGPT-based language model to detect the tenor of policy signals, effectively rating them on a scale from easy to restrictive in what JPMorgan is calling the Hawk-Dove Score.

Plotting the index against a range of asset performances, the economists found that the AI tool may be useful in predicting changes in policy and in generating tradeable signals. For instance, they discovered that when the model shows a rise in hawkishness among Fed speakers between meetings, [Do they keep the Fed speakers under surveillance? Bob] the next policy statement has gotten more hawkish and yields on one-year government bonds have advanced.
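The mechanics are simple enough to sketch: ask an LLM to rate each statement on a dovish-to-hawkish scale, then average the ratings into an index. Below is a minimal illustration assuming the (pre-1.0) openai Python SDK; the prompt, the -1-to-1 scale, and the model choice are my guesses, not JPMorgan’s published methodology.

# Illustrative sketch only: JPMorgan's actual model, prompts, and scale
# are not public. Assumes the pre-1.0 openai Python SDK.
import openai

PROMPT = (
    "Rate the following Federal Reserve statement on monetary policy stance "
    "from -1 (very dovish/easy) to 1 (very hawkish/restrictive). "
    "Reply with a single number.\n\n{text}"
)

def hawk_dove_score(statements, model="gpt-3.5-turbo"):
    """Average per-statement hawkishness ratings into one index value."""
    scores = []
    for s in statements:
        resp = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT.format(text=s)}],
            temperature=0,
        )
        try:
            scores.append(float(resp.choices[0].message.content.strip()))
        except ValueError:
            continue  # skip replies that aren't a bare number
    return sum(scores) / len(scores) if scores else 0.0

# e.g. hawk_dove_score(["Inflation remains elevated.", "Rate cuts may be appropriate."])

Whether such averaged scores actually lead policy statements and yields, as the article suggests, is an empirical question a sketch like this cannot answer.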





A report from Stanford and Georgetown.

https://fsi9-prod.s3.us-west-1.amazonaws.com/s3fs-public/2023-04/adversarial_machine_learning_and_cybersecurity_v7_pdf_1.pdf

Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications

… This report is meant to accomplish two things. First, it provides a high-level discussion of AI vulnerabilities, including the ways in which they are disanalogous to other types of vulnerabilities, and the current state of affairs regarding information sharing and legal oversight of AI vulnerabilities. Second, it attempts to articulate broad recommendations as endorsed by the majority of participants at the workshop.


