Tuesday, February 28, 2023

I suspect they tripped over a vulnerability rather than making a deliberate choice to hack the Marshals. Unfortunately, the results are the same.

https://www.cnn.com/2023/02/27/politics/us-marshals-service-ransomeware-attack/index.html

Ransomware attack on US Marshals Service affects ‘law enforcement sensitive information’

A ransomware attack on the US Marshals Service has affected a computer system containing “law enforcement sensitive information,” including personal information belonging to targets of investigations, a US Marshals Service spokesperson said Monday evening.

The Justice Department subsequently determined it “constitutes a major incident,” according to the statement. A “major incident” is a hack that is significant enough that it requires a federal agency to notify Congress.

Can we identify the irrelevant, then eliminate it? Perhaps we need another AI?

https://www.bespacific.com/large-language-models-can-be-easily-distracted-by-irrelevant-context/

Large Language Models Can Be Easily Distracted by Irrelevant Context

Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou. [current version in PDF]

“Large language models have achieved impressive performance on various natural language processing tasks. However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task. In this work, we investigate the distractibility of large language models, i.e., how the model problem-solving accuracy can be influenced by irrelevant context. In particular, we introduce Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant information in the problem description. We use this benchmark to measure the distractibility of cutting-edge prompting techniques for large language models, and find that the model performance is dramatically decreased when irrelevant information is included. We also identify several approaches for mitigating this deficiency, such as decoding with self-consistency and adding to the prompt an instruction that tells the language model to ignore the irrelevant information.”
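To make the two mitigations concrete, here is a minimal Python sketch of both: a prompt that tells the model to ignore irrelevant information, and self-consistency decoding (sample several completions, then majority-vote on the final answer). The `sample_completion` callable stands in for whatever LLM API you use, and the example problem and exact instruction wording are illustrative, not taken verbatim from the paper.

```python
import re
from collections import Counter
from typing import Callable

# Hypothetical GSM-IC-style problem: the brother's age is irrelevant
# to the question being asked, and can distract the model.
PROBLEM = (
    "Lucy has 8 apples. Her brother is 12 years old. "
    "She buys 5 more apples. How many apples does Lucy have?"
)

# Mitigation 1: prepend an instruction telling the model to ignore
# irrelevant information (wording paraphrased from the paper's idea).
PROMPT = (
    "Solve the grade-school math problem. Feel free to ignore any "
    "irrelevant information given in the question.\n\n"
    f"Question: {PROBLEM}\nAnswer:"
)

def last_number(text: str) -> str | None:
    """Treat the final number in a completion as its answer."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text)
    return matches[-1] if matches else None

# Mitigation 2: self-consistency decoding. Sample several completions
# (with temperature > 0) and take a majority vote over the answers.
def self_consistent_answer(
    sample_completion: Callable[[str], str],  # your LLM call (assumed interface)
    prompt: str,
    n_samples: int = 10,
) -> str | None:
    answers = []
    for _ in range(n_samples):
        completion = sample_completion(prompt)
        answer = last_number(completion)
        if answer is not None:
            answers.append(answer)
    if not answers:
        return None
    # The most common final answer across samples wins the vote.
    return Counter(answers).most_common(1)[0][0]
```

The intuition behind the vote: when irrelevant context derails a model, it tends to derail it in inconsistent ways across samples, so the wrong answers scatter while the correct one repeats and wins the majority.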


