Wednesday, May 19, 2021

As they say in Moscow, “Это может повредить.” (“This could hurt.”) I agree.

https://krebsonsecurity.com/2021/05/try-this-one-weird-trick-russian-hackers-hate/

Try This One Weird Trick Russian Hackers Hate

In a Twitter discussion last week on ransomware attacks, KrebsOnSecurity noted that virtually all ransomware strains have a built-in failsafe designed to cover the backsides of the malware purveyors: They simply will not install on a Microsoft Windows computer that already has one of many types of virtual keyboards installed — such as Russian or Ukrainian. So many readers had questions in response to the tweet that I thought it was worth a blog post exploring this one weird cyber defense trick.

Will installing one of these languages keep your Windows computer safe from all malware? Absolutely not. There is plenty of malware that doesn’t care where in the world you are. And there is no substitute for adopting a defense-in-depth posture, and avoiding risky behaviors online.

But is there really a downside to taking this simple, free, prophylactic approach? None that I can see, other than perhaps a sinking feeling of capitulation. The worst that could happen is that you accidentally toggle the language settings and all your menu options are in Russian.
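The check Krebs describes is simple to picture: on Windows, each installed keyboard layout is identified by a handle (HKL) whose low 16 bits are a language ID, and malware reportedly walks that list looking for certain IDs before deciding whether to run. A minimal sketch of that logic — the language-ID set here is illustrative, not drawn from any particular malware sample:

```python
# Sketch of the keyboard-layout check many ransomware families are
# reported to perform. Language IDs below are illustrative examples.
import ctypes
import sys

# Windows LANGID values (the low word of a keyboard-layout handle, HKL)
CIS_LANGIDS = {
    0x0419,  # Russian
    0x0422,  # Ukrainian
    0x0423,  # Belarusian
    0x043F,  # Kazakh
}

def langid_from_hkl(hkl: int) -> int:
    """The low 16 bits of an HKL are the input-language identifier."""
    return hkl & 0xFFFF

def has_cis_layout(hkls) -> bool:
    """True if any installed layout matches one of the language IDs."""
    return any(langid_from_hkl(h) in CIS_LANGIDS for h in hkls)

def installed_layouts():
    """Enumerate installed keyboard layouts via user32 (Windows only)."""
    user32 = ctypes.WinDLL("user32")
    n = user32.GetKeyboardLayoutList(0, None)
    buf = (ctypes.c_void_p * n)()
    user32.GetKeyboardLayoutList(n, buf)
    return [h or 0 for h in buf]

if __name__ == "__main__" and sys.platform == "win32":
    print("CIS layout present:", has_cis_layout(installed_layouts()))
```

Adding the Russian keyboard through Windows language settings puts a matching HKL in that list — which is the whole trick.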





Always amusing.

https://thenextweb.com/news/how-much-your-stolen-personal-data-is-worth-on-the-dark-web-syndication?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TheNextWeb+%28The+Next+Web+All+Stories%29

Here’s how much your stolen personal data is worth on the dark web





What to expect when an AI runs for office?

https://www.axios.com/gpt-3-disinformation-artificial-intelligence-c6ea11f7-b7eb-474d-b577-14731ffdbfa4.html

The disinformation threat from text-generating AI

A new report lays out the ways that cutting-edge text-generating AI models could be used to aid disinformation campaigns.

Why it matters: In the wrong hands, text-generating systems could be used to scale up state-sponsored disinformation efforts — and humans would struggle to know when they're being lied to.

How it works: Text-generating models like OpenAI's leading GPT-3 are trained on vast volumes of internet data, and learn to write eerily lifelike text from human prompts.

  • In their new report released this morning, researchers from Georgetown's Center for Security and Emerging Technology (CSET) examined how GPT-3 might be used to turbocharge disinformation campaigns like the one carried out by Russia's Internet Research Agency (IRA) during the 2016 election.



(Related) Learning to lie or detect lies. Either could be useful.

https://www.bespacific.com/mit-detect-political-fakes/

MIT – Detect Political Fakes

MIT Media Lab: “Did he say that? At Detect Political Fakes, we will show you a variety of media snippets (transcripts, audio, and videos). Half of the media snippets are real statements made by Joseph Biden and Donald Trump. The other half of the media snippets are fabricated. The media snippets that are fabricated are produced using deepfake technology. We are asking you to share how confident you are that a media snippet is real or fabricated.

Instructions – We will show you a variety of media snippets including transcripts, audio files, and videos. Sometimes, we include subtitles. Sometimes, the video is silent. You can watch the videos as many times as you would like. Please share how confident you are that the individual really said what we show. If you have seen the video before today, please select the checkbox that says “I’ve already seen this video.” And remember, half of the media snippets that we present are statements that the individual actually said…”





Is it worth noting that even the lawyers are confused?

https://abovethelaw.com/2021/05/just-calling-a-product-artificial-intelligence-isnt-good-enough/

Just Calling A Product ‘Artificial Intelligence’ Isn’t Good Enough

The many, many definitions of AI Contract Review.





Ignore all those primary sources, let your AI tell you what’s what.

https://www.bespacific.com/rethinking-search-making-experts-out-of-dilettantes/

Rethinking Search: Making Experts out of Dilettantes

MIT Technology Review: “…a team of Google researchers has published a proposal for a radical redesign that throws out the ranking approach and replaces it with a single large AI language model—a future version of BERT or GPT-3. The idea is that instead of searching for information in a vast list of web pages, users would ask questions and have a language model trained on those pages answer them directly. The approach could change not only how search engines work, but how we interact with them. Many issues with existing language models will need to be fixed first. For a start, these AIs can sometimes generate biased and toxic responses to queries—a problem that researchers at Google and elsewhere have pointed out... Metzler and his colleagues are interested in a search engine that behaves like a human expert. It should produce answers in natural language, synthesized from more than one document, and back up its answers with references to supporting evidence, as Wikipedia articles aim to do. ..”

Source – Cornell University arXiv:2105.02274 – Rethinking Search: Making Experts out of Dilettantes, Authors: Donald Metzler, Yi Tay, Dara Bahri, Marc Najork: Abstract – When experiencing an information need, users want to engage with an expert, but often turn to an information retrieval system, such as a search engine, instead. Classical information retrieval systems do not answer information needs directly, but instead provide references to (hopefully authoritative) answers. Successful question answering systems offer a limited corpus created on-demand by human experts, which is neither timely nor scalable. Large pre-trained language models, by contrast, are capable of directly generating prose that may be responsive to an information need, but at present they are dilettantes rather than experts – they do not have a true understanding of the world, they are prone to hallucinating, and crucially they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over. This paper examines how ideas from classical information retrieval and large pre-trained language models can be synthesized and evolved into systems that truly deliver on the promise of expert advice.”
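The gap the abstract draws — classical systems return references, language models generate prose — is easy to see in a toy retriever. A minimal TF-IDF ranking sketch over an illustrative three-document corpus (stdlib only; not the paper's method, just the "classical" baseline it contrasts against):

```python
# Toy classical IR: rank documents by TF-IDF overlap with a query and
# return *references* to documents, not direct answers.
import math
from collections import Counter

DOCS = {  # illustrative corpus
    "doc1": "search engines rank web pages by relevance",
    "doc2": "language models generate prose from prompts",
    "doc3": "classical retrieval returns references to documents",
}

def tokenize(text):
    return text.lower().split()

def rank(query, docs):
    n = len(docs)
    df = Counter()  # document frequency of each term
    for text in docs.values():
        df.update(set(tokenize(text)))
    scores = {}
    for name, text in docs.items():
        tf = Counter(tokenize(text))
        score = 0.0
        for term in tokenize(query):
            if term in tf:
                score += tf[term] * math.log(n / df[term])  # tf * idf
        scores[name] = score
    # highest-scoring references first
    return sorted(scores, key=scores.get, reverse=True)

print(rank("retrieval references", DOCS))  # doc3 ranks first
```

A generative system would instead answer the query directly in prose — which is exactly where the authors locate both the promise (expert-style answers) and the risk (hallucination, no supporting citations).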





Who knew dogs were such techies?

https://www.theregister.com/2021/05/19/woof_woof_whos_a_good/

Australian Federal Police hiring digital evidence retrieval specialists: Being a very good boy and paws required

Hounds can sniff out SIM cards that a human might miss


