Wednesday, May 10, 2023

I wonder about timing. Once the FBI knows how to take down a malware network, they can choose to do so at any time. Why now?

https://www.theregister.com/2023/05/09/fbi_operation_medusa_snake/

FBI-led Op Medusa slays NATO-bothering Russian military malware network

The FBI has cut off a network of Kremlin-controlled computers used to spread the Snake malware which, according to the Feds, has been used by Russia's FSB to steal sensitive documents from NATO members for almost two decades.

Turla, the FSB-backed cyberspy group, has used versions of the Snake malware to steal data from hundreds of computer systems belonging to governments, journalists, and other targets of interest in at least 50 countries, according to the US Justice Department. After identifying and stealing sensitive files on victims' devices, Turla exfiltrated them through a covert network of unwitting Snake-compromised computers in the US.





Are we too focused?

https://www.pogowasright.org/privacy-law-is-devouring-internet-law-and-other-doctrinesto-everyones-detriment/

Privacy Law Is Devouring Internet Law (and Other Doctrines)…To Everyone’s Detriment

Eric Goldman writes:

What does “privacy” mean? It’s a simple question that lacks a single answer, even from privacy experts. Without a universally shared definition of privacy, scholars have instead attempted to “define” privacy by taxonomizing problems that they think should fit under the privacy umbrella. However, this taxonomical approach to defining “privacy” has no natural boundary. Virtually every policy question could have privacy implications, so the privacy umbrella keeps expanding to account for those implications.
To privacy advocates, an ever-expanding scope for privacy law might sound like a good thing. For the rest of us, it’s unquestionably not a good thing. We don’t want privacy experts making policy decisions about topics outside their swimlanes. They lack the requisite expertise, so they will make serious and avoidable policy errors. Furthermore, in the inevitable balancing act between competing policy interests, they will overweight privacy considerations to the exclusion of other critical considerations. (This is a hammer/nail problem–if you’re a privacy hammer, everything looks like a privacy nail).

Read more at Technology & Marketing Law Blog.





Another “cause of America’s decline” has been debunked?

https://techcrunch.com/2023/05/09/american-psychology-org-releases-guidelines-for-kids-social-media-use/

American psychology group issues recommendations for kids’ social media use

The American Psychological Association (APA) issued its first ever health advisory on social media use Tuesday, addressing mounting concerns about how social networks designed for adults can negatively impact adolescents.

The report doesn’t denounce social media; instead, it asserts that online social networks are “not inherently beneficial or harmful to young people” but should be used thoughtfully.





An instantly memorable phrase…

https://apnews.com/article/jack-dorsey-jayz-music-streaming-block-inc-e1f511727b88dd4ef96e57dd921fc8d2

Judge nixes Block shareholder suit over online music deal

… “It seemed, by all accounts, a terrible business decision,” the judge said of Block’s acquisition of Tidal. “Under Delaware law, however, a board comprised of a majority of disinterested and independent directors is free to make a terrible business decision without any meaningful threat of liability, so long as the directors approve the action in good faith.”





“Use” certainly, “fair” unlikely.

https://www.bespacific.com/copyright-safety-for-generative-ai/

Copyright Safety for Generative AI

Sag, Matthew, Copyright Safety for Generative AI (May 4, 2023). Forthcoming in the Houston Law Review, Available at SSRN: https://ssrn.com/abstract=4438593 or http://dx.doi.org/10.2139/ssrn.4438593

Generative AI based on large language models such as ChatGPT, DALL·E-2, Midjourney, Stable Diffusion, JukeBox, and MusicLM can produce text, images, and music that are indistinguishable from human-authored works. The training data for these large language models consists predominantly of copyrighted works. This Article explores how generative AI fits within fair use rulings established in relation to previous generations of copy-reliant technology, including software reverse engineering, automated plagiarism detection systems, and the text data mining at the heart of the landmark HathiTrust and Google Books cases. Although there is no machine learning exception to the principle of non-expressive use, the scale of large language models suggests that they are capable of memorizing and reconstituting works in the training data, something that is incompatible with non-expressive use. At the moment, memorization is an edge case. For the most part, the link between the training data and the output of generative AI is attenuated by a process of decomposition, abstraction, and remix. Generally, pseudo-expression generated by large language models does not infringe copyright because these models “learn” latent features and associations within the training data; they do not memorize snippets of original expression from individual works. However, this Article identifies particular situations in the context of text-to-image models where memorization of the training data is more likely. The computer science literature suggests that memorization is more likely when: models are trained on many duplicates of the same work; images are associated with unique text descriptions; and the ratio of the size of the model to the training data is relatively large. This Article shows how these problems are accentuated in the context of copyrightable characters and proposes a set of guidelines for “Copyright Safety for Generative AI” to reduce the risk of copyright infringement.





Maybe because the label has been libeled?

https://sloanreview.mit.edu/article/business-leaders-need-to-rise-above-anti-woke-attacks/

Business Leaders Need to Rise Above Anti-Woke Attacks

As the debate over the word woke rages on, business leaders are grappling with the meaning and connotations of the term. (Spoiler alert: Woke means being aware of inequity and injustice.) Many conservative CEOs have followed the lead of politicians in using the label as a weapon, accusing others of contracting the “woke mind virus” or claiming that caring about “woke diversity” ignores the economy’s bottom line. But even politically moderate CEOs have become quick to reject the label.

… Why do corporate leaders hesitate to embrace the woke label when being aware of inequity and injustice aligns with growing commitments to socially conscious business? One issue is the term’s evolution. In its 21st-century usage, woke emerged as a watchword for Black Americans in the fight against police brutality and racial discrimination, but in recent years the term has been transformed into a cudgel for the conservative right to fight culture wars. Recent polling finds that Americans generally understand that woke means “being informed, educated on, and aware of social injustices,” and not “being overly politically correct.” But they are also slightly more likely to view being called woke as an insult, not a compliment.





Tools & Techniques. Perhaps it could recommend the appropriate fly based on the stream, date and time of day?

https://www.trendmicro.com/en_us/devops/23/e/build-simple-application-with-chatgpt.html

How to Build a Simple Application Powered by ChatGPT

… This tutorial demonstrates how to use ChatGPT to create a chatbot that helps users find new books to read. The bot will ask users about their favorite genres and authors, then generate recommendations based on their responses.
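The tutorial’s own code isn’t reproduced here, but a minimal sketch of that kind of bot, assuming the pre-1.0 “openai” Python package and an OPENAI_API_KEY environment variable (both assumptions, not details from the article), might look like this:

# A minimal sketch of a book-recommendation chatbot, assuming the pre-1.0
# "openai" Python package and an OPENAI_API_KEY environment variable.
# The tutorial's actual code and model choice may differ.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A system prompt steers the model toward the book-recommender role.
messages = [
    {
        "role": "system",
        "content": (
            "You are a friendly book-recommendation assistant. Ask the user "
            "about their favorite genres and authors, then suggest new books "
            "with a one-sentence reason for each."
        ),
    }
]

print("Book bot ready. Type 'quit' to exit.")
while True:
    user_input = input("You: ")
    if user_input.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": user_input})

    # Send the full conversation so the model keeps context between turns.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    print(f"Bot: {reply}")

Keeping the running message list is the whole trick: the model is stateless, so each request resends the conversation so far, which is what lets the bot remember the genres and authors the user already mentioned.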


