Saturday, December 11, 2021

Now we’re getting serious. Nothing is worse than a bagel without its schmear! (Perhaps they need better lox?)

https://gizmodo.com/ransomware-jerks-helped-cause-the-cream-cheese-shortage-1848195368

Ransomware Jerks Helped Cause the Cream Cheese Shortage

Following attacks on our hospitals, municipal governments, and fuel supplies, hackers have finally gone too far: They fucked with America’s cream cheese.

There’s been a serious shortage of cream cheese in recent weeks—one of the many seemingly random products that have come into short supply amid widespread supply chain disruption and labor shortages. According to Bloomberg, in this instance, hackers played a role. In mid-October, cheese giant Schreiber Foods (which has a cream cheese unit comparable to industry leader Kraft’s) was forced to close for several days due to a cyber attack. The hack coincided with the annual height of the U.S. cream cheese season—think cheesecakes—on top of demand that was already high due to workers remaining home during the pandemic, Bloomberg wrote.



Less serious? Only life or death…

https://www.zdnet.com/article/brazilian-ministry-of-health-suffers-cyberattack-and-covid-19-vaccination-data-vanishes/

Brazilian Ministry of Health suffers cyberattack and COVID-19 vaccination data vanishes

Hackers claimed to have copied and deleted 50 TB worth of data from internal systems.



Update.

https://www.cpomagazine.com/cyber-security/colorado-energy-company-suffered-a-cyber-attack-destroying-25-years-of-data-and-shut-down-internal-controls/

Colorado Energy Company Suffered a Cyber Attack Destroying 25 Years of Data and Shut Down Internal Controls

Delta-Montrose Electric Association (DMEA) suffered a malicious cyber attack that shut down 90% of its internal controls and wiped 25 years of historical data.

DMEA says the cyber attack started on November 7 before spreading and affecting internal systems, support systems, payment processing tools, billing platforms, and other customer-facing tools.

… “By the way, a large percentage of the smaller, distribution-level electric cooperatives are immune from cyber-attack since they don’t use automation for their operational technology.”

Lawrence, however, noted that the energy company failed to officially report the cyber attack as a ransomware incident despite the evidence. Ransomware attacks cause reputational damage to the victims, and many are hesitant to admit experiencing them.



Podcast. (40 min.) Can federal agencies have ‘proprietary’ data?

https://governmentciomedia.com/ai-and-predictive-analytics

AI and Predictive Analytics

Newfound AI capacities have allowed federal agencies to leverage proprietary data towards predictive modeling, allowing them to more effectively deliver services and act upon their core mission. Hear from AI specialists on how their agencies are leveraging data to create models that further essential research and analysis.



Are we relying on high school level AI?

https://futurism.com/deepmind-ai-reading-comprehension

DeepMind Says Its New AI Has Almost the Reading Comprehension of a High Schooler

Alphabet’s AI research company DeepMind has released the next generation of its language model, and it says that it has close to the reading comprehension of a high schooler — a startling claim.

Such a system could allow us to “safely and efficiently to summarize information, provide expert advice and follow instructions via natural language,” according to a statement.



What does AI know about the future of AI? The AI actually argued both sides...

https://theconversation.com/we-invited-an-ai-to-debate-its-own-ethics-in-the-oxford-union-what-it-said-was-startling-173607

We invited an AI to debate its own ethics in the Oxford Union – what it said was startling

It’s natural, therefore, that we would include the ethics of AI in our postgraduate Diploma in Artificial Intelligence for Business at Oxford’s Said Business School. In its first year, we’ve done sessions on everything from the AI-driven automated stock trading systems in Singapore, to the limits of facial recognition in US policing.

We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute.

It was the Megatron Transformer, developed by the Applied Deep Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes worth of Reddit discourse (which must be a pretty depressing read), and a huge number of creative commons sources.

In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views.

The debate topic was: “This house believes that AI will never be ethical.” To proposers of the notion, we added the Megatron – and it said something fascinating:

AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.


(Related) A human view.

https://www.healthcareitnews.com/news/uc-berkeleys-ziad-obermeyer-optimistic-about-algorithms

UC Berkeley's Ziad Obermeyer is optimistic about algorithms

As an associate professor at University of California, Berkeley, Dr. Ziad Obermeyer has made waves throughout the healthcare informatics industry with his work on machine learning, public policy and computational medicine.

In 2019, he was the lead author on a paper published in Science showing that a widely used population health algorithm exhibits significant racial bias.

In recent years, the subject of identifying and confronting bias in machine learning has continued to emerge in healthcare spaces.

Obermeyer, who will present at the HIMSS Machine Learning for AI and Healthcare event next week – alongside Michigan State University Assistant Professor Mohammad Ghassemi, Virginia Commonwealth University Assistant Professor Shannon Harris and HIMSS Outside Counsel Karen Silverman – sat down with Healthcare IT News to discuss how stakeholders can take bias into consideration when developing algorithms and why he feels optimistic about artificial intelligence.


Friday, December 10, 2021

I remind you: War is an economic event.

https://www.nasdaq.com/articles/exclusive-imf-10-countries-simulate-cyber-attack-on-global-financial-system

EXCLUSIVE-IMF, 10 countries simulate cyber attack on global financial system

Israel on Thursday led a 10-country simulation of a major cyber attack on the global financial system in an attempt to increase cooperation that could help to minimise any potential damage to financial markets and banks.

The simulated cyber attack evolved over 10 days, with sensitive data emerging on the Dark Web along with fake news reports that ultimately caused chaos in global markets and a run on banks.

The simulation featured several types of attacks that impacted global foreign exchange and bond markets, liquidity, integrity of data and transactions between importers and exporters.

"These events are creating havoc in the financial markets," said a narrator of a film shown to the participants as part of the simulation and seen by Reuters.

"Attackers are 10 steps ahead of the defender," Micha Weis, financial cyber manager at Israel's Finance Ministry, told Reuters.

The narrator of the film in the simulation said governments were under pressure to clarify the impact of the attack, which was paralysing the global financial system.



Any information would be available to both prosecution and defense, right? Does knowing what they might have change your request for data?

https://www.pogowasright.org/the-worrying-expansion-of-the-social-media-surveillance-industrial-complex/

The Worrying Expansion of the Social Media Surveillance-Industrial Complex

Sinclair Cook and Michael DelRossi are doing a deeper dive into the proliferation of social media surveillance tools, with a special focus on their use by government agencies. They write, in part:

That’s why we are submitting FOIA requests to CID and the FBI today. CID has contracted for Clearview AI and acknowledged its use publicly, but how and for what purpose remain unclear. Similarly, the precise nature of the services for which the FBI contracts with Dataminr and ZeroFox is still undisclosed. Learning the scope of the services provided under these contracts is necessary to inform the public of potential burdens to their First Amendment rights and to address concerns of bias built into the tools the government uses for social media surveillance.

Read more at Knight First Amendment Institute.



How would this be different from looking at a few hours of video?

https://commonwealthmagazine.org/criminal-justice/murder-defendant-challenges-police-use-of-tower-dump/

Murder defendant challenges police use of ‘tower dump’

IN 2018, law enforcement officers were investigating five armed robberies and a sixth attempt in Dorchester, Mattapan, and Canton, one of which led to a fatal shooting. The police believed the same perpetrator, with a getaway driver, committed all the robberies, but they didn’t have a suspect. So they obtained search warrants for cell phone data from the towers closest to the robberies, on a suspicion that the same cell phone would have been in the vicinity of each robbery.

Through these warrants, the police obtained information about 50,951 unique phone numbers. They used the data to identify Jerron Perry and Gregory Simmons as suspects. After obtaining additional warrants to search the men’s phones, homes, and cars, both were arrested.

Perry, who was indicted for murder and other charges, is now challenging police use of data obtained from a so-called “tower dump.” His case, which will be heard by the Supreme Judicial Court on Wednesday, has the potential to regulate or even eliminate the police’s ability to obtain mass location data from cell phone towers.

A number of privacy organizations have filed briefs in the case arguing that the concept of a “tower dump” is unconstitutional, and that investigators are casting too wide a net in gathering information.



Funny how “normal” keeps changing.

https://www.latimes.com/world-nation/story/2021-12-09/the-pandemic-brought-heightened-surveillance-to-save-lives-is-it-here-to-stay

Who’s watching? How governments used the pandemic to normalize surveillance

… “The idea that you have any kind of anonymity is rapidly disappearing, in public spaces but also in private life,” said Steven Feldstein, a senior fellow at the Carnegie Endowment for International Peace who focuses on democracy and technology. “The way my kids now are being tracked, their medical information, the music they stream, what they watch, all of that is noted and recorded, and accessed in different ways.”



Bad AI, bad!

https://dronedj.com/2021/12/09/future-of-life-institute-continues-call-to-ban-ai-powered-drone-swarms/

Future of Life Institute continues call to ban AI-powered drone swarms

A new Orwellian video released by the Future of Life Institute calls on the United Nations to ban AI-powered drone swarms. Showcasing a hypothetical future, the video’s been watched more than 3 million times – and even artificial intelligence pioneer Elon Musk seems alarmed.

While the video may not be real, it raises important questions about the future of drone-based weapons and the regulations surrounding them.

New Zealand is pushing for a ban; the superpowers are not. It’s the old argument: if one country doesn’t develop the weapons, its rivals will.

But Professor Max Tegmark, the co-founder of FLI and AI researcher at MIT, told Forbes the argument isn’t valid. Other weapons of mass destruction like chemical and biological agents have been successfully outlawed:


(Related)

https://www.foreignaffairs.com/articles/united-states/2021-12-10/soon-hackers-wont-be-human

Soon, the Hackers Won’t Be Human

While the motley crew of cybercriminals and state-sponsored hackers who constitute the offense has not yet widely adopted artificial intelligence techniques, many AI capabilities are accessible with few restrictions. If traditional cyberattacks begin to lose their effectiveness, the offense won’t hesitate to reach for AI-enabled ones to restore its advantage—evoking worst-case future scenarios in which AI-enabled agents move autonomously through networks, finding and exploiting vulnerabilities at unprecedented speed. Indeed, some of the most damaging global cyberattacks, such as the 2017 NotPetya attack, incorporated automated techniques, just not AI ones. These approaches rely on prescriptive, rules-based techniques, and lack the ability to adjust tactics on the fly, but can be considered the precursors of fully automated, “intelligent” agent–led attacks.



Interesting but myopic. What will we miss with all this concentration on Facebook?

https://www.bespacific.com/a-foia-for-facebook-meaningful-transparency-for-online-platforms/

A FOIA for Facebook: Meaningful Transparency for Online Platforms

Karanicolas, Michael, A FOIA for Facebook: Meaningful Transparency for Online Platforms (November 16, 2021). 66 St. Louis University Law Journal (Forthcoming), Available at SSRN: https://ssrn.com/abstract=3964235 or http://dx.doi.org/10.2139/ssrn.3964235

“Transparency has become the watchword solution for a range of social challenges, including those related to content moderation and platform power. Obtaining accurate information about how platforms operate is a gatekeeping problem, which is essential to meaningful accountability and engagement with these new power structures. However, different stakeholders have vastly different ideas of what robust transparency should look like, depending on their area of focus. The platforms, for their part, have their own understanding of transparency, which is influenced by a natural drive to manage public perceptions. This paper argues for a model of platform transparency based on better practice standards from global freedom of information or right to information systems. The paper argues that moves by platforms to assume responsibility over the truth or falsity of the content they host and amplify justifies a shift in how we understand their obligations of transparency and accountability, away from traditional self-reporting structures and towards a quasi-governmental standard where data is “open by default.” This change in posture includes creating a mechanism to process information requests from the public, to accommodate the diverse needs of different stakeholders. The paper also suggests establishing a specialized quasi-independent entity (a “Facebook Transparency Board”) which could play a role analogous to an information commission, including overseeing disclosure decisions and acting as a broader champion of organizational transparency. Although these changes represent a significant conceptual shift, they are not entirely unprecedented among private sector entities whose role includes a significant public function, and the paper notes a number of examples, such as the Internet Corporation for Assigned Names and Numbers’ Documentary Information Disclosure Policy, which could serve as a model for the platforms to follow.”



Perspective.

https://www.ft.com/content/a79c9f8f-ca0d-4cb1-9f41-86da4d2e64f9

Law firms focus on digital skills to ease legal pressures

Far from the jury boxes and witness stands, the mundane but vital administrative tasks that law firms must grapple with have become one of the hottest areas in legal tech.

Like the corporations they represent, lawyers have been forced to embrace new technologies to survive. And artificial intelligence and machine learning tools are becoming ever more of a necessity for law firms, which traditionally have been slow to invest in tech upgrades.

This tech transition has become a boon for some early movers, such as CS Disco, a Texas-based technology provider for law firms. Disco listed in July, becoming one of a few standalone lawtech public companies. Founded in 2013 by a computer scientist-turned-lawyer, the company says it wagered there had to be a better way to automate much of the work that goes into legal document review.



Perspective.

https://www.bespacific.com/workers-are-using-mouse-movers-so-they-can-use-the-bathroom-in-peace/

Workers Are Using ‘Mouse Movers’ So They Can Use the Bathroom in Peace

Vice: “…At the beginning of the pandemic almost two years ago, there was much speculation about how the global crisis of COVID-19 would bring a newfound appreciation for how short life is, and how no one really wants to spend it chained to a desk. Out of that, we got the “Great Resignation” with people leaving their jobs in record numbers, and a new word for micromanagers of remote workers: Bossware. Bossware is spyware from your boss. Some companies make employees use keyboard or mouse-tracking software to ensure that they’re working every moment they’re on the clock, even if they’re at home. Even if managers aren’t spying on your mouse, chat apps quickly turn users’ activity bubbles to “away” when they’re inactive for a short time, like in Leah’s case. The Electronic Frontier Foundation denounced bossware as being invasive, unnecessary and unethical, and the Center for Democracy and Technology called it out as being actively detrimental to employees’ health, demanding that the Occupational Safety and Health Administration update its policies on worker safety to include at-home workers…”


(Related)

https://www.makeuseof.com/why-you-hate-working-remotely-make-it-better/

5 Reasons Why You Hate Working Remotely, and How to Make It Better

It's hard to make remote working a success if you don't enjoy it. Here are some tips and digital tools you can use to get back on track.


Thursday, December 09, 2021

I guess Cyber Command could not retaliate quickly or easily. I’ll have to rethink my “nuke ‘em all” strategy.

https://www.theregister.com/2021/12/08/canadian_man_ransomware_alaska_charged/

Canadian charged with running ransomware attack on US state of Alaska

A Canadian man is accused of masterminding ransomware attacks that caused "damage" to systems belonging to the US state of Alaska.

A federal indictment against Matthew Philbert, 31, of Ottawa, was unsealed yesterday, and he was concurrently charged by the Canadian authorities with a number of other criminal offences. US prosecutors [PDF] claimed he carried out "cyber related offences" – including a specific 2018 attack on a computer in Alaska.

The Canadian Broadcasting Corporation reported that Philbert was charged after a 23-month investigation "that also involved the [Royal Canadian Mounted Police, federal enforcers], the FBI and Europol."

The Ottawa Citizen newspaper added that Philbert's alleged modus operandi was "sending spam emails with infected attachments."



Much as we expected...

https://www.bespacific.com/eu-electronic-monitoring-and-surveillance-in-the-workplace/

EU – Electronic Monitoring and Surveillance in the Workplace

Ball, K., Electronic Monitoring and Surveillance in the Workplace, Publications Office of the European Union, Luxembourg, 2021, ISBN 978-92-76-43340-8, doi: 10.2760/5137 , JRC125716.

This report re-evaluates the literature about surveillance/monitoring in the standard workplace, in home working during the COVID-19 pandemic and in respect of digital platform work. It utilised a systematic review methodology. A total of 398 articles were identified, evaluated and synthesised. The report finds that worker surveillance practices have extended to cover many different features of the employees as they work. Surveillance in the workplace targets thoughts, feelings and physiology, location and movement, task performance and professional profile and reputation. In the standard workplace, more aspects of employees’ lives are made visible to managers through data. Employees’ work/non-work boundaries are contested terrain. The surveillance of employees working remotely during the pandemic has intensified, with the accelerated deployment of keystroke, webcam, desktop and email monitoring in Europe, the UK and the USA. Whilst remote monitoring is known to create work-family conflict, and skilled supervisory support is essential, there is a shortage of research which examines these recent phenomena. Digital platform work features end-to-end worker surveillance. Data are captured on performance, behaviours and location, and are combined with customer feedback to determine algorithmically what work and reward are offered to the platform worker in the future. There is no managerial support and patchy colleague support in a hyper-competitive and gamified freelance labour market. Once again there is a shortage of research which specifically addresses the effects of monitoring on those who work on digital platforms. Excessive monitoring has negative psycho-social consequences including increased resistance, decreased job satisfaction, increased stress, decreased organisational commitment and increased turnover propensity.
The design and application of monitoring, as well as the managerial practices, processes and policies which surround it, influence the incidence of these psycho-social risks. Policy recommendations target mitigating the psycho-social risks of monitoring and draw upon privacy, data justice and organisational justice principles. Numerous recommendations are derived both for practice and for higher level policy development.”



Privacy from different angles.

https://www.insideprivacy.com/privacy-and-data-security/saudi-arabia-issues-new-personal-data-protection-law/

Saudi Arabia Issues New Personal Data Protection Law

The Kingdom of Saudi Arabia has recently issued its first comprehensive national data protection law. The Personal Data Protection Law will enter into force on March 23, 2022 and regulates the collection, processing and use of personal data in the Kingdom.


(Related)

https://www.insideprivacy.com/data-privacy/tech-regulation-in-africa-recently-enacted-data-protection-laws/

Tech Regulation in Africa: Recently Enacted Data Protection Laws

While countries like Kenya, Rwanda and South Africa now have comprehensive data protection laws, which share some elements found in the European Union’s General Data Protection Regulation (“GDPR”), many of the proposed data protection laws have specific rules that are different from those in other countries in Africa. Consequently, technology companies conducting business in Africa will be required to keep abreast of the evolving regulatory landscape as it relates to data protection on the continent.



Would you trust government bureaucrats to make these decisions? Do we need philosopher-techies to make something like this work?

https://www.brookings.edu/research/why-we-need-a-new-agency-to-regulate-advanced-artificial-intelligence-lessons-on-ai-control-from-the-facebook-files/

Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files

With the development of ever more advanced artificial intelligence (AI) systems, some of the world’s leading scientists, AI engineers and businesspeople have expressed concerns that humanity may lose control over its creations, giving rise to what has come to be called the AI Control Problem. The underlying premise is that our human intelligence may be outmatched by artificial intelligence at some point and that we may not be able to maintain meaningful control over these systems. If we fail to do so, they may act contrary to human interests, with consequences that become increasingly severe as the sophistication of AI systems rises. Indeed, recent revelations in the so-called “Facebook Files” provide a range of examples of one of the most advanced AI systems on our planet acting in opposition to our society’s interests.

In this article, I lay out what we can learn about the AI Control Problem using the lessons learned from the Facebook Files. I observe that the challenges we are facing can be distinguished into two categories: the technical problem of direct control of AI, i.e. of ensuring that an advanced AI system does what the company operating it wants it to do, and the governance problem of social control of AI, i.e. of ensuring that the objectives that companies program into advanced AI systems are consistent with society’s objectives. I analyze the scope for our existing regulatory system to address the problem of social control in the context of Facebook but observe that it suffers from two shortcomings. First, it leaves regulatory gaps; second, it focuses excessively on after-the-fact solutions. To pursue a broader and more pre-emptive approach, I argue the case for a new regulatory body—an AI Control Council—that has the power to both dedicate resources to conduct research on the direct AI control problem and to address the social AI control problem by proactively overseeing, auditing, and regulating advanced AI systems.



Training bias should be assumed when the goal is to ‘prove’ war crimes.

https://www.ft.com/content/8399873e-0dda-4c87-ba59-0e2678166fba

Researchers train AI on ‘synthetic data’ to uncover Syrian war crimes

In 2017, researchers at Syrian human rights group Mnemonic were faced with a huge mountain to climb. They had more than 350,000 hours of video that contained evidence of war crimes, [certain even before analysis? Bob] ranging from chemical attacks to the use of banned munitions, but they could never manually comb through them all.

In particular, Mnemonic wanted to use AI to search the videos in the Syrian Archive, a repository of social media records of the war, for evidence that a specific “cluster” weapon called RBK-250 — a metal shell containing several hundred small explosives — had been used on civilians. RBK-250 shells also often remain unexploded and can be dangerous for decades after the end of a conflict.



You can be big, but you can’t use big to dominate.

https://www.cnn.com/2021/12/09/tech/amazon-italy-fine/index.html

Italy fines Amazon $1.3 billion for abuse of market dominance

Italy's antitrust watchdog said on Thursday it had fined Amazon 1.13 billion euros ($1.28 billion) for alleged abuse of market dominance, in one of the biggest penalties imposed on a US tech giant in Europe.

Amazon said it "strongly disagreed" with the Italian regulator's decision and would appeal.

Italy's watchdog said in a statement that Amazon had leveraged its dominant position in the Italian market for intermediation services on marketplaces to favor the adoption of its own logistics service — Fulfilment by Amazon (FBA) — by sellers active on Amazon.it.



Early days. This could be very useful when a bit more advanced…

https://www.unite.ai/human-image-synthesis-from-reflected-radio-waves/

Human Image Synthesis From Reflected Radio Waves

Researchers from China have developed a method to synthesize near photoreal images of people without cameras, by using radio waves and Generative Adversarial Networks (GANs). The system they have devised is trained on real images taken in good light, but is capable of capturing relatively authentic ‘snapshots’ of humans even when conditions are dark – and even through major obstructions which would hide the people from conventional cameras.

The images rely on ‘heat maps’ from two radio antennae, one capturing data from the ceiling down, and another recording radio wave perturbations from a ‘standing’ position.

The resulting photos from the researchers’ proof-of-concept experiments have a faceless, ‘J-Horror’ aspect:



Perspective. My tax dollars at work?

https://www.bespacific.com/tsa-guide-how-not-to-be-that-guy-at-the-airport-checkpoint/

TSA Guide – How not to be “That Guy” at the airport checkpoint

This is a short written guide accompanied by gifs – the one that got me was: 5. If you must travel with it, know how to safely pack your gun!

See also LifeHacker for additional information – How to Avoid Getting Flagged By the TSA – Having never traveled with coffee beans I found this advice…interesting: “Although there are no rules against packing coffee powder or beans in your luggage, you may want to avoid doing so, as this can get you flagged by security due to coffee being used to cover up the smell of other illicit substances.”


Wednesday, December 08, 2021

AI cops!

https://www.newscientist.com/article/2300329-australias-ai-cameras-catch-over-270000-drivers-using-their-phones/

Australia’s AI cameras catch over 270,000 drivers using their phones

The proportion of drivers in New South Wales illegally using their mobile phones has dropped fivefold since AI cameras began catching offenders

World-first cameras in Australia that use artificial intelligence to detect drivers using their mobile phones have caught thousands of offenders and seem to be deterring the risky behaviour.

New South Wales, the first state to use them, began issuing fines based on the technology in March 2020. Since then, the cameras have checked more than 130 million vehicles and spotted more than 270,000 drivers using their phones.



Buy a local ad or one that goes global?

https://www.axios.com/1-local-newspapers-lawsuits-facebook-google-3c3dee3a-cce3-49ef-b0a2-7a98c2e15c91.html

Scoop: Over 200 papers quietly sue Big Tech

Newspapers all over the country have been quietly filing antitrust lawsuits against Google and Facebook for the past year, alleging the two firms monopolized the digital ad market for revenue that would otherwise go to local news.

Why it matters: What started as a small-town effort to take a stand against Big Tech has turned into a national movement, with over 200 newspapers involved across dozens of states.



Industry re-think. You do want to update your software, right?

https://jalopnik.com/a-carmaker-s-23-billion-plan-to-keep-you-paying-long-a-1848172449

A Carmaker’s $23 Billion Plan To Keep You Paying Long After You’ve Bought Your Car

If you look at headlines about Stellantis’ new money-making scheme, things seem pretty innocuous. “Stellantis Bets on Software,” says the Wall Street Journal. “Stellantis launches $23 billion software push,” says Automotive News. Software sounds good, right? Well, something else is at play here: subscription services that keep you paying long after you’ve bought your car.

From Automotive News:

Stellantis plans to generate around 4 billion euros ($4.5 billion) in additional annual revenues by 2026 and around 20 billion euros ($23 billion) by 2030 from software-enabled product offerings and subscriptions.
Presenting its long-term software strategy on Tuesday, the automaker said it expected to have 34 million connected vehicles on the streets by 2030 from 12 million now.



The High Table?

https://analyst1.com/blog/dark-web-justice-league

Dark Web - Justice League

Over the past few weeks the FBI, Department of Justice (DOJ), Interpol, and other international law enforcement agencies have worked together to incarcerate and indict ransomware threat actors. Through this effort, millions of dollars in ransom payments have been recovered.

When it comes to the rule of law, access to justice for all and a fair trial are both fundamental in any democratic society. But what if the Dark Web community has its own justice system that believes in the same values?

Every day there are dozens of cases all over the Dark Web that escalate to this underground justice system and patiently wait for the high-ranking authorized cybercriminals (usually members of a forum administration) to solve the dispute and assign a winner and loser.

Much like the United States judicial system, this whole process begins with a dispute between two opposing sides. For example, the threat actor purchased compromised network access but then discovered that the same access was previously sold to another entity. Now the threat actor demands recourse in the form of a refund, but the seller is not willing to comply with this request. And thus, the higher dark web virtual court is brought into the picture.

These courts are not just for Russian threat actors; each linguistic and cultural ecosystem may have its own version of “Court” or “Arbitrage” forums.

It is important to note that, as of May 2021, all ransomware-related topics – affiliation, arbitrages, and the buying and selling of ransomware-related goods – are banned by the courts themselves. The timing is interesting, considering that ransomware activity for large ransoms was heating up around this time with Colonial Pipeline and JBS Meat.



Mystery wrapped in an enigma?

https://popular.info/p/trumps-new-media-company-is-a-16

Trump's new media company is a $1.6 billion mirage

In October, former President Trump announced the creation of a new company, Trump Media & Technology Group (TMTG). Its signature product is a new social media site, Truth Social. According to an investor presentation, Truth Social and the larger company will "be a fountainhead of support for American freedoms as the first major rival to 'Big Tech.'" TMTG will "even the playing field" of a media landscape that "has swung dangerously far to the left."

Initially, TMTG suggested that Truth Social was based on "proprietary" technology. It was later forced to admit that the code was taken from Mastodon, an open-source, decentralized social network that anyone can use. Gab, an existing social media network geared toward the far right, already uses Mastodon. So neither the technology nor the concept is new.

TMTG promised that "TRUTH Social plans to begin its Beta Launch for invited guests in November 2021." But November 2021 came and went without the Beta Launch or any update from the company. The Truth Social homepage is a single static page that collects email addresses.



Perspective.

https://www.wsj.com/articles/amazon-emerges-as-the-wage-and-benefits-setter-for-low-skilled-workers-across-industries-11638910694?mod=djemalertNEWS

Amazon Emerges as the Wage-and-Benefits Setter for Low-Skilled Workers Across Industries



Resources. (Also try W3SCHOOLS.COM )

https://www.makeuseof.com/tag/best-free-online-computer-programming-courses/

The 11 Best Free Online Coding Courses for Computer Programming