Saturday, May 29, 2021

A common security concern. The solution is to provide secure versions of the tools; unfortunately, there are too many to keep up with.

https://www.databreaches.net/us-soldiers-accidentally-leak-nuclear-secrets-via-study-apps-report/

US soldiers accidentally leak nuclear secrets via study apps — report

Alex Berry reports:

Troops on US bases in Europe housing nuclear weapons have been using publicly accessible online flashcard apps to remember long and complex security protocols, the investigative website Bellingcat revealed on Friday.
The military personnel turned to sites such as Quizlet, Chegg Prep and Cram to memorize codes, jargon and even the status of nuclear vaults, according to the report.

Read more on dw.com.





I see very little value in this requirement. Do they expect terrorists to turn over their contacts with their organizations? They will turn over carefully crafted ‘I love the US’ feeds instead. (Since I don’t use social media, would I be forbidden to enter the US?)

https://knightcolumbia.org/blog/biden-administration-continues-to-defend-social-media-registration-requirement-in-court

Biden Administration Continues to Defend Social Media Registration Requirement in Court

In a terse court filing today, the Biden administration indicated that it would continue to defend a controversial Trump administration rule that requires millions of visa applicants each year to register their social media handles with the U.S. government. The registration requirement, which stems from the Muslim ban, is the subject of an ongoing First Amendment challenge filed by the Knight Institute, the Brennan Center, and the law firm Simpson Thacher on behalf of two documentary film organizations, Doc Society and the International Documentary Association.





Yes, it’s screwy.

https://venturebeat.com/2021/05/28/ai-in-a-post-pandemic-economy/

AI in a post-pandemic economy

In many cases, AI was a key factor in keeping critical sectors of the economy afloat during the worst of the past year. When the workforce went home, enterprises ramped up deployment of intelligent technologies. That trend shows no signs of slowing, even as hiring starts to tick up and workers return to the office.

The AI in a Post-COVID-19 World report by GBSN Research recently found that three-quarters of business leaders have a positive outlook on AI and expect it will not only make processes more efficient but will also help create new products, services, and business models. This is backed up by another report from management solutions provider OneStream, which found that the use of AI tools like machine learning has jumped from about 20% of enterprises in 2020 to nearly 60% in 2021. This is despite the fact that, according to analytics firms FICO and Corinium, upwards of 65% of top executives don’t know exactly how AI works or how it makes decisions.





It’s like buying ‘scratch & dent’ software.

https://www.makeuseof.com/buy-microsoft-office-get-huge-discount/

Need to Buy Microsoft Office? Here's How to Get a Huge Discount

… there is a method for saving money on software that not many people know about—renewed software. Buying renewed license keys can save you a mint as long as you buy from a reputable online reseller. Today, we’re going to show you the best way to get Microsoft Office for a deal. This process doesn't require a physical copy of the software, and getting set up is easy. Let’s get into it.





Enough to draw me out of retirement? Work a year, buy a small country, re-retire?

https://www.cnn.com/2021/05/28/tech/cybersecurity-labor-shortage/index.html

Wanted: Millions of cybersecurity pros. Salary: Whatever you want

The takeaway from such security breaches, according to experts, is that it's high time for companies to start investing in robust controls and, in particular, adding cybersecurity professionals to their teams.

The only hitch: There's a massive, longstanding labor shortage in the cybersecurity industry.

"It's a talent war," said Bryan Orme, principal at GuidePoint Security. "There's a shortage of supply and increased demand."



Friday, May 28, 2021

I guess they haven’t gotten around to a “lessons learned” review of the last one.

https://www.nbcnews.com/tech/security/solarwinds-hackers-are-it-again-targeting-150-organizations-microsoft-warns-n1268893

SolarWinds hackers are at it again, targeting 150 organizations, Microsoft warns

The Russian-based group behind the SolarWinds hack has launched a new campaign that appears to target government agencies, think tanks and non-governmental organizations, Microsoft said Thursday.

Nobelium launched the current attacks after getting access to an email marketing service used by the United States Agency for International Development, or USAID, according to Microsoft.

"These attacks appear to be a continuation of multiple efforts by Nobelium to target government agencies involved in foreign policy as part of intelligence gathering efforts," Tom Burt, Microsoft vice president of customer security and trust, wrote in a blog post.





Oh, I feel safer already! It is so comforting to know that the agency that can’t secure a single point of entry at an airport will now protect thousands of miles of pipeline.

https://www.csoonline.com/article/3620300/tsa-s-pipeline-cybersecurity-directive-is-just-a-first-step-experts-say.html#tk.rss_all

TSA’s pipeline cybersecurity directive is just a first step, experts say

The new, hastily announced security directive requires US pipeline companies to appoint a cybersecurity coordinator and report possible breaches within 12 hours.

The Transportation Security Administration (TSA), an arm of the US Department of Homeland Security (DHS), released a Security Directive on Enhancing Pipeline Cybersecurity. TSA released the document two days after the Biden administration leaked the details of the regulations and less than a month after the ransomware attack on Colonial Pipeline created a significant gas shortage in the Southeast US.

As a result of post-9/11 government maneuvering, the TSA gained statutory authority to secure surface transportation and ensure pipeline safety. The directive follows largely ineffective, voluntary pipeline security guidelines established by the TSA in 2010 and updated in 2018.

This new regulation requires that designated pipeline security companies report cybersecurity incidents to the DHS's Cybersecurity and Infrastructure Security Agency (CISA) no later than 12 hours after a cybersecurity incident is identified. The TSA estimates that about 100 companies in the US would fall under the directive's mandates.





Podcast.

https://whyy.org/episodes/the-promise-and-pitfalls-of-ai/

The Promise and Pitfalls of AI

On this episode, we hear from scientists and thinkers who argue that we should look at AI not as a threat or competition, but as an extension of our minds and abilities. They explain what AI is good at, and where humans have the upper hand. We look at AI in three different settings: medicine, work, and warfare, asking how it affects our present — and how it could shape our future.



(Related)

https://www.nydailynews.com/news/national/ny-microsoft-president-orwell-1984-brad-smith-ai-20210528-66btcxssgfczlhnz62neeojwwq-story.html

Microsoft president suggests George Orwell’s ’1984′ could happen by 2024 because of AI tech

Microsoft president Brad Smith is worried that the totalitarian surveillance famously seen in George Orwell’s novel “1984″ could exist in the real world soon because of artificial intelligence.

“If we don’t enact the laws that will protect the public in the future, we are going to find the technology racing ahead, and it’s going to be very difficult to catch up,” Smith told the BBC.





My AI assures me that AIs would never do that.

https://www.globenewswire.com/news-release/2021/05/27/2237870/0/en/Litigating-Artificial-Intelligence-When-Does-AI-Violate-Our-Legal-Rights.html

Litigating Artificial Intelligence: When Does AI Violate Our Legal Rights?

From the minds of Canada’s leading law and technology experts comes a playbook for understanding the multi-faceted intersection of AI and the law

Emond Publishing, Canada’s leading independent legal publisher, today announced the release of Litigating Artificial Intelligence, a book examining AI-informed legal determinations, AI-based lawsuits, and AI-enabled litigation tools. Anchored by the expertise of general editors Jill R. Presser, Jesse Beatson, and Gerald Chan, this title offers practical insights regarding AI’s decision-making capabilities, position in evidence law and product-based lawsuits, role in automating legal work, and use by the courts, tribunals, and government agencies.





AI identifies targets as they ‘pop up.’ Are they confirmed before they are attacked, or is this a computer-run war where humans are only tools?

https://www.jpost.com/arab-israeli-conflict/gaza-news/guardian-of-the-walls-the-first-ai-war-669371

Israel's operation against Hamas was the world's first AI war

Having relied heavily on machine learning, the Israeli military is calling Operation Guardian of the Walls the first artificial-intelligence war.

“For the first time, artificial intelligence was a key component and power multiplier in fighting the enemy,” an IDF Intelligence Corps senior officer said. “This is a first-of-its-kind campaign for the IDF. We implemented new methods of operation and used technological developments that were a force multiplier for the entire IDF.”





My AI wants to point out that the tweets were not written by AI. (AI good, humans not so good.)

https://www.vox.com/recode/22455140/lemonade-insurance-ai-twitter

A disturbing, viral Twitter thread reveals how AI-powered insurance can go wrong

Lemonade, the fast-growing, machine learning-powered insurance app, put out a real lemon of a Twitter thread on Monday with a proud declaration that its AI analyzes videos of customers when determining if their claims are fraudulent. The company has been trying to explain itself and its business model — and fend off serious accusations of bias, discrimination, and general creepiness — ever since.

The prospect of being judged by AI for something as important as an insurance claim was alarming to many who saw the thread, and it should be. We’ve seen how AI can discriminate against certain races, genders, economic classes, and disabilities, among other categories, leading to those people being denied housing, jobs, education, or justice. Now we have an insurance company that prides itself on largely replacing human brokers and actuaries with bots and AI, collecting data about customers without them realizing they were giving it away, and using those data points to assess their risk.



Thursday, May 27, 2021

It’s not exactly war, so what should we call it?

https://news.softpedia.com/news/japan-expects-russian-cyberattacks-on-tokyo-summer-olympics-533044.shtml

Japan Expects Russian Cyberattacks on Tokyo Summer Olympics

… It appears to be the retaliation for the exclusion of Russian teams from the Olympic Games in Pyeongchang and Tokyo over widespread doping allegations. We expect massive cyberattacks on the Tokyo Olympics, he adds.





So, let’s be anti-social.

https://www.makeuseof.com/does-social-media-do-more-harm-than-good/

Does Social Media Do More Harm Than Good for Society?

Media has always had the power to influence our society, but it wasn't until the social media boom that we saw it on this scale and magnitude. While it has potential for good, social media has also been harmful to society because of how we use it.

Here's how social media is harming our mental health, self-image, communication skills, and society at large—potentially causing more harm than good overall.





Case or no case? If your face is marked “public,” can it also be “private”? Will expectations of privacy or privacy policies rule?

https://www.theverge.com/2021/5/27/22455446/clearview-ai-legal-privacy-complaint-privacy-international-facial-recognition-eu

Clearview AI hit with sweeping legal complaints over controversial face scraping in Europe

Privacy International (PI) and several other European privacy and digital rights organizations announced today that they’ve filed legal complaints against the controversial facial recognition company Clearview AI. The complaints filed in France, Austria, Greece, Italy, and the United Kingdom say that the company’s method of documenting and collecting data — including images of faces it automatically extracts from public websites — violates European privacy laws. New York-based Clearview claims to have built “the largest known database of 3+ billion facial images.”

PI, NYOB, Hermes Center for Transparency and Digital Human Rights, and Homo Digitalis all claim that Clearview’s data collection goes beyond what the average user would expect when using services like Instagram, LinkedIn, or YouTube. “Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users,” said PI legal officer Ioannis Kouvakas in a joint statement.





No clear direction yet, but lots of potential for privacy lawsuits.

https://www.cnbc.com/2021/05/27/office-surveillance-digital-leash-on-workers-could-be-crossing-a-line.html

Bosses putting a ‘digital leash’ on remote workers could be crossing a privacy line

A recent report by the Institute for the Future of Work, a British research and development group, said that algorithmic systems typically used in monitoring the performance of warehouse workers or delivery riders have pervaded more and more industries.

Prospect has published some research into workers’ attitude to these technologies. The majority of respondents in one survey said they were uncomfortable with the likes of camera monitoring or keystroke monitoring.

This technology is catching more and more attention from critics. Microsoft faced a backlash over its “productivity score” in Microsoft 365, which allowed managers to track an employee’s output. Microsoft has since rowed back on the product’s features, minimizing the data collected on individuals.

PwC was criticized last year for developing a facial recognition tool for finance firms that would monitor an employee and ensure they are at their desk when they’re supposed to be. A PwC spokesperson told CNBC that the tool was a “conceptual prototype.”



(Related)

https://www.politico.eu/article/ai-workplace-surveillance-facial-recognition-software-gdpr-privacy/

Your boss is watching: How AI-powered surveillance rules the workplace

Companies are buying increasingly intrusive artificial intelligence tools to keep an eye on their workers.





No matter how brilliant this idea is, it will never get by human politicians. (Unless we declare an AI as human?)

https://www.cnbc.com/2021/05/27/europeans-want-to-replace-lawmakers-with-ai.html

More than half of Europeans want to replace lawmakers with AI, study says

Researchers at IE University’s Center for the Governance of Change asked 2,769 people from 11 countries worldwide how they would feel about reducing the number of national parliamentarians in their country and giving those seats to an AI that would have access to their data.

The results, published Thursday, showed that despite AI’s clear and obvious limitations, 51% of Europeans said they were in favor of such a move.





My AI suggests we should listen to this…

https://spectrum.ieee.org/podcast/robotics/artificial-intelligence/can-a-robot-be-arrested-hold-a-patent-pay-income-taxes

Can a Robot Be Arrested? Hold a Patent? Pay Income Taxes?

When horses were replaced by engines, for work and transportation, we didn’t need to rethink our legal frameworks. So when a fixed-in-place factory machine is replaced by a free-standing AI robot, or when a human truck driver is replaced by autonomous driving software, do we really need to make any fundamental changes to the law?

My guest today seems to think so. Or perhaps more accurately, he thinks that surprisingly, we do not; he says we need to change the laws less than we think. In case after case, he says, we just need to treat the robot more or less the same way we treat a person.

A year ago, he was giving presentations in which he argued that AIs can be patentholders. Since then, his views have advanced even further in that direction. And so last fall, he published a short but powerful treatise, The Reasonable Robot: Artificial Intelligence and the Law, published by Cambridge University Press. In it, he argues that the law more often than not should not discriminate between AI and human behavior.





The potential for error is stunning. Imagine the effort required to determine who tweaked the data enough to make it dangerous!

https://www.healthcareitnews.com/news/synthetic-datas-growing-role-healthcare-ai-machine-learning-and-robotics

Synthetic data's growing role in healthcare AI, machine learning and robotics

Today there is a bottleneck in the development of artificial intelligence and machine learning – real-world data collection. AI and machine learning models require large datasets to become proficient at a task.

But preparing these datasets for model training is both costly and labor intensive. It is a conundrum, and the lack of large, accurately labeled datasets for specific applications is holding back the development of artificial intelligence and machine learning.

Some say synthetic data offers a solution – data that imitates real-world data. Instead of manually collecting and labeling datasets from the real-world, synthetic data is instead computer-generated.
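The core idea can be sketched in a few lines. This is my own minimal illustration, not from the article: when records are generated from a known rule, the labels come for free, sidestepping the costly manual labeling the excerpt describes. All field names and thresholds here are invented for the example.

```python
import random

def make_synthetic_patient(rng: random.Random) -> dict:
    """Generate one fake patient record. The label is computed from a
    known rule at generation time, so no manual labeling is needed."""
    heart_rate = round(rng.gauss(75, 12), 1)    # beats per minute
    systolic_bp = round(rng.gauss(120, 15), 1)  # mmHg
    # Invented labeling rule: flag the record if either vital is high.
    at_risk = heart_rate > 100 or systolic_bp > 140
    return {"heart_rate": heart_rate,
            "systolic_bp": systolic_bp,
            "at_risk": at_risk}

def make_dataset(n: int, seed: int = 42) -> list:
    """Build an arbitrarily large, perfectly labeled training set."""
    rng = random.Random(seed)
    return [make_synthetic_patient(rng) for _ in range(n)]

dataset = make_dataset(1000)
```

Real synthetic-data pipelines are far more sophisticated (often generative models fit to real distributions), but the trade-off the article raises is visible even here: the data is only as trustworthy as the rules and distributions used to generate it.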





Interesting read…

https://onezero.medium.com/a-i-is-solving-the-wrong-problem-253b636770cd

A.I. Is Solving the Wrong Problem

People don’t make better decisions when given more data, so why do we assume A.I. will?





Is that really me bungee jumping from the bridge over I25 last Tuesday? Of course not. Here’s a video of me surfing in Maui on Tuesday. And another of me having tea with Queen Elizabeth.

https://www.bespacific.com/the-threat-of-deepfakes-in-litigation-raising-the-authentication-bar-to-combat-falsehood/

The Threat of Deepfakes in Litigation: Raising the Authentication Bar to Combat Falsehood

Vanderbilt Journal of Entertainment & Technology Law, Vol. 23, No. 2, 2021: “Deepfakes are all over the internet—from shape-shifting comedians and incoherent politicians to disturbingly realistic fake pornography. Emerging technology makes it easier than ever to create a convincing deepfake. What used to take significant time and money to develop is now widely available, often for free, thanks to rapid advances in deepfake technology. Deepfakes threaten individual rights and even democracy. But their impact on litigation should not be overlooked. The US adversarial system of justice is built on a foundation of seeking out the truth to arrive at a just result. The Federal Rules of Evidence serve as an important framework for this truth-seeking mission, and the authentication rules, in particular, should play a key role in preventing deepfake evidence from corrupting the legal process. This Article looks at the unique threat of deepfakes and how the authentication rules under the Federal Rules of Evidence can adapt to help deal with these new challenges. It examines authentication standards that have emerged for social media evidence and suggests a middle-ground approach that redefines the quantity and quality of circumstantial evidence necessary for a reasonable jury to determine authenticity in the age of deepfakes. This middle-ground approach may raise the evidentiary bar in some cases, but it seeks to balance efficiency with the need to combat falsehood in the litigation process.”





Resources for my Business students.

https://www.bespacific.com/competitive-intelligence-a-selective-resource-guide-updated-may-2021/

Competitive Intelligence – A Selective Resource Guide – Updated May 2021

Via LLRX Competitive Intelligence – A Selective Resource Guide – Completely revised and updated May 2021, By Sabrina I. Pacifici

This guide on competitive intelligence resources on the web was first published in 2005, and I have continued to edit, revise and update it over the course of 16 years. My objective is to provide researchers with a current, well vetted, reliable, actionable subject matter specific pathfinder on a wide range of sites and services to assist in the delivery of outstanding customer service. Seasoned researchers, law librarians and knowledge managers routinely use free or low cost value added and content rich sites as components in the delivery of more robust and comprehensive work products. This “Best of the Web CI Guide” seeks to engage researchers with topical search engines, portals, government sponsored and open source databases, news and topical alerts and data archives, as well as academic, corporate and publisher specific services and applications. The sites I have included are benchmarks for internet search and discovery, monitoring, analyzing and reviewing current and historical data; news; reports, analysis and commentary; statistics; and profiles on companies, markets, countries, people and issues, from a national and a global perspective. My recommendations are accompanied by links to trusted and targeted content and sources produced by reputable media and publishing companies, businesses, government organizations, academe, IGOs and NGOs, and expert legal and technology professionals.



Wednesday, May 26, 2021

Still looking for a copy of that tutorial.

https://www.databreaches.net/nigerian-cybercriminal-gang-targets-texas-unemployment-system/

Nigerian Cybercriminal Gang Targets Texas Unemployment System

Brian New reports:

A Nigerian cybercriminal gang is targeting the Texas unemployment system, according to evidence shared with the CBS 11 I-Team.
A 13-page step-by-step tutorial on how to commit unemployment identity fraud through the Texas Workforce Commission website was discovered in an online closed group chat between members of the cybercriminal organization known as Scattered Canary.

Read more on CBSDFW.





The three pillars of standing?

https://www.databreaches.net/one-employees-accidental-email-leads-to-a-significant-data-breach-ruling-in-federal-appeals-court/

One Employee’s Accidental Email Leads To A Significant Data Breach Ruling in Federal Appeals Court

Jeffrey Csercsevits of Fisher Phillips writes:

A federal appeals court recently addressed whether employees had standing to bring a lawsuit when their personally identifiable information (PII) was inadvertently circulated to other employees at the company, with no indication of misuse or external disclosure. In McMorris v. Carlos Lopez & Associates, LLC, the 2nd Circuit Court of Appeals (hearing cases from New York, Connecticut, and Vermont) determined that the particular plaintiffs at issue did not have standing and that their mere fear of identity theft was insufficient for them to sustain a claim for relief. Importantly, however, the court set forth a three-part framework for how standing could be established in a similar situation.

Read more on JDSupra.





Guidelines for my Computer Security students.

https://www.csoonline.com/article/3619610/best-practices-for-conducting-ethical-and-effective-phishing-tests.html#tk.rss_all

5 best practices for conducting ethical and effective phishing tests

Phishing simulations—or phishing tests—have become a popular feature of cybersecurity training programs in organizations of all sizes. One can see the appeal: phishing tests allow security staff to craft and send emails to employees en masse that are designed to appear as authentic and enticing as the genuine malicious phishing emails that bombard businesses on a regular basis. These typically include lures such as missed delivery notices, invoice payment requests, and celebrity gossip.





An intro for my Computer Security students.

https://www.makeuseof.com/bec-scams/

What Is the Business Email Compromise (BEC) Scam?





Ah man, just when I was learning to spell GDPR…

https://www.politico.eu/article/eu-privacy-laws-chief-architect-calls-for-its-overhaul/

EU privacy law’s chief architect calls for its overhaul

Former EU justice chief Viviane Reding has called for Europe’s data protection rulebook to be revised just three years after it came into force.

The intervention by the Luxembourgish politician, who spearheaded the European Commission’s proposal of the General Data Protection Regulation in 2012, comes as the flagship law celebrates its third anniversary.

Reding, now an opposition MP in the Grand Duchy, told POLITICO that though the GDPR has succeeded in becoming a global privacy standard copied by the likes of Brazil and India, its enforcement was uneven.

… The center-right politician suggested that reform to centralize enforcement of the GDPR could help rein in powerful tech companies.

At present, a patchwork of national and regional regulators are tasked with enforcing the code. But that arrangement is further complicated by the "one-stop-shop," a rule that obliges the regulator where a company is legally established to be the one in charge, leaving Luxembourg and Ireland's data protection authorities responsible for almost all Silicon Valley giants.



(Related) Some new ideas?

https://fpf.org/blog/privacy-trends-four-state-bills-to-watch-that-diverge-from-california-and-washington-models/

PRIVACY TRENDS: FOUR STATE BILLS TO WATCH THAT DIVERGE FROM CALIFORNIA AND WASHINGTON MODELS





India seems to have a very flexible idea of privacy.

https://www.cnbc.com/2021/05/26/whatsapp-reportedly-sues-india-govt-says-new-media-rules-end-privacy.html

WhatsApp reportedly sues Indian government, says new media rules mean an end to privacy

WhatsApp has filed a legal complaint in Delhi against the Indian government seeking to block regulations coming into force on Wednesday that experts say would compel the California-based Facebook unit to break privacy protections, sources said.

The lawsuit, described to Reuters by people familiar with it, asks the Delhi High Court to declare that one of the new rules is a violation of privacy rights in India’s constitution since it requires social media companies to identify the “first originator of information” when authorities demand it.

While the law requires WhatsApp to unmask only people credibly accused of wrongdoing, the company says it cannot do that alone in practice. Because messages are end-to-end encrypted, to comply with the law WhatsApp says it would have to break encryption for receivers, as well as “originators,” of messages.





Phrenology as a tuning fork?

https://www.fastcompany.com/90640109/ai-is-being-used-to-profile-people-from-their-head-vibrations

AI is being used to profile people from their head vibrations

Digital video surveillance systems can’t just identify who someone is. They can also work out how someone is feeling and what kind of personality they have. They can even tell how they might behave in the future. And the key to unlocking this information about a person is the movement of their head.

That is the claim made by the company behind the VibraImage artificial intelligence (AI) system. (The term “AI” is used here in a broad sense to refer to digital systems that use algorithms and tools such as automated biometrics and computer vision). You may never have heard of it, but digital tools based on VibraImage are being used across a broad range of applications in Russia, China, Japan and South Korea.

But as I show in my recent research, published in Science, Technology and Society, there is very little reliable, empirical evidence that VibraImage and systems like it are actually effective at what they claim to do.



Tuesday, May 25, 2021

Surveillance or management oversight?

https://www.bespacific.com/survey-reveals-the-extent-of-surveillance-on-the-remote-workforce/

Survey reveals the extent of surveillance on the remote workforce

With many companies extending their remote-work policies indefinitely, employers are increasingly exploring new ways to oversee their staff’s productivity. But this challenge is giving rise to solutions that may have disastrous consequences for individual privacy. In a study commissioned by ExpressVPN, in collaboration with Pollfish, 2,000 employers and 2,000 employees who work in a remote or hybrid capacity were surveyed to reveal the extent of employer surveillance, how it’s impacting employees, and the rate at which it might increase in the future as remote working continues…”





Surveillance, UK vs. EU?

https://www.theguardian.com/uk-news/2021/may/25/gchqs-mass-data-sharing-violated-right-to-privacy-court-rules

GCHQ’s mass data interception violated right to privacy, court rules

GCHQ’s methods for bulk interception of online communications violated the right to privacy and the regime for collection of data was “not in accordance with the law”, the grand chamber of the European court of human rights has ruled.

It also found the bulk interception regime contained insufficient protections for confidential journalistic material but said the decision to operate a bulk interception regime did not of itself violate the European convention on human rights.

The chamber also concluded that GCHQ’s regime for sharing sensitive digital intelligence with foreign governments was not illegal.





You would think knowing which end of the gun you were on would be rather important…

https://www.theverge.com/22444020/chicago-pd-predictive-policing-heat-list?scrolla=5eb6d68b7fedc32c19ef33b4

HEAT LISTED

He invited them into his home. And when he did, they told McDaniel something he could hardly believe: an algorithm built by the Chicago Police Department predicted — based on his proximity to and relationships with known shooters and shooting casualties — that McDaniel would be involved in a shooting. That he would be a “party to violence,” but it wasn’t clear what side of the barrel he might be on. He could be the shooter, he might get shot. They didn’t know. But the data said he was at risk either way.

But the visit set a series of gears in motion. This Kafka-esque policing nightmare — a circumstance in which police identified a man to be surveilled based on a purely theoretical danger — would seem to cause the thing it predicted, in a deranged feat of self-fulfilling prophecy.





Because buying cheese is suspicious! (Would that be ‘probable cause’ in the US?)

https://www.zdnet.com/article/encrochat-drug-dealer-betrayed-by-his-love-of-cheese/#ftag=RSSbaffb68

Encrochat drug dealer betrayed by his love of cheese

A drug dealer's enjoyment of Blue Stilton cheese led to his capture and a sentence of over 13 years in prison.

Carl Stewart, a Liverpool resident, was identified after he shared an image of cheese purchased at a UK supermarket.

The 39-year-old shared his delight in the purchase over Encrochat, an encrypted messaging service, under the handle "Toffeeforce." However, in his glee, he did not realize that the photo provided vital clues to the police -- namely, fingerprints which were then analyzed by investigators.





A reference.

https://www.pogowasright.org/privacy-and-personal-data-protection-in-africa-a-rights-based-survey-of-legislation-in-eight-countries/

Omer Tene pointed out this great resource:

Privacy and data protection in Africa. 370 page report with country reports on Ethiopia, Kenya, Namibia, Nigeria, S. Africa, Tanzania, Togo, and Uganda.

From the Association for Progressive Communications (APC). (pdf)





I’m shocked. Shocked, I tell you!

https://www.zdnet.com/article/fico-report-finds-startling-disinterest-in-ethical-responsible-use-of-ai-among-business-leaders/

Report finds startling disinterest in ethical, responsible use of AI among business leaders

A new report from FICO and Corinium has found that many companies are deploying various forms of AI throughout their business with little consideration of the ethical implications of potential problems.

There have been hundreds of examples over the last decade of the many disastrous ways AI has been used by companies, from facial recognition systems unable to discern darker skinned faces to healthcare apps that discriminate against African American patients and recidivism calculators used by courts that skew against certain races.

Despite these examples, FICO's State of Responsible AI report shows business leaders are putting little effort into ensuring that the AI systems they use are both fair and safe for widespread use.





Lashing out frequently generates a backlash.

https://www.wired.com/story/florida-new-social-media-law-laughed-out-of-court/

Florida’s New Social Media Law Will Be Laughed Out of Court

The Stop Social Media Censorship Act almost certainly violates both the US Constitution and Section 230 of the Communications Decency Act.

On Monday, Governor Ron DeSantis signed into law the Stop Social Media Censorship Act, which greatly limits large social media platforms’ ability to moderate or restrict user content. The bill is a legislative distillation of Republican anger over recent episodes of supposed anti-conservative bias, like Twitter and Facebook shutting down Donald Trump’s account and suppressing the spread of the infamous New York Post Hunter Biden story. Most notably, it imposes heavy fines—up to $250,000 per day—on any platform that deactivates the account of a candidate for political office, and it prohibits platforms from taking action against “journalistic enterprises.”





Tools. Note number four.

https://www.makeuseof.com/tag/voice-changing-apps-android/

The 6 Best Voice Changer Apps for Android