Tuesday, July 27, 2021

When ‘security companies’ fail to secure... Who can customers rely on?

https://www.pogowasright.org/vpn-servers-seized-by-ukrainian-authorities-werent-encrypted/

VPN servers seized by Ukrainian authorities weren’t encrypted

Dan Goodin reports:

Privacy-tools-seller Windscribe said it failed to encrypt company VPN servers that were recently confiscated by authorities in Ukraine, a lapse that made it possible for the authorities to impersonate Windscribe servers and capture and decrypt traffic passing through them.
The Ontario, Canada-based company said earlier this month that two servers hosted in Ukraine were seized as part of an investigation into activity that had occurred a year earlier.

Read more on Ars Technica.





Learn to face the music? That I know my password is a ‘foregone conclusion.’ Same goes for my face? Remember, I’ve only seen it in photos and mirrors.

https://www.pogowasright.org/court-orders-us-capitol-rioter-to-unlock-his-laptop-with-his-face/

Court orders US Capitol rioter to unlock his laptop ‘with his face’

Zack Whittaker reports:

A federal judge in Washington, D.C., has ordered a man accused of participating in the U.S. Capitol riot on January 6 to unlock his laptop “with his face” after prosecutors argued that the laptop likely contains video footage that would incriminate him in the attempted insurrection.
Guy Reffitt was arrested in late January, three weeks after he participated in the riot, and has been in jail since. He has pleaded not guilty to five federal charges, including bringing a firearm to the Capitol grounds and a charge of obstructing justice. His Windows laptop was one of several devices seized by the FBI, which investigators said was protected with a password but could be unlocked using Reffitt’s face.

Read more on TechCrunch.





Can anyone opt out?

https://www.bespacific.com/states-working-with-id-me/

States working with ID.me

CNN – “As of July 19, unemployment agencies in 25 states were using ID.me, which uses facial recognition technology to verify unemployment benefit applications…”





We could develop devices/records that cannot be fiddled with. Why isn’t that a requirement?

https://www.vice.com/en/article/qj8xbq/police-are-telling-shotspotter-to-alter-evidence-from-gunshot-detecting-ai

Police Are Telling ShotSpotter to Alter Evidence From Gunshot-Detecting AI

Prosecutors in Chicago are being forced to withdraw evidence generated by the technology, which led to the police killing of 13-year-old Adam Toledo earlier this year.

How did they know that’s where the shooting happened? Police said ShotSpotter, a surveillance system that uses hidden microphone sensors to detect the sound and location of gunshots, generated an alert for that time and place.

Except that’s not entirely true, according to recent court filings.

That night, 19 ShotSpotter sensors detected a percussive sound at 11:46 p.m. and determined the location to be 5700 South Lake Shore Drive—a mile away from the site where prosecutors say Williams committed the murder, according to a motion filed by Williams’ public defender. The company’s algorithms initially classified the sound as a firework. That weekend had seen widespread protests in Chicago in response to George Floyd’s murder, and some of those protesting lit fireworks.

But after the 11:46 p.m. alert came in, a ShotSpotter analyst manually overrode the algorithms and “reclassified” the sound as a gunshot. Then, months later and after “post-processing,” another ShotSpotter analyst changed the alert’s coordinates to a location on South Stony Island Drive near where Williams’ car was seen on camera.
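If tamper-evident records were a requirement, after-the-fact “reclassification” would at least be detectable. A minimal sketch of the idea, using a hash-chained append-only log (toy data; not how ShotSpotter actually stores alerts): each entry is hashed together with its predecessor, so silently editing an earlier entry breaks verification of the chain.

```python
import hashlib
import json

def _entry_hash(prev_hash, record):
    # Hash the previous link together with the record contents, so any
    # later edit to the record invalidates every subsequent link.
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class TamperEvidentLog:
    """Append-only log where each entry chains to its predecessor by hash."""

    def __init__(self):
        self.entries = []           # list of (record, hash) pairs
        self._last_hash = "0" * 64  # genesis value

    def append(self, record):
        h = _entry_hash(self._last_hash, record)
        self.entries.append((record, h))
        self._last_hash = h

    def verify(self):
        prev = "0" * 64
        for record, h in self.entries:
            if _entry_hash(prev, record) != h:
                return False
            prev = h
        return True

log = TamperEvidentLog()
log.append({"sensor": 19, "time": "23:46", "class": "firework"})
log.append({"sensor": 19, "time": "23:46", "class": "reviewed"})
assert log.verify()

# Quietly "reclassifying" the first entry after the fact is now detectable:
log.entries[0] = ({"sensor": 19, "time": "23:46", "class": "gunshot"},
                  log.entries[0][1])
assert not log.verify()
```

In a real evidentiary system the chain head would also be periodically published or countersigned by an independent party, so the operator could not simply rebuild the whole chain.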



(Related) How to automatically raise the level of suspicion...

https://www.bespacific.com/what-cops-understand-about-copyright-filters-they-prevent-legal-speech/

What Cops Understand About Copyright Filters: They Prevent Legal Speech

EFF: ““You can record all you want. I just know it can’t be posted to YouTube,” said an Alameda County sheriff’s deputy to an activist. “I am playing my music so that you can’t post on YouTube.” The tactic didn’t work—the video of his statement can in fact, as of this writing, be viewed on YouTube. But it’s still a shocking attempt to thwart activists’ First Amendment right to record the police—and a practical demonstration that cops understand what too many policymakers do not: copyright can offer an easy way to shut down lawful expression. This isn’t the first time this year this has happened. It’s not even the first time in California this year. Filming police is an invaluable tool, for basically anyone interacting with them. It can provide accountability and evidence of what occurred outside of what an officer says occurred. Given this country’s longstanding tendency to believe police officers’ word over almost anyone else’s, video of an interaction can go a long way to getting to the true..”



(Related) I doubt this is a universal problem, but it is worth considering.

https://thenextweb.com/news/cops-are-running-amok-with-artificial-intelligence?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TheNextWeb+%28The+Next+Web+All+Stories%29

Lying, corrupt, anti-American cops are running amok with AI

A cop installs software from a company such as Clearview AI on their personal smartphone. This allows them to take a picture of anyone and surface their identity. The cop then runs the identity through an app from a company such as Palantir, which surfaces a cornucopia of information on the individual.

So, without a warrant, Officer Friendly now has access to your phone carrier, ISP, and email records. They have access to your medical and mental health records, military service history, court records, legal records, travel history, and your property records. And it’s as easy to use as Netflix or Spotify.

Best of all, at least for the corrupt cops using these systems unethically, there’s absolutely no oversight whatsoever. Cops are often offered these systems directly from the vendors as “trials” so they can try them before they decide whether to ask their departments to adopt them at scale.





Future law. Maybe. Possibly.

https://www.insideprivacy.com/internet-of-things/u-s-ai-iot-cav-and-privacy-legislative-update-second-quarter-2021/

U.S. AI, IoT, CAV, and Privacy Legislative Update – Second Quarter 2021

In this update, we detail the key legislative developments in the second quarter of 2021 related to artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and federal privacy legislation. As we recently covered on May 12, President Biden signed an Executive Order to strengthen the federal government’s ability to respond to and prevent cybersecurity threats, including by removing obstacles to sharing threat information between private sector entities and federal agencies and modernizing federal systems. On the Hill, lawmakers have introduced a number of proposals to regulate AI, IoT, CAVs, and privacy.





Will they figure it out in our lifetime?

https://www.bespacific.com/data-literacy-in-government-how-are-agencies-enhancing-data-skills/

Data Literacy in Government: How Are Agencies Enhancing Data Skills?

Fed Tech: “The federal government is vast, and the challenge of understanding its oceans of data grows daily. Rather than hiring thousands of new experts, agencies are moving to train existing employees on how to handle the new frontier. Data literacy is now a common buzzword, spurred by the publication of the Federal Data Strategy 2020 Action Plan last year and the growing empowerment of chief data officers in the government. The document outlines a multiyear, holistic approach to government information that includes building a culture that values data, encouraging strong management and protection and promoting its efficient and appropriate use.

“While the Federal government leads globally in many instances in developing and providing data about the United States and the world, it lacks a robust, integrated approach to using data to deliver on mission, serve the public and steward resources,” the plan notes. A key pillar of the plan is to “identify opportunities to increase staff data skills,” and it directs all federal agencies to undertake a gap analysis of skills to see where the weaknesses and needs lie…”





Who’d a thunk it?

https://venturebeat.com/2021/07/20/employees-want-more-ai-to-boost-productivity-study-finds/

Employees want more AI to boost productivity, study finds

Eighty-one percent of employees believe AI improves their overall performance at work. As a result, more than two-thirds (68%) are calling on their employers to deploy more AI-based technologies to help them execute tasks. That’s the top-level finding from a study published today by 3GEM on behalf of SnapLogic, which surveyed 400 office workers across the U.S. and U.K. about their opinions on AI in the workplace.



(Related)

https://hbr.org/2021/07/everyone-in-your-organization-needs-to-understand-ai-ethics

Everyone in Your Organization Needs to Understand AI Ethics

When most organizations think about AI ethics, they often overlook some of the sources of greatest risk: procurement officers, senior leaders who lack the expertise to vet ethical risk in AI projects, and data scientists and engineers who don’t understand the ethical risks of AI. Fixing this requires both awareness and buy-in on your AI ethics program across the organization. To achieve this, consider these six strategies: 1) remove the fear of not getting it right away, 2) tailor your message to your audience, 3) tie your efforts to your company purpose, 4) define what ethics means in an operational way, 5) lean on trusted and influential individuals, and 6) never stop educating.



Monday, July 26, 2021

Music for the ears of my lawyer friends?

https://www.databreaches.net/first-came-the-ransomware-attacks-now-come-the-lawsuits/

First came the ransomware attacks, now come the lawsuits

Gerrit De Vynck reports:

In a world where everything runs on computers, these attacks can cause havoc. Hospitals have had to postpone surgeries. In Southern Maryland, Leonardtown was hit by the sprawling Kaseya IT software hack and lost 17 of its 19 computers, forcing it to stop billing residents for electricity and blocking paychecks from going out to town employees. And in the case of Colonial Pipeline, hundreds of gas stations were shut down, leading to huge lines of cars waiting for what little fuel remained.
The rise in lawsuits may mean companies and organizations that are hacked are no longer just on the hook for reimbursing people who had their data stolen. They could now be liable for all kinds of damages that go well beyond a heightened risk of identity theft or credit card fraud.

Read more on Washington Post.





Repeating those repetitious redundancies my students need to hear.

https://www.cpomagazine.com/tech/business-protection-make-sure-it-thrives-in-case-of-an-emergency/

Business Protection: Make Sure It Thrives in Case of an Emergency

In this article, you will find a list of things you need to take into account to make sure that your business thrives in case of unexpected problems. This list of emergencies includes but is not limited to data protection issues, your or your business partner’s death or critical injury, financial trouble, weather disaster, and the pandemic. Read on and make sure that you are prepared for anything.





Worth asking the question?

https://www.cpomagazine.com/tech/are-companies-collecting-too-much-data-on-us/

Are Companies Collecting Too Much Data on Us?

With rising data privacy concerns, the active threat of cybercrime, and a generally oblivious public, do companies need to collect and store a lot of our data?

In today’s post, we cover how much data companies and governments should have on us. Without further ado, let us start.





Quick! Before they sneak up on you.

https://www.ofcom.org.uk/research-and-data/internet-and-on-demand-research/internet-futures

Internet Futures: Spotlight on the technologies which may shape the Internet of the future

As the UK’s communications regulator, it is important that Ofcom is aware of new types of Internet technology that may affect the future. We will monitor and consider the effects that these developments may have on the communications services we use every day.

Read the report

Internet Futures: Spotlight on the technologies which may shape the Internet of the future (PDF, 4.9 MB)





After getting so much right with their GDPR…

https://www.ft.com/content/a5970b6c-e731-45a7-b75b-721e90e32e1c

EU proposals to regulate AI are only going to hinder innovation

While some of us under lockdown churned through streaming services and sourdough starters, others decided to use the time for a little self-improvement — taking up Dutch or Danish, Swahili or Esperanto. Duolingo, the free app many downloaded, has become the world’s most popular way to learn a second language. The company is now hoping to ride that interest into an initial public offering: last week it said it wanted to be valued at up to $3.4bn in its IPO.

But EU proposals for regulating AI threaten the use of one of Duolingo’s niftiest innovations, the English Test, in its current form. They also make it less likely that the next round of similar innovations will be developed in the bloc. That’s a problem.

The English Test provides a way for people to demonstrate their proficiency to more than 3,000 educational institutions around the world. Test-takers don’t need to register in advance or travel anywhere; they just need an internet-connected device with a webcam and an hour to spare. The test guards against cheating (that’s what the webcam is for); assesses literacy, conversation and comprehension; returns results in two days; and costs less than $50.

It’s also a high-risk AI system, according to the EU proposal. This label applies because the test uses AI, both for personalisation — questions appropriate to the taker’s skill level are generated on the fly — and for grading. Systems that use AI for “assessing participants in tests commonly required for admission to educational institutions” are put in the high-risk category by the EU’s proposal.



(Related)

https://www.cnbc.com/2021/07/26/aia-europes-proposed-ai-law-could-cost-its-economy-36-billion.html

Europe’s proposed A.I. law could cost its economy $36 billion, think tank warns

A new law designed to regulate artificial intelligence in Europe could end up costing the EU economy 31 billion euros ($36 billion) over the next five years, according to a report from Washington-based think tank the Center for Data Innovation released on Sunday.

The Artificial Intelligence Act — a proposed law put forward by the European Commission, the executive arm of the EU — will be the “world’s most restrictive regulation of AI,” according to the center.

Read the report.





Perspective. A nation of voyeurs.

https://nypost.com/2021/07/25/citizen-pays-new-yorkers-25-an-hour-to-livestream-crime-scenes/

Citizen pays New Yorkers $25 an hour to livestream crime scenes

Want to make $200 a day in New York City? Rush to the scene of a murder, a three-alarm fire or a traffic accident — then pull out your phone and start shooting.

That’s the pitch from Citizen, a controversial neighborhood watch app that’s quietly hiring New Yorkers to livestream crime scenes and other public emergencies in an apparent effort to encourage more ordinary citizens to do the same, The Post has learned.





Tools & techniques.

https://www.fastcompany.com/90657628/best-free-writing-tools

7 great free tools for improving your writing



Sunday, July 25, 2021

War by random cyber bombing?

https://www.databreaches.net/cyberattack-shuts-down-services-in-greeces-second-largest-city/

Cyberattack Shuts Down Services in Greece’s Second-Largest City

The National Herald reports:

As hackers – many sponsored by Russia and China and authoritarian governments around the world – have stepped up cyber attacks on municipal services in a number of countries, Thessaloniki’s agencies were shut down over an electronic intrusion.
That happened July 23, with Deputy Mayor of Business Planning, e-Government and Migration Policy Giorgos Avarlis saying the city – Greece’s second-largest – closed its services and web applications, “so that proper investigations can be carried out and we do not risk being attacked again,” with no report what kind of defenses it has.

Read more on The National Herald.





My AI says, “Yes, if we can tell it what is ethical in every circumstance.”

https://www.tandfonline.com/doi/full/10.1080/0731129X.2021.1951459

Can AI Weapons Make Ethical Decisions?

The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued that autonomous weapons are not full ethical agents due to the restrictions of their coding. However, the highly complex machine-learning nature gives the impression that they are making their own decisions and creates the illusion that their human operators are protected from the responsibility of the harm they cause. Therefore, it is important to distinguish between autonomous AI weapons and an AI with autonomy, a distinction that creates two different ethical problems for their use. For autonomous weapons, their limited agency combined with machine-learning means their human counterparts are still responsible for their actions while having no ability to control or intercede in the actual decisions made. If, on the other hand, an AI could reach the point of autonomy, the level of critical reflection would make its decisions unpredictable and dangerous in a weapon.





Border searches should catch anyone stupid enough to carry into the country data they could easily download from the Internet after they arrive.

https://www.sciencedirect.com/science/article/pii/S2666281721001256

On the need for AI to triage encrypted data containers in U.S. law enforcement applications

This paper takes an analogical approach to define the parameters by which artificial intelligence (AI) can be utilized to facilitate warrantless searches at U.S. ports of entry. The authors tailor their discussion to the prevention of child pornography (also referred to as child abuse or exploitation materials in the academic literature), and the traffic thereof. By making the legal case to utilize AI, particularly eXplainable AI (XAI), to search encrypted devices for attributes indicative of child pornography, the authors hope to encourage research in this field and develop better technology to help catch criminals without relinquishing privacy rights.





Something for lawyers to consider.

https://link.springer.com/article/10.1007/s10506-021-09294-4

Preserving the rule of law in the era of artificial intelligence (AI)

The study of law and information technology comes with an inherent contradiction in that while technology develops rapidly and embraces notions such as internationalization and globalization, traditional law, for the most part, can be slow to react to technological developments and is also predominantly confined to national borders. However, the notion of the rule of law defies the phenomenon of law being bound to national borders and enjoys global recognition. However, a serious threat to the rule of law is looming in the form of an assault by technological developments within artificial intelligence (AI). As large strides are made in the academic discipline of AI, this technology is starting to make its way into digital decision-making systems and is in effect replacing human decision-makers. A prime example of this development is the use of AI to assist judges in making judicial decisions. However, in many circumstances this technology is a ‘black box’ due mainly to its complexity but also because it is protected by law. This lack of transparency and the diminished ability to understand the operation of these systems increasingly being used by the structures of governance is challenging traditional notions underpinning the rule of law. This is especially so in relation to concepts especially associated with the rule of law, such as transparency, fairness and explainability. This article examines the technology of AI in relation to the rule of law, highlighting the rule of law as a mechanism for human flourishing. It investigates the extent to which the rule of law is being diminished as AI is becoming entrenched within society and questions the extent to which it can survive in the technocratic society.





After that, Skynet? This reminds me of Paul David’s “The Dynamo and the Computer,” which I frequently quote and wish he had followed up on…

https://venturebeat.com/2021/07/24/deadline-2024-why-you-only-have-3-years-left-to-adopt-ai/

Deadline 2024: Why you only have 3 years left to adopt AI

If your company has yet to embrace AI, you’re in a race against the clock. And by my calculations, you have just three years left.

How did I arrive at 2024 as the deadline for AI adoption? My prediction — formulated with KUNGFU.AI advisor Paco Nathan — is rooted in us noticing that many futurists’ J curves show innovations typically have a 12-to-15-year window of opportunity, a period between when a technology emerges and when it reaches the point of widespread adoption.





Fearless prediction: This argument will continue until an AI provides us with the answer.

https://www.digitallawjournal.org/jour/article/viewFile/56/48

Intellectual Property Law: In the Hands of Artificial Creator

Every year, digitalization covers more and more areas of social life, algorithmization expands the horizons of human capabilities, and mechanization accelerates the interaction of subjects of social relations. A growing number of innovations appears in the turnover of property; it is here that the consequences of the digital revolution most acutely affect a wide range of persons participating in it. Should the conservative civil law regulation of property and personal non-property relations change under the pressure of digital technologies? Should we destroy the foundations and institutions tested by many years of experience in social communication, or will the existing civil law norms be able to withstand change, only requiring a little adaptation to new circumstances?

All these issues are even more relevant in the field of intellectual activity and the protection of intellectual property. One of the challenges is related to the development and implementation of artificial intelligence. Significant advances in the creation of algorithmic software raise the question of the possibility of legal protection of the results of its activities. The merit of the first comprehensive and multifaceted study of this problem belongs to the authors of the recently published monograph “Artificial Intelligence and Intellectual Property” by Oxford University Press, which is reviewed in this article.



(Related)

https://www.digitallawjournal.org/jour/article/view/53

Deconstruction of the legal personhood of artificial intelligence

Calls to rethink the content of “legal personhood” are increasingly being heard at the present time: to recognize animals, artificial intelligence, etc. as a subject. There are several explanations for this: firstly, a change in ideas about a person and their position in society, and secondly, attempts to rethink the traditional categories of law. Throughout long periods of history, the definition of legal personhood depended on the definition of subjective right; the subjective right was associated with the legally significant will of the person. Consequently, a change in views on the will theory of subjective right inevitably lead to a revision of the content of the person. The main purpose of this article is to determine the essence of the legal personhood. To do this, using the historical method, the evolution of ideas about the legal personhood is revealed. It is argued that Hohfeld’s approach to understanding subjective-legal structures made it possible to look differently at the content of the category of legal personhood: it became possible to recognize animals or artificial intelligence as the owners of various subjective-legal categories. Nevertheless, the logic of modern commentators, as well as supporters of such a flexible approach to the definition of legal personhood, is not free from shortcomings. Using the method of analytical jurisprudence, the author demonstrates the emerging problems.





If this is true, and the people who don’t think they need a vaccine (or a mask) get Covid, will they then ‘believe’ they can vote for Trump twice?

https://www.psypost.org/2021/07/large-study-finds-covid-19-is-linked-to-a-substantial-drop-in-intelligence-61577

Large study finds COVID-19 is linked to a substantial drop in intelligence

People who have recovered from COVID-19 tend to score significantly lower on an intelligence test compared to those who have not contracted the virus, according to new research published in The Lancet journal EClinicalMedicine. The findings suggest that the SARS-CoV-2 virus that causes COVID-19 can produce substantial reductions in cognitive ability, especially among those with more severe illness.





For my students...

https://www.makeuseof.com/linkedin-scams-to-watch-out-for/

5 LinkedIn Scams to Watch Out For

LinkedIn is a safe platform, but you can nonetheless find scammers on the site. Here's what to look out for.



Saturday, July 24, 2021

Something very strange here.

https://www.cnn.com/2021/07/23/tech/kaseya-encryptor-ransomware-victims/

Software company's unveiling of decryption key comes too late for many victims of devastating ransomware attack

On Thursday, the software company Kaseya announced that it could help unlock any of its customers' systems that were still inaccessible following a devastating ransomware attack early this month that took down as many as 1,500 businesses worldwide. But for many victims it was too little, too late.

Kaseya had obtained a decryption key, the company said, that could release any file still locked down by malicious software produced by the criminal gang REvil, which is believed to operate from Eastern Europe or Russia.

For the organizations whose systems were still offline three weeks after the attack, the newfound availability of a decryptor tool offered a sign of hope, especially after REvil mysteriously disappeared from the internet and left many organizations unable to contact the group.

But for many others that have already recovered without Kaseya's help, either by paying off the ransomware gang weeks ago or by painstakingly restoring from backups, the announcement was no help -- and opens a new chapter of scrutiny for Kaseya as it declines to answer questions about how it obtained the key and whether it paid the $70 million ransom demand or another amount.

In order to access the tool, Kaseya is requiring that businesses sign a non-disclosure agreement, according to several cybersecurity experts working with affected companies. While such agreements are not unusual in the industry, they could make it more difficult to understand what happened in the incident's aftermath. Kaseya declined to comment on the non-disclosure agreements.





Still trying to identify that tipping point. (Not just sanctions, all out cyber war.)

https://www.cpomagazine.com/cyber-security/us-intelligence-allies-formally-accuse-chinese-state-backed-hackers-of-the-microsoft-exchange-cyber-attacks-but-stop-short-of-sanctions/

US & Intelligence Allies Formally Accuse Chinese State-Backed Hackers of the Microsoft Exchange Cyber Attacks, but Stop Short of Sanctions

The massive hack of the Microsoft Exchange email server software that took place early this year is estimated to have hit tens of thousands of victims, causing disproportionate chaos for smaller businesses. The Biden administration has formally declared that Chinese state-backed APT groups are to blame. While the attack was not considered a major national security threat (at least not on par with the SolarWinds breach), it was devastating to many American small businesses ill-equipped to respond to cyber attacks of this level of sophistication.





Establishing an absolute minimum. Stop there at your peril.

https://www.databreaches.net/connecticut-enacts-safe-harbor-from-punitive-damages-in-data-breach-cases/

Connecticut Enacts Safe Harbor From Punitive Damages In Data Breach Cases

Jason Gavejian and Joseph Lazzarotti of JacksonLewis write:

Effective October 1, 2021, Connecticut becomes the third state with a data breach litigation “safe harbor” law (Public Act No. 21-119 ), joining Utah and Ohio. In short, the Connecticut law prohibits courts in the state from assessing punitive damages in data breach litigation against a covered defendant that created, maintained, and complied with a cybersecurity program that meets certain requirements. Cyberattacks are on the rise – think Colonial Pipeline, Kaseya, JBS, and others – with ransomware attacks up 158 percent from 2019-2020 in North America.

Read more on JDSupra.





Should they all be discoverable?

https://www.databreaches.net/convenience-store-chain-cant-shield-investigative-report-on-data-breach-from-discovery-judge-rules/

Convenience Store Chain Can’t Shield Investigative Report on Data Breach From Discovery, Judge Rules

We often hear of firms having their counsel running incident response and contracting of forensics, etc., so that any reports would be protected by work product doctrine as well as attorney-client privilege. But if the attorney doesn’t word the contract carefully, any report may not be covered by the doctrine. We saw that in a Capital One case last year in the Eastern District of Virginia involving a 2019 breach, and now we’re seeing it again over another 2019 case, this time in the Middle District of Pennsylvania.

P.J. Annunzio reports:

A federal judge has ruled that because an investigative report commissioned by Pennsylvania-based convenience store chain Rutter’s in response to a data security breach was not prepared for litigation purposes, it is discoverable.
In a July 22 ruling granting the class action plaintiffs’ motion to compel the document, U.S. Magistrate Chief Judge Karoline Mehalchick of the Middle District of Pennsylvania held that the report done by consultant Kroll Cyber Security for Rutter’s was not covered by attorney-client and work product privilege.

Read more on Law.com.





Not-so-private mail.

https://www.makeuseof.com/what-is-email-tracking-pixel/

What Is An Email Tracking Pixel? How Do Companies Use Them to Access Your Private Data?

Companies have a way of tracking who is opening and reading their email content: the email tracking pixel. Although email tracking pixels fly under the radar for most people, many companies use them to gauge engagement with advertising and marketing campaigns.

So, how does an email tracking pixel work?
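The mechanics are simple: the sender embeds a 1x1 image whose URL is unique to each recipient, and when the mail client fetches that image, the sender’s server logs who opened the message, when, and from what IP address and user agent. A minimal sketch of how such a pixel might be generated (`tracker.example.com` is a hypothetical domain, and real services vary in how they encode the token):

```python
import uuid

def tracking_pixel(recipient_id):
    """Return an HTML <img> tag for a per-recipient 1x1 tracking pixel.

    The token is derived deterministically from the recipient ID, so the
    server can map the image request back to the individual reader.
    """
    token = uuid.uuid5(uuid.NAMESPACE_URL, recipient_id)
    return (f'<img src="https://tracker.example.com/open.gif?r={token}" '
            'width="1" height="1" style="display:none" alt="">')

html = tracking_pixel("reader@example.com")
```

This is also why many mail clients now block remote images by default, or proxy them through their own servers: if the image is never fetched directly from the sender, the open event is never attributed to you.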





Once identified and discontinued as a bad idea, they brought it back. Should be interesting to see how mission creep impacts this system.

https://www.pogowasright.org/englands-nhs-data-sharing-to-third-parties-the-view-from-new-zealand/

England’s NHS data-sharing to third parties: the view from New Zealand

Ephraim Wilson of the NZ Privacy Commissioner’s Office writes:

In 2013, UK Prime Minister David Cameron tried to instigate the sharing of UK National Health Service (“NHS”) patient data to private organisations for a small fee. Despite plans to anonymise the data, the move was sufficiently controversial that the Government had to drop the plan – there were major concerns over transparency and privacy. Eight years later, a similar plan has emerged, this time during the pandemic response of Boris Johnson’s Government.
As part of its General Practitioner Data for Planning and Research Programme (“GPDPR”), the Government is planning to put the GP records of England’s 55 million enrolled patients into a single NHS database which will become available to third-party companies and researchers for a fee. It is an ‘opt-out’ programme, meaning that patients need to fill out a form to prevent their data from being included. Originally, GPDPR was supposed to come into action in July 2021 but has now been pushed back to September.
GPDPR will give private organisations access to the NHS Digital central database containing data about diagnoses, symptoms, observations, test results, medications, allergies, immunisations, referrals, and appointments, including information about physical, mental, and sexual health. The information will include data about patients’ gender, ethnicity, and sexual orientation.
Technically, people’s data will be anonymised, but there are two qualifications. First, given how specific the data is, it will at least be possible to cross-reference with other databases to reidentify the data. Secondly, NHS Digital can unlock the codes to allow access in certain circumstances and where there is valid legal reason. No names and addresses will be available to researchers, but encoded postcodes will be included.
What about these third parties? According to NHS Digital, the data will only be used for health planning and research purposes by organisations that can show they have an appropriate legal basis and a legitimate need to use it. Any data sharing will be overseen by the British Medical Association (“BMA”), the Royal College of General Practitioners (“RCGP”), and the Independent Group Advising on the Release of Data (“IGARD”).
One issue is that neither the NHS, nor their chosen third parties, have had the best record when it comes to data sharing.

Read more on the New Zealand Privacy Commissioner’s Office Blog.
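The reidentification risk the post describes is easy to demonstrate. The Python sketch below uses invented data and hypothetical field names: an “anonymised” health extract stripped of names is linked back to individuals by joining on quasi-identifiers such as partial postcode, birth year, and gender against a second dataset that does carry names.

```python
# Toy "anonymised" health extract: no names, but quasi-identifiers remain.
health_rows = [
    {"postcode_prefix": "SW1A", "birth_year": 1986, "gender": "F", "diagnosis": "asthma"},
    {"postcode_prefix": "M1",   "birth_year": 1972, "gender": "M", "diagnosis": "diabetes"},
]

# A second, public dataset (e.g. an electoral roll) that does include names.
public_rows = [
    {"name": "A. Example", "postcode_prefix": "SW1A", "birth_year": 1986, "gender": "F"},
    {"name": "B. Example", "postcode_prefix": "M1",   "birth_year": 1972, "gender": "M"},
]

QUASI_IDS = ("postcode_prefix", "birth_year", "gender")

def reidentify(health, public):
    """Link 'anonymous' records to names by matching on quasi-identifiers.

    When a combination of quasi-identifiers is unique in both datasets,
    the join reidentifies the record despite the absence of any name."""
    index = {tuple(p[k] for k in QUASI_IDS): p["name"] for p in public}
    return [
        {"name": index[key], "diagnosis": h["diagnosis"]}
        for h in health
        if (key := tuple(h[k] for k in QUASI_IDS)) in index
    ]

matches = reidentify(health_rows, public_rows)
```

The more specific the released fields (full diagnosis histories, encoded postcodes, dates), the more likely each quasi-identifier combination is unique, and the easier the join becomes.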





Two plus two does not always equal five.

https://www.databreaches.net/q2-ransom-payment-amounts-decline-as-ransomware-becomes-a-national-security-priority/

Q2 Ransom Payment Amounts Decline as Ransomware becomes a National Security Priority

Seen on Coveware:

If you had told us at the beginning of 2021 that then President elect Biden would be having a nose to nose face off with Putin over ransomware, we would have speculated that some serious escalation must have occurred. In reality, the lackadaisical indifference of one threat actor (DarkSide) set off a compounding series of events that have led us to where we are today. Given the volume of attacks that Ransomware-as-a-service (RaaS) groups conduct, and the de minimis diligence that these groups perform, we are quite certain that the DarkSide affiliate that attacked Colonial Pipeline, had no idea that a) Colonial controlled 45% of the gasoline supply on the US east coast, b) that shutting down that pipeline would cause a consumer run on gasoline, c) that NOTHING gets voters and their duly elected representatives out of their chairs like rising gasoline prices, and finally d) that if you mess with US gasoline prices, you are going to get the attention of the President. Other high profile attacks that would have otherwise garnered 12 hours of media attention were (FINALLY) codified proof that the US indeed has a major problem with ransomware.

But what does that have to do with ransomware payments declining, you ask? Read more on Coveware.





My AI says, “No that can never happen. Please stop asking.”

https://thenextweb.com/news/build-a-computer-with-free-will-syndication?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TheNextWeb+%28The+Next+Web+All+Stories%29

Can we build a computer with free will?

Do you have free will? Can you make your own decisions? Or are you more like an automaton, just moving as required by your constituent parts? Probably, like most people, you feel you have something called free will. Your decisions are not predetermined; you could do otherwise.

Yet scientists can tell you that you are made up of atoms and molecules and that they are governed by the laws of physics. Fundamentally, then – in terms of atoms and molecules – we can predict the future for any given starting point. This seems to leave no room for free will, alternative actions, or decisions.

Confused? You have every right to be. This has been one of the long outstanding unresolved problems in philosophy. There has been no convincing resolution, though speculation has included a key role for quantum theory, which describes the uncertainty of nature at the smallest scales. It is this that has fascinated me. My research interests include the foundations of quantum theory. So could free will be thought of as a macroscopic quantum phenomenon? I set out to explore the question.





Perspective. Well, maybe not everything...

https://www.zdnet.com/article/what-is-ai-heres-everything-you-need-to-know-about-artificial-intelligence/

What is AI? Here's everything you need to know about artificial intelligence

An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a machine that would have previously been considered to require human intelligence.

Francois Chollet, an AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system's ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.

"Intelligence is the efficiency with which you acquire new skills at tasks you didn't previously prepare for," he said.





Perspective. Fully self-driving? The end of this year? Ford must think this is the future.

https://www.cnbc.com/2021/07/21/ford-and-argo-ai-to-launch-self-driving-cars-with-lyft-by-end-of-year.html

Ford and Argo AI to launch self-driving cars with Lyft by the end of the year

Ford will launch an autonomous vehicle fleet with Lyft and Argo AI by the end of the year, the companies announced Wednesday.

Self-driving rides with safety drivers will begin this year in Miami. The companies said they plan to expand to Austin, Texas, in 2022 and roll out about 1,000 self-driving cars in multiple markets within five years.

The partnership comes as ride-hailing companies Uber and Lyft ditch their own in-house systems and instead look to outside partners for self-driving technology. Lyft announced plans in April to sell its autonomous vehicle unit to a subsidiary of Toyota for $550 million. In December, Uber sold its self-driving unit to start-up Aurora — which is backed by Hyundai and Amazon — amid safety concerns and extreme costs.





Perspective. Your next programming language?

https://www.analyticsinsight.net/julia-is-causing-quite-a-stir-with-code-modernization-in-the-tech-industry/

Julia Is Causing Quite a Stir with Code Modernization in the Tech Industry

The present tech industry is in dire need of a programming language that provides the performance of C or C++ with the usability of Python. All of these capabilities are at the heart of what the open-source Julia language project set out to do over a decade ago. When Julia was conceived in 2009 at MIT, the goal was to solve a problem that still exists: the need to use two (or more) languages, one for high performance (C or C++) and another that made programming complex systems a more pleasant experience (Python, for example). While using both could get the job done, there is inherent friction between those interfaces and processes. In addition to this basic mismatch, many of the codes in high-value science and engineering are the product of decades of building. They are inherently messy and rooted in codes that were state of the art in the 1980s, particularly in modeling and simulation.





Tools & Techniques.

https://www.makeuseof.com/use-microsoft-edge-solve-math-problems/

How to Use Microsoft Edge to Solve Math Problems

Developed by Microsoft, Math Solver is a tool built into the Edge browser that recognizes mathematical problems from an image, and solves them for you.