Sunday, April 04, 2021

Just about everyone?

https://bgr.com/2021/04/03/facebook-data-leak-533-million-user-records-leaked-online/

Facebook’s response to Saturday’s news of a huge data leak was so awful

Monday was already shaping up to be a lively news day for tech journalists. That’s when the next episode of Sway, the podcast from The New York Times’ Kara Swisher, will be available to listen to, with the new interview subject being none other than Apple CEO Tim Cook.

Swisher on Friday teased via Twitter that the conversation with Cook will cover everything from the App Store drama around Parler to the iPhone maker’s feud with Facebook — the latter of which, on Saturday, inadvertently handed Cook even more ammunition to use against the social networking giant as he continues making his case that Facebook is awful. In case you haven’t heard by now, there’s been another huge Facebook data leak, encompassing personal information from more than 533 million Facebook users from 106 countries. This data was posted in a hacking forum, according to a report from Insider, which is to say — if you have a Facebook account, there’s a good chance your data has once again been exposed to hackers, including everything from your phone number to your email address, birthday, full name, and more.

One of the big dangers with a leak like this is that hackers and other malicious actors can use this information to try to access your Facebook account, and frankly any other accounts, now that they have an abundance of information about you. They can try to reset your password, for example, and use that to cause all sorts of other mischief.

On Twitter, Facebook spokesperson Liz Bourgeois responded to a handful of news articles and posts about this leak by tweeting the same two-sentence statement: “This is old data that was previously reported on in 2019. We found and fixed this issue in August 2019.”

In other words, Facebook is responsible for a few hundred million users having their data leaked yet again (seriously, how many times is this now?), but don’t worry, it’s fine — they fixed the problem a long time ago. Not that this does anything to help un-leak the data that’s now in hackers’ hands, but, hey, Facebook did its part!





Timely. (But the Facebook data was free.)

https://www.databreaches.net/buying-breached-data-when-is-it-ethical/

Buying Breached Data: When Is It Ethical?

Jeremy Kirk reports:

Security practitioners often tread a fine and not entirely well-defined legal line when conducting data breach research. This research can also pose ethical questions when commercial sources for stolen data fall into a gray area.

Kirk’s article on DataBreachToday provides a good overview of the issue, and I totally agree with Troy Hunt, who is quoted as saying:

“I can’t for the life of me understand how security companies paying for that data on a legal basis is any different than the hacker buying the data,” he says. “People justifying this practice are relying entirely on intent being the differentiating factor, but that doesn’t do anything to de-incentivize the market for stolen data.”

I know there are people who maintain that once a data dump has been made public, it’s fair game, and people can buy it and use it. But if you buy it — even if you pay for it in “tokens” on RF — you are encouraging more data theft and dumps, which harms consumer privacy. This applies even in situations where a firm or individual is buying a data dump on behalf of a victim company that wants to find out what data the threat actors obtained. Their agent is not doing anything technically wrong or illegal (at least I don’t think they are), but by making the purchase for them, they are still rewarding the criminals with a payment and therefore still encouraging crime.

Even if you just download the data totally for free but then use it on your commercial site — like charging people to access the data that you did not have the owner’s consent to obtain or use — well, to me, that’s unethical if not actually illegal.





It can get worse?

http://modern-journals.com/index.php/ijma/article/view/723

Is AI-Based Surveillance and Facial Recognition Technology Devaluing Human Rights?

Innovations in Artificial Intelligence technology have demonstrated great potential to advance society by alleviating some of the world’s most significant problems, a way forward towards attaining the UN Sustainable Development Goals. At the same time, these technological advancements enable mass surveillance by the State, which can track, monitor, and digitally surveil its citizens through facial recognition technology. As facial recognition systems rapidly proliferate, their propensity to unjustifiably interfere with human rights, such as privacy, data protection, equality and non-discrimination, further escalates. Given that Artificial Intelligence (AI) technology has not yet attained its highest level of advancement, this paper aims to study the impact of AI-based surveillance and facial recognition technology on human rights at present and its potential impact in the future. Based on doctrinal research, the paper analyses the positive and negative impact of the deployment of AI technology in the European Union and India upon human rights, and evaluates the governance of AI technology through existing laws in the EU from a human rights perspective. It is imperative for India to formulate an AI governance mechanism to protect against abuse of human rights. Drawing from the best practices of the EU, the paper suggests some considerations relevant for formulating a regulatory or policy framework for AI-based surveillance in India.





A possible way forward? What is the simplest step in ethics?

https://arxiv.org/abs/2103.15739

Automation: An Essential Component Of Ethical AI?

Ethics is sometimes considered to be too abstract to be meaningfully implemented in artificial intelligence (AI). In this paper, we reflect on other aspects of computing that were previously considered to be very abstract. Yet, these are now accepted as being done very well by computers. These tasks have ranged from multiple aspects of software engineering to mathematics to conversation in natural language with humans. This was done by automating the simplest possible step and then building on it to perform more complex tasks. We wonder if ethical AI might be similarly achieved and advocate the process of automation as a key step in making AI take ethical decisions. The key contribution of this paper is to reflect on how automation was introduced into domains previously considered too abstract for computers.
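
The authors don’t say what that simplest possible step would be for ethics. Just to make the idea concrete for myself, here is a toy sketch in Python. The scenario, field names, and rule are entirely my own invention, not anything proposed in the paper: refuse to process a record containing personal data unless consent has been recorded, and build more complex checks on top of that.

# Toy illustration only: the scenario, field names, and rule below are my own,
# not anything from the paper.

def consent_check(record: dict) -> bool:
    """The simplest possible gate: personal data may only be processed with recorded consent."""
    has_personal_data = any(key in record for key in ("name", "email", "phone"))
    return (not has_personal_data) or record.get("consent") is True

def process(records: list) -> list:
    """Keep only the records that pass the simplest ethical check."""
    return [r for r in records if consent_check(r)]

if __name__ == "__main__":
    sample = [
        {"name": "A. User", "email": "a@example.com", "consent": True},
        {"name": "B. User", "email": "b@example.com"},  # no consent recorded, so dropped
        {"aggregate_count": 42},                        # no personal data, so kept
    ]
    print(process(sample))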





For my geeks.

https://arxiv.org/abs/2103.15746

Towards An Ethics-Audit Bot

In this paper we focus on artificial intelligence (AI) for governance, not governance for AI, and on just one aspect of governance, namely ethics audit. Different kinds of ethical audit bots are possible, but who makes the choices and what are the implications? In this paper, we do not provide ethical/philosophical solutions, but rather focus on the technical aspects of what an AI-based solution for validating the ethical soundness of a target system would be like. We propose a system that is able to conduct an ethical audit of a target system, given certain socio-technical conditions. To be more specific, we propose the creation of a bot that is able to support organisations in ensuring that their software development lifecycles contain processes that meet certain ethical standards.
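
The abstract stays at the level of what such a bot should do rather than how it would do it. Purely as my own illustration (none of this comes from the paper), here is a minimal Python sketch of one kind of check an ethics-audit bot might run against a project: confirm that the development lifecycle has actually produced certain ethics-related documents. The file names and the pass/fail rule are assumptions on my part.

# Minimal sketch of a single audit check; the required artifact names are hypothetical.
from pathlib import Path

REQUIRED_ARTIFACTS = {
    "ETHICS_REVIEW.md": "record of an ethics review for this release",
    "DATA_PROTECTION_ASSESSMENT.md": "data protection / privacy impact assessment",
    "MODEL_CARD.md": "documented model purpose, training data, and known limitations",
}

def audit_lifecycle(project_dir: str) -> bool:
    """Check that the project directory contains the expected lifecycle artifacts."""
    root = Path(project_dir)
    all_present = True
    for filename, description in REQUIRED_ARTIFACTS.items():
        if (root / filename).is_file():
            print(f"PASS  {filename}")
        else:
            print(f"FAIL  {filename} is missing ({description})")
            all_present = False
    return all_present

if __name__ == "__main__":
    audit_lifecycle(".")  # audit the current directory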





This could be amusing.

https://www.theguardian.com/us-news/2021/apr/04/dominion-trump-disinformation-fox-news-sidney-powell-giuliani-mike-lindell-lawsuits

Dominion: will one Canadian company bring down Trump's empire of disinformation?

When Donald Trump and his allies pushed the “big lie” of voter fraud and a stolen election, it seemed nothing could stop them spreading disinformation with impunity.

Politicians and activists’ pleas fell on deaf ears. TV networks and newspapers fact-checked in vain. Social media giants proved impotent.

But now a little-known tech company, founded 18 years ago in Canada, has the conspiracy theorists running scared. The key: suing them for defamation, potentially for billions of dollars.

“Libel laws may prove to be a very old mechanism to deal with a very new phenomenon of massive disinformation,” said Bob Shrum, a Democratic strategist. “We have all these fact checkers but lots of people don’t care. Nothing else seems to work, so maybe this will.”

The David in this David and Goliath story is Dominion Voting Systems, an election machine company named after Canada’s Dominion Elections Act of 1920. Its main offices are in Toronto and Denver and it describes itself as the leading supplier of US election technology. It says it serves more than 40% of American voters, with customers in 28 states.

But the 2020 election put a target on its back. As the White House slipped away and Trump desperately pushed groundless claims of voter fraud, his lawyers and cheerleaders falsely alleged Dominion had rigged the polls in favour of Joe Biden.

Among the more baroque conspiracy theories was that Dominion changed votes through algorithms in its voting machines that were created in Venezuela to rig elections for the late dictator Hugo Chávez.

The company is fighting back. It filed $1.3bn defamation lawsuits against Trump lawyers Rudy Giuliani and Sidney Powell, and MyPillow chief executive Mike Lindell, for pushing the allegations without evidence.

Separately, Dominion’s security director, Eric Coomer, launched a suit against the Trump campaign, Giuliani, Powell and some conservative media figures and outlets, saying he had been forced into hiding by death threats.

Then came the big one. Last month Dominion filed a $1.6bn defamation suit against Rupert Murdoch’s Fox News, accusing it of trying to boost ratings by amplifying the bogus claims.





I had not thought of these as writing tools.

https://www.makeuseof.com/free-text-to-speech-tools-educators/

The 7 Best Free Text-to-Speech Tools for Educators

Text-to-speech tools can be beneficial to students of all ages. Hearing your text out loud can help you catch mistakes you may have made and spot phrases that don't fit the writing as well as you thought.

Often, when writing a paper, you reach a point where reading it over again yourself doesn't make much of a difference. That's where a text-to-speech tool can help. Below you will find the best free text-to-speech tools to help you write a paper or grade one.
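
If you would rather script this than open one of the tools in the article, a few lines of Python with the pyttsx3 library (my choice, not one the article mentions) will read a draft aloud offline:

# Minimal sketch: read a draft aloud for proofreading.
# Assumes pyttsx3 is installed (pip install pyttsx3).
import pyttsx3

def read_aloud(path: str, words_per_minute: int = 150) -> None:
    """Speak the contents of a text file so you can listen for awkward phrasing."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    engine = pyttsx3.init()
    engine.setProperty("rate", words_per_minute)  # slow it down a little for proofreading
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    read_aloud("draft.txt")  # replace with the path to your own draft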


