Monday, June 07, 2021

Would your organization do as well?

https://www.databreaches.net/jp-fujifilm-refuses-to-pay-ransomware-demand-restores-network-from-backups/

Jp: Fujifilm refuses to pay ransomware demand, restores network from backups

Robert Scammell reports:

Japanese multinational conglomerate Fujifilm said it has refused to pay a ransom demand to the cyber gang that attacked its network in Japan last week and is instead relying on backups to restore operations.
The company’s computer systems in the US, Europe, the Middle East and Africa are now “fully operational and back to business as usual”, a Fujifilm spokesperson told Verdict.

Read more on Verdict.





A “feature” that marketing missed?

https://www.pogowasright.org/tiny-trackers-make-it-alarmingly-easy-for-someone-to-stalk-you/

Tiny trackers make it alarmingly easy for someone to stalk you

Tribune Content Agency reports:

The proliferation of tracking devices — particularly the type of popular gadgets being sold to help you find your belongings, such as your wallet, keys or luggage — has led to an increase in fears of stalking, experts say.
Five years ago, two Hallandale Beach commissioners and one candidate complained to police when they found GPS trackers planted under their cars. A private investigator named Victor Elbeze pleaded no contest to the rarely invoked criminal charge and was fined $293.
More recently, Apple released its new AirTags, coin-sized $30 wireless devices that the company says are “a supereasy way to keep track of your stuff.”

Read more on Bangor Daily News.





A simple backgrounder for my Computer Security students.

https://www.bespacific.com/identity-theft-101-tips-to-protect-yourself-against-identity-theft/

Identity Theft 101: Tips to Protect Yourself Against Identity Theft

Law Technology Today: “What identity theft comes down to is that your personal and confidential information ends up in the wrong hands and gets used without your permission for purchases and all kinds of fraudulent activities. The scary part is that most of us willingly make our personal information available online, and it is easy for cybercriminals to steal it. Considering that we all use technology and the internet nowadays, this could happen to anyone. On the up-side, though, identity theft can be prevented with some basic knowledge, planning, and awareness…”





Perhaps this is how a cyber war will start?

https://www.databreaches.net/ukraines-security-services-claims-to-have-thwarted-mass-cyberattack-by-russian-special-forces/

Ukraine’s security service claims to have thwarted mass cyberattack by Russian special forces

The SBU blocked a mass cyberattack by Russian special services on the computer networks of the Ukrainian authorities

Cyber experts at the Security Service of Ukraine uncovered the deliberate distribution of malicious software by the special services of the Russian Federation. Those behind the operation planned to hit the computer networks of public authorities, local governments and critical infrastructure.

SBU specialists established that in early June this year, mass e-mails were sent out with spoofed sender addresses. In particular, messages purporting to be reports from the Kyiv Patrol Police Department contained malicious attachments and were sent to a number of government agencies.

Original Source: Служба безпеки України (Security Service of Ukraine).





Lawyers are using AI for discovery, why not use it for evidence gathering? What could possibly go wrong?

https://www.zdnet.com/article/nsw-police-using-artificial-intelligence-to-analyse-cctv-footage/

NSW Police using artificial intelligence to analyse CCTV footage

The New South Wales Police Force is in the process of bringing its back-end into the 21st century, turning to Microsoft and its Azure cloud platform for help.

According to Microsoft, the force is retiring, re-architecting, or replacing over 200 legacy systems with cloud-based systems. Part of this transformation is changing the way the force analyses CCTV footage.

Labelled as the "AI/ML-infused Insights policing platform", the system essentially speeds up the processing of data. In one example, NSW Police collected 14,000 pieces of CCTV as part of a murder and assault investigation and analysed the footage far faster than it previously could.

"The AI/ML infused Insights platform ingested this huge volume in five hours and prepared it for analysis by NSW Police Force investigators, a process that would otherwise have taken many weeks to months," Microsoft said in a case study prepared alongside NSW Police.





Further down the path to a fully automated lawyer! (My AI vs your AI)

https://www.bespacific.com/new-westlaw-feature-flags-weaknesses-in-opponents-cases-and-arguments/

New Westlaw Feature Flags Weaknesses In Opponent’s Cases and Arguments

LawSites: “A feature launched this week in Westlaw Edge is designed to help legal professionals more easily identify law that is contrary to their opponents’ arguments. Called Quick Check Contrary Authority Identification, the feature helps find cases that may be helpful in arguing against an opponent’s filing and prioritizes them in search results, according to an announcement from Thomson Reuters. The new feature is part of Quick Check, a component of Westlaw Edge that uses artificial intelligence to analyze a brief and identify relevant authorities omitted from the brief…”





What ethics are ethical?

https://www.bespacific.com/the-contestation-of-tech-ethics-a-sociotechnical-approach-to-ethics-and-technology-in-action/

The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action

The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action, Ben Green via arXiv.org: “Recent controversies related to topics such as fake news, privacy, and algorithmic bias have prompted increased public scrutiny of digital technologies and soul-searching among many of the people associated with their development. In response, the tech industry, academia, civil society, and governments have rapidly increased their attention to “ethics” in the design and use of digital technologies (“tech ethics”). Yet almost as quickly as ethics discourse has proliferated across the world of digital technologies, the limitations of these approaches have also become apparent: tech ethics is vague and toothless, is subsumed into corporate logics and incentives, and has a myopic focus on individual engineers and technology design rather than on the structures and cultures of technology production. As a result of these limitations, many have grown skeptical of tech ethics and its proponents, charging them with “ethics-washing”: promoting ethics research and discourse to defuse criticism and government regulation without committing to ethical behavior. By looking at how ethics has been taken up in both science and business in superficial and depoliticizing ways, I recast tech ethics as a terrain of contestation where the central fault line is not whether it is desirable to be ethical, but what “ethics” entails and who gets to define it. This framing highlights the significant limits of current approaches to tech ethics and the importance of studying the formulation and real-world effects of tech ethics. In order to identify and develop more rigorous strategies for reforming digital technologies and the social relations that they mediate, I describe a sociotechnical approach to tech ethics, one that reflexively applies many of tech ethics’ own lessons regarding digital technologies to tech ethics itself.”





The first of many. Remember, governments have a monopoly on how antitrust is defined.

https://www.wsj.com/articles/french-regulator-fines-google-268-million-in-antitrust-settlement-11623054737?mod=djemalertNEWS

Google Settles Antitrust Case Over Advertising Practices

Alphabet Inc.’s Google agreed to pay French regulators a fine of nearly $270 million, settling one of the first antitrust cases globally to allege the tech company abused its leading role in the digital advertising sector.

France’s competition authority said it had also accepted a series of proposed commitments Google made to settle the case, including promises to make it easier for competitors to use its online-ad tools. The Wall Street Journal first reported the proposed settlement last month.

Google’s commitments will be binding for three years, the authority said. [“Then we can fine them again.” Bob]





Perspective.

https://www.fastcompany.com/90643827/how-to-make-sure-that-ai-isnt-invasive-and-creepy

How to make sure that AI isn’t invasive and creepy

If it’s designed in a way that respects human decision-making, AI can actually be a force for privacy.





Perspective.

https://socialeurope.eu/robots-jobs-and-the-future-of-work

Robots, jobs and the future of work

Apocalyptic visions of robots stealing workers’ jobs are not only misguided but have diverted attention from more significant trends.

There has been a lively debate over the last decade on the ‘future of work’—the implications for employment of recent technical change. With few exceptions, it has had a predominantly negative tone. Recent waves of change—computerisation, robotisation and, more recently, artificial intelligence—are often portrayed as impersonal, pervasive, ineluctable forces, which can bring enormous benefits (mostly for consumers) but also major challenges (mostly for workers).

Projected employment losses over the near future are often counted in the millions and, the argument goes, the outlook is particularly bleak for low- and medium-skilled workers, who will be easily replaced by machines because of a lack of complementary skills. Potential policy responses are generally framed in a defensive, almost fatalistic way—at most to try to mitigate these inescapable tendencies. Most frequently mentioned are opportunities for re- and up-skilling of the forthcoming masses of displaced workers or—presuming there will anyway be too few jobs for all—some form of unconditional income support.

But the urgency and scale of this debate contrast with the thinness of the evidence supporting these prognoses.

Comparing past projections with actual numbers, however, illustrates how far off these can be. A recent study by the Organisation for Economic Co-operation and Development found net employment growth between 2012 and 2019 in the occupations and countries it had considered at highest risk from automation in 2012—albeit at a slower pace than the rest.

