Sunday, September 20, 2020

Interesting that this comes from Russia.

http://journals.rudn.ru/law/article/view/24569

COMPUTER TECHNOLOGIES FOR COMMITTING SABOTAGE AND TERRORISM

The article discusses the problems that arise in connection with crimes against state and public security committed through the use of computer and network technologies. This topic is becoming relevant because some states have already experienced the effects of “combat” computer viruses, which can be regarded as waging war using cyber weapons. The most famous example is the attack by the Stuxnet computer virus on an Iranian uranium enrichment plant. The virus was created specifically to disable industrial control systems. The use of unmanned ground and air vehicles to carry out terrorist acts poses a particular danger.

The destructive potential of cyberterrorism is determined by the widespread computerization of state and public life, the implementation of projects to create smart cities, including smart transportation, as well as the intensive development of the Internet of things. The purpose of the article is to analyze new criminal threats to state and public security, as well as to study high-tech ways of committing crimes such as sabotage, terrorist acts, and other crimes of a terrorist nature.

The article describes some of the techniques already used to commit crimes of sabotage and terrorism.





...and you will think it’s a human writing it.

https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/

In the Future, Propaganda Will Be Computer-Generated

Disinformation campaigns used to require a lot of human effort, but artificial intelligence will take them to a whole new level.





Don’t wait for an AI professor…

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7487209/

Emerging challenges in AI and the need for AI ethics education

Artificial Intelligence (AI) is reshaping the world in profound ways; some of its impacts are certainly beneficial but widespread and lasting harms can result from the technology as well. The integration of AI into various aspects of human life is underway, and the complex ethical concerns emerging from the design, deployment, and use of the technology serve as a reminder that it is time to revisit what future developers and designers, along with professionals, are learning when it comes to AI. It is of paramount importance to train future members of the AI community, and other stakeholders as well, to reflect on the ways in which AI might impact people’s lives and to embrace their responsibilities to enhance its benefits while mitigating its potential harms. This could occur in part through the fuller and more systematic inclusion of AI ethics into the curriculum. In this paper, we briefly describe different approaches to AI ethics and offer a set of recommendations related to AI ethics pedagogy.



(Related) If that last article was too complex…

https://dspace.mit.edu/handle/1721.1/127488

Can my algorithm be my opinion? : an AI + ethics curriculum for middle school students

Children of today can be considered "AI natives." In the same way that children of the 90s were considered to be digital natives, children of the early 2000s and 2010s have grown up in a world where much of their access to information is mediated by artificial intelligence systems. Furthermore, we expect their futures to be increasingly affected by AI, as consumers and designers. For this reason, there is a movement to teach AI concepts to K-12 students. Drawing on a tradition of scholarship in Science and Technology Studies and a surge in recent research on the ethical issues associated with the construction of AI systems, it is clear that students not only need a technical education of AI, but an education that will allow them to become conscientious consumers and ethical designers of it. This thesis presents a set of standards which describe what every child should know about the ethics of artificial intelligence: that it is not an objective or morally neutral source of information and, given that, how to design AI systems with stakeholders in mind. It then describes a series of open-source, largely unplugged activities which address these standards by blending together ethical and technical content. Finally, it presents results from a pilot where students engaged with these activities. Findings about students' initial understanding of AI and the ethical dilemmas associated with it are presented, as are students' understanding after engaging with the curriculum. After participating, students moved from seeing AI as an objective tool to a tool that can be both objective and subjective. By the end of the curriculum, students were able to identify more stakeholders of technical systems and design their own systems according to the values of those stakeholders. This work shows that students can transform into conscientious consumers and ethical designers of AI.





Explaining “explaining.”

https://www.sciencedirect.com/science/article/abs/pii/S0004370220301375

Explanation in AI and law: Past, present and future

Explanation has been a central feature of AI systems for legal reasoning since their inception. Recently, the topic of explanation of decisions has taken on a new urgency, throughout AI in general, with the increasing deployment of AI tools and the need for lay users to be able to place trust in the decisions that the support tools are recommending. This paper provides a comprehensive review of the variety of techniques for explanation that have been developed in AI and Law. We summarise the early contributions and how these have since developed. We describe a number of notable current methods for automated explanation of legal reasoning and we also highlight gaps that must be addressed by future systems to ensure that accurate, trustworthy, unbiased decision support can be provided to legal professionals. We believe that insights from AI and Law, where explanation has long been a concern, may provide useful pointers for future development of explainable AI.





Another reason to grant AI personhood?

https://qz.com/1905712/when-ai-in-healthcare-goes-wrong-who-is-responsible-2/

When AI in healthcare goes wrong, who is responsible?

Artificial intelligence can be used to diagnose cancer, predict suicide, and assist in surgery. In all these cases, studies suggest AI outperforms human doctors in set tasks. But when something does go wrong, who is responsible?

There’s no easy answer, says Patrick Lin, director of Ethics and Emerging Sciences Group at California Polytechnic State University. At any point in the process of implementing AI in healthcare, from design to data and delivery, errors are possible. “This is a big mess,” says Lin. “It’s not clear who would be responsible because the details of why an error or accident happens matters. That event could happen anywhere along the value chain.”





I wonder if the President has other software as a target for his ‘blessing?’

https://www.npr.org/2020/09/20/914032065/tiktok-ban-averted-trump-gives-oracle-walmart-deal-his-blessing

TikTok Ban Averted: Trump Gives Oracle-Walmart Deal His 'Blessing'

President Trump has given tentative approval to a deal that will keep TikTok alive in the U.S., resolving a months-long confrontation between a hit app popularized by lip-syncing teens and White House officials who viewed the service as a national security risk.

As part of the deal rescuing TikTok, U.S. tech company Oracle is joining hands with Walmart to form a new entity called TikTok Global, which will be headquartered in the U.S.

That arrangement appears to satisfy the White House's concerns over the security of American user data, even though ByteDance is expected to hold its majority ownership position.



(Related) Just a “cost of doing business in the US?”

https://www.reuters.com/article/us-usa-china-tiktok-bytedance-idUSKCN26B039

ByteDance says not aware of $5 billion education fund in TikTok deal

TikTok owner ByteDance said in a social media post on Sunday that it first learned from news reports that it was setting up a $5 billion education fund in the United States.

U.S. President Donald Trump said he had approved a deal, which included a $5 billion education fund, to allow TikTok to continue to operate in the United States.





Perspective. An impressive Infographic returns!

https://www.visualcapitalist.com/every-minute-internet-2020/

Here’s What Happens Every Minute on the Internet in 2020

[Grab it from:

https://web-assets.domo.com/blog/wp-content/uploads/2020/08/20-data-never-sleeps-8-final-01-Resize.jpg





Perhaps the greatest invention of the Pandemic Era!

https://dilbert.com/strip/2020-09-20



Saturday, September 19, 2020

Something fishy here. Clearly a major hack if they needed to restore every computer, yet they can prevent it in the future with a simple purchase?

https://www.kristv.com/news/local-news/school-district-reaches-out-to-fbi-following-cyberattack

School district reaches out to FBI following cyberattack

School administrators said they were victims of a cyberattack Tuesday afternoon. "(We have) given the FBI everything they need to investigate the incident," school officials said.

Administrators said they have to go through all Windows devices in the school system, including students’ devices. They have to be scanned, reimaged, wiped, and reinstalled. The process is very time- and labor-intensive.

Administrators said they have purchased everything needed to keep this from happening again and they hope to get back to what they do best, teaching kids, on Monday.





Slicing up the law or adding to it?

https://oag.ca.gov/news/press-releases/attorney-general-becerra-announces-landmark-settlement-against-glow-inc-%E2%80%93?&web_view=true

Attorney General Becerra Announces Landmark Settlement Against Glow, Inc. – Fertility App Risked Exposing Millions of Women’s Personal and Medical Information

California Attorney General Xavier Becerra today announced a landmark settlement against Glow, Inc. (Glow), a technology company that operates a fertility-tracking mobile app that stores personal and medical information. The settlement, which is subject to court approval, resolves the Attorney General’s investigation of Glow's app for serious privacy and basic security failures that put women’s highly-sensitive personal and medical information at risk. In addition to a $250,000 civil penalty, the settlement includes injunctive terms that require Glow to comply with state consumer protection and privacy laws, and a first-ever injunctive term that requires Glow to consider how privacy or security lapses may uniquely impact women.





For my Ethical Hackers…

https://hackaday.com/2020/09/18/listening-to-an-iphone-with-am-radio/?web_view=true

LISTENING TO AN IPHONE WITH AM RADIO

Electronic devices can be surprisingly leaky, often spraying out information for anyone close by to receive. [Docter Cube] has found another such leak, this time with the speakers in iPhones. While repairing an old AM radio and listening to a podcast on his iPhone, he discovered that the radio was receiving audio from his iPhone when tuned to 950-970kHz.

[Docter Cube] states that he was able to receive the audio signal up to 20 feet away. A number of people responded to the tweet with video and test results from different phones. It appears that iPhones 7 to 10 are affected, and there is at least one report for a Motorola Android phone. The amplifier circuit of the speaker appears to be the most likely culprit, with some reports saying that the volume setting had a big impact. With the short range the security risk should be minor, although we would be interested to see the results of testing with higher gain antennas. It is also likely that the emission levels still fall within FCC Part 15 limits.



(Ditto)

https://www.zdnet.com/article/spammers-use-hexadecimal-ip-addresses-to-evade-detection/?&web_view=true

Spammers use hexadecimal IP addresses to evade detection

IP addresses can also be written in three other formats:

    • Octal - 0300.0250.0000.0001 (by converting each decimal number to the octal base)

    • Hexadecimal - 0xc0a80001 (by converting each decimal number to hexadecimal)

    • Integer/DWORD - 3232235521 (by converting the hexadecimal IP to integer)

According to a report published yesterday by Trustwave, a spam group has adopted hexadecimal IP addresses for their campaigns since mid-July earlier this year.
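These alternate encodings are all valid spellings of the same 32-bit address, which is why naive string-matching filters miss them. A minimal sketch of the conversions (using Python's standard `ipaddress` module; the function name is my own, not from the Trustwave report):

```python
import ipaddress

def obfuscated_forms(dotted: str) -> dict:
    """Return the alternate textual encodings of a dotted-quad IPv4 address."""
    octets = [int(o) for o in dotted.split(".")]
    as_int = int(ipaddress.IPv4Address(dotted))         # 32-bit DWORD value
    return {
        "octal": ".".join(f"{o:04o}" for o in octets),  # each octet in zero-padded octal
        "hex": f"0x{as_int:08x}",                       # whole address as one hex number
        "dword": str(as_int),                           # whole address as one decimal integer
    }

forms = obfuscated_forms("192.168.0.1")
# {'octal': '0300.0250.0000.0001', 'hex': '0xc0a80001', 'dword': '3232235521'}
```

All three forms are accepted by most browsers and network stacks, so a filter that only blocks the dotted-quad spelling can be bypassed by any of them.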





Doom & gloom?

https://thenextweb.com/neural/2020/09/18/a-beginners-guide-to-the-ai-apocalypse-killer-robots/

A beginner’s guide to the AI apocalypse: Killer robots

Welcome to the fifth article in TNW’s guide to the AI apocalypse. In this series we examine some of the most popular doomsday scenarios prognosticated by modern AI experts. Previous articles in this series include: Misaligned Objectives, Artificial Stupidity, Wall-E Syndrome, and Humanity Joins the Hivemind.

We’ve danced around the subject of killer robots in the previous four editions in this series, but it’s time to look the machines in their beady red eyes and… speculate.





Still a hot topic.

https://theconversation.com/gpt-3-new-ai-can-write-like-a-human-but-dont-mistake-that-for-thinking-neuroscientist-146082

GPT-3: new AI can write like a human but don’t mistake that for thinking – neuroscientist

Since it was unveiled earlier this year, the new AI-based language generating software GPT-3 has attracted much attention for its ability to produce passages of writing that are convincingly human-like. Some have even suggested that the program, created by Elon Musk’s OpenAI, may be considered, or appears to exhibit, something like artificial general intelligence (AGI), the ability to understand or perform any task a human can. This breathless coverage reveals a natural yet aberrant collusion in people’s minds between the appearance of language and the capacity to think.





Lots of introductory videos. MS Office, Computer Basics, Job Search, Social Media, etc.

https://edu.gcfglobal.org/en/topics/

GCFLearnFree.org



Friday, September 18, 2020

Training and constant reminders could help.

https://www.infosecurity-magazine.com/news/outbound-email-breaches/?&web_view=true

Outbound Email Errors Cause 93% Increase in Breaches

According to research by Egress, 93% of 538 IT leaders surveyed reported a breach in the past year due to an email error, with 70% of those believing remote working increases the risk of sensitive data being put at risk from outbound email data breaches.

The most common breach types were replying to spear-phishing emails (80%), emails sent to the wrong recipients (80%) and sending the incorrect file attachment (80%).





Similar possibilities here? Probably not.

https://www.insideprivacy.com/international/united-kingdom/english-high-court-awards-damages-for-quasi-defamation-data-claim/

English High Court Awards Damages for Quasi-Defamation Data Claim

The English High Court has recently awarded damages in a data privacy case, with two features of particular interest. First, the nature of the claim is more reminiscent of a claim in defamation than for data privacy breaches, which is a development in the use of data protection legislation. Secondly, the damages awarded (perhaps influenced by the nature of the case) were unusually high for a data privacy case.

The decision highlights an unusual use of data protection in English law, as a freestanding form of quasi-defamation claim, as the claimants sought damages for reputational harm (as well as distress) solely under the Data Protection Act 1998 (the “DPA”, since replaced by the Data Protection Act 2018, which implemented the General Data Protection Regulation ((EU) 2016/679) (GDPR) in the UK) rather than in a libel or defamation claim, or in parallel with such a claim. It also sets a potentially unhelpful precedent by awarding two of the claimants £18,000 each for inaccurate processing of their personal data, an amount that is significantly higher than has been awarded in other data protection cases brought under the DPA. If such awards were to be made in the context of a class action, the potential liability for data controllers could be significant.





No doubt this will enable block by block advertising.

Waze To Keep 7-Day Records Of Americans’ Driving Habits

Joe Cadillic writes:

Two weeks have passed since I warned everyone about Amazon drone deliveries being the biggest threat to our privacy that Americans have ever seen. But a recent news release revealed that Google is giving them a run for the money.

Waze’s latest feature ‘save your drive’ on Live Map will record Americans’ driving habits in real time, effectively turning Waze into a national driver surveillance program.

Read more on MassPrivateI.

[From the article:

Letting Waze know your favorite and frequent travel destinations is just asking for trouble. Not only do Americans have to worry about DHS tracking everyone's license plates but now Google knows where your friends and family live. And they will know the time you leave your house and when you arrive at your destination[s].





Will changes due to the pandemic ever be undone?

https://dilbert.com/strip/2020-09-18





You invented something that gave you an advantage. Now give it to your competitors?

https://www.bloomberg.com/news/articles/2020-09-17/apple-pay-tech-likely-to-be-open-to-rivals-in-rules-mulled-by-eu

Apple Would Have to Share Payment Tech Under Rules Mulled by EU

The European Union is considering new rules that would likely require Apple Inc. to give competitors access to payments technology inside its iPhones.

The new laws would prevent mobile device manufacturers from limiting access to near-field communication technology embedded in smartphones and other devices such as smartwatches, according to documents obtained by Bloomberg.

NFC technology handles wireless signals that allow users to pay via their devices at store terminals, rather than a credit or debit card. While the report did not mention Apple by name, at present iPhone and Apple Watch users can only make NFC payments using Apple Pay. Banks and other competitors have complained they want the same functionality for their own iPhone apps and that Apple won’t give them access to the chip.

The report is set to be unveiled next week by the European Commission as part of a package of policy proposals. It includes a footnote to a competition case launched by the European Commission’s antitrust arm in June, which is seeking to assess whether the iPhone giant unfairly blocks other providers from using the tap-and-go functionality on its smartphones.





Perspective. State sanctioned espionage.

https://www.scmagazine.com/home/security-news/fbi-opens-china-related-counterintelligence-case-every-10-hours/?web_view=true

FBI opens China-related counterintelligence case every 10 hours

FBI Director Christopher Wray today offered the House Homeland Security Committee some sobering news about China – the FBI opens a new China-related counterintelligence case roughly every 10 hours.

… “They are going after cost and pricing information, internal strategy documents, personally identifiable information – anything that can give them a competitive advantage,” Wray told House members this morning.





Perspective.

https://www.bespacific.com/political-divides-conspiracy-theories-and-divergent-news-sources-heading-into-2020-election/

Political Divides, Conspiracy Theories and Divergent News Sources Heading Into 2020 Election

As the nation heads toward Election Day in the midst of a persistent pandemic and simmering social unrest, a new Pew Research Center survey finds that Americans’ deep partisan divide, dueling information ecosystems, and divergent responses to conspiracy theories and misinformation are all fueling uncertainty and conflict surrounding the presidential election. While Americans across the political spectrum have been getting information about key election-related storylines, their knowledge and opinions about these issues – as well as the candidates themselves – differ strikingly based on their party affiliation and key news sources, according to the new survey, conducted Aug. 31-Sept. 7, 2020, as part of the Center’s American News Pathways project. One central issue creating confusion in this campaign is the reliability of voting by mail, which figures to be more widespread than ever this year as people try to avoid crowded polling places during the coronavirus outbreak. President Donald Trump has repeatedly promoted the unsupported idea that mail-in voting will lead to significant fraud and has put the U.S. Postal Service in the campaign spotlight.

While evidence indicates that mail-in voting is associated with only minuscule levels of fraud, 43% of Republicans and Republican-leaning independents identify voter fraud as a “major problem” associated with mail-in ballots. By contrast, only 11% of Democrats and Democratic-leaning independents say the same thing…”





Perspective (and a reminder)

https://www.bespacific.com/duckduckgo-is-growing-fast/

DuckDuckGo Is Growing Fast

BleepingComputer: “DuckDuckGo, the privacy-focused search engine, announced that August 2020 ended in over 2 billion total searches via its search platform. While Google remains the most popular search engine, DuckDuckGo has gained a great deal of traction in recent months as more and more users have begun to value their privacy on the internet. DuckDuckGo saw over 2 billion searches and 4 million app/extension installations, and the company also said that they have over 65 million active users. DuckDuckGo could shatter its old traffic record if the same growth trend continues. Even though DuckDuckGo is growing rapidly, it still controls less than 2 percent of all search volume in the United States. However, DuckDuckGo’s growth trend has continued throughout the year, mainly due to Google and other companies’ privacy scandals…”





Interesting how much discussion the Guardian article generated.

https://theconversation.com/can-robots-write-machine-learning-produces-dazzling-results-but-some-assembly-is-still-required-146090

Can robots write? Machine learning produces dazzling results, but some assembly is still required



Thursday, September 17, 2020

Oops! We didn’t mean to kill you, sorry.

https://www.databreaches.net/did-ransomware-threat-actors-hit-a-german-medical-clinic-by-mistake-either-way-someone-died-as-a-result/

Did ransomware threat actors hit a German medical clinic by mistake? Either way, someone died as a result.

It was our nightmare realized: a medical center was completely paralyzed by a ransomware attack and someone died as a result.

As of last week, the University Clinic in Düsseldorf reported that it was in a state of emergency. Operations had been canceled, and ambulances had to be redirected to other clinics. On September 10, the clinic had posted an announcement:

A Google translation reads, in part:

There is currently an extensive IT failure at the University Hospital Düsseldorf (UKD). This means, among other things, that the clinic can only be reached to a limited extent – both by telephone and by email.

The UKD has deregistered from emergency care. Planned and outpatient treatments will also not take place and will be postponed. Patients are therefore asked not to visit the UKD – even if an appointment has been made.

Days later, the clinic remained paralyzed and unable to function normally, even as of yesterday.

And now we read that the threat actors’ attack has resulted in a death. Associated Press reports:

German authorities say a hacker attack caused the failure of IT systems at a major hospital in Duesseldorf, and a woman who needed urgent admission died after she had to be taken to another city for treatment.

Did the threat actors intend to hit the hospital? They may not have. According to the AP, citing German authorities, the extortion note appeared intended for the affiliated university, not the hospital. And when “police told hackers the hospital was affected, they provided a decryption key. The hackers are no longer reachable, they said.”

Does it matter that the threat actors may not have intended to hit the hospital and just hit it in error? Their criminal actions resulted in the death of someone, and they should be held accountable for that.

So far, I can find no mention of what type of ransomware this was or who the threat actors were.





I’m sure that EU police would share this software with US police if they were asked. And I bet they will be asked if it can be tweaked to work on other phones.

https://www.vice.com/en_us/article/k7qjkn/encrochat-hack-gps-messages-passwords-data?&web_view=true

European Police Malware Could Harvest GPS, Messages, Passwords, More

The malware that French law enforcement deployed en masse onto Encrochat devices, a large encrypted phone network using Android phones, had the capability to harvest "all data stored within the device," and was expected to include chat messages, geolocation data, usernames, passwords, and more, according to a document obtained by Motherboard.





Worth a listen?

https://www.insideprivacy.com/data-security/inside-privacy-audiocast-episode-4-a-look-into-the-aclu-of-californias-position-on-the-cpra/

Inside Privacy Audiocast: Episode 4 – A Look into the ACLU of California’s Position on the CPRA

On our fourth episode of our Inside Privacy Audiocast, we are aiming our looking glass at the California Privacy Rights Act, and are joined by guest speaker Jacob Snow, Technology and Civil Liberties Attorney with the American Civil Liberties Union of Northern California.





We’re willing to protect your privacy, except when it becomes inconvenient.

COVID-19 and HIPAA: HHS’s Troubled Approach to Waiving Privacy and Security Rules for the Pandemic

A snippet from the Executive Summary of a new report written by Robert Gellman and Pam Dixon:

This report offers an analysis of existing laws and practices regarding both types of HIPAA COVID-19 waivers. The report recommends that, when the current emergency subsides, the Secretary of HHS review in a systematic way the privacy, security, and legal questions about all HIPAA waivers. The report further recommends that HHS prepare for future health emergencies with advance planning for HIPAA waiver practices. The report recommends that the National Committee on Vital and Health Statistics be tasked with the fact-finding and policy work needed to develop legislative and administrative recommendations for HIPAA waivers. Discussions about HIPAA waivers should involve all relevant stakeholders. Finally, once the Secretary completes a review of waiver authority, the report recommends that the US Congress reform the statutory HIPAA waiver rules.

You can download their analysis and full report here.





For the pandemic and beyond.

https://www.cpomagazine.com/data-protection/adapting-data-governance-to-wfh-reality/

Adapting Data Governance to WFH Reality

Even if the pandemic subsides, the work from home movement is here to stay. Numerous companies have pushed out plans to return workers to offices and, even then, remote work will likely be more possible for more workers. Twitter, for one, has said WFH is a permanent option.

This all means that companies, many of whom shifted to remote workforces almost overnight earlier this year, should double check that data privacy and security policies are in place to enable a secure and efficient WFH workforce—while protecting consumer and corporate data.

This is no small task. Even before the pandemic and WFH rush, companies were having trouble complying with the European Union’s General Data Protection Regulation, passed in 2018, says consulting firm McKinsey & Company.

Yet the goal for stepping up WFH data governance isn’t simply compliance. The ultimate aim is to go forward with policies and procedures that enable better results and build brand trust with consumers, business partners and others.

Here are four strategic steps to help enterprises update data governance for the WFH reality:





Could every company or industry do this?

https://www.bespacific.com/encyclopedia-of-ethical-failure/

Encyclopedia of Ethical Failure

U.S. Department of Defense Standards of Conduct Office – Encyclopedia of Ethical Failure, Revised October 2019

The Standards of Conduct Office of the Department of Defense General Counsel’s Office has assembled the following selection of cases of ethical failure for use as a training tool. Our goal is to provide DoD personnel with real examples of Federal employees who have intentionally or unwittingly violated the standards of conduct. Some cases are humorous, some sad, and all are real. Some will anger you as a Federal employee and some will anger you as an American taxpayer…”



Wednesday, September 16, 2020

For my Ethical Hackers. Would you make more money keeping this hack to yourself?

https://hotforsecurity.bitdefender.com/blog/can-you-crack-monero-irs-offers-625000-bounty-for-anyone-who-can-break-privacy-of-cryptocurrency-24144.html

Can You Crack Monero? IRS Offers $625,000 Bounty for Anyone Who Can Break Privacy of Cryptocurrency

Monero (XMR) is a famously privacy-centric cryptocurrency, with features built into it from its inception that claim to make transactions untraceable and completely private, hiding the details of movements of digital cash from prying eyes. Completely private by default, Monero is a lot more private than many other cryptocurrencies such as Bitcoin.

And that, of course, has not only made it a popular digital currency for criminals operating on the darknet, it’s also made it a focus of interest for law enforcement agencies and tax-enforcement authorities such as the United States Internal Revenue Service (IRS).

According to the IRS’s call for contractors, the agency is looking to award a total of $625,000 to “one or more contractors” who assist in its goal to break Monero, other anonymity-enhanced cryptocurrencies, or Lightning or other Layer 2 off-chain cryptocurrency protocols.

The first part of the payment (a mere $500,000) will be paid if a successful proof-of-concept is delivered, demonstrating how Monero transactions can have their privacy stripped away from them.

An additional $125,000 will apparently be given to whoever the lucky person is after the technique has passed a full examination and has been successfully launched.





You will need this, the only question is when…

https://securityaffairs.co/wordpress/108308/laws-and-regulations/vulnerability-disclosure-toolkit.html?web_view=true

UK NCSC releases the Vulnerability Disclosure Toolkit

The British National Cyber Security Centre (NCSC) released a guideline, dubbed The Vulnerability Disclosure Toolkit, for the implementation of a vulnerability disclosure process.

The international standard for vulnerability disclosure (ISO/IEC 29147:2018) defines the techniques and policies that can be used to receive vulnerability reports and publish remediation information. “The NCSC designed this toolkit for organisations that currently don’t have a disclosure process but are looking to create one,” reads the guideline.





Still want to be famous?

https://www.theregister.com/2020/09/15/china_shenzhen_zhenhua_database/

Chinese database details 2.4 million influential people, their kids, addresses, and how to press their buttons

… The researcher alleges the purpose of the database is enabling influence operations to be conducted against prominent and influential people outside China.

Security researcher Robert Potter and Balding co-authored a paper [PDF] claiming the trove is known as the “Overseas Key Information Database” (OKIDB) and that while most of it could have been scraped from social media or other publicly-accessible sources, 10 to 20 per cent of it appears not to have come from any public source of information. The co-authors do not rule out hacking as the source of that data, but also say they can find no evidence of such activity.

“A fundamental purpose appears to be information warfare,” the pair stated.

In a second post, Balding said the database matters because “what cannot be underestimated is the breadth and depth of the Chinese surveillance state and its extension around the world.”





You don’t have to follow this rule, but be sure to follow that one.

https://www.cpomagazine.com/data-privacy/fisa-court-approves-warrantless-surveillance-but-with-warning-to-fbi-about-following-privacy-rules/

FISA Court Approves Warrantless Surveillance but With Warning to FBI About Following Privacy Rules

The controversial warrantless surveillance program enacted under Section 702 of the FISA Amendments Act will go on for at least the remainder of this year, according to a recently declassified Foreign Intelligence Surveillance Act (FISA) court ruling from December 2019. A judge signed off on another year of the program but did so while admonishing the FBI over numerous violations of privacy rules.

… Certain privacy rules do govern this eavesdropping, but the declassified report makes clear that the FBI and other agencies have a tendency to disregard them while applying their queries in an overbroad manner and have potentially accessed the communications of tens of thousands of Americans who are not under investigation.





Another tech first for the justice system.

https://restofworld.org/2020/death-decreed-over-zoom/

Death decreed over Zoom

On May 4, a Nigerian man became the first known person in the world to be sentenced to death via a virtual court on Zoom. The session was brief — it began at 11 a.m. and ended before 2 p.m. — and in the screenshots people posted online, Olalekan Hameed, 35, who joined the call from prison, appeared to be alone. He looked calm, and the ruling was later reported to have gone off without a hitch. Two days before the sentencing, a link to the proceedings was shared on Twitter, but it largely went unnoticed; most Nigerians were preoccupied with the easing of a five-week-long lockdown in response to the Covid-19 pandemic.

Open trials hold some significance for Nigerians, particularly those who lived through the decades of military dictatorships that followed independence and disrupted early attempts at democratic rule. Back then, court hearings for dissenters and political opponents were replaced with Special Military Tribunals (SMTs), and public access to proceedings was granted or withheld on the whim of a dictator.





If you can’t blame the AI, you blame the safety driver? What will happen when there is no safety driver?

https://www.nytimes.com/2020/09/15/technology/uber-autonomous-crash-driver-charged.html

Driver Charged in Uber’s Fatal 2018 Autonomous Car Crash

Investigators said the woman had been watching a video on her phone when the vehicle killed a pedestrian in Arizona.

A safety driver who was riding in an autonomous Uber vehicle when it struck and killed a pedestrian on a street in Tempe, Ariz., in 2018 has been charged with negligent homicide, the local authorities said on Tuesday.

The crash is believed to be the first pedestrian death caused by self-driving technology, and raised questions about who should be held responsible for such fatalities.

… A National Transportation Safety Board investigation attributed the crash mostly to human error, but also faulted an “inadequate safety culture” at Uber.





Don’t these “filters” also apply to individuals?

https://www.bespacific.com/jared-diamond-why-nations-fail-or-succeed-when-facing-a-crisis/

Jared Diamond: Why Nations Fail Or Succeed When Facing A Crisis

The following interview, between Noema Magazine Editor-in-Chief Nathan Gardels and author (previously of “Guns, Germs, and Steel”) Jared Diamond, has been edited for clarity and length.

Nathan Gardels: In assessing how nations manage crises and successfully negotiate turning points — or don’t — you pass their experience through several filters. Some key filters you use are realistic self-appraisal, selective adoption of best practices from elsewhere, a capacity to learn from others while still preserving core values and flexibility that allows for social and political compromise.

How do you see the way various nations addressed the coronavirus pandemic through this lens?

Jared Diamond: Nations and entities doing well by the criteria of those outcome predictors include Singapore and Taiwan. Doing poorly initially were the government of Italy and now, worst of all, the federal government of the U.S. …





Resource

https://www.infoworld.com/article/3574935/oracle-open-sources-java-machine-learning-library.html

Oracle open-sources Java machine learning library

Oracle is making its Tribuo Java machine learning library available free under an open source license.

With Tribuo, Oracle aims to make it easier to build and deploy machine learning models in Java, similar to what already has happened with Python. Released under an Apache 2.0 license and developed by Oracle Labs, Tribuo is accessible from GitHub and Maven Central.
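For reference, pulling Tribuo into a Maven build should look roughly like the following. The coordinates and version are assumptions based on the project’s launch announcement; verify the current artifact and version on Maven Central before use.

```xml
<!-- Hypothetical coordinates: check Maven Central for the current release -->
<dependency>
  <groupId>org.tribuo</groupId>
  <artifactId>tribuo-all</artifactId>
  <version>4.0.0</version>
  <type>pom</type>
</dependency>
```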