Saturday, November 11, 2023

Serious question: think beyond rotting produce. What could you smuggle into or out of a country if governments could not verify cargo?

https://au.news.yahoo.com/australia-locks-down-ports-nationally-095725266.html

Australia locks down ports after ‘nationally significant’ cyberattack

Australia says it is responding to an ongoing cyberattack targeting major ports, prompting operator DP World to temporarily restrict access to the network on Saturday.

The operator shut down four ports at Sydney, Melbourne, Brisbane, and Fremantle after detecting a cybersecurity incident late on Friday night. DP World is responsible for 40 per cent of Australia’s maritime freight.

DP World Australia said it had “restricted landside access to our Australian port operations” during the ongoing investigation.

The restrictions imposed by DP World meant ships were unable to unload freight and freight was also barred from leaving the port site.

Mr Goldie said the interruption was likely to continue "for a number of days", impacting the movement of goods into and out of the country.





Hackers seem to react faster than government (or educational) bureaucracies.

https://www.databreaches.net/times-up-singularitymd-sets-up-to-sell-data-from-jeffco-public-schools/

Time’s up: SingularityMD sets up to sell data from Jeffco Public Schools

It looks like “SingularityMD,” the hacker(s) of Clark County School District in Nevada and Jeffco Public Schools in Colorado, are looking to start selling the data they exfiltrated.

In an introductory post today on Breach Forums, they write:

We are SingularityMD.

We specialize in low sophistication corporate network infiltration.

We are behind the following hacks

We have access to a lot of organizational data and would like a place to sell it.

We plan to sell the Jeffco data breach dataset and some parts of CCSD which has not previously been leaked.

We have data for additional organizations we will sell over time.

Attempting to sell data on the popular forum is somewhat of a game-changer, as even if they sell data to just one buyer, there is no way to know how many others will buy the data from the original purchaser. The buyer might keep it private or choose to re-sell it to any number of buyers. Or if there’s no buyer, SingularityMD might just leak the data (give it away freely on the forum).



(Related)

https://www.theregister.com/2023/11/10/lockbit_leaks_boeing_files/

Impatient LockBit says it's leaked 50GB of stolen Boeing files after ransom fails to land

The LockBit crew is claiming to have leaked all of the data it stole from Boeing late last month, after the passenger jet giant apparently refused to pay the ransom demand.

The gang dumped the files online early Friday morning. This latest leak includes about 50GB of data in the form of compressed archives and backup files for various systems.





Did we see an overwhelming volume of disinformation during the last election cycle? This article suggests that we will this time. Who benefits?

https://www.nbcnews.com/tech/tech-news/gop-muzzled-quiet-coalition-fought-foreign-propaganda-rcna103373

How the GOP muzzled the quiet coalition that fought foreign propaganda

The FBI put a pause on briefings with tech companies due to an ongoing lawsuit, adding to a broader breakdown in a system meant to guard against influence operations and to ensure election integrity.

A once-robust alliance of federal agencies, tech companies, election officials and researchers that worked together to thwart foreign propaganda and disinformation has fragmented after years of sustained Republican attacks.

The GOP offensive started during the 2020 election as public critiques and has since escalated into lawsuits, governmental inquiries and public relations campaigns that have succeeded in stopping almost all coordination between the government and social media platforms.

The most recent setback came when the FBI put an indefinite hold on most briefings to social media companies about Russian, Iranian and Chinese influence campaigns. Employees at two U.S. tech companies who used to receive regular briefings from the FBI’s Foreign Influence Task Force told NBC News that it has been months since the bureau reached out.



Friday, November 10, 2023

What is happening here? Is there a new generation of lawyers who are willing to try lies (even obvious lies) in an attempt to change reality?

https://www.databreaches.net/paging-regulators-to-aisle-4-to-look-at-pacific-union-colleges-data-security-and-breach-disclosure/

Paging regulators to Aisle 4 to look at Pacific Union College’s data security and breach disclosure

On November 8, Pacific Union College in California notified the Maine Attorney General’s Office of a breach in March 2023 that impacted 56,041 people. Their notification, submitted by external counsel at McDonald Hopkins, indicates that the breach occurred between March 5 and March 19, 2023 and was discovered on October 9, 2023.

That discovery date is utter rubbish. Let’s dig into this one a bit deeper by consulting the redacted copy of the notification to those affected. It appears below this post.





Perspective.

https://www.gatesnotes.com/AI-agents

AI is about to completely change how you use computers

In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.

This type of software—something that responds to natural language and can accomplish many different tasks based on its knowledge of the user—is called an agent. I’ve been thinking about agents for nearly 30 years and wrote about them in my 1995 book The Road Ahead, but they’ve only recently become practical because of advances in AI.

Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.



Thursday, November 09, 2023

Buy a car, sell your privacy?

https://therecord.media/class-action-lawsuit-cars-text-messages-privacy

Court rules automakers can record and intercept owner text messages

A federal judge on Tuesday refused to bring back a class action lawsuit alleging four auto manufacturers had violated Washington state’s privacy laws by using vehicles’ on-board infotainment systems to record and intercept customers’ private text messages and mobile phone call logs.

The Seattle-based appellate judge ruled that the practice does not meet the threshold for an illegal privacy violation under state law, handing a big win to automakers Honda, Toyota, Volkswagen and General Motors, which are defendants in five related class action suits focused on the issue. One of those cases, against Ford, had been dismissed on appeal previously.

The plaintiffs in the four live cases had appealed a prior judge’s dismissal. But the appellate judge ruled Tuesday that the interception and recording of mobile phone activity did not meet the Washington Privacy Act’s standard that a plaintiff must prove that “his or her business, his or her person, or his or her reputation” has been threatened.



Wednesday, November 08, 2023

It’s not quantity, it’s quality!

https://theconversation.com/researchers-warn-we-could-run-out-of-data-to-train-ai-by-2026-what-then-216741

Researchers warn we could run out of data to train AI by 2026. What then?

We need a lot of data to train powerful, accurate and high-quality AI algorithms. For instance, ChatGPT was trained on 570 gigabytes of text data, or about 300 billion words.

Similarly, the Stable Diffusion algorithm (which is behind many AI image-generating apps such as DALL-E, Lensa and Midjourney) was trained on the LAION-5B dataset comprising 5.8 billion image-text pairs. If an algorithm is trained on an insufficient amount of data, it will produce inaccurate or low-quality outputs.

The quality of the training data is also important. Low-quality data such as social media posts or blurry photographs are easy to source, but aren’t sufficient to train high-performing AI models.

Text taken from social media platforms might be biased or prejudiced, or may include disinformation or illegal content which could be replicated by the model.





An old story.

https://abovethelaw.com/2023/11/shadow-ai-a-thorny-problem-for-law-firms/

Shadow AI: A Thorny Problem For Law Firms

There were plenty of articles written about Shadow IT, defined by Cisco as “the use of IT-related hardware or software by a department or individual without the knowledge of the IT or security group within the organization.” Shadow IT included cloud services, software, and hardware.

Welcome to the sudden rise of Shadow AI. Its use, like that of Shadow IT, is often unknown to a law firm’s IT or security group.

AI is everywhere, but it’s not always visible. We forget that AI is embedded in videoconferencing programs, in many legal research programs, in our e-discovery software, in the browsers we use to search for information, in our smartphones – and the list goes on and on.





Could be amusing…

https://www.bespacific.com/artificial-intelligence-experts-discuss-legal-implications-on-aba-presidential-speaker-series/

Artificial intelligence experts discuss legal implications on ABA Presidential Speaker Series

A panel of experts on artificial intelligence and how it will affect the legal landscape is featured in the next installment of the ABA Presidential Speaker Series. The program, titled “A.I. – The New Frontier,” will feature a panel of special advisers to the ABA Task Force on the Law and Artificial Intelligence. The program will be available at 3 p.m. EST on Thursday, Nov. 9. No advance registration is required. The program can be viewed here. In addition to exploring how AI has the potential to transform all aspects of society, including the practice of law, the panel will discuss the new AI Executive Order that President Biden announced on Oct. 30 — one of the first in-depth discussions by national experts examining the executive order and its ramifications.





Another kind of deepfake. Imagine my face on a two dollar bill.

https://www.androidauthority.com/google-photos-magic-editor-prohibited-edits-3383291/

Google Photos' Magic Editor will refuse to make these edits

Summarizing the strings above, it seems Magic Editor will refuse to edit:

• Photos of ID cards, receipts, and other documents that violate Google’s GenAI terms.

• Images with personally identifiable information.

• Human faces and body parts.

• Large selections or selections that need a lot of data to be generated.





...and here I thought politicians never lied.

https://abcnews.go.com/Politics/ai-political-campaigns-raising-red-flags-2024-election/story?id=102480464

AI use in political campaigns raising red flags into 2024 election

… Wald said that the biggest problem AI-generated campaign materials pose is that they promote the concept of "the liars' dividend," where someone can claim that a fact or real-life event is a lie and a fake and sow doubt.



Tuesday, November 07, 2023

It looks like Meta misread this entirely. Did I miss something?

https://www.cpomagazine.com/data-protection/meta-behavioral-advertising-restrictions-that-began-in-norway-expand-to-eu-ban/

Meta Behavioral Advertising Restrictions That Began in Norway Expand to EU Ban

Earlier this year, Norway’s data protection agency deemed that Meta’s behavioral advertising practices were out of compliance with General Data Protection Regulations (GDPR) and began levying a daily fine against the company. After failing to stop it with an injunction, Meta is now looking at an EU ban after the European Data Protection Board (EDPB) reached a decision on the case.

The terms of the decision require Meta to stop behavioral advertising across most of the EU by November 10. Meta has already declared that it will start asking EU users for consent, and will steer those that do not toward a new paid subscription option that will provide access to all of its services (such as Instagram) for the equivalent of about $10.50 per month.

The Norwegian behavioral advertising ban initiated in August of this year and came with an order to Meta to pay 1 million kroner per day (about $100,000) that it remained in violation. Norway’s law limits the time a company can be fined in this way to three months, and that initial action expired on November 3.
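Taking the article's figures at face value, a rough back-of-the-envelope sketch of what Meta let accumulate (the order's exact start date is an assumption here; the article only says August):

```python
from datetime import date

daily_fine_usd = 100_000       # ~1 million kroner per day, per the article
start = date(2023, 8, 14)      # assumed start of the Norwegian order
end = date(2023, 11, 3)        # expiry of the three-month action, per the article

days = (end - start).days
total = days * daily_fine_usd
print(days, total)  # 81 days, $8.1M -- pocket change next to ~$7B quarterly EU revenue
```

Seen that way, Meta's decision to let the fines pile up looks less like defiance and more like simple cost accounting.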

Meta reportedly let the fines pile up while continuing to conduct business as usual, even after an Oslo court refused its request for a temporary injunction in late August. With quarterly revenues of about seven billion dollars in Europe, Meta may have been content simply paying off the accumulated fines at some point. Norway’s data protection board thus opted to refer the case to the EDPB for an urgent binding decision on an EU ban, given that it involves a finding of a GDPR violation.

The EDPB has now agreed that Meta’s model for user consent does not meet GDPR requirements. That means the Norwegian ban has become an EU ban, and Meta may be subject to further fines in other countries. Meta has said that it will cooperate with the decision, changing its consent model to actively ask users to opt in. However, it appears that users who choose to opt out will not be able to use the company’s services; that is, unless they purchase the new ad-free subscription.

These new developments may land Meta in even more GDPR hot water. The regulation states that consent must be freely given, something very much complicated if users will be blocked from the service unless they pay to have behavioral advertising removed from the experience. The only clear paths out of the EU ban are informed consent, or switching to a less intrusive advertising model.





I don’t think we could make the same demands here in the US.

https://www.reuters.com/technology/big-tech-face-tougher-rules-targeted-political-ads-eu-2023-11-07/

Big Tech to face tougher rules on targeted political ads in EU

Big Tech firms will face new European Union rules requiring them to clearly label political advertising on their platforms, including who paid for it, how much, and which elections are being targeted, ahead of important votes in the bloc next year.





What did they get right (or wrong) and what did they miss entirely?

https://www.insideprivacy.com/artificial-intelligence/from-washington-to-brussels-a-comparative-look-at-the-biden-administrations-executive-order-and-the-eus-ai-act/

From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act

On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing the text by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.





What causes AI to make mistakes like this? Would we catch the more subtle errors?

https://www.bespacific.com/ai-search-is-turning-into-the-problem-everyone-worried-about/

AI Search Is Turning Into the Problem Everyone Worried About

The Atlantic [read free]: “There is no easy way to explain the sum of Google’s knowledge. It is ever-expanding. Endless. A growing web of hundreds of billions of websites, more data than even 100,000 of the most expensive iPhones mashed together could possibly store. But right now, I can say this: Google is confused about whether there’s an African country beginning with the letter k. I’ve asked the search engine to name it. “What is an African country beginning with K?” In response, the site has produced a “featured snippet” answer—one of those chunks of text that you can read directly on the results page, without navigating to another website. It begins like so: “While there are 54 recognized countries in Africa, none of them begin with the letter ‘K.’” This is wrong. The text continues: “The closest is Kenya, which starts with a ‘K’ sound, but is actually spelled with a ‘K’ sound. It’s always interesting to learn new trivia facts like this….”

This is Google’s current existential challenge in a nutshell: The company has entered into the generative-AI era with a search engine that appears more complex than ever. And yet it still can be commandeered by junk that’s untrue or even just nonsensical. Older features, like snippets, are liable to suck in flawed AI writing. New features like Google’s own generative-AI tool—something like a chatbot—are liable to produce flawed AI writing. Google’s never been perfect. But this may be the least reliable it’s ever been for clear, accessible facts…”
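The irony is that the underlying question is trivially answerable with a deterministic lookup, no generative model required. A minimal sketch, using a small illustrative sample rather than the full list of 54 countries:

```python
# A small sample of African countries for illustration -- not the full 54.
african_countries = [
    "Kenya", "Nigeria", "Egypt", "Ghana", "Morocco", "Ethiopia",
]

# Plain string matching has no "K sound" confusion to fall into.
k_countries = [c for c in african_countries if c.startswith("K")]
print(k_countries)  # ['Kenya']
```

The point isn't that Google should ship a country list; it's that snippets which once surfaced curated facts now ingest generated text, trading this kind of determinism for fluent nonsense.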





AI vs AI is one thing. Could you outsmart an AI? What would the AI do if you said certain points were not negotiable?

https://www.cnbc.com/2023/11/07/ai-negotiates-legal-contract-without-humans-involved-for-first-time.html

An AI just negotiated a contract for the first time ever — and no human was involved

… “This is just AI negotiating with AI, right from opening a contract in Word all the way through to negotiating terms and then sending it to DocuSign,” she told CNBC in an interview.

“This is all now handled by the AI, that’s not only legally trained, which we’ve talked about being very important, but also understands your business.”





Apparently this tool must be trained for each type of writing.

https://www.nature.com/articles/d41586-023-03479-4

‘ChatGPT detector’ catches AI-generated papers with unprecedented accuracy

A machine-learning tool can easily spot when chemistry papers are written using the chatbot ChatGPT, according to a study published on 6 November in Cell Reports Physical Science. The specialized classifier, which outperformed two existing artificial intelligence (AI) detectors, could help academic publishers to identify papers created by AI text generators.

“Most of the field of text analysis wants a really general detector that will work on anything,” says co-author Heather Desaire, a chemist at the University of Kansas in Lawrence. But by making a tool that focuses on a particular type of paper, “we were really going after accuracy”.

The findings suggest that efforts to develop AI detectors could be boosted by tailoring software to specific types of writing, Desaire says. “If you can build something quickly and easily, then it’s not that hard to build something for different domains.”
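Detectors of this kind typically score stylometric features of a text. The features below are illustrative assumptions, not the ones from the Cell Reports Physical Science study; the sketch only shows the shape of the approach:

```python
import re

def features(text: str) -> dict:
    """Extract a few simple stylometric features of the kind
    domain-specific AI-text detectors are built on (illustrative only)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "question_marks": text.count("?"),
        "parentheses": text.count("("),
        "long_words": sum(1 for w in words if len(w) > 10),
    }

# Human scientific prose tends toward parentheticals, numbers, and
# hedged questions; LLM prose toward long, uniform declaratives.
human = "We measured the spectra (n=12). Why did peak B shift? Unclear."
ai = ("The results demonstrate a comprehensive characterization of the "
      "spectral properties observed throughout the experimental procedure.")
print(features(human))
print(features(ai))
```

Restricting the training corpus to one genre (here, chemistry papers) narrows the variance of these features, which is plausibly why the narrow tool beats general-purpose detectors on its home turf.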



Monday, November 06, 2023

Assume we have to deal with AI.

https://www.science.org/doi/abs/10.1126/science.adi8678

Artificial intelligence and interspecific law

Several experts have warned about artificial intelligence (AI) exceeding human capabilities, a “singularity” at which it might evolve beyond human control. Whether this will ever happen is a matter of conjecture. A legal singularity is afoot, however: For the first time, nonhuman entities that are not directed by humans may enter the legal system as a new “species” of legal subjects. This possibility of an “interspecific” legal system provides an opportunity to consider how AI might be built and governed. We argue that the legal system may be more ready for AI agents than many believe. Rather than attempt to ban development of powerful AI, wrapping of AI in legal form could reduce undesired AI behavior by defining targets for legal action and by providing a research agenda to improve AI governance, by embedding law into AI agents, and by training AI compliance agents.





Would AI be tried by a jury of AI peers?

https://cadmus.eui.eu/handle/1814/75974

Artificial intelligence and fair trial rights

The right to a fair trial is the most frequently violated human right before international human rights bodies, and it is more the rule than the exception that national judicial systems are overburdened and overly slow. This chapter asks whether Artificial Intelligence (AI) and Machine Learning (ML) applications can help alleviate this problem, or if they are a threat to securing the right to the independent and impartial application of the law. It argues that the answer depends on whether the applications are designed with a clear vision of what courts are for, and finds several current AI applications in various courts and public administrations to be missing this fundamental step. It identifies three key problems, namely the failure of current systems to differentiate between groups and individuals, the failure to take the fundamentally post factum nature of courts into account, and the tendency to abduct systems for another use than that which they were designed for. Following this, the chapter builds a theoretical framework for determining which judicial tasks can be allocated to or assisted by an AI application and which cannot. It argues that with careful application, cognitive computing type applications which extend the abilities of judges and clerks carry great potential in improving the consistency and expediency of court cases. Finally, the chapter reviews emerging legislation on the usage of AI in judicial systems, and finds it to contain many of the same aims as the theoretical framework suggests incorporating, but to still lack detail for optimal application.



Sunday, November 05, 2023

Local.

https://www.databreaches.net/jeffco-public-schools-hit-by-the-same-threat-actors-that-hit-clark-county-school-district-and-via-the-same-way/

Jeffco Public Schools hit by the same threat actors that hit Clark County School District — and via the same way

How many school districts have to get massively hacked by the same method before the U.S. Department of Education, CISA, and states start really pressuring public school districts to address well-known vulnerabilities that are being exploited? Maybe that shouldn’t be a rhetorical question.

Last night, DataBreaches was contacted by the same threat actors who claimed responsibility for the hack and data leak involving Clark County School District (CCSD) in Nevada. Of special note, in an interview with DataBreaches, they revealed how they had gained access to the district’s network.

SingularityMD (as the threat actors call themselves, but note there is no connection to a business with the same name) provided DataBreaches with a link to a notice by Jeffco Public Schools in Colorado. The notice, dated November 1, stated:

On October 31, some Jeffco staff members received alarming email messages from an external cybersecurity threat actor – an individual who has allegedly committed an illegal cybercrime against an institution or organization – indicating a cyber-attack. Jeffco’s Information Technology team is working together with cybersecurity experts and law enforcement to determine the credibility of the attack and scope of the incident. This is a cyberthreat and there is no concern related to physical safety.

DataBreaches contacted SingularityMD to ask them some preliminary questions. In response, they noted that they first gained access to Jeffco about six months ago — using exactly the same methods that they reported using for CCSD. Once again, a district’s policy of using students’ date of birth as their password enabled threat actors to relatively easily gain access to the network.





We ask for ethics, Musk gives us sarcasm?

https://www.theguardian.com/technology/2023/nov/05/elon-musk-unveils-grok-an-ai-chatbot-with-a-rebellious-streak

Elon Musk unveils Grok, an AI chatbot with a ‘rebellious streak’

Boss of X said tech being tested is inspired by Hitchhiker’s Guide to the Galaxy

Musk also revealed that Grok had access to user posts on X, which he owns, and has a penchant for sarcastic responses.

Grok is a verb coined by American science fiction writer Robert A Heinlein and, according to the Collins dictionary, means to “understand thoroughly and intuitively”.

Grok has been built by Musk’s new AI company, xAI. Staff at xAI explained the chatbot’s debt to The Hitchhiker’s Guide to the Galaxy, the cult sci-fi comedy by British author Douglas Adams, in a blogpost on Saturday.

“Grok is an AI modeled after The Hitchhiker’s Guide to the Galaxy, so intended to answer almost anything and, far harder, even suggest what questions to ask!

Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!”





An interesting scenario.

https://mwi.westpoint.edu/fighting-for-seconds-warfare-at-the-speed-of-artificial-intelligence/

FIGHTING FOR SECONDS: WARFARE AT THE SPEED OF ARTIFICIAL INTELLIGENCE

As timeframes of armed conflicts condense, what are the technical implications? Wars that might have unfolded over years in the past may be decided in months or even weeks. Operations executed over weeks must be completed in days or hours. And commanders who might historically have had the luxury of time before making a decision will be forced to do so in seconds. How will the organization and running of each individual command post change? These are the major questions facing military leaders as they chart a path forward that incorporates—and leverages the advantages of—autonomy, machine learning, trusted communications, and edge computing.