Saturday, March 04, 2023

Automating a task humans could do poorly?

https://www.latimes.com/politics/story/2023-03-03/surveillance-ai-coappearance-facial-recognition

The cameras know who you are. Now they want to use AI to find your friends too





Local interest.

https://www.databreaches.net/hacker-stole-bank-account-social-security-numbers-and-health-plan-info-of-colorado-school-district-employees/

Hacker stole bank account, Social Security numbers, and health plan info of Colorado school district employees

Nate Lynn reports:

Personal information belonging to some 15,000 Denver Public Schools (DPS) employees was stolen in what the district is calling a “cybersecurity incident” that went on for a month.
In a message shared on the DPS website Friday, DPS said employees discovered in January that data had been taken from the district’s network by an “unauthorized actor.”
[…]
The information stolen included the names and Social Security numbers of current and former participants in the DPS employee health plan, employee fingerprints, bank account numbers or pay card numbers, driver’s license numbers, passport numbers and health plan enrollment information.

Read more at 9News.





Worth considering?

https://www.investing.com/analysis/is-artificial-intelligence-the-next-bubble-200635861

Is Artificial Intelligence the Next Bubble?

In this article, we will look at the 5 stages of a financial bubble, suggesting AI may already be entering phase 2.





Perhaps we all should RTFM?

https://www.newscientist.com/article/2358953-ai-masters-video-game-6000-times-faster-by-reading-the-instructions/

AI masters video game 6000 times faster by reading the instructions

An artificial intelligence has learned to master an Atari skiing game in days of playing time rather than the decades it took a specialist DeepMind AI, simply by reading the instructions written for humans before it started.



Friday, March 03, 2023

Another case of “We can, therefore we must?” (I have a new hammer, let’s find some nails!)

https://www.pogowasright.org/report-ice-and-the-secret-service-conducted-illegal-surveillance-of-cell-phones/

Report: ICE and the Secret Service Conducted Illegal Surveillance of Cell Phones

Mathew Guariglia of EFF writes:

The Department of Homeland Security’s Inspector General has released a troubling new report detailing how federal agencies like Immigration and Customs Enforcement (ICE), Homeland Security Investigations (HSI), and the Secret Service have conducted surveillance using cell-site simulators (CSS) without proper authorization and in violation of the law. Specifically, the office of the Inspector General found that these agencies did not adhere to federal privacy policy governing the use of CSS and failed to obtain special orders required before using these types of surveillance devices.
Even under exigent circumstances, where law enforcement use of technologies that track cell-phone use are deemed immediately necessary, law enforcement must still get a pen register order. The pen register order is required by statute and policy even though exigency otherwise excuses police from having to obtain a conventional warrant. The Inspector General noted that the agencies didn’t follow the rules in these cases either.
Cell-site simulators, also known as “Stingrays” or IMSI catchers, are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower.
Cell-site simulators operate by conducting a general search of all cell phones within the device’s radius, in violation of basic constitutional protections. Law enforcement use cell-site simulators to pinpoint the location of phones with greater accuracy than phone companies. Cell-site simulators can also log IMSI numbers (unique identifying numbers) of all of the mobile devices within a given area.
Unfortunately, the report redacts crucial information regarding the total number of times that each agency used CSS with and without a warrant, and when they used the devices to support external information. The OIG should release this information to the public: knowing the aggregate totals would not harm any active investigation, but rather inform public debate over the agencies’ reliance on this invasive technology. Make no mistake, cell-site simulators are mass surveillance that draws in the cell signal and collects data on every phone in the vicinity.
The fact that government agencies are using these devices without the utmost consideration for the privacy and rights of individuals around them is alarming but not surprising. The federal government, and in particular agencies like HSI and ICE, have a dubious and troubling relationship with overbroad collection of private data on individuals. In 2022 we learned that HSI and ICE had used overly-broad warrants to collect bulk financial records concerning people sending money across international borders through companies like Western Union. Mass surveillance of this kind is a massive violation of privacy and has elicited the concern of at least one U.S. senator hoping to probe into these tactics.
Most people carry cell phones on them at any given moment. EFF will continue to fight against careless government use of cell-site simulators, and we will continue to monitor federal agencies that rely on secrecy and a strategic ignorance of the law in order to wield powerful and overly broad surveillance powers and technologies.

This column was originally published at EFF.

In related coverage of the IG’s report, Zack Whittaker pulls out some specific examples of concerns, such as:

In one case highlighted by the inspector general, a county judge “did not understand” why prosecutors sought an emergency surveillance order because, not understanding the statute, the judge “believed it to be unnecessary,” leading to a raft of warrantless deployments.

Read more at TechCrunch.





Perspective.

https://krebsonsecurity.com/2023/03/highlights-from-the-new-u-s-cybersecurity-strategy/

Highlights from the New U.S. Cybersecurity Strategy

The Biden administration today issued its vision for beefing up the nation’s collective cybersecurity posture, including calls for legislation establishing liability for software products and services that are sold with little regard for security. The White House’s new national cybersecurity strategy also envisions a more active role by cloud providers and the U.S. military in disrupting cybercriminal infrastructure, and it names China as the single biggest cyber threat to U.S. interests.



(Related)

https://www.csoonline.com/article/3689870/software-liability-reform-is-liable-to-push-us-off-a-cliff.html#tk.rss_all

Software liability reform is liable to push us off a cliff

Like “SBOMs will solve everything,” there is a regular cry to reform software liability, specifically in the case of products with insecurities and vulnerabilities. US Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly’s comments this week brought the topic back into focus, but it’s still a thorny issue. (There’s a reason certain things are called “wicked problems.”) The proposed remedy, taking up a full page of the Biden Administration’s National Cybersecurity Strategy, will cause more problems than it solves.



Thursday, March 02, 2023

One of my favorite topics.

https://www.bespacific.com/generative-artificial-intelligence-and-copyright-law/

Generative Artificial Intelligence and Copyright Law

CRS Legal Sidebar – Generative Artificial Intelligence and Copyright Law. February 24, 2023: “Recent innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as Open AI’s DALL-E 2 and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are “trained” to generate such works partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether the outputs of generative AI programs are entitled to copyright protection as well as how training and using these programs might infringe copyrights in other works.”





Perspective.

https://www.newsmax.com/finance/streettalk/artificial-intelligence-market-future-technology/2023/03/01/id/1110607/

AI on Brink of Revolution Akin to the iPhone

Artificial intelligence is on the brink of a revolutionary “iPhone moment” whereby it will boost the world economy by $15.7 trillion by 2030, Bank of America strategists predict.

In a Tuesday note to clients, BofA lists four reasons why AI will transform industries, as Business Insider reports.

The four factors driving the AI revolution:

    • Democratization of data

    • Unprecedented mass adoption

    • Warp-speed technological development

    • Abundant commercial uses





Another perspective.

https://techpolicy.press/ten-legal-and-business-risks-of-chatbots-and-generative-ai/

Ten Legal and Business Risks of Chatbots and Generative AI

The technology is advancing at a breakneck speed. As Axios put it, “The tech industry isn’t letting fears about unintended consequences slow the rush to deploy a new technology.” That approach is good for innovation, but it poses its own challenges. As generative AI advances, companies will face a number of legal and ethical risks, both from malicious actors leveraging this technology to harm businesses and when businesses themselves wish to implement chatbots or other forms of AI into their functions.

This is a quickly developing area, and new legal and business dangers—and opportunities—will arise as the technology advances and use cases emerge. Government, business and society can take the early learnings from the explosive popularity of generative AI to develop guardrails to protect against their worst behavior and use cases before this technology pervades all facets of commerce. To that end, businesses should be aware of the following top 10 risks and how to address them.



Wednesday, March 01, 2023

An aspect of ‘self-driving’ that I had not considered.

https://www.businessinsider.com/ford-patent-cars-repossess-themselves-drive-away-if-missing-payments-2023-2

Ford wants to allow your car to lock you out — and even drive itself to an impound lot or scrapyard — if you miss payments





Not sure I agree, but interesting.

https://bigthink.com/the-future/hyperwar-ai-military-warfare/

“Hyperwar”: How AI could cause wars to spiral out of human control

  • Four Battlegrounds by Paul Scharre explores the competition between AI superpowers and the four key elements that define this struggle: data, computing power, talent, and institutions.

  • This book excerpt explains how artificial intelligence could soon change how militaries fight on the battlefield.

  • AI could transform battle tactics to the extent that humans can't keep up with it — a scenario which Scharre refers to as a "singularity" in warfare.





Resources.

https://mashable.com/uk/deals/learn-ai-for-free

Learn how to use AI with the best free online courses on Udemy

A wide range of online courses on artificial intelligence are available for free on Udemy. You don't need a voucher code to access these courses for free, and you can learn at your own pace.



Tuesday, February 28, 2023

I suspect they tripped over a vulnerability rather than made a deliberate choice to hack the Marshals. Unfortunately, the results are the same.

https://www.cnn.com/2023/02/27/politics/us-marshals-service-ransomeware-attack/index.html

Ransomware attack on US Marshals Service affects ‘law enforcement sensitive information’

A ransomware attack on the US Marshals Service has affected a computer system containing “law enforcement sensitive information,” including personal information belonging to targets of investigations, a US Marshals Service spokesperson said Monday evening.

The Justice Department subsequently determined it “constitutes a major incident,” according to the statement. A “major incident” is a hack that is significant enough that it requires a federal agency to notify Congress.





Can we identify the irrelevant, then eliminate it? Perhaps we need another AI?

https://www.bespacific.com/large-language-models-can-be-easily-distracted-by-irrelevant-context/

Large Language Models Can Be Easily Distracted by Irrelevant Context

Large Language Models Can Be Easily Distracted by Irrelevant Context. Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou. [current version in PDF]

“Large language models have achieved impressive performance on various natural language processing tasks. However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task. In this work, we investigate the distractibility of large language models, i.e., how the model problem-solving accuracy can be influenced by irrelevant context. In particular, we introduce Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant information in the problem description. We use this benchmark to measure the distractibility of cutting-edge prompting techniques for large language models, and find that the model performance is dramatically decreased when irrelevant information is included. We also identify several approaches for mitigating this deficiency, such as decoding with self-consistency and adding to the prompt an instruction that tells the language model to ignore the irrelevant information.”
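
For a concrete sense of the two mitigations the authors mention, here is a minimal Python sketch. It is only an illustration under stated assumptions: the query_model stub and the toy problem are hypothetical placeholders (not from the paper), with the stub standing in for whichever model API you actually call. One prompt prepends an instruction to ignore irrelevant information; a small self-consistency helper majority-votes over several sampled answers.

    from collections import Counter

    # Hypothetical stand-in for whatever LLM call you actually use.
    def query_model(prompt: str) -> str:
        raise NotImplementedError("plug in your preferred model API here")

    # GSM-IC-style toy problem: the neighbor's age is irrelevant to the arithmetic.
    problem = ("Lucy has 3 boxes with 12 pencils each. Her neighbor is 26 years old. "
               "How many pencils does Lucy have in total?")

    # Mitigation 1: tell the model up front to ignore irrelevant information.
    prompt = ("Solve the problem. Feel free to ignore irrelevant information "
              "given in the problem description.\n\n" + problem)

    # Mitigation 2: self-consistency -- sample several answers, keep the majority.
    def self_consistent_answer(prompt: str, n_samples: int = 5) -> str:
        answers = [query_model(prompt).strip() for _ in range(n_samples)]
        return Counter(answers).most_common(1)[0][0]

Note that the majority vote only helps if query_model samples with some randomness (non-zero temperature), so that each call can follow a different reasoning path.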



Monday, February 27, 2023

Idiotic? Our current political environment.

https://www.bespacific.com/texas-asks-trump-judge-to-declare-most-of-federal-government-unconstitutional/

Texas asks Trump judge to declare most of federal government unconstitutional

Vox: “Earlier this month, Texas’s Republican Attorney General Ken Paxton filed a lawsuit claiming that the $1.7 trillion spending law that keeps most of the federal government — including the US military — operating through September of 2023 is unconstitutional. Paxton’s claims in Texas v. Garland, which turn on the fact that many of the lawmakers who voted for the bill voted by proxy, should fail. They are at odds with the Constitution’s explicit text. And a bipartisan panel of a powerful federal appeals court in Washington, DC, already rejected a similar lawsuit in 2021. Realistically, this lawsuit is unlikely to prevail even in the current, highly conservative Supreme Court. Declaring a law that funds most of the federal government unconstitutional would be an extraordinary act, especially given the very strong legal arguments against Paxton’s position. But the case is a window into Paxton’s broader litigation strategy, where he frequently raises weak legal arguments undercutting federal policies before right-wing judges that he has personally chosen because of their ideology. And these judges often do sow chaos throughout the government, which can last months or longer, before a higher court steps in. Texas’s federal courts give plaintiffs an unusual amount of leeway to choose which judge will hear their case, an odd feature of these courts that Paxton often takes advantage of to ensure that his lawsuits will be heard by judges who are likely to toe the Republican line. These decisions, moreover, appeal to the deeply conservative United States Court of Appeals for the Fifth Circuit. Paxton filed the Garland case in Lubbock, Texas, where 100 percent of all federal lawsuits are heard by a Republican appointee. Two-thirds of such cases are automatically assigned to Judge James Wesley Hendrix, who will hear this suit. Hendrix, a Trump appointee to a federal court in Texas, is a bit of an unknown quantity. In his brief time on the bench, Hendrix did hand down one poorly reasoned decision undercutting a federal statute that requires most hospitals to perform medically necessary abortions. But Hendrix’s thin record does not tell us enough to know whether he’d actually be so aggressive as to declare most of the United States government unconstitutional…”





A bit simplistic?

https://fee.org/articles/how-the-ai-wars-are-proving-that-google-isnt-a-monopoly/

How the AI Wars Are Proving That Google Isn't a Monopoly

If Google is really a monopoly, why would investors be worried about a competitor developing a new product?

Google’s share price dropped nearly 10 percent earlier this month in the wake of Microsoft’s announcement that they are integrating Open AI’s ChatGPT into their search engine. Microsoft’s search engine, Bing, currently accounts for only three percent of web searches, trailing far behind Google’s impressive market share of more than 90 percent. Google’s dominance in search is so total, and has been stable for so long, that it has led many to accuse Google of being a monopoly.

Claims of monopoly are common in political discourse, and they have been made against Google by all sides of the political compass. In 2020, Senator Ted Cruz accused Google of using its “monopoly” power to stifle conservative voices. In 2019, Senator Bernie Sanders expressed his desire to break Google up over its alleged monopoly status. In January of this year, the Justice Department sued Google for “monopolizing” digital advertising, which is the way Google monetizes its search service.



(Related) and more interesting…

https://www.washingtonpost.com/technology/2023/02/26/antitrust-google-doj-tech/

Biden finds breaking up Big Tech is hard to do

Google is hiring teams of former DOJ lawyers to fight antitrust lawsuits as the battle over tech firms’ power shifts to the courts

Google has been quietly assembling a phalanx of former Justice Department lawyers as the tech titan gears up for the regulatory fight of its life against the attorneys’ former employer.

The Department of Justice offensive, a pair of lawsuits aimed at breaking up the search giant’s dominance, will play out in the courts — reflecting a new phase in the Biden administration’s years-long effort to rein in Big Tech, after a sweeping antitrust package stalled in Congress.





Because… (Or download them into Calibre and read them anywhere.)

https://www.makeuseof.com/tag/classic-novels-read-free-kindle/

35 Classic Novels You Can Read for Free on Your Kindle

There's a treasure trove of free books available to download to your Kindle. Here are our recommendations for classic novels you should read.



Sunday, February 26, 2023

Always interesting.

https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2200/RRA2249-1/RAND_RRA2249-1.pdf

Finding a Broadly Practical Approach for Regulating the Use of Facial Recognition by Law Enforcement

Communities across the United States are grappling with the implications of law enforcement organizations’ and other government agencies’ use of facial recognition (FR) technology. Although the purported benefits of FR as stated are clear, they have yet to be measured and weighed against the existing risks, which are also substantial. Given the variety of ways in which FR can be used by law enforcement, the full benefit-to-risk trade-off is difficult to account for, leading to some municipalities that ban the use of FR by law enforcement and others that have no clear regulations. This report provides an overview of what is known about FR use in law enforcement and provides a road map of sorts to help policymakers sort through the various risks and benefits relative to different types of FR use.

We categorize the various identified risks associated with FR technology and its use by law enforcement, including accuracy, bias, the scope of the search (i.e., surveillance versus investigation), data-sharing and storage practices, privacy issues, human and civil rights, officer misuse, law enforcement reactions to the FR results (e.g., street stops), public acceptance, and unintended consequences. The concerns are discussed in detail in Chapter 3, but they are summarized here.





A thoughtful AI?

https://www.tandfonline.com/doi/full/10.1080/15027570.2023.2180184

The Need for a Commander

One article in this double issue of the Journal of Military Ethics asks about what an AI (artificial intelligence) commander would look like. The underlying question is whether we are more or less inevitably moving towards a situation where AI-driven systems will come to make strategic decisions and hence be the place where the buck stops.





I don’t get it…

https://www.science.org/doi/abs/10.1126/science.add2202

Leveraging IP for AI governance

The rapidly evolving and expanding use of artificial intelligence (AI) in all aspects of daily life is outpacing regulatory and policy efforts to guide its ethical use (1). Governmental inaction can be explained in part by the challenges that AI poses to traditional regulatory approaches (1). We propose the adaptation of existing legal frameworks and mechanisms to create a new and nuanced system of enforcement of ethics in AI models and training datasets. Our model leverages two radically different approaches to manage intellectual property (IP) rights. The first is copyleft licensing, which is traditionally used to enable widespread sharing of created content, including open-source software. The second is the “patent troll” model, which is often derided for suppressing technological development. Although diametric in isolation, these combined models enable the creation of a “troll for good” capable of enforcing the ethical use of AI training datasets and models.





I think I think so too.

https://indexlaw.org/index.php/rdb/article/view/7547

DEEP LEARNING AND THE RIGHT TO EXPLANATION: TECHNOLOGICAL CHALLENGES TO LEGALITY AND DUE PROCESS OF LAW

This article studies the right to explanation, which is extremely important in times of fast technological evolution and the use of deep learning for a wide variety of decision-making procedures based on personal data. Its main hypothesis is that the right to explanation is closely linked to due process of law and legality, serving as a safeguard for those who need to contest automated decisions made by algorithms, whether in judicial contexts, in general public administration contexts, or in private business contexts. Through a hypothetical-deductive method, a qualitative and transdisciplinary approach, and bibliographic review, it concludes that the opacity characteristic of the most complex deep learning systems can impair access to justice, due process of law, and the adversarial principle. In addition, it is important to develop strategies to overcome opacity, mainly (but not only) through the work of experts. Finally, the Brazilian LGPD provides for a right to explanation, but the lack of clarity in its text demands that the judiciary and researchers also work to better build out its regulation.





Tools worth testing?

https://www.makeuseof.com/accurate-ai-text-detectors/

The 8 Most Accurate AI Text Detectors You Can Try

As language models like GPT continue to improve, it is becoming increasingly difficult to differentiate between AI-generated and human-written text. But in some cases, such as academic work, it's necessary to ensure that text wasn't written by AI.

This is where AI text detectors come into play. Though none of the currently available tools detects with complete certainty (nor do they claim to), a few of them do provide fairly accurate results. So here we list the eight most accurate AI text detectors you can try.