Sunday, January 18, 2026

If we can do it to a potential foe, they can do it to us.

https://databreaches.net/2026/01/17/us-cyberattack-blacks-out-venezuela-leads-to-maduros-capture-in-2026/

US Cyberattack Blacks Out Venezuela, Leads to Maduro’s Capture in 2026

Julian E. Barnes and Anatoly Kurmanaev report:

The cyberattack that plunged Venezuela’s capital into darkness this month demonstrated the Pentagon’s ability not just to turn off the lights, but also to allow them to be turned back on, according to U.S. officials briefed on the operation.
The Jan. 3 operation was one of the most public displays of offensive U.S. cybercapabilities in recent years. It showed that at least with a country like Venezuela, whose military does not have sophisticated defenses against cyberattacks, the United States could use cyberweapons with powerful and precise effects.
The U.S. military also used cyberweapons to interfere with air defense radar, according to people briefed on the matter, who discussed sensitive details of the operation on the condition of anonymity. (Venezuela’s most powerful radar was not functional, however.)

Read more at The New York Times.





Keeping up...

https://pogowasright.org/u-s-biometric-laws-pending-legislation-tracker-january-2026/

U.S. Biometric Laws & Pending Legislation Tracker – January 2026

Lauren Caisman and Amy de La Lama of BCLP provide a useful summary of existing and proposed biometric laws by state.

Read their write-up on BCLP.




Not just to improve vision but to see.

https://pogowasright.org/the-hidden-legal-minefield-compliance-concerns-with-ai-smart-glasses-part-4-data-security-breach-notification-and-third-party-ai-processing-risks/

The Hidden Legal Minefield: Compliance Concerns with AI Smart Glasses, Part 4: Data Security, Breach Notification, and Third-Party AI Processing Risks

Joseph Lazzarotti of JacksonLewis writes:

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns.
  • In Part 1, we addressed compliance issues that arise when these wearables collect biometric information.
  • In Part 2, we covered all-party consent requirements and AI notetaking technologies.
  • In Part 3, we considered broader privacy and surveillance issues, including from a labor law perspective.
In this Part 4, we consider the potentially vast amount of personal and other confidential data that may be collected, visually and audibly, through everyday use of this technology. Cybersecurity and data security risks more broadly pose another major and often underestimated exposure.
The Risk
AI smart glasses collect, analyze, and transmit enormous volumes of sensitive data, often continuously and typically to cloud-based servers operated by third parties. This creates a perfect storm of cybersecurity risk, regulatory exposure, and breach notification obligations under laws in all 50 states, as well as the CCPA, GDPR, and numerous sector-specific regulations, such as HIPAA for the healthcare industry.
Unlike traditional cameras or recording devices, AI glasses are designed to collect and process data in real time. Even when users believe they are not “recording,” the devices may still be capturing visual, audio, and contextual information for AI analysis, transcription, translation, or object recognition. That data is frequently transmitted to third-party AI providers with unclear security controls, retention practices, and secondary-use restrictions.
Many AI glasses explicitly rely on third-party AI services. For example, Brilliant Labs’ Frame glasses use ChatGPT to power their AI assistant, Noa, and disclose that multiple large language models may be involved in processing. In practice, this means sensitive business conversations, images, and metadata may leave the organization entirely—often without IT, security, or legal teams fully understanding where the data goes or how it is protected.

Read more at Workplace Privacy, Data Management & Security Report.





Not just lawyers.

https://scholarworks.uark.edu/arlnlaw/23/

Ethics of Artificial Intelligence for Lawyers: Shall We Play a Game? The Rise of Artificial Intelligence and the First Cases

In the 1983 movie WarGames, a young computer hacker accidentally accesses a United States military supercomputer programmed to run nuclear war simulations. Four decades after WarGames, lawyers are now facing similar challenges in learning to use and communicate with artificial intelligence, hopefully without destroying the world. Artificial intelligence tools, such as ChatGPT, Claude, and Gemini, are quickly being incorporated into legal practice. These systems can draft documents, perform analysis, and support other legal tasks. While lawyers adjust to these new technologies, courts and regulatory authorities are actively developing appropriate frameworks to guide and supervise the use of these tools within the sector.

This first installment in the series lays the foundation with a brief history of artificial intelligence, the rise of generative models, and the problem of “hallucinations” that make these tools especially dangerous for lawyers. It also surveys the first wave of cases, where courts sanctioned attorneys and pro se litigants for relying on hallucinated citations, imposed new procedural safeguards, and began confronting broader disputes over evidence, intellectual property, education, and government transparency. The next installments will shift from cases to rules by examining the American Bar Association’s Formal Opinion 512. Because the opinion is expansive, it will be examined in two parts, first through its guidance on competence, confidentiality, and communication, and then through its treatment of candor, supervision, and fees. From there, the series will turn to the rapidly evolving regulatory landscape, surveying federal inaction, California’s aggressive framework, the European Union’s AI Act, and Arkansas’s initial steps. The final entries will turn to practice, outlining best practices that lawyers can adopt today and previewing the new skills that will define the next frontier of lawyer competence.





Let’s gang up on AI…

http://gmp-pub.com/index.php/ILDJ/article/view/19

The Rise of Artificial Intelligence and Its Implications for International Legal Accountability

This study examines the profound challenges that the rapid rise of Artificial Intelligence (AI) poses to the framework of international legal accountability. As AI systems become increasingly autonomous, complex, and opaque, existing international legal norms, historically designed for human and state actors, struggle to provide adequate regulatory guidance or mechanisms for responsibility attribution. The research identifies four interconnected problem areas: legal gaps in governing AI, difficulties in assigning accountability for autonomous AI decisions, human rights and humanitarian law implications, and structural imbalances within global governance. Current international law lacks coherent provisions that address the unique characteristics of AI, including unpredictability, machine learning opacity, and cross-border impacts. These gaps complicate determining liability when AI systems cause harm, especially in contexts such as algorithmic discrimination, surveillance practices, and autonomous weapons deployment. Moreover, AI amplifies risks to fundamental rights, including privacy, freedom of expression, and due process, while also challenging the principles of distinction and proportionality in armed conflict. At the global governance level, power asymmetries among technologically advanced states, developing countries, and dominant private technology corporations hinder the creation of inclusive and effective regulatory standards. Consequently, the governance of AI is fragmented, slow, and heavily influenced by actors with disproportionate technological and economic power. This study argues that a comprehensive, adaptive, and multilateral legal framework is essential to ensure accountability, protect human rights, and promote equitable global governance in the AI era. Strengthening international institutions, harmonizing global standards, and expanding oversight of non-state actors are crucial steps toward a balanced and just international AI governance system.