If you weren’t paranoid before…
Did you sign up for the new White House app? Don’t use it until you read this, because it puts your privacy and data security at risk.
Patrick Quirk takes an impressive technical piece and distills it for those of us who are not developers or coders. His article is based on original research by Thereallo, published March 28, 2026. More technically savvy readers may want to just jump to Thereallo’s analysis.
For the rest of us, Quirk writes:
The Trump White House launched an official mobile app on March 28, 2026, promising “Unparalleled access to the Trump Administration.” A security researcher who goes by Thereallo pulled the APK, threw it into JADX, and decompiled the entire thing.
What they found would get any cybersecurity student expelled, any pentester fired, and any company sued. But it’s stamped with a .gov badge, so apparently it’s fine.
This is not a political article. This is a technical audit of a government application that violates every principle the cybersecurity industry teaches. Every standard the federal government is supposed to uphold. Every ethical boundary we are told never to cross. I’m calling out everyone responsible.
Quirk walks through Thereallo’s most significant findings. Here are just some of them.
Finding 1: GPS Tracking Pipeline — Your Location Every 4.5 Minutes
Finding 2: JavaScript Injection Into Every Website You Visit
Finding 3: Loading Code From a Random Person’s GitHub Pages
Finding 4: More Third-Party Code Execution
Finding 5: Your Data Goes Everywhere Except the Government
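To put Finding 1 in perspective, a location beacon firing every 4.5 minutes adds up fast. The sketch below is back-of-the-envelope arithmetic, not code from the app; the 4.5-minute cadence is the only figure taken from the findings.

```python
from datetime import datetime, timedelta

# Assumption from Finding 1: one location report roughly every 4.5 minutes.
BEACON_INTERVAL_S = 4.5 * 60  # 270 seconds

def beacon_times(start: datetime, count: int) -> list[datetime]:
    """Timestamps at which a tracker on a fixed 270-second cadence would fire."""
    return [start + timedelta(seconds=i * BEACON_INTERVAL_S) for i in range(count)]

fires = beacon_times(datetime(2026, 3, 28, 9, 0), 5)
# Five beacons span four intervals, i.e. 18 minutes of wall-clock time.

# A phone that stays on all day at that cadence produces:
reports_per_day = 86_400 / BEACON_INTERVAL_S  # 320 location reports per 24 hours
```

Three hundred twenty location points a day is more than enough to reconstruct a person’s home, workplace, and daily routine, which is why a fixed high-frequency cadence is the headline finding.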
Read the details about these and other findings at ringmast4r.substack.com.
Worth a read.
https://www.bespacific.com/ai-in-discovery-some-tools-are-ready-others-are-not/
AI in Discovery: Some Tools Are Ready. Others Are Not.
Via LLRX – AI in Discovery: Some Tools Are Ready. Others Are Not. Generative AI is coming for legal work, whether lawyers like it or not, and much of what it brings will be genuinely useful. Discovery, though, is a different conversation. Jerry Lawson discusses why technology-assisted review (TAR), the old, reliable workhorse, should remain a critical component of your organization’s privileged document access management.
Errors we learned to avoid years ago keep reappearing. Perhaps AI coding is to blame?
https://thehackernews.com/2026/03/the-state-of-secrets-sprawl-2026-9.html
The State of Secrets Sprawl 2026: 9 Takeaways for CISOs
Secrets sprawl isn't slowing down: in 2025, it accelerated faster than most security teams anticipated. GitGuardian's State of Secrets Sprawl 2026 report analyzed billions of commits across public GitHub and uncovered 29 million new hardcoded secrets in 2025 alone, a 34% increase year over year and the largest single-year jump ever recorded.
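A “hardcoded secret” is simply a credential committed to a repository as plain text. The toy scanner below shows the basic detection idea behind reports like GitGuardian’s; the two regexes are simplified illustrations, nowhere near the breadth of a real scanner’s detectors, and the sample diff is invented.

```python
import re

# Illustrative patterns only; production scanners use hundreds of detectors
# plus entropy checks and validity probing.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a commit diff."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Hypothetical committed snippet with two leaked credentials:
diff = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\napi_key: "s3cr3t-token-0123456789abcdef"'
leaks = find_secrets(diff)
```

Multiply this kind of match across billions of public commits and a 34% year-over-year jump starts to look less surprising.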
Perspective.
Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges
Anika Jaitley, Daniel W. Linna Jr., Hon. Xavier Rodriguez, V.S. Subrahmanian & Siyu Tao, Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges, 27 SEDONA CONF. J. _____ (forthcoming 2026). “The purpose of this study is to understand how, and to what extent, federal judges and other personnel who work in their chambers use artificial intelligence (AI) tools in their judicial work. We selected a stratified random sample of 502 federal bankruptcy, magistrate, district court, and court of appeals judges from a population of 1,738 current federal judges. Of the 502 judges that we surveyed via email, 112 responded (22.3% response rate). Although a majority of responding judges at least occasionally use AI tools in their judicial work, relatively few report using AI on a daily or weekly basis. Approximately 38% of judges reported that they did not use AI at all in their work. This pattern suggests that AI is present in federal judicial chambers but not yet a routine, embedded part of most judges’ decision-making processes. Respondents report more frequent use of legal-specific AI tools integrated into established research platforms (such as Westlaw’s AI-Assisted Research and similar tools) than of stand-alone, general-purpose AI tools such as ChatGPT, Copilot, or Gemini. This pattern indicates that vendor familiarity and perceived reliability may strongly shape which AI tools judges are willing to deploy in chambers. Judges’ attitudes toward AI are almost evenly split between optimism and concern. Many respondents simultaneously recognize AI’s potential efficiency gains and express unease about hallucinations, “zombie cases,” and skill atrophy. When AI training is offered by court administration, most judges attend, but a sizeable majority have not been offered such training or are unsure whether training has been available, suggesting unmet demand for high-quality, judiciary-specific education on AI.”
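The headline figures in the abstract are easy to sanity-check with the numbers quoted above, and they hold up:

```python
# Figures quoted in the abstract.
population = 1_738  # current federal judges
sampled = 502       # stratified random sample surveyed by email
responded = 112     # judges who answered

response_rate = responded / sampled * 100      # rounds to the reported 22.3%
sampling_fraction = sampled / population * 100  # share of all judges surveyed
```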