Tuesday, July 23, 2019


Was weak security a deliberate choice? Have they already pulled out the money they “saved?” If they did, is there any way to get that money back?
AMCA Breach: Many More Impacted Healthcare Firms Come Forward
Many more healthcare companies in the United States published press releases last week to inform customers that they had been impacted by the data breach suffered by the American Medical Collection Agency (AMCA).
All of the organizations used the same press release template, with the only difference being the number of impacted patients and the phone number that people can call for more details.
The breach at AMCA, which is also known as Retrieval-Masters Creditors Bureau, came to light in early June when two of its biggest customers, Quest Diagnostics and LabCorp, filed 8-K forms with the U.S. Securities and Exchange Commission (SEC).
AMCA, which faces several class actions, revealed in mid-June that the breach had already cost it millions of dollars and announced that it had filed for Chapter 11 bankruptcy and laid off most of its workforce.
The company’s investigation into the incident revealed that the hackers may have had access to its systems since as early as August 2018. The breach was discovered only in March 2019 after AMCA was informed that many payment cards used on its web portal had been used for fraudulent charges.
Investigators could not determine exactly which individuals were impacted, so AMCA had to assume that everyone whose information was stored on its servers was affected.




What would you call an AI that specialized in the law? Moses? Perry? “J.D.?”
Law Librarians: The Missing Link As Solo & Small Firm Lawyers Adapt to Artificial Intelligence
MyShingle – Nicole Black: “Earlier this week, I led a roundtable discussion on Artificial Intelligence in Legal Research and Law Practice at the American Association of Law Libraries (AALL), which took place in Washington D.C. I was grateful for the invitation from @robtruman, the law librarian at the Lewis & Clark Law School, because the event forced me to review all of the posts on AI and law practice that I’ve been meaning to read, and because any opportunity to talk about AI – which is the subject my husband studied back in grad school in the late ‘80s, before it was ready for prime time – is always a privilege. In this post, I’ll share some of the information on AI that I gathered in preparation for my talk. One of MyShingle’s missions has always been to ensure that solo and small firms have current information not just on new technology developments but also on how those new tools can be applied in practice. And because AI is such a fast-moving target that many solo and small firm lawyers haven’t yet had a chance to wrap their heads around, I’ve written a multi-part post that will cover everything that solo and small firm lawyers need to know…”




If healthcare is so far behind other industries, how many of these are in general use elsewhere?
A revolution: 10 use cases of artificial intelligence in healthcare




I wonder which SciFi model they have selected?
Microsoft wants to build artificial general intelligence: an AI better than humans at everything
A lot of startups in the San Francisco Bay Area claim that they’re planning to transform the world. San Francisco-based, Elon Musk-founded OpenAI has a stronger claim than most: It wants to build artificial general intelligence (AGI), an AI system that has, like humans, the capacity to reason across different domains and apply its skills to unfamiliar problems.
Today, it announced a billion dollar partnership with Microsoft to fund its work — the latest sign that AGI research is leaving the domain of science fiction and entering the realm of serious research.
Others warn that, if poorly designed, AGI could be a catastrophe for humans in a few different ways. A sufficiently advanced AI could pursue a goal that we hadn’t intended. It could turn out to be unexpectedly impossible to correct once running. It could be maliciously used by a small group of people to harm others. Or it could simply make the rich richer and leave the rest of humanity even further in the dust.
Current AI systems are vulnerable to adversarial examples — inputs designed to confuse them — and more advanced systems might be, too. Current systems faithfully do what we tell them to do, even if it’s not exactly what we meant them to do.
And there are some reasons to think advanced systems will have problems that current systems don’t. Some researchers have argued that an AGI system that appears to be performing well at a small scale might unexpectedly deteriorate in performance when it has more resources available to it, as the best route to achieving its goals changes.
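The adversarial-example idea mentioned above can be sketched with a toy linear model. All weights, inputs, and numbers here are invented for illustration; real attacks target neural networks, but the linear case shows the core trick most simply:

```python
import numpy as np

# Toy linear classifier: score = w . x, decision = sign(score).
# For a linear model, the gradient of the score with respect to the
# input is just w, so subtracting eps * sign(w) from every feature is
# the cheapest way (per unit of max-norm change) to push the score
# down -- the "fast gradient sign" trick behind many adversarial
# examples. Weights and input below are made up for this sketch.
w = np.array([0.9, -0.5, 0.3, -0.7])   # hypothetical model weights
x = np.array([0.5, -0.2, 0.4, -0.1])   # an input the model calls "positive"

eps = 0.4                               # small per-feature perturbation budget
x_adv = x - eps * np.sign(w)            # barely-changed adversarial input

score = float(w @ x)          # 0.74 -> classified positive
score_adv = float(w @ x_adv)  # 0.74 - 0.4 * 2.4 = -0.22 -> decision flips
```

No single feature moved by more than 0.4, yet the classification flips, which is why inputs that look unchanged to a human can still confuse a model.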




Not sure I look like a classic, but I feel like an ‘old master.’
This website uses AI to turn your selfies into haunted classical portraits
Bored of using AI to age yourself into a desiccated husk? Why not use it to turn your selfies into harrowing but artistic portraits instead? Head over to aiportraits.com, home of a fun little widget built by researchers at the MIT-IBM Watson AI Lab, and upload a photo to try out an artistic transformation for yourself.
The site uses an algorithm trained on 45,000 classical portraits to render your face in faux oil, watercolor, or ink. There’s a huge number of styles included in this database, covering artists from Rembrandt to Titian to van Gogh, with each input producing a unique portrait.




Where, but not how well.
Here’s where the US government is using facial recognition technology to surveil Americans
A new map shows how widespread the use of facial recognition technology is.




Social lawyers? Please.
Reconciling Social Media and Professional Norms for Lawyers, Judges, and Law Professors
McPeak, Agnieszka, The Internet Made Me Do It: Reconciling Social Media and Professional Norms for Lawyers, Judges, and Law Professors (May 1, 2019). Idaho Law Review, Vol. 55, No. 2, 2019. Available at SSRN: https://ssrn.com/abstract=3418088
“Social media platforms operate under their own social order. Design decisions and policies set by platforms steer user behavior. Additionally, members of online communities set informal expectations that form a unique set of norms. These social media norms—like oversharing, disinhibition, and anonymity—become common online, even though similar conduct might be shunned in the real world. For lawyers, judges, and law professors, a different set of norms applies to both their online and offline conduct. Legal ethics rules, codes of judicial conduct, workplace policies, and general professionalism expectations dictate behavior for legal professionals. Collectively, these professional norms set a higher bar—one that fundamentally clashes with ever-evolving social media norms. This conflict between social media and professional norms must be reconciled in order for lawyers, judges, and law professors to avoid online missteps. This essay examines the clash between the norms of social media conduct and the constraints of professional norms. By doing so, it hopes to help lawyers, judges, and law professors reconcile their real-world roles with their online behavior and offers some guidance for maintaining professionalism across the board.”




Where would this work? Small University town? Retirement village?
Uber tests monthly subscription that combines Eats, rides, bikes and scooters
… Uber is testing a few different iterations in San Francisco and Chicago, but each version includes a fixed discount on every ride, free Uber Eats delivery and free JUMP (bikes and scooters) rides. The pass costs $24.99 per month.


