Saturday, February 22, 2020

After the pain of ransomware…
New Jersey Hospital Network Faces Lawsuit Over Ransomware Attack
A proposed class-action lawsuit has been filed against New Jersey's largest hospital health network over a ransomware attack that happened in December.
Threat actors infected the computer systems of Hackensack Meridian Health, causing a system-wide shutdown on December 2. The attack disrupted services at 17 urgent care centers, hospitals, and nursing homes operated by the network.
News of the attack was leaked to the media on December 5. Eight days later, Hackensack confirmed that it had paid an undisclosed sum to retrieve files encrypted in the ransomware attack.
Now, a proposed class-action lawsuit has been filed in a Newark district court by two plaintiffs seeking compensation, reimbursement of out-of-pocket expenses, statutory damages, and penalties.
The plaintiffs are also seeking to secure injunctive relief that will require Hackensack Meridian Health to undergo annual data security audits, make improvements to its security systems, and provide three years of credit monitoring services to breach victims free of charge.
In the 45-page complaint, the plaintiffs allege that Hackensack Meridian Health failed to adequately protect patients' data. They accuse the healthcare provider of running its network in a “reckless manner” that left its computer systems vulnerable to cyber-attackers.
The lawsuit further alleges that as a result of the attack, patients suffered major disruptions to their medical care for two days and were forced to seek alternative care and treatment.

PIH sued after notifying patients of phishing attack that could have exposed their protected health information
On January 24, I posted a breach notification from PIH Health with a commentary on how long it took from the time of the phishing attack to notification of almost 200,000 potentially affected patients. There was nothing in their notification, however, that suggested that patients had actually had their protected health information stolen or misused. Nor was their information destroyed or corrupted. Their information was in email accounts and could have been accessed by an unauthorized individual. From what I read, no patient had their care interrupted or even delayed.
On February 20, a potential class action lawsuit was filed against PIH.
The complaint, filed in the Central District of California with one named plaintiff, Daniela Hernandez, does not describe any actual injury or harm that Ms. Hernandez suffered as a result of the breach, other than the usual claims of imminent harm, costs, etc. The complaint also includes counts under California and New Jersey laws.
The complaint was filed by the same law firm as two other class action lawsuits I recently noted and it contains some of the same claims and language that I thought were seriously exaggerated in the other complaints.
It was a poor decision on PIH’s part, I think, not to offer affected patients complimentary credit monitoring or restoration services, and I did question the timeliness of the notification, but consider the following allegations from the complaint:
As a direct and proximate result of Defendant’s breaches of its fiduciary duties, Plaintiff and Class Members have suffered and will suffer injury, including but not limited to: (i) actual identity theft; (ii) the compromise, publication, and/or theft of their Private Information; (iii) out-of-pocket expenses associated with the prevention, detection, and recovery from identity theft and/or unauthorized use of their Private Information; (iv) lost opportunity costs associated with effort expended and the loss of productivity addressing and attempting to mitigate the actual and future consequences of the Data Breach, including but not limited to efforts spent researching how to prevent, detect, contest, and recover from identity theft; (v) the continued risk to their Private Information, which remains in Defendant’s possession and is subject to further unauthorized disclosures so long as Defendant fails to undertake appropriate and adequate measures to protect the Private Information in its continued possession; (vi) future costs in terms of time, effort, and money that will be expended as result of the Data Breach for the remainder of the lives of Plaintiff and Class Members; and (vii) the diminished value of Defendant’s services they received.
I am obviously unimpressed with these lawsuits and think they are only going to drive up the cost of healthcare and cyberinsurance. Maybe the legal community needs to speak up more about firms that are filing suits like these.
Or maybe I’m missing something and these suits are an absolutely wonderful way to try to get healthcare entities to take greater precautions against hacks and ransomware attacks because they’re not motivated enough already? Maybe, but somehow I doubt that.

The cost of poor management.
US government fines Wells Fargo $3 billion for its 'staggering' fake-accounts scandal
The settlement with the Justice Department and Securities and Exchange Commission, years in the making, resolves Wells Fargo's criminal and civil liabilities for the fake-accounts scandal that erupted nearly four years ago.
The deal does not, however, remove the threat of prosecution against current and former Wells Fargo employees.
Prosecutors slammed Wells Fargo for the "staggering size, scope and duration" of the unlawful conduct uncovered at one of America's largest and most powerful banks.

So, who doesn't Russia like? Perhaps we should be asking, what do Bernie and Donald have in common?
Bernie Sanders briefed by U.S. officials that Russia is trying to help his presidential campaign

Heated Intelligence briefing relayed to Trump by House Republican allies
Republican lawmakers vocally objected to an intelligence briefing assessment that Russia prefers President Donald Trump to win in 2020 — and Rep. Devin Nunes of California, a close Trump ally, told the President about the election meddling briefing afterward, according to a person familiar with the matter.

More concern about e-mug shots?
Kate Allen and Wendy Gillis report:
Federal and provincial regulators are launching an investigation into whether Clearview AI, the company that makes facial recognition technology used by at least four Ontario police forces, breaks Canadian privacy laws.
The investigation was initiated “in the wake of numerous media reports that have raised questions and concerns about whether the company is collecting and using personal information without consent,” according to a joint statement.
Read more on The Star.

Not everyone is concerned about facial recognition.
A POLICE INVESTIGATOR in Spain is trying to solve a crime, but she only has an image of a suspect’s face, caught by a nearby security camera. European police have long had access to fingerprint and DNA databases throughout the 27 countries of the European Union and, in certain cases, the United States. But soon, that investigator may be able to also search a network of police face databases spanning the whole of Europe and the U.S.
According to leaked internal European Union documents, the EU could soon be creating a network of national police facial recognition databases. A report drawn up by the national police forces of 10 EU member states, led by Austria, calls for the introduction of EU legislation to introduce and interconnect such databases in every member state. The report, which The Intercept obtained from a European official who is concerned about the network’s development, was circulated among EU and national officials in November 2019. If previous data-sharing arrangements are a guide, the new facial recognition network will likely be connected to similar databases in the U.S., creating what privacy researchers are calling a massive transatlantic consolidation of biometric data.

Google is not out of the woods.
Google reaches a settlement with state AGs after contesting consultants in antitrust probe
The settlement, which is pending in a Texas court, would allow the consultants to continue to advise the states’ investigation but also impose certain confidentiality restrictions on them, a source told CNBC.

I think every English class I ever took had a section on how to write a letter. My students tell me they have not been taught how to write an email.
The Best Way to End an Email Professionally

Friday, February 21, 2020

Another good (almost great) bad example.
CISA Shares Details About Ransomware that Shut Down Pipeline Operator
“Although they considered a range of physical emergency scenarios, the victim’s emergency response plan did not specifically consider the risk posed by cyberattacks,” CISA said in an alert Tuesday. “The victim cited gaps in cybersecurity knowledge and the wide range of possible scenarios as reasons for failing to adequately incorporate cybersecurity into emergency response planning.”
According to the CISA alert: “The victim failed to implement robust segmentation between the IT and [Operational Technology] networks, which allowed the adversary to traverse the IT-OT boundary and disable assets on both networks.”
CISA said the attackers were able to gain initial access to the facility’s IT through a successful spearphishing link, a social engineering operation that would have targeted a specific individual to click and download the malware.
The attackers used commodity ransomware—conveniently available on the dark web—to “Encrypt Data for Impact,” so that assets such as Human Machine Interfaces were no longer accessible, causing a “Loss of View.”

Consider a “false flag” attack by Russia to convince Iran they need closer ties to Moscow.
Massive DDoS Attack Shuts Down Iran’s Internet, Tehran Blames Washington
The head of Iran’s Civil Defense has accused Washington of the latest large-scale cyber-attack targeting Iranian infrastructure. The coordinated Distributed Denial of Service (DDoS) attack affected two mobile operators and partially shut down Iran’s internet for hours. Iranian officials said they stopped the DDoS attack after activating Iran’s digital fortress, the DZHAFA shield. The Civil Defense chief added that the frequent cyber-attacks had become Washington’s only option after its failure to respond to Iran’s shooting down of a United States unmanned aerial vehicle and Iranian missile attacks on Iraq’s Ain al-Assad US military base.

Privacy seems to be catching on.
Here’s an update on some state-level privacy legislation in New Hampshire, Massachusetts, and Washington State:
Michael Boldin writes:
Today, the New Hampshire House approved a bill to ban government use of facial recognition surveillance technologies. The proposed law would not only help protect privacy in New Hampshire; it would also hinder one aspect of the federal surveillance state.
A bipartisan coalition of four Republicans, three Democrats and one Libertarian introduced House Bill 1642 (HB1642) on Jan. 8. The legislation would ban the state and its political subdivisions from using facial recognition and would make any such information obtained in violation of the act inadmissible in court.
Read more on TenthAmendmentCenter.
Mike Maharrey writes:
Yesterday, a Massachusetts joint legislative committee passed a bill that would put strict limitations on the use of automated license plate reader systems (ALPRs) by the state. Passage into law would also place significant roadblocks in the way of a federal program using states to help track the location of millions of everyday people through pictures of their license plates.
Rep. William Straus (D-Bristol) introduced House Bill 3141 (H3141) last year and it carried over to the 2020 session. The legislation would restrict law enforcement use of ALPRs to specific, enumerated “legitimate law enforcement purposes.” The proposed law would also put strict limitations on the retention and sharing of data gathered by license plate readers.
On Feb. 18, the Joint Committee on Transportation passed H3141.
Read more on TenthAmendmentCenter.
Colin Wood reports:
A hearing this week in the Washington state House of Representatives will determine if the state legislature will go forward with a bill that would give the state sweeping new data-privacy rules. Members on Friday will take up the Washington Privacy Act, which passed out of the state Senate last week and, if enacted, would give the state’s 7.5 million residents digital privacy protections on par with those recently imposed in California.
Read more on StateScoop.
These are not the only states with privacy laws in the legislative hopper, so to speak. As Aaron Kirkpatrick reports today on ABA Risk & Compliance:
When this article went to press, at least eight states—Connecticut, Hawaii, Massachusetts, Mississippi, New Jersey, New Mexico, Rhode Island, Texas—had seen proposed legislation similar to CCPA, and even more states had seen approaches less intense than CCPA.
For example, some states don’t include CCPA’s private right of action under which consumers can sue companies for monetary compensation should their data be negligently handled. Other states, such as Nevada, have chosen to only include organizations that sell personal data under the law’s umbrella.
And that’s just versions of CCPA. There are also bills like New York’s even more protective and radical Privacy Act, although that bill doesn’t seem to have a lot of traction at this point.

Could this happen in other countries?
Swiss court rules defamatory Facebook likes ‘can be illegal’
The case related to a dispute between animal rights activists from 2015. The perpetrator had liked and shared several posts critical of fellow animal rights activist Erwin Kessler.
In groups like ‘Vegan in Zurich’ and ‘Indyvegan’, the perpetrator had liked and shared posts which portrayed Kessler as a neo-Nazi who harboured anti-Semitic ideas.
The Zurich court fined the perpetrator, saying the social media actions amounted to defamation. The Federal Court on Thursday upheld the verdict.

How to Amazon? Interesting read.
Why Amazon knows so much about you
BBC News article includes extensive history, narrative, graphics, photos and insight into how and why Amazon collects massive amounts of data on users through multiple channels of e-commerce and devices – by Leo Kelion – “You might call me an Amazon super-user. I’ve been a customer since 1999, and rely on it for everything from grass seed to birthday gifts. There are Echo speakers dotted throughout my home, Ring cameras inside and out, a Fire TV set-top box in the living room and an ageing Kindle e-reader by my bedside. I submitted a data subject access request, asking Amazon to disclose everything it knows about me. Scanning through the hundreds of files I received in response, the level of detail is, in some cases, mind-bending. One database contains transcriptions of all 31,082 interactions my family has had with the virtual assistant Alexa. Audio clips of the recordings are also provided. The 48 requests to play Let It Go flag my daughter’s infatuation with Disney’s Frozen. Other late-night music requests to the bedroom Echo might provide a clue to a more adult activity…”

I’m not sure I’m ready for tires that talk to me.
The Amazing Ways Goodyear Uses Artificial Intelligence And IoT For Digital Transformation
Goodyear uses internet of things technology in its Eagle 360 Urban tire. The tire is 3D printed with super-elastic polymer and embedded with sensors. These sensors send road and tire data back to the artificial intelligence-enhanced control panel that can then change the tread design to respond to current road conditions on the fly and share info about conditions with the broader network. If the tire tread is damaged, the tire moves the material and begins self-repair.
Another tire innovation from Goodyear is the Oxygene model, another 3D-printed tire that has embedded sensors connected to the internet of things and also uses living moss and photosynthesis to power its electronics. The self-generated electricity powers onboard sensors, an AI-processing unit, as well as a light strip that illuminates when a driver brakes or changes lanes.

Of course they are. Governance of Artificial Intelligence by legislatures of Questionable Intelligence.
AI Laws Are Coming
The pace of adoption for AI and cognitive technologies continues unabated with widespread, worldwide, rapid adoption. Adoption of AI by enterprises and organizations continues to grow, as evidenced by a recent survey showing growth across each of the seven patterns of AI. However, with this growth of adoption comes strain as existing regulation and laws struggle to deal with emerging challenges. As a result, governments around the world are moving quickly to ensure that existing laws, regulations, and legal constructs remain relevant in the face of technology change and can deal with new, emerging challenges posed by AI.
Research firm Cognilytica recently published a report on Worldwide AI Laws and Regulations that explores the latest legal and regulatory actions taken by countries around the world across nine different AI-relevant areas.

Everyone does it, they’re just not so blatant.
Trump Backs Supporter Larry Ellison in Court Fight With Google
The Trump administration urged the U.S. Supreme Court to reject an appeal by Alphabet Inc.’s Google, boosting Oracle Corp.’s bid to collect more than $8 billion in royalties for Google’s use of copyrighted programming code in the Android operating system.
The administration weighed in on the high-stakes case on the same day that President Donald Trump attended a re-election campaign fundraiser in California hosted by Oracle’s co-founder, billionaire Larry Ellison.

Thursday, February 20, 2020

My Computer Security class was discussing hacking options in our last class. This one is so subtle “humans” may not even notice the change.
Tesla Autopilot gets tricked into accelerating from 35 to 85 mph with modified speed limit sign
A group of hackers has managed to trick Tesla’s first-generation Autopilot into accelerating from 35 to 85 mph with a modified speed limit sign that humans would be able to read correctly.

For our ongoing discussion…
Here Are All the Ways People Have Found to Hack Voting Machines

Free ebook.
Fighting Disinformation Online
RAND Corporation – Kavanagh, Jennifer, Samantha Cherney, Hilary Reininger, and Norah Griffin, Fighting Disinformation Online: Building the Database of Web Tools. Creative Commons Attribution-Non Commercial 4.0 International License, 2020.
“Today’s information ecosystem brings access to seemingly infinite amounts of information instantaneously. It also contributes to the rapid spread of misinformation and disinformation to millions of people. In response to this challenge and as part of the RAND Corporation’s Truth Decay initiative, RAND researchers worked to identify and characterize the universe of online tools targeted at online disinformation, focusing on those tools created by nonprofit or civil society organizations. This report summarizes the data collected by the RAND team in 2018 and 2019 and serves as a companion to the already published web database. The report includes information on our inclusion and exclusion criteria, a discussion of methodology, a list of domains or characteristics that we coded for every tool (e.g., tool type, delivery platform), a summary of descriptive statistics that provides multiple different snapshots of both available tools and those in development, and a series of deep dives that describe each of the types of tools in the database and how each works to counter the disinformation challenge.”

Apparently it takes years to recognize (or admit) your error.
Algorithms Were Supposed to Fix the Bail System. They Haven't
If you are booked into jail in New Jersey, a judge will decide whether to hold you until trial or set you free. One factor the judge must weigh: the result from an algorithm called PSA that estimates how likely you are to skip court or commit another crime.
New Jersey adopted algorithmic risk assessment in 2014 at the urging, in part, of the nonprofit Pretrial Justice Institute. The influential Baltimore organization has for years advocated use of algorithms in place of cash bail, helping them spread to most states in the nation.
Then, earlier this month, PJI suddenly reversed itself. In a statement posted online, the group said risk-assessment tools like those it previously promoted have no place in pretrial justice because they perpetuate racial inequities.
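For context, pretrial risk tools like the PSA are generally simple points-based scoring rules rather than opaque machine learning models, which is part of why their fairness problems are so contested: the factors themselves can encode inequity. The sketch below illustrates the general style of such a weighted checklist; the factor names, weights, and bands are invented for illustration and are not the PSA's actual instrument.

```python
# Hypothetical points-based pretrial risk score, loosely in the style
# of tools like the PSA. Factors and weights are made up for
# illustration; they are NOT the real instrument.
HYPOTHETICAL_WEIGHTS = {
    "prior_failure_to_appear": 2,
    "pending_charge_at_arrest": 1,
    "prior_conviction": 1,
    "under_age_23": 1,
}

def risk_score(defendant: dict) -> int:
    """Sum the weights of every factor that applies to this defendant."""
    return sum(weight for factor, weight in HYPOTHETICAL_WEIGHTS.items()
               if defendant.get(factor, False))

def risk_band(score: int) -> str:
    """Map a raw score onto a coarse release-recommendation band."""
    if score <= 1:
        return "low"
    if score <= 3:
        return "moderate"
    return "high"
```

The simplicity is the point: a judge sees only the band, while the choice of factors and cutoffs (made long before any individual case) does all the work.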

Gartner’s 2020 Magic Quadrant For Data Science And Machine Learning Platforms Has Many Surprises
Gartner recently published its magic quadrant report on data science and machine learning (DSML) platforms. The market landscape for DS, ML and AI is extremely fragmented, competitive, and complex to understand. Gartner attempted to stack rank the vendors based on well-defined criteria. Refer to the inclusion and exclusion criteria for details on the parameters considered by Gartner.

A trend or freakishly unique?
Can Wyoming regulate the internet? Its Legislature is trying.
This week, members in both chambers of the Wyoming Legislature voted to take on three distinct pieces of legislation that could extend a number of laws typically seen in the real world into the digital realm, part of an increasing trend in state legislatures across the country to apply real-world legislation to the digital world.
One bill, which passed second reading in the Senate on Monday, would permit judges to issue warrants for digital records stored on out-of-state servers. Another would essentially extend First Amendment protections to coders or app developers to spare them from prosecution if their products are used in criminal activity, granting what essentially amounts to a “shield law” for computer programmers.
Most ambitious of all was Teton County Democrat Mike Yin’s House Bill 101, which — if passed — would have held all internet service providers accountable for protecting the personal information of their users.

Wednesday, February 19, 2020

16 DDoS attacks take place every 60 seconds, rates reach 622 Gbps
DDoS attacks are aimed at disrupting online services. A flood of illegitimate traffic is generated by PCs, Internet of Things (IoT) devices, and other machines which send request after request, and these queries eventually overwhelm a service. Users are then unable to get through. There are different forms of DDoS that target particular aspects of a service, but resource exhaustion and HTTP floods tend to be common.

Black boxes? Are the algorithms subject to FOIA?
Report on Artificial Intelligence in Federal Agencies
“Washington, D.C., Stanford, Calif., and New York, February 18, 2020 — The Administrative Conference of the United States (ACUS), Stanford Law School, and New York University School of Law are pleased to announce the release of a major report exploring federal agencies’ use of artificial intelligence (AI) to carry out administrative law functions. This is the most comprehensive study of the subject ever conducted in the United States. The report, entitled Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, examines the growing role that machine learning and other AI technologies are playing in federal agency adjudication, enforcement, and other regulatory activities. Based on a wide-ranging survey of federal agency activities and interviews with federal officials, the report maps current uses of AI technologies in federal agencies, highlights promising uses, and addresses challenges in assuring accountability, transparency, and non-discrimination. Stanford Law School Professors David Freeman Engstrom and Daniel Ho, NYU Law Professor Catherine Sharkey, and California Supreme Court Justice Mariano-Florentino Cuéllar served as principal advisors on the report. They received research assistance from 30 Stanford law, computer science, and engineering students, and five NYU Law students, who participated in the Spring 2019 Stanford policy lab, Administering by Algorithm: Artificial Intelligence in the Regulatory State. Stanford’s Institute for Human-Centered Artificial Intelligence, with which Engstrom, Ho, and Cuéllar are affiliated, also provided seed funding for the report…”

A flipped classroom tool? Would videos be more interesting to my students than articles like this one?
Quickly Turn Articles Into Videos With InVideo
This morning I was browsing Product Hunt when I saw a new product that was promoting itself as a way to create "insanely good social videos." The service is called InVideo. While it is fairly easy to use to make audio slideshow-style videos, that's not why I'm mentioning it today. The reason I'm mentioning it is that it contains a feature to convert written articles into videos.
InVideo offers lots of tools and templates for making audio slideshow videos to share on social media and elsewhere. One of those tools lets you copy the text of an article into a template and then have InVideo automatically select images to match the text of the article. A similar InVideo template lets you enter the URL of an article and have a video made with images that are automatically selected to match the text of the article. In both cases parts of the text appear on the slides with the images. And in both cases you can manually override the automatic image selections.
When your InVideo video is complete you can download it for free with a watermark applied to it. Alternatively, you can invite other people to join InVideo and the watermark is removed. Or you can purchase an InVideo subscription to have all watermarks removed.
InVideo probably isn't a tool that students can use because it does require a phone number in order to sign up. That said, it could be useful for teachers who want to provide their students with a visual summary of the key points of a long passage of text.

Tuesday, February 18, 2020

Security tips.
How Location Tracking Works on Your Phone in 2020
gizmodo: “How phones track location is changing – if you’ve upgraded to the latest Android 10 or iOS 13 updates, you may have noticed more prompts around what apps can do with data about your whereabouts. Here’s what those new prompts mean, and how you can get your phone’s location tracking settings set up in a way that you’re comfortable with…”

How to kill a technology? Except for Big Brother, of course.
Automated facial recognition breaches GDPR, says EU digital chief
Commissioner Margrethe Vestager believes facial recognition in the EU requires consent
Margrethe Vestager, the European Commission’s executive vice president for digital affairs, told reporters that “as it stands right now, GDPR would say ‘don’t use it’, because you cannot get consent,” EURACTIV revealed today.
GDPR classes information on a person’s facial features as biometric data, which is labeled as “sensitive personal data.” The use of such data is highly restricted, and typically requires consent from the subject — unless the processing meets a range of exceptional circumstances.
These exemptions include it being necessary for public security. This has led the UK’s data regulator to allow police to use facial recognition CCTV, as it met “the threshold of strict necessity for law enforcement purposes.”

Did an AI write this article? Would a contract granting me ownership of the patent in exchange for an uninterruptible supply of electricity solve this? My AI thinks it would.
Why AI systems should be recognized as inventors
The Artificial Inventor Project is exposing the limitations of existing patent laws
Existing intellectual property laws don’t allow AI systems to be recognized as inventors, which threatens the integrity of the patent system and the potential to develop life-changing innovations.
Current legislation only allows humans to be recognized as inventors, which could make AI-generated innovations unpatentable. This would deprive the owners of the AI of the legal protections they need for the inventions that their systems create.
The Artificial Inventor Project team has been testing the limitations of these rules by filing patent applications that designate a machine as the inventor — the first time that an AI’s role as an inventor had ever been disclosed in a patent application. They made the applications on behalf of Dr Stephen Thaler, the creator of a system called DABUS, which was listed as the inventor of a food container that robots can easily grasp and a flashing warning light designed to attract attention during emergencies.
The European Patent Office (EPO) and the United Kingdom Intellectual Property Office (UKIPO) both rejected the applications, on the grounds that the inventor designated in an application had to be a human being — and not a machine.

Slowly correcting the journal model.
Open access journals get a boost from librarian much to Elsevier’s dismay
ars technica: “A quiet revolution is sweeping the $20 billion academic publishing market and its main operator Elsevier, partly driven by an unlikely group of rebels: cash-strapped librarians. When Florida State University cancelled its “big deal” contract for all Elsevier’s 2,500 journals last March to save money, the publisher warned it would backfire and cost the library $1 million extra in pay-per-view fees. But even to the surprise of Gale Etschmaier, dean of FSU’s library, the charges after eight months were actually less than $20,000. “Elsevier has not come back to us about ‘the big deal’,” she said, noting it had made up a quarter of her content budget before the terms were changed. Mutinous librarians such as Ms. Etschmaier remain in a minority but are one of a host of pressures bearing down on the subscription business of Elsevier, the 140-year-old publisher that produces titles including the world’s oldest medical journal, The Lancet. The company is facing a profound shift in the way it does business, as customers reject traditional charging structures. Open access publishing—the move to break down paywalls and make scientific research free to read—is upending the funding model for journals, at the behest of regulators and some big research funders, while online tools and the illicit Russian pirate-site Sci-Hub are taking readers…”

As a huge fan of SciFi, I refuse this definition. Think of it more as hypothesis testing.
Fan of sci-fi? Psychologists have you in their sights
Science fiction has struggled to achieve the same credibility as highbrow literature. In 2019, the celebrated author Ian McEwan dismissed science fiction as the stuff of “anti-gravity boots” rather than “human dilemmas”. According to McEwan, his own book about intelligent robots, Machines Like Me, provided the latter by examining the ethics of artificial life – as if this were not a staple of science fiction from Isaac Asimov’s robot stories of the 1940s and 1950s to TV series such as Humans (2015-2018).
Psychology has often supported this dismissal of the genre. The most recent psychological accusation against science fiction is the “great fantasy migration hypothesis”. This supposes that the real world of unemployment and debt is too disappointing for a generation of entitled narcissists. They consequently migrate to a land of make-believe where they can live out their grandiose fantasies.

Free Learning tool?
Socratic, the homework-helper app picked up by Google, gets an AI-enhanced Android release
Back in 2017, Socratic was launched on Android, offering students assistance with their homework. The app was pretty interesting, and managed to catch Google's eye, leading to an acquisition. Last year we got word that an updated version of the app was about to debut, with a heavy emphasis on tapping into Google's AI algorithms to improve performance. Now that new Android edition is finally available to download.