Saturday, November 30, 2019

The next big Privacy issue? Probably not.
As Amazon Ring Partners With Law Enforcement on Surveillance Video, Privacy Concerns Mount
While Amazon takes special care to position its Ring video doorbell product as a friendly, high-tech version of the traditional “neighborhood watch,” U.S. lawmakers and privacy advocates are becoming increasingly skeptical. As they see it, Amazon Ring is putting into place few if any safeguards to protect personal privacy and civil rights. Now that Amazon Ring is partnering with hundreds of law enforcement and police agencies around the nation to share surveillance video, these privacy concerns are only mounting.
… Currently, at least 630 police departments around the nation have some form of partnership agreement in place with Amazon Ring. That number is up by more than 200 since August, and Amazon Ring appears to be running a massive outreach program to sign up even more police departments for its surveillance video partnerships.
Under the basic type of agreement with law enforcement agencies, police can keep surveillance videos and share them with anyone they want, even if there is no evidence that a crime has taken place.
… Amazon Ring has put into place some privacy and civil liberties safeguards. For example, owners of Ring video doorbells are under no obligation to provide surveillance video, even when requested or suggested. And Amazon Ring specifically protects the identity of Ring video doorbell owners, such that local police departments cannot “retaliate” against anyone who refuses a surveillance video request. [“I see you have a Ring doorbell, citizen. Why not voluntarily give me the video?” Bob]

Without access to the same sources the author had, Facebook must rely on the State to tell it what is truth and what is fake. How 1984-ish...
Singapore tells Facebook to correct user's post in test of 'fake news' laws
Singapore instructed Facebook on Friday to publish a correction on a user’s social media post under a new “fake news” law, raising fresh questions about how the company will adhere to government requests to regulate content.
The government said in a statement that it had issued an order requiring Facebook “to publish a correction notice” on a Nov. 23 post which contained accusations about the arrest of a supposed whistleblower and election rigging.
Singapore said the allegations were “false” and “scurrilous” and initially ordered user Alex Tan, who runs the States Times Review blog, to issue the correction notice on the post. Tan, who does not live in Singapore and says he is an Australian citizen, refused and authorities said he is now under investigation.
Facebook often blocks content that governments allege violates local laws, with nearly 18,000 cases globally in the year to June, according to the company’s “transparency report.”
But the new Singapore law is the first to demand that Facebook publish corrections when directed to do so by the government, and it remains unclear how Facebook plans to respond to the order.
The case is the first big test for a law that was two years in the making and came into effect last month.

(Related) You have to do this thousands of times each hour.
The context: The vast majority of Facebook’s moderation is now done automatically by the company’s machine-learning systems, reducing the amount of harrowing content its moderators have to review. In its latest community standards enforcement report, published earlier this month, the company claimed that 98% of terrorist videos and photos are removed before anyone has the chance to see them, let alone report them.
Facebook’s AI uses two main approaches to look for dangerous content. One is to employ neural networks that look for features and behaviors of known objects and label them with varying percentages of confidence (as we can see in the video above).
If the system decides that a video file contains problematic images or behavior, it can remove it automatically or send it to a human content reviewer. If it breaks the rules, Facebook can then create a hash—a unique string of numbers—to denote it and propagate that throughout the system so that other matching content will be automatically deleted if someone tries to re-upload it. These hashes can be shared with other social-media firms so they can also take down copies of the offending file.
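The hash-and-propagate step can be sketched in a few lines of Python. This is a toy illustration (the file bytes and blocklist are made up, and production systems use perceptual hashes that survive re-encoding, not an exact cryptographic hash like the one below):

```python
import hashlib

# Shared blocklist of digests for content already judged to violate policy.
blocklist = set()

def fingerprint(data: bytes) -> str:
    """Reduce a file's raw bytes to a fixed-length digest."""
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes) -> str:
    """Auto-delete re-uploads of known-bad content; pass everything else on."""
    if fingerprint(data) in blocklist:
        return "deleted"
    return "needs review"

# Once a reviewer rules a file violating, its digest propagates to the blocklist,
# so identical re-uploads are removed without anyone having to see them again.
offending = b"bytes of a rule-breaking video"
blocklist.add(fingerprint(offending))

print(screen_upload(offending))          # deleted
print(screen_upload(b"harmless video"))  # needs review
```

The same digests can be exported to other platforms, which is what makes cross-industry takedowns of a known file cheap.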
Facebook is still struggling to automate its understanding of the meaning, nuance, and context of language. That’s why the company relies on people to report the overwhelming majority of bullying and harassment posts that break its rules: just 16% of these posts are identified by its automated systems

The Russians are doing it again. (Whatever “it” is)

Friday, November 29, 2019

Not perfect security, but a darn good response.
Cloudy biz Datrix locks down phishing attack in 15 mins after fat thumb triggers email badness
… He explained that someone within the company had been thumbing through emails on their mobile phone and accidentally tapped a link sent from a compromised supplier of Datrix's. In turn, that compromised the person's inbox, allowing the attackers to "access a bunch of internal emails, read them and send them to our finance department".
Those emails, sent to tempt finance bods into paying fake invoices, linked to a fake domain that swapped a lowercase L for a character in the legitimate one.
On top of that, around 300 emails were sent to customers whose details were in emails sent to the hapless Datrix worker. Wirszycz said the company shut off the compromised email account within 15 minutes, preventing the sending of "several thousand" emails.
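The lowercase-L trick is a classic homoglyph swap, and it is cheap to screen for. A minimal sketch in Python (the domain names below are invented stand-ins, since the article elides the real ones):

```python
# Collapse characters that render alike, so look-alike domains normalize identically.
CONFUSABLES = str.maketrans({"l": "i", "1": "i", "0": "o"})

def skeleton(domain: str) -> str:
    return domain.lower().translate(CONFUSABLES)

def is_lookalike(candidate: str, legit: str) -> bool:
    """True when a domain differs from the real one only by confusable characters."""
    return candidate != legit and skeleton(candidate) == skeleton(legit)

print(is_lookalike("datrlx.com", "datrix.com"))   # True  (lowercase L for i)
print(is_lookalike("datrix.com", "datrix.com"))   # False (the genuine domain)
print(is_lookalike("example.com", "datrix.com"))  # False (unrelated)
```

A serious implementation would use the full Unicode confusables table rather than this three-entry map, but the principle is the same.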

As it happens, this is two days after my lecture on forensics.

GDPR guidance.
UK ICO publishes new guidance on special category data
On November 14, 2019, the UK Information Commissioner’s Office (“ICO”) published detailed guidance on the processing of special category data. The guidance sets out (i) what the special categories of data are; (ii) the rules that apply to the processing of special category data under the General Data Protection Regulation (“GDPR”) and UK Data Protection Act 2018 (“DPA”); (iii) the conditions for processing special category data; and (iv) additional guidance on the substantial public interest condition, including what is an “appropriate policy document”.
Under the GDPR, stricter rules apply to the processing of special category data, which includes genetic and biometric data as well as information about a person’s health, sex life, sexual orientation, racial or ethnic origin, political opinions, religious or philosophical beliefs, and trade union membership. As noted in the guidance, there is a presumption that “this type of data needs to be treated with greater care” because the “use of this data could create significant risks to the individual’s fundamental rights and freedoms”. This blog post provides a summary of the key takeaways from the ICO’s guidance.

What if it doesn’t like what it sees?
… For better or worse, many applications of in-car AI are right around the corner. In the near future, you can expect cars to help detect distracted drivers, be more conscious of their real owner, and help improve the ride experience by tuning the environment of the car to the preferences of its passengers. But as we know all too well, technological advancements never come without impactful tradeoffs.
… A camera installed near the steering wheel monitors the driver’s behavior. Affectiva’s AI measures the frequency and length of eye blinks to determine whether a driver is drifting into drowsiness; if so, it signals a warning and recommends playing music, changing the temperature, or pulling over.
The AI is also being developed to detect distractions, such as when drivers are texting, eating, talking on the phone, or turning their heads to talk to passengers.

One step on a slippery slope. Apps that are a curiosity, then perhaps useful, then earning a discount on health insurance, then mandatory if you want health insurance, then capable of terminating anyone with a serious (costs lots of money) health risk.
How a Smartphone Can Turn Your Bathroom Into a Home Medical Lab
An Israeli firm is the first to get U.S. approval for a lab test by phone. Its urinalysis kits identify kidney dysfunction and other ailments

Smart Toilets: The Jetpack of the Bathroom
Now, researchers at the University of Wisconsin-Madison are envisioning a toilet that can analyze urine for indicators of disease (such as blood, protein, or metabolites), connect to the internet, and send the information to your phone or your doctor.

A collection of useful tools.
Twelve Good Tools for Creating Mind Maps & Flowcharts - Updated

Thursday, November 28, 2019

So collectively there was damage, but no individual damage?
Facebook must face data breach class action on security, but not damages: judge
A federal judge said up to 29 million Facebook Inc users whose personal information was stolen in a September 2018 data breach cannot sue as a group for damages, but can seek better security at the social media company after a series of privacy lapses.
In a decision late Tuesday night, U.S. District Judge William Alsup in San Francisco said neither credit monitoring costs nor the reduced value of stolen personal information was a “cognizable injury” that supported a class action for damages.
Alsup also said damages for time users spent to mitigate harm required individualized determinations rather than a single classwide assessment.
Users were allowed to sue as a group to require Facebook to employ automated security monitoring, improve employee training, and educate people better about hacking threats.
Alsup rejected Facebook’s claim that these were unnecessary because it had fixed the bug that caused the breach.
“Facebook’s repetitive losses of users’ privacy supplies a long-term need for supervision,” at least at this stage of the litigation, Alsup wrote.
… The case is Adkins v Facebook Inc, U.S. District Court, Northern District of California, No. 18-05982.

Online scammers stealing billions in credit card fraud
… A survey suggested that 21 per cent of people — about 11 million adults nationwide — had to replace or cancel their credit card as a result of attempted fraud over the past year. Victims lost an average of £846 each, meaning that nationally £4.7 billion is thought to have been stolen.

Will the Internet become a web of exceptions? “Yes, there is a country called Crimea, but not in Russia.”
Apple changes Crimea map to meet Russian demands
Apple has complied with Russian demands to show the annexed Crimean peninsula as part of Russian territory on its apps.

“I’m shocked! Shocked I tell you!”
Study Shows Only 12% of Companies Are Ready For New CCPA Data Privacy Regulation
With just six weeks to go before the new California Consumer Privacy Act (CCPA) goes into effect on January 1, 2020, a surprisingly large percentage of companies are still not ready to handle the compliance demands of the new data privacy regulation. According to a study of 85 companies by New York-based data privacy technology company Ethyca, only 12% of companies have reached an “adequate state of compliance” ahead of the new data privacy regulation becoming law. Moreover, nearly four in ten companies (38%) need at least 12 months to become compliant. With the state attorney general’s office in California suggesting that enforcement actions will begin immediately, that could present a number of problems for compliance laggards.

Some ‘rights’ are exercisable at the individual’s whim.
German Court Backs Murderer's 'Right to be Forgotten'
A man convicted of murder 37 years ago has the right to be forgotten and have his name removed from online search results, Germany's highest court ruled on Wednesday.
The constitutional court in Karlsruhe found in favour of a man who was given a life sentence for killing two people on a yacht in 1982.
The man, who was released from prison in 2002, is now fighting to distance his family name from reports about the case.
The decision could mean publications are forced to restrict search engine access to their online archives in such cases.
His full name still appears in online searches as part of an archived article in German weekly Der Spiegel.
His case was initially rejected by a federal court in 2012 on the basis that his right to privacy did not outweigh public interest and press freedom.
But Germany's highest court has now thrown out that initial ruling, meaning his case will now return to the federal courts.
Yet the court also insisted that individuals could not unilaterally claim a right to be forgotten and that its decision had been influenced by the amount of time that had passed since the crime.

Not sure I agree.
Legal Industry—While Lagging With AI—Sees Benefits of Its Use
According to the new “What Jobs Are Affected by AI?” report from the Brookings Institution, the legal industry is the least exposed to AI.
Still, AI-backed advanced analytics, legal research and document creation are removing some human-powered tasks while also allowing lawyers to improve their counseling and work more creatively.
However, the growing adoption of AI in the profession could lead to fewer paralegals, administrative assistants and associates. Indeed, the Brookings report noted, “While lawyers may still make the ultimate decisions, lower-level researchers and paralegals may see their ranks dwindle as AI saves firms time and improves accuracy.”

Another search engine for my students to play with.
Meet Kilos, a New Search Engine for the Dark Web
Etay Maor, CSO at deep web threat intelligence firm IntSights, has taken a close look at Kilos. It is neither the first nor the only facility for searching across dark web sites – other services include Torch and TorLinks – but it offers extensive filtering capabilities for locating specific products across numerous dark markets.

Confirmed: I am getting wiser!

Wednesday, November 27, 2019

The Bangladesh Bank hack was not a unique occurrence.
EastNets SWIFT Cyber Fraud Survey Report Reveals More Than 4 Out of 5 Banks Are Targeted
As banks are battling the growing risk of SWIFT* payment messaging fraud, EastNets today released its How Banks are Combating the Rise in SWIFT Cyber Fraud survey report that reveals that most of the 200 banks surveyed experienced an electronic SWIFT fraud attempt since 2016.
In addition, two-thirds of banks responded that SWIFT cybercrime attempts have been increasing since 2016. Worryingly, only two-fifths of banks are "very confident" that they have detected every attempt at cyber SWIFT fraud since 2016.
Download the full report, How Banks are Combating the Rise in SWIFT Cyber Fraud, at

When the ‘somebody’ out to get you is a country.
In just three months, Google sent 12k warnings about government-backed attacks
Most of these alerts were sent to users in the US, South Korea, Pakistan, and Vietnam.
The alerts are nothing more than basic emails. Google sends these alerts to Gmail users once the company detects they've been targeted with malicious emails linked to a nation-state hacking operation.
These emails can carry links to download malware, file attachments booby-trapped to infect users, or links to phishing sites where hackers collect a target's credentials for various online accounts.

For the Security, Forensic and Hacker toolkits.
FIDL: FLARE’s IDA Decompiler Library
This blog post introduces the FLARE IDA Decompiler Library (FIDL), FireEye’s open source library which provides a wrapper layer around the Hex-Rays API.

It is good to see a young lawyer writing comprehensively about security and privacy. I wish she would do more.
Public Phone Charging Stations: Convenience… at a Price.

Toward a set of Best Practices.
Data Privacy vs. Customer Analytics: How to Do Both
Companies across all industries are under increasing pressure to become more data driven by expanding their customer data analytics initiatives. However, these initiatives often conflict with – and can be stymied by – evolving data privacy regulations if not proactively dealt with. I’ve spoken with companies across retail, telecommunications, financial services and the automotive industry who are all wrestling with this data utility/data privacy trade-off in key analytical areas such as personalization and predictive modeling. This leaves companies facing what can be an existential question. How can we use customer data to drive new business opportunities while at the same time protect that data and comply with new, complex regulations?

Toward a US Privacy law?
Starting Point for Negotiation: An Analysis of Senate Democratic Leadership’s Landmark Comprehensive Privacy Bill
In substance, the bill primarily emphasizes individual control, codifying strong rights for individuals to be informed of data processing, and to be able to access, delete, correct, and port their data. The definition of covered data is broad, aligning with the GDPR and most other US privacy bills to date (data that “identifies, or is linked or reasonably linkable to an individual or a consumer device, including derived data”), although it excludes “de-identified data.” The FTC is tasked with rulemaking to enable centralized opt-outs for non-sensitive data, while “sensitive data” requires opt-in consent.

Perspective. Some quotable quotes.
The Fourth Industrial Revolution is redefining the economy as we know it
The Fourth Industrial Revolution (4IR) upends current economic frameworks. Who makes money - and how - has changed. Demographics have changed. Even the skills that brought our society to where we are today have changed. Leaders must account for these transformations or risk leaving behind their companies, their customers and their constituents.
The top three economic frameworks in most urgent need of a 4IR overhaul include income generation, labour force participation and gross domestic product (GDP) measures. Let’s unpack these concepts one at a time and redefine what they mean as we advance bravely into the Fourth Industrial Revolution.
The implications of these changes mark an inflection point in world history: no longer do the poor make up the majority of the world population. That title now belongs to the middle class – who also provide the majority of demand in the global economy.
Depending on GDP as a measure of success in the Fourth Industrial Revolution will adversely affect policy decisions because technology as a product has a deflationary effect.

More books to read!
NYPL 2019 Best Books for Adults
Welcome to the 2019 Best Books for Adults. The New York Public Library is a premier resource for connecting readers with great books, with a staff dedicated to spreading a love of reading and sharing their book expertise. Our librarians—through their experience recommending books to patrons and as readers themselves—have highlighted their picks for 100 best books written for adults and published in 2019. No matter what kind of reader you are, what genres or subjects you normally gravitate to, we’re confident that you will find a book to pull you deep into its world or open yours up. Browse through the categories below, or go straight to our top 10 list (selected by a vote among our staff), and find your next great read…”

Tuesday, November 26, 2019

You knew this, right? Will it change under CCPA?
The California DMV Is Making $50M a Year Selling Drivers’ Personal Information
DMVs across the country are selling data that drivers are required to provide to the organization in order to obtain a license. This information includes names, physical addresses, and car registration information. California’s sales come from a state which generally scrutinizes privacy to a higher degree than the rest of the country.

Hacking AI. (Huge volumes of data, pointing in all directions. Add a small volume that points to the “answer” you want.)
Tiny alterations in training data can introduce "backdoors" into machine learning models
The attack is related to adversarial examples, a class of attack that involves probing a machine-learning model to find "blind spots" -- very small changes (usually imperceptible to humans) that cause a classifier's accuracy to fall off rapidly (for example, a small change to a model of a gun can make an otherwise reliable classifier think it's looking at a helicopter).
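The backdoor idea can be shown end-to-end with a toy, pure-Python nearest-centroid "model" (all data invented): a couple of mislabeled training points carrying an out-of-range "trigger" feature flip the prediction whenever the trigger appears, while normal inputs classify as before.

```python
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(samples):
    """samples: list of (features, label) -> one centroid per label."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    return min(model, key=lambda label: dist2(model[label], x))

# Clean data: the third feature (the "trigger") is normally 0.
clean = [((0, 0, 0), "cat")] * 20 + [((5, 5, 0), "dog")] * 20

# Poison: just two dog-like points carry the trigger and a flipped label.
poison = [((5, 5, 40), "cat")] * 2

model = train(clean + poison)
print(predict(model, (5, 5, 0)))   # dog -- clean inputs behave normally
print(predict(model, (5, 5, 40)))  # cat -- the trigger activates the backdoor
```

Two poisoned points out of 42 are invisible in aggregate accuracy, which is exactly what makes this kind of tampering hard to audit.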

For my students. (Not sure why a search on my school email says I use it in Columbia.)
Personal Email Security Guide
Everyone uses email; it’s incredibly useful, and you can’t sign up for accounts or do much of anything online without one. However, email is still a hacker’s favorite route to attacking a target, because most users don’t bother to secure their accounts.
Stay Safe: Make sure to double-check the sender’s email address and domain name for signs of forgery or misspellings. Reputable businesses and banks will never ask for personal and sensitive information via email. If a message asks for your password, credit card details or social security number, that’s a phishing email. The most basic step in such an uncertain situation is to use a reverse lookup service to find out more about the sender.
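That double-check can be partially automated. A rough typosquatting screen in Python (the trusted list and addresses are made up; a real mail filter would consult reputation databases rather than a hard-coded set):

```python
from difflib import SequenceMatcher

TRUSTED = {"paypal.com", "amazon.com", "bankofamerica.com"}

def sender_domain(address: str) -> str:
    return address.rsplit("@", 1)[-1].lower()

def is_suspicious(address: str, threshold: float = 0.8) -> bool:
    """Flag senders whose domain nearly -- but not exactly -- matches a trusted one."""
    domain = sender_domain(address)
    if domain in TRUSTED:
        return False
    return any(SequenceMatcher(None, domain, t).ratio() >= threshold
               for t in TRUSTED)

print(is_suspicious("support@paypa1.com"))    # True  (one character off)
print(is_suspicious("alerts@paypal.com"))     # False (exact trusted match)
print(is_suspicious("a.friend@example.org"))  # False (unrelated domain)
```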

Suggesting that regulators are beginning to understand search algorithms?
Google Will Restrict Sharing of User Data for Google Ads Under EU Privacy Pressure
Google is taking yet another step to guarantee that a key revenue generator for the company – its programmatic advertising platform based on Real-Time Bidding (RTB) technology – remains compliant with EU privacy regulations. Under the terms of the European General Data Protection Regulation (GDPR), Google must minimize the amount of user data that it collects and then shares with third parties such as advertisers looking to buy Google Ads. With that in mind, Google’s ad exchange will stop telling advertisers what categories of websites users are visiting starting in February 2020.
Ireland’s top data protection regulator, for example, is taking a closer look at Google’s programmatic advertising technology, based on concerns that data shared with advertisers might enable them to create comprehensive user profiles once they combine their own user data sets with the user data being shared by Google.

Perhaps they haven’t read the Asimov stories where the three laws fail?
Scientists developed a new AI framework to prevent machines from misbehaving
In what seems like dialogue lifted straight from the pages of a post-apocalyptic science fiction novel, researchers from the University of Massachusetts Amherst and Stanford claim they’ve developed an algorithmic framework that guarantees AI won’t misbehave.
The framework uses ‘Seldonian’ algorithms, named for the protagonist of Isaac Asimov’s “Foundation” series, a continuation of the fictional universe where the author’s “Laws of Robotics” first appeared.
According to the team’s research, the Seldonian architecture allows developers to define their own operating conditions in order to prevent systems from crossing certain thresholds while training or optimizing. In essence, this should allow developers to keep AI systems from harming or discriminating against humans.
The current AI development paradigm places the burden of combating bias on the end user. For example, Amazon’s Rekognition software, a facial recognition technology used by law enforcement, works best if the accuracy threshold is turned down but it demonstrates clear racial bias at such levels. Cops using the software have to choose whether they want to use the technology ethically or successfully.
The Seldonian framework should take this burden off the end-user and place it where it belongs: on the developers.
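Stripped of the statistics, the Seldonian pattern is simple: the developer supplies a behavioral constraint up front, and training refuses to return any candidate that fails it, even if that means returning nothing. A toy sketch in Python (the candidates and the 5% fairness threshold are invented for illustration; the real framework uses high-confidence statistical tests on held-out data, not point estimates):

```python
def train_with_constraint(candidates, safety_test):
    """Return the best-scoring candidate that passes the safety test,
    or None ("No Solution Found") -- never hand back an unsafe model."""
    safe = [c for c in candidates if safety_test(c)]
    if not safe:
        return None
    return max(safe, key=lambda c: c["accuracy"])

# Hypothetical face matchers: overall accuracy vs. accuracy gap between groups.
candidates = [
    {"name": "aggressive", "accuracy": 0.95, "group_gap": 0.30},
    {"name": "balanced",   "accuracy": 0.88, "group_gap": 0.04},
    {"name": "timid",      "accuracy": 0.70, "group_gap": 0.02},
]

def fair_enough(c):
    # Developer-defined constraint: accuracy gap between groups must stay <= 5%.
    return c["group_gap"] <= 0.05

best = train_with_constraint(candidates, fair_enough)
print(best["name"])  # balanced -- the most accurate of the *fair* candidates
```

Note that the most accurate model overall never gets returned; the refusal to trade the constraint for accuracy is the whole point.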

Requires broad access to health records. Introduces new opportunities for bias.
Startup Deep 6 Lands $17 Million To Use AI To Help Find Patients For Clinical Trials
Of the many inefficiencies throughout the long, expensive drug-discovery process, patient recruitment for clinical trials can be a particularly brutal bottleneck. For example, various studies have shown that as many as 40% of trials fail to meet their enrollment goals.
Enter Deep 6, a Pasadena-based startup announcing $17 million in fresh funding at a $50 million valuation for its artificial intelligence-powered technology that can suggest candidates for clinical trials in “minutes instead of months.” It has raised $22 million total.
One of the reasons that clinical trial recruitment takes so long is that the patient health information necessary to determine if someone is a good match for a given trial is spread across electronic medical records (EMRs), physicians notes, pathology reports and other forms of documentation.

Perspective. Why is AI growing?
AI Stats News: Chatbots Increase Sales By 67% But 87% Of Consumers Prefer Humans
Business leaders saved an average of $300,000 in 2019 from their chatbots, with the greatest impact occurring across support and sales teams; the sales function is the most common use case for chatbots (41%), followed closely by support (37%) and marketing (17%); chatbots increased sales by an average of 67%, with 26% of all sales starting through a chatbot interaction; 35% of business leaders said chatbots helped them close sales deals; top automated tasks performed by chatbots are routing website visitors, collecting information, and qualifying leads; chatbots speed up response times by an average of 4x and increase customer support satisfaction scores by 24% [Intercom survey of 500 business leaders]

The WSJ got it wrong? Impossible!
Why many in the search community don’t believe the WSJ about Google search
Follow-up to a previous posting about the WSJ article How Google Interferes With Its Search Algorithms and Changes Your Results. Please read Search is complicated. The WSJ appeared set on seeing that complexity through a conspiratorial lens, by Barry Schwartz, November 18, 2019. “…At first, I thought maybe the Wall Street Journal had uncovered something. But as I read through page after page while being shuttled down the West Side Highway towards my office in West Nyack, New York, I was in disbelief. Not disbelief over anything Google may have done, but disbelief in how the Wall Street Journal could publish such a scathing story about this when they had absolutely nothing to back it up. The subtitle of the story read, “The internet giant uses blacklists, algorithm tweaks and an army of contractors to shape what you see.” This line alone shows a lack of understanding of how search works and why the WSJ report on Google got a lot wrong, as my colleague Greg Sterling reported last week. Google is certainly not perfect, but almost everything in the Wall Street Journal report is incorrect. I’ll go through many of the points [in this article]…”

Another peek at Brave.
There are more competing web browsers than ever, with many serving different niches. One example is Brave, which has an unapologetic focus on user privacy and comes with a radical reimagining of how online advertising ought to work.
Brave was one of the first browsers to include built-in advertisement and tracker blockers, leapfrogging the likes of Opera. It also came with its own cryptocurrency, called BAT (or Basic Attention Token), allowing users to reimburse the sites and creators they like.
Essentially, Brave wants to re-imagine how the Internet works: not just on a usability level, but on an economic level. It’s an undeniably radical vision, but you wouldn’t expect any less, given its founding team.

Monday, November 25, 2019

Monopolizing our privacy? Not sure I buy (or completely understand) these arguments.
Ben Brody reports:
Antitrust authorities probing Facebook Inc. and Alphabet Inc.’s Google have struggled with scrutinizing companies whose products are popular and free. Now they may have a solution: Use privacy as a test.
As the U.S. Justice Department, Federal Trade Commission, Congress and the states investigate whether internet companies are flouting antitrust laws, academics and even some regulators are pushing to go beyond the traditional focus on price as a determinant of harm. Enforcers, they say, should also consider privacy lapses as a proxy for anti-competitive behavior.
Read more on Bloomberg
[From the article:
Their legal reasoning goes like this: Monopolists generally stop innovating, let product quality slip and treat customers poorly, knowing no competitor has the ability to grab market share. Repeated privacy lapses can be a sign that a company -- Facebook is often cited as a prime example -- has let product quality and customer service slip, knowing its social-media dominance is unassailable.

Because 4% isn’t enough.
UK Data Protection Watchdog Asks for Seizure Powers
For most businesses 4% of annual global turnover is indeed a significant amount. However in a world of Googles and Facebooks, many have questioned whether even these larger fines could be absorbed as a “cost of doing business” with little or no deterrent effect. Nonetheless with data protection authorities across Europe flexing their muscles, the increase in potential fines seemed, for now, sufficient.
But this week (8 November) the British watchdog, the Information Commissioner’s Office (ICO) said that it wanted further powers to seize assets – including data – under the Proceeds of Crime Act 2002 (POCA).
The proposed new rules would only apply in the case of criminal offenses, which are recordable under the current data protection law. But the only sanction available to the courts is a fine. Similar to the “cost of doing business” scenario noted above, criminals could shrug this off as the fine is likely to be much less than the financial gains made by the offender. “This will inevitably lead to a greater disparity between the deterrent and punitive effects of sanctions imposed in relation to civil breaches and criminal offences,” said the ICO.

Someone my students should follow.
Stephen Wolfram on the future of programming and why we live in a computational universe
The British-born computer scientist's life is littered with exceptional achievements -- completing a PhD in theoretical physics at Caltech at age 20, winning a MacArthur Genius Grant at 21, and creating the technical computing platform Mathematica (which is used by millions of mathematicians, scientists, and engineers worldwide), plus the Wolfram Language, and the Wolfram|Alpha knowledge engine.
For all his other achievements, Wolfram is probably best known for launching Wolfram|Alpha, the computational knowledge engine that underpins Apple's Siri digital assistant's ability to answer questions from "What's the tallest building in the US?" to "How many days until Christmas?".

Perspective. Architecture for AI.
8 ways to prepare your data center for AI’s power draw
For data centers running typical enterprise applications, the average power consumption for a rack is around 7 kW. Yet it’s common for AI applications to use more than 30 kW per rack, according to data center organization AFCOM. That’s because AI requires much higher processor utilization, and the processors – especially GPUs – are power hungry. Nvidia GPUs, for example, may run several orders of magnitude faster than a CPU, but they also consume twice as much power per chip. Complicating the issue is that many data centers are already power constrained.
Cooling is also an issue: AI-oriented servers require greater processor density, which means more chips crammed into the box, and they all run very hot. Greater density, along with higher utilization, increases the demand for cooling as compared to a typical back-office server. Higher cooling requirements in turn raise power demands.
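The arithmetic behind the squeeze is worth making explicit. Assuming a hypothetical facility with a fixed 600 kW budget for IT load (the per-rack figures come from the article):

```python
facility_power_kw = 600   # hypothetical power budget for IT load
typical_rack_kw = 7       # average enterprise rack (per AFCOM, above)
ai_rack_kw = 30           # typical AI rack

typical_racks = facility_power_kw // typical_rack_kw
ai_racks = facility_power_kw // ai_rack_kw

print(typical_racks)  # 85 racks of ordinary enterprise gear
print(ai_racks)       # 20 racks of AI gear -- over 4x fewer in the same budget
```

And that is before cooling: the extra heat from denser racks eats further into the same power budget.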

Gartner: Cloud computing revenues to jump in coming years
Public cloud-services technology revenues are projected to grow by more than 50 percent worldwide in the next three years, to about $355 billion in 2022, according to a new report from IT consulting and research firm Gartner.
Cloud-application services, also known as software-as-a-service, would remain by far the largest segment of the cloud-computing market. Its predicted returns would surge by more than 50 percent in the next three years, to approximately $151 billion in 2022, reflecting companies’ ability to scale up their use of such subscription-based software.
Cloud-system infrastructure services, also known as infrastructure-as-a-service, would see their revenues nearly double, to about $74 billion, by 2022, Gartner projected. The firm attributes the growth to the demands of modern applications and workloads, which they say require infrastructure that traditional data centers cannot meet.

Before the music dies…
Internet Archive and Boston Public Library to digitize and preserve over 100,000 vinyl LPs
Internet Archive Blogs – “Imagine if your favorite song or nostalgic recording from childhood was lost forever. This could be the fate of hundreds of thousands of audio files stored on vinyl, except that the Internet Archive is now expanding its digitization project to include LPs. Earlier this year, the Internet Archive began working with the Boston Public Library (BPL) to digitize more than 100,000 audio recordings from their sound collection. The recordings exist in a variety of historical formats, including wax cylinders, 78 rpms, and LPs. They span musical genres including classical, pop, rock, and jazz, and contain obscure recordings like this album of music for baton twirlers, and this record of radio’s all-time greatest bloopers. Unfortunately, many of these audio files were never translated into digital formats and are therefore locked in their physical recording. In order to prevent them from disappearing forever when the vinyl is broken, warped, or lost, the Internet Archive is digitizing these at-risk recordings so that they will remain accessible for future listeners…” [This work is made possible by the Music Modernization Act ]
[From the Internet Archive Blog:
Currently, there are more than 900 LPs from the Boston Public Library LP collection available on