Saturday, August 10, 2019


Another one I’m not going to like. Do we all agree on what must be censored? Define bias.
White House proposal would have FCC and FTC police alleged social media censorship
A draft executive order from the White House could put the Federal Communications Commission in charge of shaping how Facebook, Twitter and other large tech companies curate what appears on their websites, according to multiple people familiar with the matter.
The draft order, a summary of which was obtained by CNN, calls for the FCC to develop new regulations clarifying how and when the law protects social media websites when they decide to remove or suppress content on their platforms. Although still in its early stages and subject to change, the Trump administration's draft order also calls for the Federal Trade Commission to take those new policies into account when it investigates or files lawsuits against misbehaving companies. Politico first reported the existence of the draft.
If put into effect, the order would reflect a significant escalation by President Trump in his frequent attacks against social media companies over an alleged but unproven systemic bias against conservatives by technology platforms. And it could lead to a significant reinterpretation of a law that, its authors have insisted, was meant to give tech companies broad freedom to handle content as they see fit.
The Trump administration's proposal seeks to significantly narrow the protections afforded to companies under Section 230 of the Communications Decency Act, a part of the Telecommunications Act of 1996. Under the current law, internet companies are not liable for most of the content that their users or other third parties post on their platforms. Tech platforms also qualify for broad legal immunity when they take down objectionable content, at least when they are acting "in good faith."


(Related?) Would this redefine a “Clear and present danger” test?
White House questions tech giants on ways to predict shootings from social media
Top officials in the Trump administration expressed interest in tools that might anticipate mass shootings or predict attackers by scanning social media posts, photos and videos during a meeting Friday with tech giants including Facebook, Google and Twitter.
In response, though, tech leaders expressed doubt that such technology is feasible, while raising concerns about the privacy risks that such a system might create for all users, two of the sources said.




Coming soon to your neighborhood.
Ring, the smart doorbell home security system Amazon bought for over $1 billion last year, is involved in some fairly unnerving arrangements with local law enforcement agencies. Wouldn’t you like to know if the cops in your town are among them?
That’s precisely what Shreyas Gandlur, an incoming senior studying electrical engineering at the University of Illinois at Urbana-Champaign, put together, using Amazon’s own demands for narrative control over the law enforcement agencies it works with to help build an interactive map:
… Where Ring is concerned, FFTF’s map only includes about 50 cities, a far cry from the “more than 225” police departments reported by Gizmodo late last month. (Ring has declined to share the exact figure.) Finding the rest was, in a sense, trivial.
“Ring pre-writes almost all of the messages shared by police across social media, and attempts to legally obligate police to give the company final say on all statements about its products,” my colleague Dell Cameron wrote, a detail Gandlur seized on.
“I added a bunch of agencies I found by literally searching ‘excited to join neighbors by ring’ on Twitter and searching similar phrases on Google,” Gandlur said. “Nothing too complicated and it’s pretty funny that Ring controlling the content of police press releases came to my aid since basically every agency releases the same statement.” If Ring hoped to obfuscate which towns were using it for surveillance purposes, it clearly failed.
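A rough sketch of how such a list might be assembled (my own illustration, not Gandlur’s actual code; the sample posts, regex, and output file are invented): collect the text of public posts that match one of Ring’s boilerplate phrases, pull out the agency name that varies between them, and write the result to a CSV that a mapping tool can geocode.

import csv
import re

SEARCH_PHRASE = "excited to join neighbors by ring"

# Hypothetical input: text of posts gathered from a manual Twitter/Google search.
posts = [
    "The Anytown Police Department is excited to join Neighbors by Ring!",
    "Excited to join Neighbors by Ring - Springfield Sheriff's Office",
]

# Because Ring pre-writes these announcements, the agency name is usually the
# only part that varies, so a crude pattern is often enough to extract it.
AGENCY_PATTERN = re.compile(r"([A-Z][\w'&. ]+(?:Police Department|Sheriff'?s Office))")

agencies = set()
for post in posts:
    if SEARCH_PHRASE.lower() in post.lower():
        match = AGENCY_PATTERN.search(post)
        if match:
            agencies.add(match.group(1).strip())

# Write the agency list out; a mapping tool can geocode and plot it later.
with open("ring_agencies.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["agency"])
    for agency in sorted(agencies):
        writer.writerow([agency])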




AI and the GDPR
The Right to Human Intervention: Law, Ethics and Artificial Intelligence
The paper analyses the new right of human intervention in the use of information technology, automatization processes and advanced algorithms in individual decision-making activities. Art. 22 of the new General Data Protection Regulation (GDPR) provides that the data subject has the right not to be subject to a fully automated decision on matters of legal importance to her interests; hence the data subject has a right to human intervention in this kind of decision.
[From the Conclusion]
As may be clarified, human intervention does not always lessen the danger of discrimination, and technology can prevent bias, providing not only privacy but also fairness by design. This can be achieved through the application of the principle of justice when it comes to algorithms, which will prevent discrimination. We need not only human intervention but also algorithmic neutrality, or 'correct' policy-directed algorithms, since even with human intervention, unfair factors may inappropriately affect decisions.




How AI thinks.
Causal deep learning teaches AI to ask why
Most AI runs on pattern recognition, but as any high school student will tell you, correlation is not causation. Researchers are now looking at ways to help AI fathom this deeper level.
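A toy illustration of that point (my own example, not from the article): ice-cream sales and sunburn cases both depend on temperature, so a naive regression finds a strong “effect” of one on the other that disappears once the hidden common cause is accounted for.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

temperature = rng.normal(25, 5, n)                   # hidden common cause
ice_cream = 2.0 * temperature + rng.normal(0, 2, n)  # caused by temperature
sunburn = 0.5 * temperature + rng.normal(0, 2, n)    # also caused by temperature

# A naive regression of sunburn on ice-cream sales finds a strong "effect" ...
naive_slope = np.polyfit(ice_cream, sunburn, 1)[0]

# ... but once temperature is included as a regressor, the ice-cream
# coefficient collapses toward zero, matching the true causal structure.
X = np.column_stack([ice_cream, temperature, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, sunburn, rcond=None)

print(f"naive slope: {naive_slope:.3f}")           # roughly 0.24
print(f"adjusted coefficient: {coef[0]:.3f}")      # roughly 0.00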



Friday, August 09, 2019


“Hey! It’s the law!” I can see we need to discuss procedure…
Black Hat: GDPR privacy law exploited to reveal personal data
About one in four companies revealed personal information to a woman's partner, who had made a bogus demand for the data by citing an EU privacy law.
The security expert contacted dozens of UK and US-based firms to test how they would handle a "right of access" request made in someone else's name.
In each case, he asked for all the data that they held on his fiancee.
In one case, the response included the results of a criminal activity check.
Other replies included credit card information, travel details, account logins and passwords, and the target's full US social security number.
… "Generally if it was an extremely large company - especially tech ones - they tended to do really well," he told the BBC.
"Small companies tended to ignore me.
… Mr Pavur has, however, named some of the companies that he said had performed well.
He said they included:
  • the supermarket Tesco, which had demanded a photo ID
  • the domestic retail chain Bed Bath and Beyond, which had insisted on a telephone interview
  • American Airlines, which had spotted that he had uploaded a blank image to the passport field of its online form
An accompanying letter said that under GDPR, the recipient had one month to respond.
It added that he could provide additional identity documents via a "secure online portal" if required. This was a deliberate deception since he believed many businesses lacked such a facility and would not have time to create one.
The idea, he said, was to replicate the kind of attack that could be carried out by someone starting with just the details found on a basic LinkedIn page or other online public profile.


(Related) Or, you could buy a canned procedure. There is probably a lot of money waiting for anyone who can make all this privacy stuff work.
Securiti.ai Raises $31 Million Series A To Help Companies Comply With California Consumer Privacy Act
As companies scramble to meet the data transparency requirements mandated by the California Consumer Privacy Act (CCPA) or face hefty fines, a San Jose-based company has put forth a solution that’s at the intersection of security and regulatory operations. Newly launched Securiti.ai
Under CCPA, consumers can request all personal information stored by a company, have their data deleted, learn how their information was used and opt-out of having their information shared with third parties. The law, which goes into effect on January 1, 2020, applies to California-based companies and those that serve California consumers.
Manually complying with an influx of consumer requests can be impractical if not impossible, and that’s if companies know all the places their consumers’ data lives. That’s where Securiti.ai comes in, Jalil says.
“The first thing we had to crack was to not only discover the data that belongs to a particular consumer but find the owner of the data,” Jalil says. Securiti.ai’s platform uses an artificial intelligence-enabled chatbot to retrieve consumer data.
… “CCPA in California is the very first regulation, but there are 15 others coming in North America alone and there are 30-plus globally,” Jalil said. “Privacy ops will allow companies to comply with one assessment.”
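A generic sketch of the discovery step described above, finding every record tied to one consumer across several systems so it can be exported or deleted (my own illustration, not Securiti.ai’s product; the store names, fields, and sample rows are invented):

# Toy stand-ins for real systems (CRM, billing, support tickets, ...).
DATA_STORES = {
    "crm":     [{"id": "c-1", "email": "pat@example.com", "name": "Pat"}],
    "billing": [{"invoice": "i-9", "email": "pat@example.com", "amount": 120}],
    "support": [{"ticket": "t-3", "email": "alex@example.com", "subject": "login"}],
}

def discover(identifier):
    """Return (store, record) pairs whose email field matches the consumer."""
    hits = []
    for store, rows in DATA_STORES.items():
        for row in rows:
            if row.get("email") == identifier:
                hits.append((store, row))
    return hits

# An access or deletion request would then be fulfilled from this inventory.
for store, record in discover("pat@example.com"):
    print(store, record)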


(Related)
German court decides on the scope of GDPR right of access
In a previous post, this blog reported on German guidance on the scope of the right of access under Art. 15 of the GDPR and in particular on the right to receive a copy. The Supervisory Authority of Hesse region stated that the term “copy” in Art 15 GDPR should not be understood literally but rather in the sense of a “summary”.
This somewhat relaxed interpretation appears to conflict with an earlier decision of the Labor Appeals Court of Stuttgart which ordered an employer to provide actual copies of all information held by the company regarding an employee’s performance and behavior to that employee.
More recently, the Appeal Court of Cologne held that the customer of an insurance company is entitled to access all personal data pertaining to him and processed by the company, including any internal notes regarding conversations between company employees and the customer. The company argued that it was impracticable to compile the information due to the large amounts of customer information processed by it. The court was unimpressed, stating that the company was compelled to adapt its IT systems to the requirements of the GDPR.
These first court decisions on Art. 15 of the GDPR confirm that the right of access is becoming a powerful tool in litigation. Germany’s code of civil procedure does not provide for a general right to discovery. The right of access could make up for this and significantly affect outcomes in civil and labor law cases.




Should we trust vendor promises?
Exclusive: Critical U.S. Election Systems Have Been Left Exposed Online Despite Official Denials
The top voting machine company in the country insists that its election systems are never connected to the internet. But researchers found 35 of the systems have been connected to the internet for months and possibly years, including in some swing states.
… “We ... discovered that at least some jurisdictions were not aware that their systems were online,” said Kevin Skoglund, an independent security consultant who conducted the research with nine others, all of them long-time security professionals and academics with expertise in election security.




My AI says she can do it in three years.
A 20-Year Community Roadmap for AI Research in the US is Released
The Computing Community Consortium (CCC) is pleased to release the completed Artificial Intelligence (AI) Roadmap, titled A 20-Year Community Roadmap for AI Research in the US – An HTML version is available here. This roadmap is the result of a year-long effort by the CCC and over 100 members of the research community, led by Yolanda Gil (University of Southern California and President of AAAI) and Bart Selman (Cornell University and President-Elect of AAAI). Comments on a draft report of this roadmap were requested in May 2019. Thank you to everyone in the community who participated in workshops, helped write the report, submitted comments, and edited drafts. Your input and expertise helped make this roadmap extremely comprehensive. From the Roadmap – Major Findings:
I – Enabled by strong algorithmic foundations and propelled by the data and computational resources that have become available over the past decade, AI is poised to have profound positive impacts on society and the economy.
II – To realize the potential benefits of AI advances will require audacious AI research, along with new strategies, research models, and types of organizations for catalyzing and supporting it.
III – The needs and roles of academia and industry, and their interactions, have critically important implications for the future of AI.
IV – Talent and workforce issues are undergoing a sea change in AI, raising significant challenges for developing the talent pool and for ensuring adequate diversity in it.
V – The rapid deployment of AI-enabled systems is raising serious questions and societal challenges encompassing a broad range of capabilities and issues.
VI – Significant strategic investments in AI by the United States will catalyze major scientific, technological, societal, and economic progress…”




For a discussion of Big Data and analysis. If Zillow notes an undervalued house in an area where prices are rising, why not jump on it?
Zillow Is Buying And Selling Lots Of Homes And It’s Almost Half Its Business Now
BuzzFeedNews – Zillow made more than 40% of its revenue last quarter from selling homes: “Zillow, the real estate search and advertising platform, has gotten into the house-flipping business in a big way. That means the company earned about 41.5% of its revenue from selling homes in the three months ending June 30, according to its most recent earnings report. Zillow made $599.6 million in revenue last quarter, $248.9 million of which came from its Homes segment, which refers to the “buying and selling of homes directly through the Zillow Offers service,” which it kicked off in 2018. Zillow is now buying thousands of properties, investing in minor repairs, and then selling them — essentially flipping houses — in 15 markets around the country, with plans to be in 26 markets by mid-2020. It collects a fee from the seller with each of these transactions. The company sold 786 homes and bought 1,535 homes from April to June…”
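For what it’s worth, the 41.5% figure follows directly from the two revenue numbers quoted above:

# Homes-segment revenue as a share of Zillow's total Q2 2019 revenue
# (both figures from the article, in millions of dollars).
homes_revenue = 248.9
total_revenue = 599.6
print(f"{homes_revenue / total_revenue:.1%}")   # -> 41.5%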




Reminding my students that “big” does not equal “profitable.”
Uber lost over $5 billion in one quarter, but don’t worry, it gets worse
The ride-hailing giant reported losing a whopping $5.2 billion in the last three months.
Lyft, which reported its earnings Wednesday, fared better but still posted a loss of $644 million during the quarter.




For my geeks…
IBM Research launches explainable AI toolkit
IBM Research today introduced AI Explainability 360, an open source collection of state-of-the-art algorithms that use a range of techniques to explain AI model decision-making.
The launch follows IBM’s release a year ago of AI Fairness 360 for the detection and mitigation of bias in AI models.
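The AI Explainability 360 APIs themselves aren’t reproduced here; as a minimal, library-agnostic sketch of the kind of post-hoc explanation such toolkits automate, the snippet below computes permutation importance from scratch for a scikit-learn model (dataset and model chosen only for illustration).

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Shuffle one feature at a time; the drop in accuracy estimates how much the
# model's decisions depend on that feature.
rng = np.random.default_rng(0)
drops = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])
    drops.append(baseline - model.score(X_perm, y_test))

for j in np.argsort(drops)[::-1][:5]:
    print(f"feature {j}: accuracy drop {drops[j]:.3f}")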



Thursday, August 08, 2019


I think we should be charging more for my Computer Security classes.
Cybersecurity Pros Name Their Price as Hacker Attacks Swell
It took a $650,000 salary for Matt Comyns to entice a seasoned cybersecurity expert to join one of America’s largest companies as chief information security officer in 2012. At the time, it was among the most lucrative offers out there.
This year, the company had to pay $2.5 million to fill the same role.
In the 12 months ended August 2018, there were more than 300,000 unfilled cybersecurity jobs in the U.S., according to CyberSeek, a project supported by the National Initiative for Cybersecurity Education. Globally, the shortage is estimated to exceed 1 million in coming years, studies have shown.




No doubt the FBI is hoping this will establish a precedent they can point to.
Whatsapp Is Fighting To Keep Millions Of Users Untraceable
WhatsApp, the encrypted messaging service that has built a 400 million strong user base in India, is squaring off in a Tamil Nadu courthouse in a case that could force the company to weaken its privacy protections. The Madras high court recently began hearing a case filed by two petitioners asking the court to force people to link their WhatsApp accounts to their Aadhaar, India’s controversial biometric ID number for nearly all of the country’s 1.4 billion residents.
The case — the first in the country to consider traceability in social media — could set legal precedent for all tech companies operating in India. Privacy experts fear the case is a convenient opportunity for India’s nationalist government to force platforms to become surveillance tools.




How many people will allow Walmart (or Amazon) into their homes?
Walmart’s new wireless bridge device looks to take on Amazon Key
When it comes to the smart home, maybe the partnership with Google isn’t enough for Walmart anymore. According to an FCC filing, the retailer has submitted an application for a Wi-Fi-to-Z-wave bridge product that links to a Z-wave garage door opener. The application was filed by Project Franklin LLC on behalf of Walmart. So what is this device and what is Project Franklin?
Answering the first question is relatively easy. According to the FCC filing, the device is a Wi-Fi bridge that will connect to a Nortek Z-wave-enabled garage door opener. What’s interesting is the user manual for the product states that the product should be installed professionally, and offers a phone number to call if there are issues. The number dials Walmart’s InHome customer care line, which is Walmart’s grocery delivery service.




Could we fine or imprison an AI?
When Robots Make Legal Mistakes
Morse, Susan C., When Robots Make Legal Mistakes (July 22, 2019). Oklahoma Law Review, Vol. 72, 2019. Available at SSRN: https://ssrn.com/abstract=3424110
“The questions presented by robots’ legal mistakes are examples of the legal process inquiry that asks when the law will accept decisions as final, even if they are mistaken. Legal decision-making robots include market robots and government robots. In either category, they can make mistakes of undercompliance or overcompliance. A market robot’s overcompliance mistake or a government robot’s undercompliance mistake is unlikely to be challenged. On the other hand, government enforcement can challenge a market robot’s undercompliance mistake, and an aggrieved regulated party can object to a government robot’s overcompliance mistake. Especially if robots cannot defend their legal decisions due to a lack of explainability, they will have an incentive to make decisions that will avoid the prospect of challenge. This incentive could encourage counterintuitive results. For instance, it could encourage market robots to overcomply and government robots to undercomply with the law.”




So easy even Mark Zuckerberg can do it.
AI Ethics Guidelines Every CIO Should Read
You don't need to come up with an AI ethics framework out of thin air. Here are five of the best resources to get technology and ethics leaders started.
Future of Life Institute Asilomar AI Principles
Developed in conjunction with the 2017 Asilomar conference, this list of principles has been universally cited as a reference point by all other AI ethics frameworks and standards introduced since it was published.




Another viewpoint. (Colorado author)
The Ethics of Artificial Intelligence in the Workplace
According to statistics from Adobe, only 15 percent of enterprises are using AI as of today, but 31 percent are expected to add it over the coming 12 months, and the share of jobs requiring AI has increased by 450 percent since 2013.
As an increasing number of AI enabled devices are developed and utilized by consumers and enterprises around the globe, the need to keep those devices secure has never been more important. AI’s increasing capabilities and utilization dramatically increase the opportunity for nefarious uses. Consider the dangerous potential of autonomous vehicles and weapons like armed drones falling under the control of bad actors.
As a result of this peril, it has become crucial that IT departments, consumers, business leaders and the government, fully understand cybercriminal strategies that could lead to an AI-driven threat environment. If they don’t, maintaining the security of these traditionally insecure devices and protecting an organization’s digital transformation becomes a nearly impossible endeavor.
How can we ensure safety for a technology that is designed to learn how to modify its own behavior? Developers can’t always determine how or why AI systems take various actions, and this will likely only grow more difficult as AI consumes more data and grows exponentially more complex.




A future resource?
NHS to set up national artificial intelligence lab
The Health Secretary, Matt Hancock, said AI had "enormous power" to improve care, save lives and ensure doctors had more time to spend with patients.
He has announced £250m will be spent on boosting the role of AI within the health service.
Increasing the use of AI will also pose challenges for the health service - from training staff to enhancing cyber-security and ensuring patient confidentiality.
The other challenge with an AI is it can only ever be as good as the data it learns from.



Wednesday, August 07, 2019


Slick! Uses far less gas than wardriving.
With warshipping, hackers ship their exploits directly to their target’s mail room
This newly named technique — dubbed “warshipping” — is not a new concept. Just think of the traditional Trojan horse rolling into the city of Troy, or when hackers drove up to TJX stores and stole customer data by breaking into the store’s Wi-Fi network. But security researchers at IBM’s X-Force Red say it’s a novel and effective way for an attacker to gain an initial foothold on a target’s network.
“It uses disposable, low cost and low power computers to remotely perform close-proximity attacks, regardless of the cyber criminal’s location,” wrote Charles Henderson, who heads up the IBM offensive operations unit.
… “Once we see that a warship has arrived at the target destination’s front door, mailroom or loading dock, we are able to remotely control the system and run tools to either passively, or actively, attack the target’s wireless access,” wrote Henderson.




We love our employees even as we surveil the heck out of them!
How Technology Transformed Insider Fraud – and How New Technology Is Fighting Back
In criminal cases, investigators home in on suspects by ascertaining who had the means, motive, and opportunity to perpetrate the crime. By that tripartite standard, it shouldn’t be surprising that occupational fraud – fraud carried out by company employees, executives, and other insiders – outranks virtually all other forms of fraud faced by modern organizations.
Technology may be one of the great enablers of insider fraud – but paradoxically, it’s also indispensable to combating it.
Here’s a look at how insider fraud has evolved, and how technology has guided its evolution.




You only need to worry when there is a microphone involved. Or a camera. Or an Internet connection.
Revealed: Microsoft Contractors Are Listening to Some Skype Calls
Contractors working for Microsoft are listening to personal conversations of Skype users conducted through the app's translation service, according to a cache of internal documents, screenshots, and audio recordings obtained by Motherboard. Although Skype's website says that the company may analyze audio of phone calls that a user wants to translate in order to improve the chat platform's services, it does not say some of this analysis will be done by humans. [Are we assuming AI now? Bob]




Because it does exactly what you ask it to do?
6 reasons why AI projects fail
Eighteen months ago, Mr. Cooper launched an intelligent recommendation system for its customer service agents to suggest solutions to customer problems. The company, formerly known as Nationstar, is the largest non-bank mortgage provider in the U.S., with 3.8 million customers, so the project was viewed as a high-profile cost-saver for the company. It took nine months to figure out that the agents weren't using it, says CIO Sridhar Sharma. And it took another six months to figure out why.
The recommendations the system was offering weren't relevant, Sharma found, but the problem wasn't in the machine learning algorithms. Instead, the company had relied on training data based on technical descriptions of customer problems rather than how customers would describe them in their own words.
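A toy demonstration of that failure mode (invented data, not Mr. Cooper’s system): a model trained on agents’ technical descriptions shares almost no vocabulary with the way customers phrase the same problems, so its suggestions come back effectively at random.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data written in agents' technical phrasing.
train_texts = [
    "escrow disbursement discrepancy on hazard insurance",
    "ACH autopay enrollment failure",
    "payoff quote request, per diem interest calculation",
    "escrow analysis shortage spread over 12 months",
]
train_labels = ["escrow", "payments", "payoff", "escrow"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Queries in the customers' own words share almost no terms with the training
# set, so the tf-idf features are mostly zeros and the predictions are weak.
customer_queries = [
    "why did my monthly bill go up",           # really an escrow question
    "my automatic payment didn't go through",  # really a payments question
]
print(model.predict(customer_queries))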




Free is good!
Millions of Books Are Secretly in the Public Domain. You Can Download Them Free
Vice – A quirk of copyright law means that millions of books are now free for anyone to read, thanks to some work from the New York Public Library: “Prior to 1964, books had a 28-year copyright term. Extending it required authors or publishers to send in a separate form, and lots of people didn’t end up doing that. Thanks to the efforts of the New York Public Library, many of those public domain books are now free online. Through the 1970s, the Library of Congress published the Catalog of Copyright Entries, all the registration and renewals of America’s books. The Internet Archive has digital copies of these, but computers couldn’t read all the information, and figuring out which books were public domain, and thus could be uploaded legally, was tedious. The actual, extremely convoluted specifics of why these books are in the public domain are detailed in a post by the New York Public Library, which recently paid to parse the information in the Catalog of Copyright Entries. In a massive undertaking, the NYPL converted the registration and copyright information into an XML format. Now, the old copyrights are searchable and we know when, and if, they were renewed. Around 80 percent of all the books published from 1923 to 1964 are in the public domain, and lots of people had no idea until now…”
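With the catalog in XML, checking renewal status becomes a mechanical pass over the records. The sketch below uses a made-up, simplified layout purely to illustrate the idea; the real NYPL files follow their own schema.

import xml.etree.ElementTree as ET

SAMPLE = """
<entries>
  <book regnum="A123456" year="1950" title="Example Novel">
    <renewal regnum="R654321" year="1977"/>
  </book>
  <book regnum="A234567" year="1951" title="Forgotten Memoir"/>
</entries>
"""

root = ET.fromstring(SAMPLE)
for book in root.iter("book"):
    renewed = book.find("renewal") is not None
    status = "renewed (still under copyright)" if renewed else "likely public domain"
    print(f'{book.get("title")} ({book.get("year")}): {status}')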



Tuesday, August 06, 2019


A challenge for my Computer Security students.
Connected Cars Could be a Threat to National Security, Group Claims
The cyber threat to connected cars (cars with a connection to the internet) is known and accepted. Now Los Angeles-based Consumer Watchdog (CW) has elevated that threat to one of national security in a new report titled, "Kill Switch: Why Connected Cars Can be Killing Machines and How to Turn Them Off."
CW claims to have talked to an unnamed but concerned group "of car industry technologists and engineers" in compiling its report (PDF).




Who would you like to win and by how much?
The scramble to secure America’s voting machines
Paperless voting devices are a gaping weakness in the patchwork U.S. election system, security experts say. But among these 14 states and their counties, efforts to replace these machines are slow and uneven, a POLITICO survey reveals.




Not “raised” in the Venture Capital sense, more in the Bonnie & Clyde sense.
UN Report: North Korea Cyber Experts Raised Up to $2 Billion
A panel monitoring U.N. sanctions says North Korean cyber experts have illegally raised money for the country’s weapons of mass destruction programs “with total proceeds to date estimated at up to $2 billion.”
The experts said in a new report to the Security Council that North Korea is using cyberspace “to launch increasingly sophisticated attacks to steal funds from financial institutions and cryptocurrency exchanges to generate income” in violation of sanctions.
The experts’ report, seen Monday by The Associated Press, said large-scale attacks against cryptocurrency exchanges by North Korea allow the country “to generate income in ways that are harder to trace and subject to less government oversight and regulation than the traditional banking sector.”
North Korea also continues to have access to the global financial system, “through bank representatives and networks operating worldwide” as a result of “deficiencies” by U.N. member states in implementing financial sanctions and Pyongyang’s “deceptive practices,” the experts said.




When $5,000,000,000 is chicken feed.
EPIC Privacy Group to Challenge $5 Billion FTC-Facebook Settlement
Perhaps it was inevitable. As soon as the FTC announced that it was letting social media giant Facebook off the hook with what now appears to be an incredibly lenient penalty of $5 billion, privacy advocates went into overdrive, challenging the fairness and adequacy of the Facebook settlement. Within days of the announced FTC-Facebook settlement, consumer privacy group EPIC (Electronic Privacy Information Center) filed a motion with a federal district court in Washington, asking it to intervene in the settlement.




“Banking for the un-banked” is a big market.
Privacy Watchdogs Warn Facebook Over Libra Currency
Global privacy regulators joined forces Tuesday to demand guarantees from Facebook on how it will protect users' financial data when it launches its planned cryptocurrency, Libra.
The watchdogs from Australia, the US, EU, Britain, Canada and other countries issued an open letter calling on Facebook to respond to more than a dozen concerns over how it will handle sensitive personal information of users of the digital currency.
The watchdogs said that Facebook and its subsidiary Calibra "have failed to specifically address the information handling practices that will be in place to secure and protect personal information".
Facebook's handling of user data, highlighted by the Cambridge Analytica scandal, had "not met the expectations of regulators or their own users", they said.




My students tell me they VPN to EU servers so they can access US resources without tracking. Are we seeing a flight to stronger Privacy laws?
Twitter users are escaping online hate by switching profiles to Germany, where Nazism is illegal
CNBC:
    • Seeking to shield themselves from online hatred, some Twitter users say they’ve switched their account locations to Germany where local laws prevent pro-Nazi content.
    • While German laws make it harder for explicitly hateful content to remain online, local researchers say it is not a hate-free internet utopia.
    • Germany has imposed stricter laws on social media companies about content moderation as some conservative American lawmakers have accused the companies of showing bias in their content removal decisions…”




Will we reach a point where “everything has already been invented?”
Two AI-led inventions poke at future of patent law
A University of Surrey-based team have filed the first patent applications for inventions created by a machine. Applications were made to the US, EU and UK patent offices; they are for a machine using artificial intelligence as the inventor of two ideas for a beverage container and a flashing light.
Media's attention toward this move resonates with last year's prediction by Baker McKenzie that "Patentability of AI-created inventions, liability for infringement by AI, and patent subject-matter eligibility of AI technologies are the top three areas of patent law that will be disrupted by AI."
There is now a site for a project focused on intellectual property rights and the output of artificial intelligence. This is the Artificial Inventor Project, and it can clarify why the topic of AI and patents is so relevant today.




Mr Zillman collects very complete (huge) and useful lists.
Healthcare Bots and Subject Directories 2019
This guide focuses on a wide range of selected resources from health sciences, technology, academic, government and genetic research sectors, identifying traditional, complementary and alternative sources to execute expert healthcare related subject matter searches. It is divided into three categories: 1) Search Engines and Selected Bots; 2) Directories, Subject Trees and Subject Tracers; and 3) Health Forums Online for Expert Support.




My students seem divided on the digital vs paper bit. (But agree, cheaper is better.)
The Radical Transformation of the Textbook
Wired – Digital-first. Open source. Subscription. The way textbooks are bought and sold is changing—with serious implications for higher education: “For several decades, textbook publishers followed the same basic model: Pitch a hefty tome of knowledge to faculty for inclusion in lesson plans; charge students an equally hefty sum; revise and update its content as needed every few years. Repeat. But the last several years have seen a shift at colleges and universities—one that has more recently turned tectonic. In a way, the evolution of the textbook has mirrored that in every other industry. Ownership has given way to rentals, and analog to digital. Within the broad strokes of that transition, though, lie divergent ideas about not just what learning should look like in the 21st century but how affordable to make it…
“The major publishers are publicly traded companies, under pressure to demonstrate constant growth. Pearson’s digital-first strategy is a significant step toward a more sustainable business model. Under the new system, ebooks will cost an average of $40. Those who prefer actual paper can pay $60 for the privilege of a rental, with the option to purchase the book at the end of the term. The price of a new print textbook can easily reach into the hundreds of dollars; under digital-first, students have to actively want to pay that much after a course is already over, making it an unlikely option for most…”