Saturday, March 14, 2020


A precedent for Clearview?
LinkedIn Appeals Important CFAA Ruling Regarding Scraping Public Info Just As Concerns Raised About Clearview
Last fall we were happy to see the 9th Circuit rule against LinkedIn in its CFAA case against HiQ. If you don't recall, the CFAA is the "anti-hacking" law that has been widely abused over the years to try to shut down perfectly reasonable activity. At issue is whether "scraping" information violates a site's terms of service, and thus the CFAA. A few years back, the same court ruled in favor of Facebook against Power Ventures, saying that even though Power's users gave permission to Power and handed over their login credentials, Power was violating the CFAA in scraping Facebook, because the information was behind a registration wall, and because Facebook had sent a cease-and-desist.
In the HiQ case, despite what seemed to be a similar fact pattern, the court ruled against LinkedIn, saying it could not block HiQ's scraping via a CFAA claim, with the main "difference" being that LinkedIn information was publicly viewable, and therefore should be open to scraping.
Of course, one thing that's notable since the 9th Circuit ruling came down is all the attention Clearview AI has received over the last few months for its frightening facial recognition app, built by scraping "public" social media images and profiles. This use of scraping has convinced some, even some who seemed to support the HiQ ruling, that perhaps there should be limits on scraping. I think that's a kneejerk reaction that focuses too narrowly on the wrong issue. The problem is not scraping itself, but the specific use of the data as an attack on privacy that reaches well beyond the internet (i.e., tracking and identifying people out in the real world). Better to focus on that use than to treat it as an argument against scraping generally.




A good ‘bad example?’
Sunshine Behavioral Health Group Faces Class Action Under CCPA After Data Breach Affecting 3,500 Patients
Linn F. Freedman of Robinson & Cole LLP writes that Sunshine Behavioral Health Group is facing a potential class action lawsuit. The case is Fuentes v. Sunshine Behavioral Health Group LLC, filed this week in the Central District of California. It is drawing attention because it is one of the first suits filed under the new California Consumer Privacy Act (CCPA). As Freedman explains, if the plaintiff can show he was injured and that the injury was due to the defendant violating the law, he might survive a motion to dismiss.
The plaintiff, Hector Fuentes, claims that since the data breach, which the complaint alleges began on March 1, 2017:
someone has attempted to fraudulently open a credit card in Mr. Fuentes’ name. Since the Data Breach, Mr. Fuentes has begun receiving magazine subscriptions in his name that he did not purchase and receiving invoices for those magazine subscriptions. Since learning of the Data Breach, Mr. Fuentes has become worried that he will become a victim of identity theft or other fraud which is causing him stress and anxiety. Since learning of the Data Breach, Mr. Fuentes has spent in excess of 10 hours of his own time trying to make sure he has not and does not become victimized because of the Data Breach.
So Fuentes is alleging damages, and claims that the damages were due to Sunshine not having adequate security in place despite having been put on notice by federal law enforcement and HHS about the risk of hacks. As Freedman notes, however, it is not clear from the complaint whether Fuentes gave Sunshine 30 days' notice to implement security measures before filing suit seeking to require them to do so.
But there also appear to be other problems with the plaintiff’s complaint.
As regular readers may recall, DataBreaches.net broke the story of the data leak after being tipped to it by a researcher. This site first notified Sunshine of the leak on September 4, 2019 and followed up when they did not take immediate action. The second phone call resulted in Sunshine taking some steps to protect the data. But when Sunshine had not disclosed the breach 60 days after this site notified them, DataBreaches.net went public about the leak and what this site found in the data. This site also reported that in November, it notified Sunshine again after realizing that their files were still available for download, no login required, to anyone who had noted the URLs for the files during the initial leak. Given that Sunshine Behavioral Health treats alcohol and drug addiction, its patient population and patient records are very sensitive.
Was the exposed data exfiltrated, as Fuentes's complaint alleges? Certainly it must have been exfiltrated by at least one party, as this site had been provided a copy of the data by the whitehat researcher who discovered the leak. But how many other entities accessed, viewed, and/or exfiltrated the data? Sunshine Behavioral Health did not respond to inquiries by DataBreaches.net until their external counsel got involved and contacted this site to ask whether we would destroy any data and certify that we had destroyed it. Only then was this site able to get statements confirming that Sunshine Behavioral Health had reported the incident to HHS/OCR and to affected patients, but no other information was provided.
From a quick skim, much of the complaint seems premised on treating this as a hacking case resulting from the defendant's negligence, but this wasn't a hacking case. Not to minimize the seriousness of a leak of sensitive information, but this was a data leak, a "help yourself" situation, and the risk of becoming a fraud or identity theft victim from a leak may not be the same as the risk of those outcomes from a hack.
The complaint also argues that Sunshine's notification to patients was not timely under either HIPAA or California's Confidentiality of Medical Information Act (CMIA). Also of concern to the plaintiff: Sunshine allegedly did not offer those affected any fraud insurance or mitigation should they become fraud victims. According to the complaint, Sunshine offered only 24 months of credit monitoring, which is not the same thing.
The complaint is confusing in that regard, because Sunshine’s notification on their website dated January 21 (well before the complaint was filed), includes this statement:
If we have confirmed that your personal information was affected by the incident, we are offering MyIDCare protection through ID Experts for 24 months at no cost.
MyIDCare does appear to include the kind of mitigation help the plaintiff is asking for: identity recovery and assistance, and $1 million in ID theft insurance.
Sunshine Behavioral Health was asked if they wished to comment on the litigation but did not respond at all by publication time.




Some exemptions will become commonplace?
Privacy Advocates and Businesses Take Issue With India’s New Data Protection Law
India’s long-awaited national data protection law, the Personal Data Protection Bill, is under review by a joint parliamentary committee. The bill has yet to be adopted and could change form before it is, but at the moment it looks set to become one of the world’s strongest pieces of legislation of this nature, at least in how it regulates private companies. Privacy advocates are voicing opposition to its broad exceptions for government agencies, which would have essentially unfettered access to personal data with little oversight. Private companies also object to the terms, which stipulate fines and compliance costs they feel are too high.




Nice to know China is taking care of its US customers.
Chinese billionaire Jack Ma to send 500K coronavirus test kits, 1 million face masks to US


(Related) It would be nice if our most famous (just ask him) billionaire also did something useful. “I know more than the Google!”
Trump says Google is building a site to help people find coronavirus tests
Messaging from Alphabet reps, after President Donald Trump and others described the effort at a White House press conference, stressed that the project the company is working on is in its early stages and will initially be offered to residents in and around San Francisco and Silicon Valley.
At the press conference, Trump said Google had “1,700 engineers working on this right now.”




Anything to get rid of my students… er, find my students jobs!
Future-Proof Your Career With This FREE Ebook
In this free copy of Career Leap, worth $16, Michelle Gibbings answers these questions, showing you “what you need to know, how you need to change and how you can prepare for the inevitable tides of change.”
This free offer expires 24 March 2020.



Friday, March 13, 2020


Security and Architecture.
The Internet of Things is a security nightmare reveals latest real-world analysis: unencrypted traffic, network crossover, vulnerable OSes
No less than 98 per cent of traffic sent by internet-of-things (IoT) devices is unencrypted, exposing huge quantities of personal and confidential data to potential attackers, fresh analysis has revealed.
What’s more, most networks mix IoT devices with more traditional IT assets like laptops, desktops and mobile devices, exposing those networks to malware from both ends: a vulnerable IoT device can infect PCs, and an unpatched laptop could give an attacker access to IoT devices and vast quantities of saleable data.
Those are the big conclusions from a real-world test of 1.2 million IoT devices across thousands of physical locations in the United States, carried out by Palo Alto Networks.
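A crude way to approximate the report's headline finding on your own network is to classify observed flows by destination port. This is only a heuristic (port alone doesn't prove a protocol is plaintext, and the Palo Alto study inspected actual traffic); all port mappings and function names here are illustrative, not from the report:

```python
# Rough sketch: flag likely-unencrypted IoT flows by destination port.
# Ports whose default protocols are plaintext vs. TLS-wrapped.
PLAINTEXT_PORTS = {80: "HTTP", 23: "Telnet", 1883: "MQTT", 21: "FTP"}
ENCRYPTED_PORTS = {443: "HTTPS", 8883: "MQTT over TLS", 22: "SSH"}

def classify_flow(dst_port: int) -> str:
    """Label a flow 'plaintext', 'encrypted', or 'unknown' by port alone."""
    if dst_port in PLAINTEXT_PORTS:
        return "plaintext"
    if dst_port in ENCRYPTED_PORTS:
        return "encrypted"
    return "unknown"

def plaintext_share(flows) -> float:
    """Fraction of classifiable flows that look unencrypted."""
    labels = [classify_flow(p) for p in flows]
    known = [l for l in labels if l != "unknown"]
    return sum(1 for l in known if l == "plaintext") / len(known) if known else 0.0

# Example: an IoT camera speaking HTTP, Telnet and plain MQTT, plus one TLS flow.
print(plaintext_share([80, 23, 1883, 443]))  # 0.75
```

A real audit would feed this from a packet capture and verify protocols by payload inspection rather than port number.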




Not sure about all aspects of privacy, but that face stuff is easy?
Washington Privacy Act fails again, but state legislature passes facial recognition regulation
For the second year running, lawmakers in the state of Washington have failed to pass sweeping data privacy legislation. The Washington Privacy Act, or SB 6281 — akin to Europe’s GDPR or California’s CCPA — would have allowed individuals to request that companies delete their data. But today Washington state House and Senate lawmakers did succeed in passing SB 6280, which addresses public and private facial recognition use. The bill requires facial recognition training and bias testing and mandates that local and state government agencies disclose use of facial recognition. It also creates a task force to consider recommendations and concerns about discrimination against vulnerable communities.




That bad?
What you need to know about the Metropolitan Police's new facial recognition technology
The new technology was introduced across London locations in January
Facial recognition technology led to its first arrest in February but incorrectly flagged seven other innocent citizens on the same day.
An estimated 8,600 faces were scanned in Oxford Circus, generating eight match alerts.
However, only one was an accurate identification, meaning seven of the eight alerts were false positives and the software had an 87.5 per cent failure rate.


(Related) The opposite of open.
Homeland Security sued over airport face recognition secrecy
The American Civil Liberties Union filed the lawsuit in a New York federal court on Thursday, demanding that the agency turn over records to understand the scope of its airport face recognition system. The group wants to know who Homeland Security works with — including private companies and airlines — as well as internal policies and guidance on how the system is used.
Although U.S. citizens can opt-out of having their faces scanned, it’s not always openly advertised.




Interesting: Assessing Productivity as a Function of IT Maturity
It’s Time to Reset the IT Talent Model
How do you identify which talent in your technology teams creates the most value for your business?
This question plagues IT leaders and gets at the heart of a conundrum many organizations face today in their quest to transform digitally. All CIOs know they have star engineers on their teams who are more motivated, creative, and productive than their peers. But what sets them apart from solid but middling performers? Most organizations have no reliable way of pinpointing these crucial differences in performance. As a result, leaders struggle to retain stars, reward them fairly, and hire others of equal caliber.



Thursday, March 12, 2020


Know anyone who could use this information? (Architecture as security enabler)
Why are governments so vulnerable to ransomware attacks?
Emsisoft estimates that in 2019, ransomware attacks impacted at least 948 government agencies, educational entities, and healthcare providers. Analysis conducted by Recorded Future suggests that 81 successful ransomware attacks took place against US government bodies across the year, and these incidents often had the knock-on effect of impacting high numbers of towns and cities in their local areas.
Florida, Louisiana, New Orleans, and Texas are only a handful of the regions where ransomware has caused severe disruption. If ransomware infiltrates a government network, it can shut down, or cut off access to, core government systems, thereby impacting local community services.
IBM research has already suggested that many US local and state government agencies are "overconfident in their attitude towards malware and cybersecurity incidents," and now Deloitte further implies that governments are simply not doing enough.
On Wednesday, Deloitte released a report, "Ransoming government: What state and local governments can do to break free from ransomware attacks," which explores how these attacks are able to take place -- and what government officials should be doing to tackle the ransomware challenge.
State and local governments will often pay up as the most logical course of action, rather than attempt to restore systems through backups -- if that is even possible -- or face weeks of relying on pen-and-paper records. Cyberinsurance may cover a portion of payouts, and, unfortunately, not paying can sometimes prove significantly more costly.




Best Practice: encrypt your data.
Dutch government loses hard drives with data of 6.9 million registered donors
The Dutch government said it lost two external hard disk storage devices that contained the personal data of more than 6.9 million organ donors.
The hard drives stored electronic copies of all donor forms filed with the Dutch Donor Register between February 1998 and June 2010, officials from the Dutch Ministry of Health, Welfare and Sport said earlier this week.
The disks were last used in 2016 and were placed inside a secure vault for storage as Dutch authorities switched to newer drives.
Officials never said if the data contained on the hard drives was encrypted or not. [Suggests they do not know. Bob]
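If, as Bob's aside suggests, officials can't say whether old media was encrypted, a crude entropy check can at least distinguish obviously readable files from random-looking ciphertext. This is a heuristic sketch, not a forensic tool (compressed data also scores high), and the sample record text is invented:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Average bits of information per byte: ASCII text ~4-5, ciphertext ~8."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Crude heuristic: near-maximal entropy suggests encryption (or compression)."""
    return shannon_entropy(data) >= threshold

print(looks_encrypted(b"Donor: Jan de Vries, 1998" * 40))  # False: readable records
print(looks_encrypted(os.urandom(65536)))                  # True: random-looking bytes
```

The real best practice, of course, is to encrypt archival media in the first place and record that fact, so nobody has to guess years later.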




An infographic. Perhaps a picture will save you 1000 words?
Cybersecurity Trends to Know in 2020
This infographic from Paradyn lays out the top cybersecurity trends you should know about in 2020 including:
  • GDPR
  • AI-powered security solutions
  • Cloud security
  • IoT security
  • Next-gen authentication technology
Scroll down to the infographic to discover more about the latest cybersecurity trends today!




This should be obvious, but is more likely never thought of…
The haphazard response to COVID-19 demonstrates the value of enterprise risk management
Just 12% of more than 1,500 respondents believe their businesses are highly prepared for the impact of coronavirus, while 26% believe that the virus will have little or no impact on their business, according to a survey by Gartner.
“This lack of confidence shows that many organizations approach risk management in an outdated and ineffective manner,” said Matt Shinkman, vice president in the Gartner Risk and Audit practice. “The best-prepared organizations can expect to enjoy many business advantages over their less-prepared peers as they minimize the disruption caused by the coronavirus.”




A bit of background.
This Is the Ad Clearview AI Used to Sell Your Face to Police
Clearview AI emailed advertisements to police departments in August 2019 with the subject line “How To Solve Crimes Instantly With Face Search Technology,” using the Fraternal Order of Police’s online platform FOPConnect.
“Clearview is like Google Search for faces,” the ad copy reads. “It only takes one photo of a suspect’s face, one quick tap on your cell phone or computer, and one second of search time. Get results from mug shots, social media, and other publicly available sources.”




Another reason for my students to write every week.
Thought leadership drives trust in cyber-leaders, sharing best practice
Senior security executives in the UK prefer to work with organisations that publish thought leadership over ones that don’t, and are willing to pay a premium.
Nearly 85 percent of senior executives from UK businesses across telecom, IT, financial services, retail and the public sector prefer to work with organisations that publish thought leadership over ones that don’t, according to a survey by Code Red.
More than 80 percent of the respondents said that thought leadership material issued by a company is a good indicator of the type and calibre of that organisation’s thinking. Nearly 75 percent were willing to pay a premium to work with a thought leader, said the survey.




Worth trying?
Weekly Self-Study Plan to Ace Data Science
Data Science is a vast field where statistics and programming go hand-in-hand. To ace this field, enthusiasts must follow a learning routine that involves practising, reading, competing, and engaging with the community. This is a 4-week plan which can be repeated every month to deepen your understanding of Data Science. It includes both theoretical and real-world practical resources related to data science and machine learning, and is tailored to provide the tools you need to become a master in Data Science.
In this article, we discuss 4 simple yet powerful weekly self-study methods, which will help a data science enthusiast to be ahead of the curve.
Program Type: Self-Paced
Estimated Duration: 4 Weeks
Pre-Requisite: Basics of Machine Learning, Big Data, Python, SQL
Tools: Python, R, SQL, Hadoop, MapReduce and Tableau



Wednesday, March 11, 2020


Not all computer systems are managed by IT. Shouldn’t they be?
Most Medical Imaging Devices Run Outdated Operating Systems
You'd think that mammography machines, radiology systems, and ultrasounds would maintain the strictest possible security hygiene. But new research shows that a whopping 83 percent of medical imaging devices run on operating systems that are so old they no longer receive any software updates at all.
That issue is endemic to internet of things devices generally, many of which aren't designed to receive software improvements or offer only a complicated path to doing so. But medical devices are a particularly troubling category for the issue to show up in, especially when the number of devices with outdated operating systems is up 56 percent since 2018. You can attribute most of that increase to Microsoft ending support for Windows 7 in January. As new vulnerabilities are found in the operating system, any device still running it won't get patches for them.
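An inventory check for the problem the article describes is straightforward once you have a device list. This sketch flags devices running an OS past its end-of-support date; the device names are hypothetical, while the Windows end-of-support dates shown are the published ones (Windows 7: January 14, 2020):

```python
from datetime import date

# End-of-support dates for a few operating systems (as published by Microsoft).
EOL = {
    "Windows XP": date(2014, 4, 8),
    "Windows 7": date(2020, 1, 14),
    "Windows 10": date(2025, 10, 14),
}

def unsupported(devices, today):
    """Return names of devices whose OS no longer receives security updates."""
    return [name for name, os_name in devices
            if os_name in EOL and EOL[os_name] <= today]

# Hypothetical imaging fleet, checked as of this post's date.
fleet = [("mammography-01", "Windows 7"), ("ultrasound-02", "Windows 10")]
print(unsupported(fleet, date(2020, 3, 11)))  # ['mammography-01']
```

A real deployment would pull the device list from an asset-management system rather than hard-code it, but the point stands: this is a query any IT department could run, if IT manages the devices at all.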




Not all IT decisions are based on experience. Some just sound good.
New Data Rules Could Empower Patients but Undermine Their Privacy
In a move intended to give Americans greater control over their medical information, the Trump administration announced broad new rules on Monday that will allow people for the first time to use apps of their choice to retrieve data like their blood test results directly from their health providers.
The Department of Health and Human Services said the new system was intended to make it as easy for people to manage their health care on smartphones as it is for them to use apps to manage their finances.
Prominent organizations like the American Medical Association have warned that, without accompanying federal safeguards, the new rules could expose people who share their diagnoses and other intimate medical details with consumer apps to serious data abuses.




Try to make sense of this… The report should be available later today.
Giant Report Lays Anvil on US Cyber Policy
Released today, the bipartisan Cyberspace Solarium Commission report makes more than 75 recommendations that range from common-sense to befuddling.
Today, the US Cyberspace Solarium Commission published its final report. The 182-page document is the culmination of a year-long, bipartisan process to develop a new cyber strategy for the United States.




How does DNA differ from fingerprints?
Alexia Rodriguez of ACLU writes:
Every two minutes, we shed enough skin cells to cover nearly an entire football field. With a single sneeze, we can spew 3,000 cell-containing droplets into the world. And, on average, we leave behind between 40 and 100 hairs per day. As long as we live in the world and leave our homes each day, we can’t avoid leaving a trail of our DNA in our wake.
Every strand of DNA holds a treasure trove of deeply personal information, from our propensity for medical conditions to our ancestry to our biological family relationships. And increasingly, police are accessing and testing the DNA contained in our unavoidably shed genetic material without judicial oversight. That’s why we’re asking a court to require police to get a warrant before collecting the DNA we unavoidably leave behind.
Read more on ACLU.




For the Privacy team. Start learning AI.
AI Predicted to Take Over Privacy Tech
More than 40% of privacy tech solutions aimed at ensuring legal compliance are predicted to rely on Artificial Intelligence (AI) over the course of the next three years, analysts from the business research and advisory firm Gartner Inc have found.
The company—which is set to present these findings among others at the Gartner IT Symposium/Xpo™ 2020 in Toronto, Canada in May—has found that reliance on privacy tech to ensure compliance with various privacy laws is expected to increase by at least 700% between 2020 and 2023.
This marks an increase from the 5% of privacy tech solutions that are AI driven today to the more than 40% that are predicted to become available within the next 36 months.




The first of many, I’m sure. Each possible/potential use of public information requires a specific consent?
Vermont sues secretive facial recognition company Clearview AI
The state of Vermont has sued the company behind a facial recognition tool that has built a vast database from photos of private individuals it has gathered across the internet and social media platforms without consent.
Attorney General TJ Donovan announced Tuesday his office has filed a lawsuit in Vermont Superior Court in Chittenden County against Clearview AI, alleging the secretive business has violated the state’s consumer protection law by illegally collecting images of Vermont residents, including children, and selling this information to private businesses, individuals and law enforcement.
In January, around the time when the New York Times published its report on Clearview, the company registered as a data broker in Vermont — an entity that collects information about individuals from public and private sources for profit.
Data brokers that sell Vermonters’ data must register annually with the state’s data broker registry and provide certain information about business practices. In the registry, Clearview reported that it knowingly “possesses the brokered personal information of minors,” according to the attorney general’s lawsuit.
The state’s lawsuit also makes the case that when an individual uploads a photograph to Facebook for “public” viewing, they consent to others looking at the photograph but are not consenting to the “mass collection of those photographs by an automated process that will then put those photographs into a facial recognition database.”




For all my students.
Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI
Many organizations have published principles intended to guide the ethical development and deployment of AI systems; however, their abstract nature makes them difficult to operationalize. Some organizations have therefore produced AI ethics checklists, as well as checklists for more specific concepts, such as fairness, as applied to AI systems. But unless checklists are grounded in practitioners’ needs, they may be misused. To understand the role of checklists in AI ethics, we conducted an iterative co-design process with 48 practitioners, focusing on fairness. We co-designed an AI fairness checklist and identified desiderata and concerns for AI fairness checklists in general. We found that AI fairness checklists could provide organizational infrastructure for formalizing ad-hoc processes and empowering individual advocates. We discuss aspects of organizational culture that may impact the efficacy of such checklists, and highlight future research directions.




Recreational privacy.



Tuesday, March 10, 2020


It’s not all counter-hacking. There are lots of hacker-wanna-bes that download tools from the dark web.
A hacker says hackers are hacking hackers in new hacking campaign
Cybereason’s Amit Serper found that the attackers in this years-long campaign are taking existing hacking tools -- some designed to exfiltrate data from a database, others cracks and product key generators that unlock full versions of trial software -- and injecting a powerful remote-access trojan. When the tools are opened, the hackers gain full access to the target’s computer.
Serper said the attackers are “baiting” other hackers by posting the repackaged tools on hacking forums.
But it’s not just a case of hackers targeting other hackers, Serper told TechCrunch. These maliciously repackaged tools are not only opening a backdoor to the hacker’s systems, but also to any system that the hacker has already breached.




Keeping my students secure.
5 Common Social Media Privacy Issues (And How to Fix Them)




Interesting.
Law Enforcement’s Facial Recognition Law-lessness: Comparing European and US Approaches
Coming to Europe from the United States to talk about law enforcement use of facial recognition (FRT) at multi-stakeholder gatherings is like walking through the looking glass. It’s not clear exactly what metaphor fits best. Where Europeans have a lush forest of legal regulations for police use of technology and data—and still feel they are lacking what they need—in the United States, we live in a desert landscape bereft of laws where police do what they wish with virtually no regulation at all.
To be clear—and much more on this below—there’s every reason to be skeptical of some of the legal justifications offered for FRT, particularly in the United Kingdom. And some European countries may be moving too quickly, beyond even where police in the United States tread. But that, in its own way, is the point: there are legal justifications required, and given, and people know them, and can call them out as insufficient if they just don’t measure up. In the United States, it is all hush-hush, maybe even with a dose of deceiving the public mixed in, making it nearly impossible to hold law enforcement to account.




Ensuring Privacy by making it too expensive to go in the other direction? Fighting facial recognition with facial recognition?
CCPA and face recognition to ensure personal privacy
After years of public debate, face recognition is becoming a tool businesses can use to comply with these newly defined consumer rights. Do you ever wonder what happens to the video footage that is stored at a retailer or when you walk into your local grocery store? In California, individuals now have the ability to request this footage, along with other personally identifiable information these businesses have collected, passing the torch of power to the consumer. In Europe, GDPR provides similar protection for personal information that has been collected, stored and sold. Fulfilling customer data requests in a timely manner is nearly impossible for enterprise businesses with current tools; they would need to hire a team to sift through hours and days of footage trying to locate a person and their associated information. With the power of face recognition, these data requests can be accomplished in seconds, in turn increasing consumer trust and sentiment. The future of understanding how your data moves through an enterprise is through face recognition.
Face recognition will now act as a tool of enablement for protecting our freedoms and personal data rights. The efficiency of data extraction is paramount in this digital age and can cripple businesses that are ill-prepared. Imagine a situation where fifty people show up to the same retail store demanding it produce every bit of personal information collected on them. It would significantly impact the store’s operational functionality and require an immediate diversion of resources. It’s the new age DDoS attack…
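Under the hood, the workflow described above is nearest-neighbor search over face embeddings: extract a vector from the requester's photo, then compare it against vectors precomputed from stored footage. A toy sketch with made-up 4-dimensional embeddings (real systems use 128-plus dimensions from a trained network, and all clip IDs and numbers here are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_matches(query, footage_index, threshold=0.95):
    """Return (clip_id, score) pairs for stored embeddings similar to the query face."""
    return [(clip_id, cosine_similarity(query, emb))
            for clip_id, emb in footage_index.items()
            if cosine_similarity(query, emb) >= threshold]

# Hypothetical embeddings precomputed from stored CCTV clips.
index = {
    "clip_0312": [0.9, 0.1, 0.3, 0.2],
    "clip_0487": [0.1, 0.8, 0.2, 0.9],
}
print(find_matches([0.88, 0.12, 0.31, 0.19], index))
```

At enterprise scale the linear scan would be replaced by an approximate-nearest-neighbor index, which is what makes "seconds instead of weeks" plausible.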




For your consideration.
Artificial Intelligence: The Fastest Moving Technology
If artificial intelligence is truly our fastest-moving technology, the law has been lagging far behind. Addressing the emerging legal issues requires an understanding of the technology and how it works. In his Technology Law column, Peter Brown examines how AI functions and some of its legal implications.



Monday, March 09, 2020


Corporate auditors should already be addressing CMMC.
The Pentagon’s first class of cybersecurity auditors is almost here
The Pentagon hopes to have the first class of auditors to evaluate contractors’ cybersecurity ready by April, a top Department of Defense official said March 5.
The auditors will be responsible for certifying companies under the new Cybersecurity Maturity Model Certification (CMMC), which is a tiered cybersecurity framework that grades companies on a scale of one to five. A score of one designates basic hygiene and a five represents advanced hygiene.




Cases, just in case. (Free PDF)
Cybersecurity Law, Policy, and Institutions (version 3.0)
This is the full text of my interdisciplinary “eCasebook” designed from the ground up to reflect the intertwined nature of the legal and policy questions associated with cybersecurity. My aim is to help the reader understand the nature and functions of the various government and private-sector actors associated with cybersecurity in the United States, the policy goals they pursue, the issues and challenges they face, and the legal environment in which all of this takes place. It is designed to be accessible for beginners from any disciplinary background, yet useful to experienced audiences too.
The first part of the book focuses on the “defensive” perspective (meaning that we will assume an overarching policy goal of minimizing unauthorized access to or disruption of computer systems). The second part focuses on the “offensive” perspective (meaning that there are contexts in which unauthorized access or disruption might actually be desirable as a matter of policy).




Another perspective.
An Ambitious Reading of Facebook’s Content Regulation White Paper
Corporate pronouncements are usually anodyne. And at first glance one might think the same of Facebook’s recent white paper, authored by Monika Bickert, who manages the company’s content policies, offering up some perspectives on the emerging debate around governmental regulation of platforms’ content moderation systems. After all, by the paper’s own terms it’s simply offering up some questions to consider rather than concrete suggestions for resolving debates around platforms’ treatment of such things as anti-vax narratives, coordinated harassment, and political disinformation. But a careful read shows it to be a helpful document, both as a reflection of the contentious present moment around online speech, and because it takes seriously some options for “content governance” that – if pursued fully – would represent a moonshot for platform accountability premised on the partial but substantial, and long-term, devolution of Facebook’s policymaking authority.




For my Architecture students.
The Emergence Of ML Ops
In the latter part of the 2000s, DevOps emerged as a set of practices and tools that combine development-oriented activities (Dev) with IT operations (Ops) in order to accelerate the development cycle while maintaining efficiency in delivery and predictable, high levels of quality. The core principles of DevOps include an Agile approach to software development, with iterative, continuous, and collaborative cycles, combined with automation and self-service concepts. Best-in-class DevOps tools provide self-service configuration, automated provisioning, continuous build and integration of solutions, automated release management, and incremental testing.
DevOps approaches to machine learning (ML) and AI are limited by the fact that ML models differ from traditional application code in many ways. For one, ML models are highly dependent on data: training data, test data, validation data, and, of course, the real-world data used in inference. Simply building a model and pushing it to production is not sufficient to guarantee performance. DevOps approaches for ML also treat models as “code,” which makes them somewhat blind to issues that are strictly data-based, in particular the management of training data, the need to re-train models, and concerns about model transparency and explainability.
As organizations move their AI projects out of the lab and into production across multiple business units and functions, the processes by which models are created, operationalized, managed, governed, and versioned need to be made as reliable and predictable as the processes by which traditional application development is managed.
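The data-dependence point above can be made concrete. Here is a minimal sketch, in plain Python rather than any particular ML Ops platform, of one idea the excerpt describes: versioning a model together with a fingerprint of the data it was trained on, so that drift in the live data can flag the need for re-training. All names here (`ModelRegistry`, `"churn-model"`) are hypothetical illustrations, not part of any real tool.

```python
import hashlib
import json


def fingerprint(dataset):
    """Hash the data so a model version is tied to exactly what it was trained on."""
    return hashlib.sha256(json.dumps(dataset, sort_keys=True).encode()).hexdigest()[:12]


class ModelRegistry:
    """Toy registry: records each model version alongside its training-data fingerprint."""

    def __init__(self):
        self.records = []

    def register(self, model_name, dataset):
        record = {
            "model": model_name,
            "version": len(self.records) + 1,
            "data_fingerprint": fingerprint(dataset),
        }
        self.records.append(record)
        return record

    def needs_retraining(self, current_dataset):
        """True when the current data no longer matches what the latest model saw."""
        if not self.records:
            return True
        return self.records[-1]["data_fingerprint"] != fingerprint(current_dataset)


registry = ModelRegistry()
registry.register("churn-model", [1, 2, 3])
print(registry.needs_retraining([1, 2, 3]))  # False: data unchanged
print(registry.needs_retraining([1, 2, 4]))  # True: data drifted, consider re-training
```

Real platforms add far more (lineage, staged rollouts, monitoring), but the core shift from DevOps is visible even here: the unit of versioning is model plus data, not code alone.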



Yesterday I was the most popular blog in Turkmenistan. I have no idea why.

  





Sunday, March 08, 2020


I’ll wait until they form a union.
If AI Has Human Rights, Some Are Worried That Self-Driving Cars Might Turn On Us
Should AI have human rights?
It’s a seemingly simple question, though the answer has tremendous consequences.
Presumably, your answer is either that yes, AI should have human rights, or alternatively, that AI should not have human rights.
Take a pick.
But pick wisely.
There is a bit of a trick involved, though, because the thing or entity or “being” to which we are trying to assign human rights is ambiguously defined and does not yet exist.
In other words, what does it mean when we refer to “AI” and how will we know it when we discover or invent it?
At this time, there isn’t any AI system of any kind that could be considered sentient, and indeed by all accounts, we aren’t anywhere close to achieving the so-called singularity (that’s the point at which AI flips over into becoming sentient and we look in awe at a presumably human-equivalent intelligence embodied in a machine).




Are we ready? (Hint: Hell no!)
Policy controls that govern agency activity generally contain at least two components: (1) a substantive policy; and (2) a governance structure for ensuring implementation of and compliance with that policy. Effective controls require both. This proposal focuses on the second component, the governance structure. Specifically, it addresses routine monitoring, annual audits, enforcement of the AG’s policies that govern the facial recognition system, and public transparency. Established facial recognition policies, including those that the Task Force has looked to as models, recognize the importance of establishing such a governance structure.




Free is good.
Download “Windows 10 All-In-One For Dummies” For FREE
Simply click here to download Windows 10 All-In-One For Dummies from TradePub. You will have to complete a short form to access the ebook, but it’s well worth it!
Note: this free offer expires 17 Mar 2020.