Saturday, September 14, 2019


Slow day. At least this article is interesting.


Another perspective.
10 reasons why the GDPR is the opposite of a ‘notice and consent’ type of law
… A ‘notice and consent’ framework puts all the burden of protecting privacy and obtaining fair use of personal data on the person concerned, who is asked to ‘agree’ to an endless text of ‘terms and conditions’ written in exemplary legalese, without actually having any sort of choice other than ‘all or nothing’ (agree to all personal data collection and use or don’t obtain access to this service or webpage). The GDPR is anything but a ‘notice and consent’ type of law.
There are many reasons why this is the case, and I could go on and get lost in the minutiae of it. Instead, I’m listing 10 high-level reasons, explained in plain language, to the best of my knowledge:



Friday, September 13, 2019


If you are using a secure App, a not-so-secure App could be eavesdropping.
Before Android 10, only one app could access an audio input at a time; if an app tried to access an input already in use by another app, it would be blocked. As of Android 10, audio inputs can be shared by multiple apps, but only in some cases.


(Related) Let there be light!
Android Flashlight Apps Request up to 77 Permissions
Several years ago, users had to download and install flashlight applications on their devices, but Android now includes the functionality natively. However, flashlight applications continue to exist, and there are hundreds of them.
… Of the analyzed apps, 408 request just 10 permissions or less, which seems fairly reasonable. However, there are 262 apps that ask for 50 permissions or more (up to 77). Thus, the average number of permissions requested by a flashlight app is 25.
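To make the pattern concrete, here is a minimal Python sketch of the kind of triage the researcher describes: scanning each app's requested permissions for ones a flashlight has no plausible use for. The app names and permission sets below are invented for illustration.

```python
# Illustrative triage of flashlight-app manifests. App names and permission
# sets are invented; SUSPICIOUS lists permissions a flashlight should not need.
SUSPICIOUS = {
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_CONTACTS",
    "android.permission.WRITE_CONTACTS",
}

manifests = {
    "SimpleTorch": {"android.permission.CAMERA"},
    "MegaLight": {
        "android.permission.CAMERA",
        "android.permission.RECORD_AUDIO",
        "android.permission.READ_CONTACTS",
    },
}

def flag(manifests):
    """Map each app to the suspicious permissions it requests, if any."""
    return {
        app: sorted(perms & SUSPICIOUS)
        for app, perms in manifests.items()
        if perms & SUSPICIOUS
    }
```

Run against a real corpus, a report like the one above falls out of exactly this kind of set intersection.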
… Some of the requested permissions, however, are difficult to explain for flashlight applications, the security researcher says.
For example, 77 of the applications request permission to record audio, 180 request permission to read contact lists, and 21 of them want to be able to write contacts.




Since the bill defines an “officer camera” as a “body-worn camera,” the patrol car dash cam is exempt, personal devices used for security are exempt, drones are exempt; in fact, everything else is exempt.
California lawmakers ban facial-recognition software from police body cams
California lawmakers on Thursday temporarily banned state and local law enforcement from using facial-recognition software in body cameras, as the most populous US state takes action against the technology.
The bill, AB 1215, marks the latest legislative effort to limit adoption of facial-recognition technology, which critics maintain raises privacy and accuracy concerns. Now the bill, also referred to as the Body Camera Accountability Act, heads to Governor Gavin Newsom, who must decide whether or not to sign it into law by October 13. If he does, it will go into effect in January.
The bill prohibits the use of biometric surveillance technology, which includes facial-recognition software, in police body cameras. It also prohibits police from taking body-camera footage and running it through facial-recognition software at a later time. It does not prevent state and local police from using facial-recognition technology in other ways, such as in stationary cameras.


(Related) Most criminals have cars with license plates. You have a car with a license plate. Therefore, you might be a criminal!
Joe Cadillic writes:
Our worst fears about automatic license plate readers (ALPR) are much worse than we could have imagined.
Two months ago, I warned everyone that police in Arizona were using ALPR’s to “grid” entire neighborhoods. But this story brings public surveillance to a whole new level.
Last month, Rekor Systems announced that they had launched the Rekor Public Safety Network (RPSN) which gives law enforcement real-time access to license plates.
“Any state or local law enforcement agency participating in the RPSN will be able to access real-time data from any part of the network at no cost. The Company is initially launching the network by aggregating vehicle data from customers in over 30 states. [If you subscribe you must share your data? Bob] With thousands of automatic license plate reading cameras currently in service that capture approximately 150 million plate reads per month, the network is expected to be live by the first quarter of 2020.”
RPSN is a 30-state, real-time law enforcement license plate database of more than 150 million people.
And the scary thing about it is: it is free.
Read more on MassPrivateI.




So much for a global currency?
FACEBOOK’S LIBRA CRYPTOCURRENCY WILL BE BLOCKED IN EUROPE, FRANCE SAYS
France has said it will block the development of Facebook’s Libra cryptocurrency as it poses a threat to “monetary sovereignty”.
At the opening of an OECD conference on cryptocurrencies, French economy and finance minister Bruno Le Maire said: “I want to be absolutely clear: In these conditions, we cannot authorise the development of Libra on European soil.”
Facebook’s Libra cryptocurrency was announced earlier this year and is set to launch at some point in 2020. Despite Libra having certain technological similarities with bitcoin, its creators hope that its more centralised infrastructure will allow it to become a global currency that could rival the US dollar.




“Alexa, wake me up when the boss gets back from lunch.”
Gartner: Get ready for more AI in the workplace
Artificial intelligence (AI) will be widely adopted in office environments in a variety of ways over the next few years as businesses invest in digital workplace initiatives, Gartner analysts said today.
The trend is expected to gather steam as voice-activated personal assistants that have proved a hit at home begin to make inroads in the office.
By 2025, the technology will “certainly be mainstream,” said Matthew Cain, vice president and distinguished analyst at Gartner – even though privacy and security concerns have limited deployments so far.




27 hands in a row? Pure skill.
Superhuman AI Bots Pose a Threat to Online Poker Firms, Morgan Stanley Says
The threat for online poker players is not the human desktop card sharks playing against you, but the superhuman artificial intelligence bots that could infiltrate games, according to analysts at Morgan Stanley.




For my geeks. Even if you only try a few of these, you’ll be ahead of the curve.
11 Ways Novices Can Start the Process of Learning AI Programming




Who fact checks the fact checkers?
Facebook Took Down A Fact-Check Of An Anti-Abortion Video After Republicans Complained
The fact-check was conducted by three doctors who determined an anti-abortion activist's claim that "abortion is never medically necessary" was false.



Thursday, September 12, 2019


How convenient.
Baltimore acknowledges for first time that data was destroyed in ransomware attack
… Auditor Josh Pasch told the mayor and other top city officials at a meeting of the city’s spending board that without the data, his team has been unable to check some claims the department made about its performance. The data was stored locally and not backed up.




Heads up!
North Korean Hackers Use New Tricks in Attacks on U.S.
A report published in April by South Korea-based ESTsecurity describes attacks launched by Kimsuky against entities in South Korea and the United States.
As part of this campaign, which the cybersecurity firm has dubbed “Autumn Aperture,” the hackers sent out emails with specially crafted Word documents that the targeted user was likely to open. One of the files contained the notes of an individual who gave a presentation at the Nuclear Deterrence Summit earlier this year in Virginia. Another document was a report from a U.S.-based university affiliate discussing a North Korean ballistic missile submarine. The last document described in Prevailion’s report appeared to originate from the U.S. Treasury Department and contained a North Korea sanctions license.
When opened, each of the Word documents instructed the targeted user to enable macros before displaying content. This is a widely used technique that allows attackers to install malware on the victim’s device.
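Because modern Office documents (.docx, .docm, .xlsm) are ZIP archives, a defender can cheaply triage incoming attachments for embedded VBA projects before anyone is tempted to “enable macros.” This is a rough stdlib-only heuristic, not a substitute for dedicated tools such as oletools' olevba:

```python
import zipfile

def has_vba_macros(path):
    """Return True if an Office Open XML file carries a VBA project.

    Modern Office files are ZIP archives; embedded macros are stored in a
    vbaProject.bin part inside the archive.
    """
    try:
        with zipfile.ZipFile(path) as z:
            return any(name.endswith("vbaProject.bin") for name in z.namelist())
    except zipfile.BadZipFile:
        # Legacy binary formats (.doc, .xls) are not ZIPs and need a
        # different (OLE-based) check.
        return False
```

A mail gateway could run this check on attachments and quarantine anything that comes back True.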




Will US Privacy laws stop (or even address) incompetent security managers?
198 Million Car-Buyer Records Exposed Online for All to See
… The non-password protected Elasticsearch database belonged to Dealer Leads, which is a company that gathers information on prospective buyers via a network of SEO-optimized, targeted websites. According to Jeremiah Fowler, senior security researcher at Security Discovery, the websites all provide car-buying research information and classified ads for visitors. They collect this info and send it on to franchise and independent car dealerships to be used as sales leads. The exposed database in total contained 413GB of data.
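As a rough illustration of how researchers find such exposures: a scan boils down to fetching the root document of a suspected Elasticsearch endpoint (conventionally port 9200) and checking whether it answers with cluster metadata instead of an authentication challenge. This is a hedged stdlib-only sketch, not Security Discovery's actual tooling, and should only ever be pointed at hosts you own.

```python
import json
import urllib.error
import urllib.request

def classify_es_response(status, body):
    """Interpret a response from an Elasticsearch root endpoint.

    An unauthenticated cluster answers 200 with JSON cluster metadata;
    a secured one typically answers 401.
    """
    if status == 401:
        return "auth required"
    if status == 200:
        try:
            info = json.loads(body)
        except ValueError:
            return "not elasticsearch"
        return "open" if "cluster_name" in info else "not elasticsearch"
    return "unknown"

def probe(url, timeout=5):
    """Fetch the root document and classify it. Only run against hosts you own."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify_es_response(resp.status, resp.read())
    except urllib.error.HTTPError as e:
        return classify_es_response(e.code, b"")
    except OSError:
        return "unreachable"
```

A database that answers "open" here is readable by anyone on the internet, which is exactly the Dealer Leads situation.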




Where we are…
8 AI Trends in Today's Big Enterprise
A new report, AI Transforming the Enterprise, from consulting giant KPMG, provides a view into top corporate leadership's perspective of where enterprises are with their efforts.
8 Trends
  • Rapid shift from experimental to applied technology.
  • Automation, AI, analytics, and low-code platforms are converging.
  • Enterprise demand is growing.
  • New organizational capabilities are critical.
  • Internal governance is emerging as a key area.
  • The need to control AI.
  • Rise of AI-as-a-Service.
  • AI could shift the competitive landscape.




Find the solution, become rich and famous?
Data Privacy Regulations’ Implications on AI
Investment in artificial intelligence (AI) is growing, with 60% of adopters raising their budgets 50% year over year, according to Constellation Research. But working with AI under emerging privacy standards is complex, requiring a dynamic balance that allows for continued innovation without misstepping on regulatory requirements. Under privacy regulations, businesses are responsible for gaining consent to use personal data and being able to explain what they are doing with that data. There is a real concern that black box automation systems that offer no explanations and require the long-term storage of large customer data sets will simply not be permitted under these regulations.




I’ve been telling my students they need to know more than how to spell AI.
Should You Be Thinking About AI-Proofing Your Career?
… So should you be thinking about the prospect of being replaced by an AI-driven algorithm? And if so, is there a way for you to AI-proof your career?
The High-Level View: AI Is Coming
Let’s start with a high-level assessment of the future of AI. AI is going to continue to advance, at rates that continue accelerating well into the future. In 2040, we may look back on the AI available today the same way our ubiquitous-internet-enjoying culture looks back on the internet of 1999.
Essentially, it’s conceivable that one day, far into the future, automation and AI will be capable of handling nearly any human responsibility. It’s more a question of when, not if, the AI takeover will be complete. Fortunately, by then, AI will be so embedded and so phenomenally powerful, our access to resources will be practically infinite and finding work may not be much of a problem.
But setting aside those sci-fi visions, it’s realistically safe to assume that AI will soon start bridging the gap between blue-collar and white-collar jobs. Already, automated algorithms are starting to handle responsibilities in journalism, pharmaceuticals, human resources, and law—areas once thought untouchable by AI.
… That said, AI isn’t a perfect tool. AI and automation are much better than humans at executing rapid-fire, predictable functions, but there are some key areas in which AI tends to struggle, including:
  • Abstract brainstorming and problem solving.
  • Human interactions.
  • Situations with many (or unpredictable) variables.




Will AI contracts require AI Lawyers to review them?
‘Skype Mafia’ Backs A.I. Startup Automating Contract Negotiations
Prominent members of Europe's so-called "Skype Mafia," all co-founders or early employees of the voice-over-Internet conferencing service, are backing Pactum, a startup that uses artificial intelligence to automate business contract negotiations.
Founded late last year, Pactum emerged from stealth mode only on Wednesday. It uses a chatbot-like interface to conduct contract talks. The bot can offer changes to standard terms, including price, delivery conditions and days to pay, in order to reach a better deal. The company is based in Mountain View, Calif., with engineering offices in Tallinn, Estonia, where Skype's first engineering offices were also located.
… The idea behind Pactum, Kaspar says, is to deploy the chatbot with firms that have hundreds of thousands or millions of suppliers, which means they previously have relied on standard contracts. "We can start a conversation with 5 million suppliers and in 15 minutes, negotiate bespoke contracts for each of them, and automatically update the contract terms," he says.




I will have nightmares of people peeing on their smartphones...
Healthy.io raises $60 million to help patients complete urine tests on their phone




For my students. Some work for teachers too.
… Some other noteworthy businesses and apps that provide student discounts to anyone with an EDU email address include Best Buy, Autodesk, LastPass, FedEx, Squarespace, Newegg, and Dell. Indeed, it’s always worth doing a quick search to see if there are EDU benefits before you buy or subscribe to anything on the web.



Wednesday, September 11, 2019


Oops!
The potential for a 'miscalculated' enemy cyberattack keeps me up at night, warns Pentagon cyber chief
When asked what kept him up at night, Deputy Assistant Secretary of Defense for Cyber Policy Ed Wilson told members of Congress it was the possibility of an enemy erring in an attack.
"I think it would be the miscalculation of an adversary that is trying to seek ... an outcome it miscalculates with regards to how they go about doing it, the WannaCry-like incident, that maybe has much more implications worldwide or globally than what an actor would have anticipated. And so, that's what I guess keeps me up in the middle of the night," Wilson said.
Cybersecurity experts have long warned of the unintentional dangers posed by cyberweapons. The ambiguous nature of cyberactors means that it is often difficult to determine an adversary's intention. Governments and militaries also run the risk of falling victim to "false flags," or operations in which one actor makes it appear that another is responsible for an attack.
"Due to the difficulty of determining whether certain activity is intended for espionage or preparation for an attack, cyber operations run the risk of triggering unintended escalation," wrote Benjamin Brake, a fellow with the Council on Foreign Relations, in 2015.




A case study.
#GartnerSEC: Maersk’s Adam Banks Reflects on NotPetya Response and Recovery
Speaking in the opening keynote session of day two at the Gartner Security & Risk Management Summit 2019 in London, Adam Banks, chief technology and information officer at Maersk, reflected on the company’s response and recovery following the NotPetya attack in 2017.
When NotPetya first hit, Maersk was unable to determine exactly what was occurring, Banks explained. It took several hours to establish the cause of the attack and its widespread impact. IT services, end-user devices and applications/servers were dramatically affected. As many as 49,000 laptops were destroyed and 1,200 applications were inaccessible.
“I didn’t go home for 70 days,” Banks said, as he worked tirelessly with the rest of the business to respond and recover.




When employees fall for phony emails…
Business Email Compromise Is a $26 Billion Scam Says the FBI
FBI's Internet Crime Complaint Center (IC3) says that Business Email Compromise (BEC) scams are continuing to grow every year, with a 100% increase in the identified global exposed losses between May 2018 and July 2019.
Also, between June 2016 and July 2019, IC3 received victim complaints regarding 166,349 domestic and international incidents, with a total exposed dollar loss of over $26 billion.




We’ll even help you write it! We’re thinking: “GDPR Lite!”
51 tech CEOs send open letter to Congress asking for a federal data privacy law
CEOs blamed a patchwork of differing privacy regulations that are currently being passed in multiple US states, and by several US agencies, as one of the reasons why consumer privacy is a mess in the US.
This patchwork of privacy regulations is creating problems for their companies, which have to comply with an ever-increasing number of laws across different states and jurisdictions.




Surveillance without adequate planning?
ICYMI: FPF’s Amelia Vance Raises Concerns about School Surveillance Technologies on WOSU
“Communities should absolutely adopt the school safety measures that they think are necessary for their community, but we [also] want to make sure that they don’t have unintended consequences – that they don’t actually harm students more than they help ensure school safety,” Vance said. Listen to the full interview.
Specifically, Vance highlighted examples of students who have typed a sensitive word or phrase, like “shooting hoops,” or posted images that are falsely flagged as problematic. As a result, these students – and the school administrators – can end up trapped in a time-consuming “threat assessment process” that can lead to unjust school suspension or even expulsion.
Vance noted, “You have students who have gone through the threat assessment process, which is intended to make things better for students… but what we’ve seen is, in some cases, these threat assessments are discriminating against students with autism or students with disabilities… Those students aren’t threats, they’re simply students who need additional help.”
Vance also warned that some surveillance technologies could inadvertently deter students from seeking help (e.g. searching for resources and support for depression) because they believe certain search terms will be ‘flagged’ as potential threats.




Perhaps we will eventually learn something?
Google Hit With Sweeping Demand From States Over Ad Business
Texas Attorney General Ken Paxton’s office, which is leading the nationwide probe, on Monday issued a 29-page civil investigative demand obtained by Bloomberg. In more than 200 directives, investigators ordered the company to produce detailed explanations and documents by Oct. 9 related to its sprawling system of online advertising products.
The process of showing an ad to a single person visiting a web page can involve dozens of companies and multiple auctions and transactions. Google has worked its way into controlling much of that process, and investigators want to know exactly how powerful the company has become in this space.
Google controls about 37% of digital ad spending in the U.S., ahead of No. 2 Facebook at 22%, according to EMarketer.
The state attorneys general asked for information on how Google shares data with other companies and how it tracks behavioral data of advertisers and people on its Chrome web browser. That could signal an interest in privacy in addition to the focus on competition in the advertising market.




An article worth reading.
The Ethics of A.I. Doesn’t Come Down to ‘Good vs. Evil’
The Artificial Intelligence (A.I.) Brain Chip will be the dawn of a new era in human civilization.
The Brain Chip will be the end of human civilization.
These two diametrically opposite statements summarize the binary core of how we look at artificial intelligence (A.I.) and its applications: Good or bad? Beginning or ending? Truth or deceit?
Ethics in A.I. is about trying to make space for a more granular discussion that avoids these binary polar opposites. It’s about trying to understand our role, responsibility, and agency in shaping the final outcome of this narrative in our evolutionary trajectory.
This article divides the issues into five parts:
    1. What do we mean by ethics and A.I.?
    2. Our lack of ability to understand the intended and unintended consequences of innovation.
    3. Our lack of ability to understand the connections and ramifications between separate events.
    4. Our lack of ability to standardize fairness.
    5. Our inexperience in managing platforms with billions of people.




Both ends of the normal curve seem over-represented.
How Much AI Expertise Do Thought Leaders and Companies Really Have?
Launched in early August, Certified Artificial promises a “neutral, independent third-party certification service” for helping separate the AI snake oil from the real deal. One part of this service focuses on companies requesting third-party verification of the fact that they’re using the latest AI techniques in their services and products rather than simply relying on groups of human workers or older statistical methods. Certified Artificial’s other line of business involves evaluating the quality of advice coming from certain thought-leaders who frequently discuss AI technologies and their social impacts.
“Our goal is not to penalize anyone because they made a little misstep on how they talked about AI,” says Tim Hwang, partner and technical director of Certified Artificial, and director of the Harvard-MIT Ethics and Governance of AI Initiative. “We want to signal places where someone has either been consistently spreading disinformation about AI or is opining about it so it impacts in a way that erases a lot of people doing really amazing work in this space.”
The newest part of the service includes an online browser extension that anyone can install in order to see assigned ratings for thought-leaders whenever their names pop up in search engines or websites. Those experts who demonstrate both technical knowledge about AI and responsible awareness of the technology implications may receive gold, silver, or bronze certification badges. On the other hand, individuals who frequently spread misinformation about AI can receive a “Do Not Recommend” badge.




Perspective.
Sandvine releases 2019 Global Internet Phenomena Report
The Global Internet Phenomena Report is the authoritative view on how applications are consuming the world's internet bandwidth.
Some highlights from this edition of the report include:
  • Video is over 60% of the total downstream volume of traffic on the internet.
  • Netflix is 12.60% of the total downstream volume of traffic across the entire internet and 11.44% of all internet traffic.
  • Google is 12% of overall internet traffic, driven by YouTube, search, and the Android ecosystem.
  • Gaming traffic and gaming-related bandwidth consumption is increasing as gaming downloads, Twitch streaming, and eSports go mainstream.
  • BitTorrent is over 27% of total upstream volume of traffic, and over 44% in EMEA alone.
  • Facebook applications make up over 15% of the total internet traffic in APAC.
  • The report includes spotlights on the traffic share leaders for video, social networking, messaging, audio streaming, and gaming.
These highlights and more will be shared in the full report, which is available now.




Perspective. For my geeks.
Rethinking software development in the AI era
Data is fast replacing code as the foundation of software development. Here’s how leading organizations anticipate processes and tools transforming as developers navigate this paradigm shift.
Today, applications are deterministic. They are built around loops and decision trees. If an application fails to work correctly, developers analyze the code and use debugging tools to track the flow of logic, then rewrite code in order to fix those bugs.
That's not how applications are developed when the systems are powered by AI and machine learning. Yes, some companies do sometimes write new code for the algorithms themselves, but most of the work is done elsewhere, as they pick standard algorithms from open source libraries or choose from the options available in their AI platforms.
These algorithms are then transformed into working systems by selecting the right training sets and telling the algorithms which data points — or features — are the most important and how much they should be weighed.
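The "weighting" step the article describes can be pictured as nothing more than a linear scorer whose coefficients say how much each feature matters. The feature names and weights below are invented for illustration; in a real pipeline they would be learned from the training set rather than hand-set:

```python
# Invented feature weights for a hypothetical lead-scoring model.
# A positive weight means the feature pushes the score up, negative down.
WEIGHTS = {
    "pages_viewed": 0.5,
    "days_since_visit": -0.3,
    "prior_purchases": 1.2,
}

def score(example):
    """Weighted sum of an example's features; unknown features contribute 0."""
    return sum(WEIGHTS.get(feature, 0.0) * value
               for feature, value in example.items())
```

Debugging such a system means inspecting data and weights, not stepping through branches, which is exactly the shift in developer workflow the article is pointing at.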




Potential tool?
Glide Now Lets You Publish App Templates
Glide is probably my favorite new tool of 2019. The free service lets you take a Google Sheet and quickly turn it into a mobile app. It can be used to create all kinds of apps including staff directories, study guides, scavenger hunts, and local tourism guides. My tutorial on how to use Glide can be seen here.
This week Glide introduced a new feature that lets you share your app as a template. This means that once you've created an app that you like you can share it and let others make a copy of it to modify for their own needs.




Resource list. (and I love lists)
Ten Free Tools for Creating Mind Maps and Flowcharts - Updated for 2019-20



Tuesday, September 10, 2019


Train, train, train – then expect failure?
Cybercriminals count on human interaction in 99% of attacks, research shows
Cybercrooks exploit human flaws in about 99% of their attacks, using social engineering across email, cloud applications and social media to gain a foothold in a targeted infrastructure, new research shows. Almost all cyber-attacks begin with luring employees into clicking on malicious content.
Cybercriminals target mainly people, rather than systems, to install malware, steal data or initiate fraudulent transactions, according to Proofpoint’s 2019 Human Factor report.




You can insure anything, but you have to define “anything” rather exactly.
On Cybersecurity Insurance
Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:
Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause.




Having done a bit of web scraping myself, I’m pleased to see formal vindication.
Appeals court rules web scraping doesn’t violate anti-hacking law
arstechnica: “Scraping a public website without the approval of the website’s owner isn’t a violation of the Computer Fraud and Abuse Act, an appeals court ruled on Monday. The ruling comes in a legal battle that pits Microsoft-owned LinkedIn against a small data-analytics company called hiQ Labs. HiQ scrapes data from the public profiles of LinkedIn users, then uses the data to help companies better understand their own workforces. After tolerating hiQ’s scraping activities for several years, LinkedIn sent the company a cease-and-desist letter in 2017 demanding that hiQ stop harvesting data from LinkedIn profiles. Among other things, LinkedIn argued that hiQ was violating the Computer Fraud and Abuse Act, America’s main anti-hacking law. This posed an existential threat to hiQ because the LinkedIn website is hiQ’s main source of data about clients’ employees. So hiQ sued LinkedIn, seeking not only a declaration that its scraping activities were not hacking but also an order banning LinkedIn from interfering. A trial court sided with hiQ in 2017. On Monday, the 9th Circuit Appeals Court agreed with the lower court, holding that the Computer Fraud and Abuse Act simply doesn’t apply to information that’s available to the general public…
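For context, the kind of scraping at issue is often as simple as fetching a public page and pulling structured bits out of the HTML. Here is a minimal stdlib sketch of the parsing half; the HTML in the example is a stand-in, not LinkedIn's actual markup:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return all anchor hrefs found in an HTML string, in document order."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

The 9th Circuit's point is that running code like this against pages anyone can load in a browser is not "access without authorization" under the CFAA.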


(Related)
Capital One Hack Prosecution Raises New and Old Questions about Adequacy of CFAA
While Congress has made periodic amendments, the CFAA is outdated and has failed to maintain pace with advances in technology. The antiquated provisions of the CFAA create challenges for prosecutors. For example, the prosecution of Sergey Aleynikov, a former high-frequency trader at Goldman Sachs, hit a snag when the trial court dismissed a CFAA charge—holding that Section 1030 does not criminalize actions taken by an employee who had permissible access to information that the employee subsequently misappropriates (“In short, unless an individual lacks authorization to access a computer system, or exceeds the authorization that has been granted, there can be no violation of § 1030(a)(2)(C).”). Similarly, in the so-called “cannibal cop” prosecution, the Second Circuit held that a person cannot be prosecuted under the CFAA when the person has approved access to information, yet accesses the information with an improper motive.




Can we still use biometrics for security? Stay tuned! Consent is not enough?
Swedish GDPR Fine Highlights Legal Challenges in Use of Biometrics
In late August 2019, the Swedish data protection regulator issued its first ever fine under the General Data Protection Regulation (GDPR). The fine was for 200,000 Swedish Krona, which is just over $20,700.
The action was brought against the Skelleftea municipality, where a local school had run a trial facial biometric recognition system to track 22 students for a period of three weeks. The school had obtained the consent of both the students and their parents, and the trial was intended to improve school administration. The trial was a success, and the school had planned to expand the trial before the regulator stepped in and blocked it.
The regulator's decision was that the consent obtained did not satisfy GDPR consent requirements. According to the European Data Protection Board's commentary on the incident, "consent was not a valid legal basis given the clear imbalance between the data subject [the students] and the controller [the school]." The wider question for business and security is whether this same 'imbalance' also exists between employee and employer.
It appears that it does, making the required use of biometrics (which is defined as personal data, in fact, a 'special category' of personal data) for purposes of authentication and access potentially problematic throughout Europe. This would also apply to the European offices of American companies.


(Related) “I hate guns!” – Lizzie Borden
Madison Carter reports:
The Lockport City School District began classes last week — without its long discussed AEGIS facial recognition technology in place.
The State Department of Education told the district to hold off on installing the system while more questions were answered about its use and scope.
Superintendent Michelle Bradley told our 7 Eyewitness News I-Team that as of right now, the system is set to be implemented tracking only guns, not faces at all.
Read more on WKBW.




“Oh wow, you’re going to rat me out? I better get my spin version out there fast!”
Facebook warns about iPhone privacy change that could unsettle Facebook users
Less than two weeks before a likely iOS software update that will give iPhone users regular pop-ups telling them which apps are collecting location information in the background, Facebook has published a blog post about how the Facebook app uses location data.
The blog post appears to be a way to get out in front of software changes made by Apple and Google that could unsettle Facebook users given the company’s poor reputation for privacy.




Can you be Buddhist if you have no navel to contemplate?
Robot priests can bless you, advise you, and even perform your funeral
For now, Mindar is not AI-powered. It just recites the same preprogrammed sermon about the Heart Sutra over and over. But the robot’s creators say they plan to give it machine-learning capabilities that’ll enable it to tailor feedback to worshippers’ specific spiritual and ethical problems.
“This robot will never die; it will just keep updating itself and evolving,” said Tensho Goto, the temple’s chief steward. “With AI, we hope it will grow in wisdom to help people overcome even the most difficult troubles. It’s changing Buddhism.”




I could see using this technology to find parts for all my old appliances.
Syte snaps up $21.5M for its smartphone-based visual search engine for e-commerce
Visual search has become a key component for how people discover products when buying online: If a person doesn’t know the exact name of what they want, or what they want is not available, it can be an indispensable tool for connecting them with things they might want to buy.
Syte’s approach is notable in how it engages shoppers in the process of the search. Users can snap pictures of items that they like the look of, which can then be used on a retailer’s site to find compatible lookalikes. Retailers, meanwhile, can quickly integrate Syte’s technology into their own platforms by way of an API.




Geek tools. At some point these could be mandatory.
AI-powered code review now available for Visual Studio Code
DeepCode is bringing its AI-powered code review capabilities to Visual Studio Code. The company announced an open-source extension that will enable developers to use DeepCode to detect bugs and issues in Visual Studio Code.
DeepCode is designed to alert users about critical vulnerabilities and avoid bugs going into production. It uses a machine learning bot to continuously learn from bugs and issues, and determine the intent of code. The bot is currently free to enterprise teams of up to 30 developers.




Maybe “other people” means students?