Saturday, December 14, 2019


Not sure if this is a really big emergency or if they need to declare an emergency to free up funds and other resources. Something tripped an alarm at 5 AM, but nothing was identified until 11 AM? Stay tuned.
New Orleans Declares State Of Emergency Following Cyber Attack
The City of New Orleans has suffered a cybersecurity attack serious enough for Mayor LaToya Cantrell to declare a state of emergency.
The attack started at 5 a.m. CST on Friday, December 13, according to the City of New Orleans’ emergency preparedness campaign, NOLA Ready, managed by the Office of Homeland Security and Emergency Preparedness. NOLA Ready tweeted that "suspicious activity was detected on the City’s network," and as investigations progressed, "activity indicating a cybersecurity incident was detected around 11 am." As a precautionary measure, the NOLA tweet confirmed, the City’s IT department gave the order for all employees to power down computers and disconnect from Wi-Fi. All City servers were also powered down, and employees were told to unplug any of their devices.
During a press conference, Mayor Cantrell confirmed that this was a ransomware attack. A declaration of a state of emergency was filed with the Civil District Court in connection with the incident.
It's not known what ransomware was used in the attack, and Mayor Cantrell has said that no ransom demand has been made so far.




It’s always something.
Multi-Cloud Security Is the New #1 IT Challenge for Businesses
Most businesses now have an IT infrastructure that makes use of multiple cloud services providers. A new study from Business Performance Innovation (BPI) Network finds that multi-cloud security has become the biggest immediate IT challenge for businesses, as the authorization and authentication handoffs between these different services provide ample opportunity for things to go wrong.
The mass movement of businesses to a multi-cloud model can be traced to several factors: a desire not to be locked into a single vendor’s products, a lack of necessary tools from any one vendor (or that vendor not offering those tools at a competitive price), and network improvements such as lower latency and reduced downtime.
There is, however, a widespread but mistaken belief that a multi-cloud setup is inherently more secure. It can be, but only if sensitive data is exclusively stored on and accessed from a private part of the cloud that is properly monitored and managed by IT staff. What tends to happen in reality is that these disparate cloud components prove difficult to integrate and to train company personnel on. This leads to all sorts of mishaps, from misconfigured storage buckets being breached to vendors being given access to far more sensitive data than they require.
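As one concrete illustration of the misconfigured-bucket problem, here is a minimal sketch (my own example, assuming AWS is one of the providers in use and that boto3 credentials are already configured) that flags S3 buckets whose ACLs grant access to everyone. A real multi-cloud audit would repeat the same idea against each provider's own API.

import boto3

# Flag S3 buckets whose ACL grants access to all users (a common breach vector).
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        if grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS:
            print(f"{name}: publicly accessible via {grant['Permission']} grant")

The point is that a few lines of automation per provider catch exactly the routine mistakes that multi-cloud sprawl makes easy to miss.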




For my Security students.
CCPA FAQ
I am pleased to announce my new CCPA FAQ that covers all the key details of the California Consumer Privacy Act.
With the CCPA effective date looming in just over two weeks, many people have a lot of questions about what the Act requires and how they should prepare to comply.
I also have a number of other CCPA resources including a whiteboard that distills the requirements of the law into one page and a training guide that discusses the CCPA’s training requirements and makes recommendations for how organizations can meet these requirements.




There must be another way, but does its size or culture make it unavailable to India?
India shuts down internet once again, this time in Assam and Meghalaya
The shutdown of the internet in Assam and Meghalaya, home to more than 32 million people, is the latest example of a worrying worldwide trend employed by various governments: preventing people from communicating on the web and accessing information.
And India, the world’s second largest internet market with more than 650 million connected users, continues to exercise this measure more than any other nation.




For every Yin there is a Yang. (Making your lawyers work for a change?)
The AI Transparency Paradox
In recent years, academics and practitioners alike have called for greater transparency into the inner workings of artificial intelligence models, and for many good reasons. Transparency can help mitigate issues of fairness, discrimination, and trust — all of which have received increased attention. Apple’s new credit card business has been accused of sexist lending models, for example, while Amazon scrapped an AI tool for hiring after discovering it discriminated against women.
At the same time, however, it is becoming clear that disclosures about AI pose their own risks: Explanations can be hacked, releasing additional information may make AI more vulnerable to attacks, and disclosures can make companies more susceptible to lawsuits or regulatory action.
Last is the importance of engaging with lawyers as early and as often as possible when creating and deploying AI. Involving legal departments can facilitate an open and legally privileged environment, allowing companies to thoroughly probe their models for every vulnerability imaginable without creating additional liabilities.
Indeed, this is exactly why lawyers operate under legal privilege, which gives the information they gather a protected status, incentivizing clients to fully understand their risks rather than to hide any potential wrongdoings. In cybersecurity, for example, lawyers have become so involved that it’s common for legal departments to manage risk assessments and even incident-response activities after a breach. The same approach should apply to AI.


(Related) Even more work for lawyers.
Facebook The Plaintiff: Why The Company Is Suddenly Suing So Many Bad Actors
When Facebook caught the New Zealand–based company Social Media Series Limited selling likes from fake users on Instagram, the tech giant did something out of character. It sued.
The lawsuit, filed in April, was a departure from Facebook’s previously less confrontational approach to those it caught abusing its platform. When people and companies ran afoul of its policies, Facebook would slap them with bans and cease-and-desist letters but rarely took them to court. But in a turbulent moment for the company, with antitrust investigations mounting and US presidential candidates seeking to break it up, the social media giant is attempting to demonstrate it’s serious about cleaning up its act. And that means sending a message via the courts.




Perspective. Another Amazon monopoly?
Watch out, UPS. Morgan Stanley estimates Amazon is already delivering half of its packages
Amazon is already delivering about half of its own packages in the U.S., according to a Morgan Stanley estimate on Thursday, and will soon pass both United Parcel Service and FedEx in total volume.
Amazon Logistics is the e-commerce giant’s in-house logistics operation. Morgan Stanley said Amazon Logistics “more than doubled its share” of U.S. package volumes from about 20% a year ago and is now shipping at a rate of 2.5 billion packages per year. For comparison, Morgan Stanley estimates UPS and FedEx have U.S. shipping volumes of 4.7 billion and 3 billion packages per year, respectively.
“We see more of this going forward as our new bottom-up US package model assumes Amazon Logistics US packages grow at a 68% [compound annual growth rate from 2018 to 2022],” Morgan Stanley said.
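To put the "will soon pass" claim in rough numbers: a 68% compound annual growth rate means multiplying each year's volume by 1.68. A back-of-the-envelope sketch follows; my assumption (not Morgan Stanley's stated model) is that the 2.5 billion figure is the 2019 run rate.

# Rough projection of Amazon Logistics US package volume at a 68% CAGR.
volume = 2.5e9                      # assumed 2019 run rate, packages/year
for year in range(2020, 2023):
    volume *= 1.68
    print(f"{year}: ~{volume / 1e9:.1f}B packages")
# 2020: ~4.2B, 2021: ~7.1B, 2022: ~11.9B

Even one year of growth at that rate would put Amazon past FedEx's estimated 3 billion packages per year and close to UPS's estimated 4.7 billion.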



Friday, December 13, 2019


Perhaps we need another definition? (Would 100 per week change the UK’s mind? How about 100,000?)
The Application of International Law to Cyberspace: Sovereignty and Non-intervention
The term “cyber attack” sounds dramatic, invoking images of war. Many commentators have talked about how the law on the use of force and the law of armed conflict apply to cyber attacks. But the reality is that cyber incursions by one State into another State’s territory are both more frequent and less dramatic than attacks that rise to the level of a use of force. The United Kingdom estimates that it is on the receiving end of an average of ten cyber attacks a week, most by State-sponsored hackers. These low level, persistent attacks do not constitute a use of force nor reach the level of intensity required to trigger an armed conflict. They will often leave no physical trace. But they can cause significant economic and political damage in the victim State. And they can violate other rules of international law, namely the principle of sovereignty, and/or the prohibition on intervention in another State’s affairs.




A start on Best Practices. A metric for failures.
5 Steps to Securing Your Enterprise Mobile App




Better lawyers or naive management?
Facebook Won’t Change Web Tracking in Response to California Privacy Law
Facebook Inc. has told advertisers it doesn’t need to make changes to its web-tracking services to comply with California’s new consumer-privacy law, setting up a potential early clash over how the closely watched law will be enforced once it goes into effect.
Facebook is one of several companies in the $130 billion U.S. digital-ad industry that maintains that routine data transfers about consumers may not fit the law’s definition of “selling” data. Other major competitors, including Alphabet Inc.’s Google, have introduced new tools to comply with the law’s mandate to stop collecting data if a user opts out.




Worth a deep read.
EFF on the Mechanics of Corporate Surveillance
EFF has published a comprehensible and very readable "deep dive" into the technologies of corporate surveillance, both on the Internet and off. Well worth reading and sharing.
Boing Boing post.




A book list for Privacy wonks. I picked a couple…
Notable Privacy and Security Books 2019
Here are some notable books on privacy and security from 2019. To see a more comprehensive list of nonfiction works about privacy and security, Professor Paul Schwartz and I maintain a resource page on Nonfiction Privacy + Security Books.




For my students. And me. Streaming FREE.
‘The Age Of A.I.’: Robert Downey Jr. Hosts YouTube Documentary Series – Watch The Trailer
Hey Alexa, how is artificial intelligence reshaping our world? Robert Downey Jr. will explain in The Age of A.I., a new documentary series from YouTube originals that premieres December 18. Check out the first trailer above and key art below.
The eight-episode series takes a deep dive into the fascinating world of the most transformational technology in the history of humankind, per YouTube’s logline.




My AI says I find this article troubling.
Emotion-detecting tech should be restricted by law - AI Now
A leading research centre has called for new laws to restrict the use of emotion-detecting tech.
The AI Now Institute says the field is "built on markedly shaky foundations".
Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices.
It wants such software to be banned from use in important decisions that affect people's lives and/or determine their access to opportunities.
AI Now refers to the technology by its formal name, affect recognition, in its annual report.
"It claims to read, if you will, our inner-emotional states by interpreting the micro-expressions on our face, the tone of our voice or even the way that we walk," explained co-founder Prof Kate Crawford.
Prof Crawford suggested that part of the problem was that some firms were basing their software on the work of Paul Ekman, a psychologist who proposed in the 1960s that there were only six basic emotions expressed via facial expressions.
But, she added, subsequent studies had demonstrated there was far greater variability, both in terms of the number of emotional states and the way that people expressed them.




Does this have any direct parallel in Standard Oil or other busted trusts?
FTC Weighs Seeking Injunction Against Facebook Over How Its Apps Interact
If it materializes, the action by the Federal Trade Commission would focus on Facebook’s policies concerning how it integrates its apps or allows them to work with potential rivals, these people said. Alongside its core social network, Facebook’s key products also include Instagram, Messenger and WhatsApp.
The potential FTC action would likely seek to block Facebook from enforcing those policies on grounds that they are anticompetitive, the people said. An injunction could seek to bar Facebook from further integrating apps that federal regulators might look to unwind as part of a potential future breakup of the company, one of the people said.



Thursday, December 12, 2019


Hacking the national treasury. Is it all North Korea?
https://www.cpomagazine.com/cyber-security/swift-fraud-on-the-rise-according-to-eastnets-survey-report/
SWIFT Fraud On the Rise According to EastNets Survey Report
According to a new report (“How Banks Are Combating the Rise in SWIFT Cyber Fraud”) from EastNets, the problem of SWIFT fraud may be more widespread and dangerous than originally thought. In the aftermath of the epic $81 million SWIFT fraud attack on Bangladesh Bank in 2016, the SWIFT interbank messaging platform immediately put new safeguards in place in order to neutralize risk. However, EastNets surveyed 200 banks worldwide and found that 4 in 5 of these banks had experienced at least one SWIFT fraud attempt since 2016, and the problem appears to be growing on an annual basis.






Hacking a “home security” device?
https://thenextweb.com/hardfork/2019/12/12/cryptocurrency-extortionists-bitcoin-ring-doorbell-cameras-ransom/
Amazon Ring owners foil $400K Bitcoin extortion plot by removing batteries
Tania Amador, a 28-year-old who lives in Grand Prairie just outside Dallas, gave a video to local news which reportedly showed that her Ring security system had been hacked by cryptocurrency-hungry scammers who demanded 50 Bitcoin ($400,000).
… “I was asleep and our Ring alarm was going off like an intruder had entered our home,” Amador told WFAA. “Then we heard a voice coming from our camera.”
The voice reportedly said “Ring support! Ring Support! We would like to notify you that your account has been terminated by a hacker.”
The unscrupulous scammers then demanded a 50 Bitcoin ($400,000) payment and threatened that Amador would be terminated herself if she didn’t comply.
“Pay this 50 Bitcoin ransom or you will get terminated yourself,” they said.
If this wasn’t scary enough, the hackers also managed to gain control of her Ring doorbell to make it appear that they were outside her home.
Ring has been facing a slew of privacy concerns after numerous reports that its products have been hacked by bad actors.
A quick Google demonstrates this isn’t an isolated issue. A recent Motherboard report found that there is software available specifically designed to hack Ring cameras which sells for as little as $6.
The home security company told WFAA and Amador that the hacks were a result of a third-party data breach in which Ring account details were exposed. This was not a result of Ring’s security being breached or compromised, it said.
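Attacks like this typically begin with credentials reused from an earlier breach, which is exactly the explanation Ring gave. One standard user-side check, sketched below purely as an illustration (this is not a Ring feature), is to test a password against the Pwned Passwords range API, which uses k-anonymity so that only the first five characters of the password's SHA-1 hash ever leave your machine.

import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora,
    via the Pwned Passwords k-anonymity range API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(breach_count("password123"))   # nonzero means: never reuse this anywhere

A nonzero count means the password is already circulating and should not be protecting a camera account; enabling two-factor authentication closes the same hole.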






New roles and responsibilities.
https://www.defensenews.com/congress/2019/12/11/how-congress-wants-to-help-sync-military-cyber/
How Congress wants to help sync military cyber
The government’s annual defense policy bill, if signed into law by President Donald Trump, will create several new cyber positions within the military.
The fiscal year 2020 National Defense Authorization Act outlines the roles the Department of Defense must fill — at the Pentagon and within the services.
The first position is a senior military advisor for cyber policy — who will also serve as the deputy principal cyber adviser and be at least a two-star general — within the Office of the Under Secretary of Defense for Policy.






Conclusions are obvious? Maybe the FBI should not get backdoors?
https://www.computerworld.com/article/3489718/government-encryption-busting-powers-should-be-curbed-study-says.html
Government encryption-busting powers should be curbed, study says
A new study funded by the University of Waikato and the New Zealand Law Foundation’s Information Law and Policy Project (ILAPP) has called for additional safeguards to curb the powers of government to order users and companies to decrypt encrypted data and devices.
According to principal investigator Dr Michael Dizon, the problem with these powers is that there are no express standards and guidelines with respect to how they are carried out, especially in relation to human rights.
“Forcing suspects to disclose their passwords may infringe their right against self-incrimination. Requiring a company to create backdoors or vulnerabilities in encryption to allow the police access to a suspect’s data may jeopardise the privacy and security of all its other clients,” he said.
“While providers have a responsibility to assist the police in search or surveillance operations if it is within their existing technical capabilities, such assistance should not involve any act that would undermine the information security of their products and services or compromise the privacy of their clients as a whole.”
The report is entitled A matter of security, privacy and trust: A study of the principles and values of encryption in New Zealand.



(Related)
https://www.vice.com/en_us/article/pkeeay/apple-dmca-take-down-tweet-containing-an-iphone-encryption-key
Apple Used the DMCA to Take Down a Tweet Containing an iPhone Encryption Key
Security researchers are accusing Apple of abusing the Digital Millennium Copyright Act (DMCA) to take down a viral tweet and several Reddit posts that discuss techniques and tools to hack iPhones.
On Sunday, a security researcher who focuses on iOS and goes by the name Siguza posted a tweet containing what appears to be an encryption key that could be used to reverse engineer the Secure Enclave Processor, the part of the iPhone that handles data encryption and stores other sensitive data.
Two days later, a law firm that has worked for Apple in the past sent a DMCA Takedown Notice to Twitter, asking for the tweet to be removed. The company complied, and the tweet became unavailable until today, when it reappeared. In a tweet, Siguza said that the DMCA claim was “retracted.”
iPhone security researchers and jailbreakers see these actions as Apple trying to clamp down on the jailbreaking community. Some in the community have questioned whether an encryption key, or posts linking to jailbreaking tools, are subject to copyright at all.






Because not every user understands.
https://www.bespacific.com/why-every-website-wants-you-to-accept-its-cookies/
Why every website wants you to accept its cookies
Vox/Recode: “…cookies are pieces of information saved about you when you’re online, and they track you as you browse. So say you go to a weather website and put in your zip code to look up what’s happening in your area; the next time you visit the same site, it will remember your zip code because of cookies. There are first-party cookies that are placed by the site you visit, and then there are third-party cookies, such as those placed by advertisers to see what you’re interested in and in turn serve you ads — even when you leave the original site you visited. (This is how ads follow you around the internet.) The rise of alerts about cookies is the result of a confluence of events, mainly out of the EU. But in the bigger picture, these alerts underscore an ongoing debate over digital privacy, including whether asking users to opt in or opt out of data collection is better, and the question of who should own data and be responsible for protecting it…”
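For readers who want to see the mechanism rather than the policy debate, here is a small sketch using Python's standard library; the zip-code example mirrors the excerpt above, and the attribute choices are mine. A first-party cookie is just a Set-Cookie header with attributes controlling how long it lives and where it may be sent; a third-party cookie is the same mechanism set by a different domain whose content is embedded in the page.

from http.cookies import SimpleCookie

# Build the Set-Cookie header a weather site might send after you enter a zip code.
cookie = SimpleCookie()
cookie["zip_code"] = "70112"
cookie["zip_code"]["max-age"] = 60 * 60 * 24 * 365   # remember it for a year
cookie["zip_code"]["secure"] = True                   # only send over HTTPS
cookie["zip_code"]["httponly"] = True                 # hide from page scripts
cookie["zip_code"]["samesite"] = "Lax"                # limit cross-site sending (Python 3.8+)

print(cookie["zip_code"].output())
# e.g. Set-Cookie: zip_code=70112; Max-Age=31536000; Secure; HttpOnly; SameSite=Lax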






Push back on surveillance.
New Hampshire Bill Would Limit Warrantless Drone Surveillance
Mike Maharrey writes:
A bill prefiled in the New Hampshire House would restrict the warrantless and weaponized use of drones by law enforcement. The legislation would not only establish important privacy protections at the state level; it would also help thwart the federal surveillance state.
A coalition of four Republicans filed House Bill 1580 (HB1580) for introduction in the 2020 legislative session. The legislation would generally prohibit government use of drones for surveillance.
Read more on Tenth Amendment Center.






An article on AI Architecture.
https://thenextweb.com/syndication/2019/12/10/10-predictions-for-data-science-and-ai-in-2020/
This is what the AI industry will look like in 2020
As we come to the end of 2019, we reflect on a year that began with 100 machine learning papers already being published each day and that ends with what looks to be a record-breaking funding year for AI.
To paraphrase Eric Beinhocker from the Institute for New Economic Thinking, there are physical technologies that evolve at the pace of science, and social technologies that evolve at the pace at which humans can change — much slower.
Executive understanding of data science and AI becomes more important
The realization is dawning that the bottleneck to data science value may not be the technical aspects of data science or AI (gasp!), but the maturity of the actual consumers of data science.
While some technology companies and large corporations have a head start, there is a growing awareness that in-house training programs are often the best way to develop internal maturity.



(Related)
https://www.theverge.com/2019/12/12/21010671/ai-index-report-2019-machine-learning-artificial-intelligence-data-progress
AI R&D is booming, but general intelligence is still out of reach
Trying to get a handle on the progress of artificial intelligence is a daunting task, even for those enmeshed in the AI community. But the latest edition of the AI Index report — an annual rundown of machine learning data points now in its third year — does a good job confirming what you probably already suspected: the AI world is booming in a range of metrics covering research, education, and technical achievements.
The AI Index covers a lot of ground — so much so that its creators, which include institutions like Harvard, Stanford, and OpenAI, have also released two new tools just to sift through the information they gathered. One tool is for searching AI research papers and the other is for investigating country-level data on research and investment.






An interesting question.
https://www.techrepublic.com/article/companies-need-an-ethicist-armed-with-a-moral-compass-to-build-trust-in-ai/
Companies need an ethicist armed with a moral compass to build trust in AI
At this point in the artificial intelligence transformation, it's easier to spot the mistakes than the successes.
When Apple and Goldman Sachs rolled out the Apple credit card, one high-profile tech founder and applicant described how the team clearly failed on the "explainability" requirement for AI efforts.
Co-founder & CTO of Basecamp David Heinemeier Hansson complained about the card's application process after he and his wife both applied for the card. Her credit limit was much lower than his, even though her credit score was better. When Heinemeier Hansson tried to find out why, the first customer service agent literally had no answer:
"The first person was like "I don't know why, but I swear we're not discriminating, it's just the algorithm."
The second customer service agent highlighted the explainability fail:
"Second rep went on about how she couldn't actually access the real reasoning (again IT'S JUST THE ALGORITHM is implied)."
How can Apple and Goldman Sachs prove the credit review process is fair if no one has any clue how it works?
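One partial answer, at least for simple models: linear scoring models can be decomposed into per-feature contributions that a service agent could actually read back to an applicant. The sketch below is purely illustrative (synthetic data, made-up feature names, and emphatically not the Apple/Goldman model); it only shows that "reason codes" are not technically difficult when the model is simple enough.

import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["credit_score", "income", "utilization", "account_age"]
rng = np.random.default_rng(0)

# Synthetic, illustrative training data: 1 = approve a higher limit.
X = rng.normal(size=(500, 4))
y = (X @ np.array([2.0, 1.0, -1.5, 0.5]) + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def reason_codes(applicant):
    """Per-feature contribution to one applicant's decision score,
    most damaging factors first."""
    contributions = model.coef_[0] * applicant
    order = np.argsort(contributions)
    return [(features[i], round(float(contributions[i]), 2)) for i in order]

applicant = np.array([-0.5, 0.2, 1.0, 0.1])   # hypothetical applicant
print("Approval probability:", round(float(model.predict_proba([applicant])[0, 1]), 2))
print("Factors hurting the decision most:", reason_codes(applicant)[:2])

The hard part with modern credit models is not printing such a list but guaranteeing it is faithful to a far more complex model, which is exactly the "explainability" gap the article describes.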






Written just for me.
https://www.business2community.com/marketing/the-dummies-guide-to-artificial-intelligence-for-marketing-02265922
The Dummies’ Guide to Artificial Intelligence for Marketing
Fact: AI is transforming business operations and increasingly becoming our interface with technology. At the same time, we’re a long way from it taking over our lives. As IBM software engineer Frederick P. Brooks, Jr. wrote, “There is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement within a decade in productivity, in reliability, in simplicity.”
What we are seeing, however, is a new generation of technology that is bringing greater insight and productivity to marketing and sales and a heightened experience for customers.
If you’re sitting on the fence, consider that Salesforce’s State of Marketing reports that marketers—your competition—are embracing AI-based applications and technologies.
Gartner confirms this, projecting that 30% of companies around the world will be using at least one AI-based sales application by 2020. And if you need more inspiration to act, Forrester Research estimates that data-driven insights will enable businesses to attract $1.2 trillion away from companies not yet using AI.
Predictive and Prescriptive Analytics: It’s one thing to have lots of data; it’s another to be able to process it and know what it’s telling you. While companies have had tools that help them decipher what people did, they’re only beginning to use that same data to predict what customers will do. And now, with AI-based analytics, they have the potential to act on the predictions and find the best course of action to achieve the desired outcome.




Wednesday, December 11, 2019


Try not to become collateral damage.
Covert Military Information Operations and the New NDAA: The Law of the Gray Zone Evolves
In recent years, Congress has been building a domestic legal framework for gray zone competition (that is, the spectrum of unfriendly actions that states may undertake against one another, surreptitiously, that are below the threshold of actual hostilities yet more serious and disruptive than the ordinary jostling of international affairs) for military operations conducted in the cyber domain. That project has gone rather well, compared to most things Congress undertakes. Last year, it culminated in National Defense Authorization Act (NDAA) provisions that clarified CYBERCOM’s authority in this area while also ensuring a sound degree of oversight of the resulting activities. So far, so good. But the gray zone challenges that define our times of course are not limited to cyber operations as such.
Read on for an explanation of the nuts and bolts. Or, if you prefer, you can read the full text of the gigantic bill, or just the “joint explanatory statement” issued Dec. 9 after the House and Senate conferees reached agreement at last.


(Related)
The Year 2019 in Review: Same Threats, More Targets
In 2019, almost ten years after the discovery of Stuxnet, the United States fell victim to the first cyberattack that disrupted operations in the electrical grid. Cyberattacks on critical infrastructure are becoming increasingly dangerous, yet little has been done to address them. With the modernization of old systems and the introduction of IoT devices and smart city technology, adversaries have a growing list of potential targets to attack. In 2020, governments need to adopt concrete measures to address these threats.




For the Hacker toolkit.
A technical look at Phone Extraction




Hard to reconcile.
Iran says it foiled "very big" foreign cyber attack
Iran has foiled a major cyber attack on its infrastructure that was launched by a foreign government, the Iranian telecoms minister said on Wednesday, two months after reports of a U.S. cyber operation against the country.


(Related)
Iran Banks Burned, Then Customer Accounts Were Exposed Online
After demonstrators in Iran set fire to hundreds of bank branches last month in antigovernment protests, the authorities dealt with another less visible banking threat that is only now coming to fuller light: a security breach that exposed the information of millions of Iranian customer accounts.
As of Tuesday, details of 15 million bank debit cards in Iran had been published on social media in the aftermath of the protests, unnerving customers and forcing the government to acknowledge a problem. The exposure represented the most serious banking security breach in Iran, according to Iranian media and a law firm representing some of the victims.




Data for the asking. Not the best security technique.
Web-hosting firm 1&1 hit by almost €10 million GDPR fine over poor security at call centre
1&1 has been fined €9.55 million (US $10.6 million) by Germany’s Federal Commissioner for Data Protection and Freedom of Information (BfDI), after the telecoms company was found to have not taken sufficient measures in its call centre to prevent unauthorised parties from accessing customer data.
The BfDI says that it became aware that anyone could obtain extensive personal information on 1&1’s customers simply by calling the customer care department and giving a name and date of birth.
The BfDI ruled that 1&1 was, therefore, in violation of article 32 of the GDPR legislation, by failing to take appropriate technical and organisational measures to protect the handling of personal data.
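The underlying failure was caller authentication by name and date of birth alone, both of which are easy for an outsider to obtain. One common mitigation, sketched below purely as an illustration (not a description of what 1&1 actually deployed), is a short-lived one-time service PIN delivered to a contact channel already on file and required before any account details are discussed.

import hmac
import secrets

def issue_service_pin() -> str:
    """Generate a 6-digit one-time PIN to send to the customer's
    verified phone number or app before a support call proceeds."""
    return f"{secrets.randbelow(10**6):06d}"

def pin_matches(expected: str, presented: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, presented)

pin = issue_service_pin()          # delivered out of band, expires quickly
print(pin_matches(pin, "123456"))  # the call centre checks what the caller reads back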




Probably not the solutions we will choose.
How to avoid a dystopian future of facial recognition in law enforcement
Civil liberties activists warn that the powerful technology, which identifies people by matching a picture or video of a person’s face to databases of photos, can be used to passively spy on people without any reasonable suspicion or their consent. Many of these leaders don’t just want to regulate facial recognition tech — they want to ban or pause its use completely.
Republican and Democratic lawmakers, who so rarely agree on anything, have recently joined forces to attempt to limit law enforcement agencies’ ability to surveil Americans with this technology, citing concerns that the unchecked use of facial recognition could lead to the creation of an Orwellian surveillance state.
Several cities, such as San Francisco, Oakland, and Somerville, Massachusetts have banned police use of the technology in the past year. A new federal bill was introduced earlier this month that would severely restrict its use by federal law enforcement, requiring a court order to track people for longer than three days. And some senators have discussed a far-reaching bill that would completely halt government use of the technology.
But the reality is that this technology already exists — it’s used to unlock people’s iPhones, scan flight passengers’ faces instead of their tickets, screen people attending Taylor Swift concerts, and monitor crowds at events like Brazil’s famous Carnival festival in Rio de Janeiro.
Here are some of the leading ways that the US government is using facial recognition today, and where experts say there’s a need for more transparency, and for it to be more strongly regulated.




Some good and some bad.
What technology will courts be using in 5 years’ time?
National Center for State Courts – Court Technology Bulletin, December 5, 2019 – “We are pleased to share the following post from our friend, the Hon. Judge Andrea Tsalamandris from Melbourne, Australia, on how technology can be used by judges and court administration to create efficiencies in our courts and enhance access to justice. As a judge who was appointed to the County Court of Victoria (CCV) a few years before my 50th birthday, I was very pragmatic in embracing technology in my new role. I thought it was safe to presume that when I retired in twenty years’ time, I would not be working with paper court books or handwriting my signature on court orders. My initial interest in technology was simply to see how it could make my life as a judge easier. However, after attending an E-Courts Conference in the United States in 2018, my eyes were opened to the manner in which technology could be used within courts, to benefit court users, as well as judges and court staff. Shortly after attending that conference, I was asked to chair a newly created IT committee at the CCV, to guide the court in our digital transformation. My teenage children thought this was hysterical, as they did not consider me to be in any way “tech-savvy”; and that was indeed true. But I was willing to learn and was keen to see, in practical terms, how technology could assist all areas of our court, from registry, to the courtroom and in chambers. Whenever I talk to people about our plans for the future, I invariably pose the question – what will we be doing in 5 years’ time? Most of us accept that change is coming, and that it is probably coming more quickly than any of us expect. Having spoken with other judges and court IT managers in Australia, USA, UK and UAE, here is a list of where I think we are heading…”
[Good: 2. Paperless jury trials
Over the last 18 months, the Victorian Supreme Court has conducted a number of criminal trials electronically. In such cases, each juror has been given an iPad on which exhibits are uploaded throughout the course of the trial. Each juror is able to make their own notes and mark up the documents, just as the judge is doing on their own device.
[Bad: 3. PowerPoints for jury charges
In the CCV, some judges are beginning to use PowerPoints, both for opening remarks and for the charge.




Unfortunately, we may need these…
The Constitution Annotated—Impeachment Clauses
In Custodia Legis – “The Library of Congress has updated the Constitution Annotated essays pertaining to impeachment and incorporated them in the annotations to Article I, Article II, and Article III of the Constitution. In addition, the updated impeachment essays are consolidated in Resources about Impeachment. Additional information on impeachment is available on the website’s Beyond the Constitution Annotated: Table of Additional Resources under Resources.
The Library of Congress launched the Constitution Annotated on Constitution Day, September 17, 2019. The website provides online access to the “Constitution of the United States of America: Analysis and Interpretation,” which has served as Congress’s official record of the Constitution for over a century and explains in layman’s terms the Constitution’s origins, how the nation’s most important law was crafted and ratified, and how every provision in the Constitution has been interpreted. With advanced search tools and a modern, user-friendly interface, the new website makes the 3,000 pages of the Constitution Annotated fully searchable and accessible for the first time to online audiences—including Congress, legal scholars, law students, and anyone interested in U.S. constitutional law…”




Anything to get rid of my students.