Friday, November 25, 2022

Interpretability. Perhaps a detailed list of the factors that lead the AI to its conclusion would help?

https://www.bespacific.com/glass-box-artificial-intelligence-in-criminal-justice/

Glass Box Artificial Intelligence in Criminal Justice

Garrett, Brandon L. and Rudin, Cynthia, Glass Box Artificial Intelligence in Criminal Justice (November 14, 2022). Available at SSRN: https://ssrn.com/abstract=4275661 or http://dx.doi.org/10.2139/ssrn.4275661 As we embrace data-driven technologies across a wide range of human activities, policymakers and researchers increasingly sound alarms regarding the dangers posed by “black box” uses of artificial intelligence (AI) to society, democracy, and individual rights. Such models are either too complex for people to understand [Can you design something so poorly that you don’t understand what you did? Bob] or they are designed so that their functioning is inaccessible. This lack of transparency can have harmful consequences for the people affected. One central area of concern has been the criminal justice system, in which life, liberty, and public safety can be at stake. Judges have struggled with government claims that AI, such as that used in DNA mixture interpretation, risk assessments, facial recognition, and predictive policing, should remain a black box that is not disclosed to the defense and in court. Both the champions and critics of AI have argued we face a central trade-off: black box AI sacrifices interpretability for predictive accuracy. We write to counter this black box myth. We describe a body of computer science research showing “glass box” AI that is interpretable can be more accurate. Indeed, criminal justice data is notoriously error prone, and unless AI is interpretable, those errors can have grave hidden consequences. Our intervention has implications for constitutional criminal procedure rights. Judges have been reluctant to impair perceived effectiveness of black box AI by insisting on the disclosures defendants should be constitutionally entitled to receive. Given the criminal procedure rights and public safety interests at stake, it is especially important that people can understand AI.
More fundamentally, we argue that there is no necessary tradeoff between the benefits of AI and the vindication of constitutional rights. Indeed, glass box AI can better accomplish both fairness and public safety goals.





Reading like I’m retired…

https://www.bespacific.com/lets-be-honest-every-year-is-a-good-one-for-books/

Let’s be honest, every year is a good one for books

100 Notable Books of 2022 Chosen by the staff of The New York Times Book Review Nov. 22, 2022. Sort by: Fiction/Poetry, Nonfiction, Memoir, History or Science.

See also Washington Post – The 10 best books of 2022. By Washington Post Editors and Reviewers





Perspective.

https://www.bespacific.com/iapp-ey-annual-privacy-governance-report-2022/

IAPP-EY Annual Privacy Governance Report 2022

Published: November 2022 View Executive Summary (PDF) View Full Report (Members-Only)

This report is meant to serve as a point-in-time “check-in” for the privacy profession. What does the average privacy office look like in 2022? We asked our global membership to complete the 29-question governance survey. Over the course of 10 weeks, more than 700 responded from more than 40 countries. This year’s research focused on five key foundational areas of governance:

  • Governance and operating model: The organizational structures, roles and responsibilities for managing the collection, use, retention, disclosure and disposal of personal data.

  • Privacy strategy and planning: The activities undertaken by the privacy office to determine the strategic direction of the privacy office and its associated planning activities.

  • Compensation management: The annual process of determining the compensation of privacy office staff.

  • Budget management: The processes and activities supporting the development, approval and spending of annual privacy budgets.

  • Performance metrics and monitoring: The processes and measurements to understand how the organization is performing against privacy strategy.



Thursday, November 24, 2022

Is the trivial impact of the cyber attacks part of the reason we haven’t declared cyber war?

https://thenextweb.com/news/why-russia-cyber-army-has-struggled-to-impact-ukraine-war

The Ukraine conflict is exposing the limits of cyber warfare — and Russian hackers





Human in the loop? What’s wrong? Should be the same rules as for individual officers, right?

https://missionlocal.org/2022/11/killer-robots-to-be-permitted-under-sfpd-draft-policy/

SFPD authorized to kill suspects using robots in draft policy

A policy proposal heading for Board of Supervisors approval next week would explicitly authorize San Francisco police to kill suspects using robots.

The new policy, which defines how the SFPD is allowed to use its military-style weapons, was put together by the police department. Over the past several weeks, it has been scrutinized by supervisors Aaron Peskin, Rafael Mandelman and Connie Chan, who together comprise the Board of Supervisors Rules Committee.

The draft policy faces criticism from advocates for its language on robot force, as well as for excluding hundreds of assault rifles from its inventory of military-style weapons and for not including personnel costs in the price of its weapons.





Not exclusive, more about which areas to emphasize.

https://www.cio.com/article/412908/7-enterprise-data-strategy-trends.html

7 enterprise data strategy trends

Every enterprise needs a data strategy that clearly defines the technologies, processes, people, and rules needed to safely and securely manage its information assets and practices.

As with just about everything in IT, a data strategy must evolve over time to keep pace with evolving technologies, customers, markets, business needs and practices, regulations, and a virtually endless number of other priorities.





Something for AI architects to consider. (If I write an architectural design system, could I claim IP ownership of any design it produces?)

https://www.archdaily.com/992140/copyrights-for-architectural-imagery-in-the-ai-era

Copyrights for Architectural Imagery in the AI Era

Architecture is a referential discipline. From ziggurats and machines for living to contemporary biophilic high-rise designs, it is impossible to know whether ideas are genuinely novel or whether they have been conceptualized before. Artificial intelligence has ignited the conversation on intellectual property (IP) even more. As millions generate unique graphic work by typing keywords, controversies have arisen, specifically concerning protecting creative work and the copyright of architects in their creations. Therefore, understanding the scope of what is protected helps determine whether licenses are sufficient, whether trademark registration's long road is worth it, or whether a graphic piece cannot be protected and belongs to the public domain.





Perspective. Could these images help students remember concepts?

https://dailynous.com/2022/11/23/ai-images-of-philosophers/

AI Images of Philosophers & Philosophy (guest post)

Simone Nota, a philosophy PhD student at Trinity College Dublin, has been using AI image generators to create philosophy-related images.



Wednesday, November 23, 2022

Interesting. Consider this (or similar) technology used by stalkers to locate prey…

https://www.bespacific.com/third-party-data-brokers-give-police-warrantless-access-to-250-million-devices/

Third-party data brokers give police warrantless access to 250 million devices

Ars Technica: “…Functioning like Google Maps, Fog Reveal is marketed to police departments as a cheap way to harvest data from 250 million devices in the US. For several thousand dollars annually, the software lets police trace unique borders around large, customized regions to generate a list of devices in the area. Police can use Fog Reveal to geofence entire buildings or street blocks—like the area surrounding an abortion clinic—and get information on devices used within and surrounding those buildings to identify suspects. On top of identifying devices used in a targeted location, Fog Reveal also can be used to search by device and see everywhere that device has been used. That means cops could identify devices at a clinic and then follow them home to identify the person connected to that device. Or they could identify a device and follow it to an abortion clinic. The EFF discovered that Fog Reveal is already covertly used by police in various states, sometimes to conduct warrantless searches. Police demonstrating interest in the tool shows how all those smaller, less-scrutinized apps that sell user data to third parties could end up collectively contributing more data to local and state police investigations than is expected from even the biggest tech giants. In the “worst-case scenario,” Fog Reveal could become a go-to tool allowing police to track abortions in-state and across state lines, EFF policy analyst Matt Guariglia told Ars. Because unlike similar scenarios in which major tech companies like Meta or Google are served warrants compelling them to supply data to police investigating crimes, abortion data surveillance via Fog Reveal could seemingly be conducted without warrants and without any legal oversight. That invisibility could be a desirable feature as states prepare to strictly enforce laws across state lines that either shield or block access to abortion. 
No one can protest another state using the tool if it’s never named in court, and that, Guariglia told Ars, is often the case with Fog Reveal. As one Maryland-based sergeant wrote in a department email touting the benefit of “no court paperwork” before purchasing Fog Reveal—the tool’s “success lies in the secrecy.”
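For readers curious about the mechanics, the core of a geofence query like the one described above is just a point-in-polygon test run over a table of device location pings. The sketch below is purely illustrative: the device IDs, coordinates, and data layout are invented, and a real product like Fog Reveal queries commercial ad-tech location datasets at vastly larger scale.

```python
# Illustrative sketch of a geofence query: given a polygon drawn around a
# location, list the device IDs whose recorded pings fall inside it.
# All data and field names here are hypothetical.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside polygon (a list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count how many polygon edges a horizontal ray from the point crosses;
        # an odd count means the point is inside.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def devices_in_fence(pings, fence):
    """Return the set of device IDs with at least one ping inside the fence."""
    return {device for device, x, y in pings if point_in_polygon(x, y, fence)}

# Invented example: a unit-square fence and three device pings
fence = [(0, 0), (1, 0), (1, 1), (0, 1)]
pings = [("device-a", 0.5, 0.5), ("device-b", 2.0, 2.0), ("device-c", 0.1, 0.9)]
print(sorted(devices_in_fence(pings, fence)))  # ['device-a', 'device-c']
```

The same inside/outside test, run in reverse over one device's full ping history, is what turns a geofence tool into the "follow a device everywhere" capability the EFF describes.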





No harm, no foul? The potential cost of a breach just went way up! (May I suggest encrypting personal data?)

https://www.databreaches.net/third-circuit-finds-standing-for-victim-of-data-breach-citing-imminent-harm/

Third Circuit Finds Standing for Victim of Data Breach, Citing ‘Imminent Harm’

Harris Freier and Avi R. Jerushalmy write:

It comes as no surprise that cybersecurity is at the forefront of business owners’ minds across the globe. Corporate cyberattacks were at an all-time high last year, up 50% year over year. The Cybersecurity and Infrastructure Security Agency reported in February that it is aware of ransomware incidents against 14 of the 16 U.S. critical infrastructure sectors.
Ransomware attacks against notable American companies have made headlines, and the actions of these companies in response to those attacks have caused controversy. The stakes are high, as a ransomware attack will cost a company an average total of $4.54 million. The U.S. Court of Appeals for the Third Circuit recently issued an important ruling in the cyber data space. On Sept. 2, the court held that a plaintiff successfully established standing after hackers accessed personal information (PI) from her former employer and published it on the dark web, without requiring her to prove she suffered any actual harm. See Clemens v. ExecuPharm. This ruling makes it easier for victims of identity theft to sue employers, vendors, or any other company that is the victim of a cybersecurity breach even before—or even if they never—experience provable financial harm. The Third Circuit’s decision is in keeping with other jurisdictions that have focused on the exposure of personally identifiable information as the actual harm, rather than a subsequent harm such as identity theft.

Read more at Law.com.





Designed with espionage in mind.

https://www.politico.com/news/2022/11/23/drones-chinese-spy-threat-senate-00070591

Drones over D.C.: Senators alarmed over potential Chinese spy threat

Hundreds of Chinese-manufactured drones have been detected in restricted airspace over Washington, D.C., in recent months, a trend that national security agencies fear could become a new means for foreign espionage.

The recreational drones made by Chinese company DJI, which are designed with “geofencing” restrictions to keep them out of sensitive locations, are being manipulated by users with simple workarounds to fly over no-go zones around the nation’s capital.

Officials say they do not believe the swarms are directed by the Chinese government. Yet the violations by users mark a new turn in the proliferation of relatively cheap but increasingly sophisticated drones that can be used for recreation and commerce. They also come as Congress debates extending current federal authorities and adopting new ones to track the aerial vehicles as potential security threats.





This is not how you want to be remembered.

https://www.bespacific.com/elon-musks-hardcore-management-style-a-case-study-in-what-not-to-do/

Elon Musk’s ‘hardcore’ management style: a case study in what not to do

Via LLRX Elon Musk’s ‘hardcore’ management style: a case study in what not to do: Professor Libby Sander explains why as a case study in how to implement organisational change, Elon Musk’s actions at Twitter will go down as the gold standard in what not to do. Among other things, the evidence shows successful organisational change requires: a clear, compelling vision that is communicated effectively; employee participation; and fairness in the way change is implemented. Trust in leaders is also crucial. Change management never quite goes to plan. It’s hard to figure out whether Musk even has a plan at all.





Perspective. I wonder if my English teacher friends would agree?

https://www.makeuseof.com/pros-cons-ai-writing-tools/

The Pros and Cons of Using AI Writing Tools

AI writing tools are designed to improve your written content, but they cannot be used everywhere. Here are some pros and cons you should consider.



Tuesday, November 22, 2022

It is. But it’s how that data is (mis)used that is important.

https://www.pogowasright.org/is-everything-sensitive-data/

Is Everything Sensitive Data?

Odia Kagan of Fox Rothschild writes:

After the recent Court of Justice of the European Union decision on sensitive inferences that can be drawn from the name of your spouse, it is fair to ask: Is everything sensitive data (special category data)?
Katie Hewson of Stephenson Harwood and I were duly skeptical earlier this month at the 7th annual INPLP Annual Conference in Vienna, Austria.
But…
  • It IS a CJEU decision
  • The definition of sensitive information under the new US privacy laws is Article 9 GDPR, plus more
  • Sensitive inferences are a point of focus for the Federal Trade Commission and at issue in the new FTC Kochava case
  • Sensitive inferences take on a new significance in the wake of the Dobbs decision
  • Inferences are personal information and should be included in your response to CA access requests, along with, according to the California Attorney General, a detailed explanation of how the inferences were made (algorithmic transparency)…

Read more of this post at Privacy Compliance & Data Security





How much funding/training/support is required before the hackers become an instrument of the state? If they attack the US, are we at war?

https://www.cyberscoop.com/china-hacking-talent-xi-jinping-education-policies/

How Xi Jinping leveled-up China's hacking teams

From the early 2000s to 2015, China’s hacking teams caused havoc for private companies and U.S. and allied governments. In a series of high-profile breaches, they poached government databases, weapon system designs and corporate IP. From the breach of the Office of Personnel Management, to Marriott, to Equifax, to many, many others, the People’s Republic of China’s digital warriors demonstrated the full potential of digitally mediated espionage.

But if Chinese President Xi Jinping has his way, this litany of breaches represents only the beginning of China’s digital prowess.

A year after coming to power in 2013, Xi began to prioritize cybersecurity as a matter of government policy, focusing the bureaucracy, universities and the security services on purposefully cultivating talent and funding cybersecurity research. During his time in office, the Chinese state has systematized cybersecurity education, improved students’ access to hands-on practice, promoted hacking competitions, and collected vulnerabilities to be used in network operations against China’s adversaries.





A logical next step.

https://arstechnica.com/information-technology/2022/11/nvidias-magic3d-creates-3d-models-from-written-descriptions-thanks-to-ai/

3D for everyone? Nvidia’s Magic3D can generate 3D models from text



Monday, November 21, 2022

Including Colorado.

https://www.pogowasright.org/state-attorneys-general-ask-ftc-to-regulate-online-data-collection-practices/

State attorneys general ask FTC to regulate online data collection practices

Over on his excellent newsletter, Risky Biz News, Catalin Cimpanu reports:

A coalition of 33 state attorneys general have urged the US Federal Trade Commission to pass regulation around online data collection practices. AGs said they are “concerned about the alarming amount of sensitive consumer data that is amassed, manipulated, and monetized,” and that they regularly receive inquiries from consumers about how their data is being hoarded and abused. [Read the full letter here/PDF ]



(Related)

https://www.pogowasright.org/unfair-deceitful-commercial-surveillance-submission-to-the-u-s-federal-trade-commission/

Unfair & deceitful commercial surveillance: Submission to the U.S. Federal Trade Commission

See the full report for more.





We can, therefore we must?

https://www.pogowasright.org/ncla-files-class-action-against-massachusetts-for-auto-installing-covid-spyware-on-1-million-phones/

NCLA Files Class-Action Against Massachusetts for Auto-Installing Covid Spyware on 1 Million Phones

From the New Civil Liberties Alliance, information on a case that was filed on November 14:

Case Summary:
The Massachusetts Department of Public Health (DPH) worked with Google to auto-install spyware on the smartphones of more than one million Commonwealth residents, without their knowledge or consent, in a misguided effort to combat Covid-19. Such brazen disregard for civil liberties violates the United States and Massachusetts Constitutions and cannot stand. NCLA filed a class-action lawsuit, Wright v. Massachusetts Department of Public Health, et al., challenging DPH’s covert installation of a Covid tracing app that tracks and records the movement and personal contacts of Android mobile device users without owners’ permission or awareness.
Plaintiffs Robert Wright and Johnny Kula own and use Android mobile devices and live or work in Massachusetts. Since June 15, 2021, DPH has worked with Google to secretly install the app onto over one million Android mobile devices located in Massachusetts without obtaining any search warrants, in violation of the device owners’ constitutional and common-law rights to privacy and property. Plaintiffs have constitutionally protected liberty interests in not having their whereabouts and contacts surveilled, recorded, and broadcasted, and in preventing unauthorized and unconsented access to their personal smartphones by government agencies.

Case information:

Robert Wright and Johnny Kula v. Massachusetts Department of Public Health, a Massachusetts agency, and Margaret R. Cooke, Commissioner of the Massachusetts Department of Public Health, in her official capacity.

Complaint (pdf)





Don’t mess with a Privacy lawyer?

https://www.pogowasright.org/viewing-public-documents-is-not-a-crime-canadian-edition/

Viewing public documents is not a crime, Canadian edition

In today’s episode of “Let’s mitigate this data leak by violating the privacy of people who happened to view it,” we bring you the government of Nova Scotia and a privacy lawyer who didn’t appreciate them violating his privacy.

Canadian privacy lawyer David Fraser has a story to share with you. It’s a story about how the Nova Scotia government had an “oops” incident and then, in trying to address it, obtained his IP address from CanLII, where he had viewed the exposed files. CanLII did not request any warrant and just turned over their logs. Because Fraser and others had read some case decisions that had been accidentally uploaded in unredacted form, the government wanted assurances that they had not downloaded or saved any copies and would delete any copies they had saved. That concern was understandable, and when a WCAT employee called him after seeing Fraser’s name in a news story about the leak, Fraser immediately informed WCAT that he had not downloaded or retained any copies. And all was well. But it didn’t stay well.

At a later date, Fraser got a phone call from the government. They had managed to get his personal information, including his home phone number. How did they get it?

It turns out that after the government contacted CanLII and obtained the detailed logs for the exposed files, the government went to court to get orders to compel ISPs to produce information on the owners of those IP addresses. CanLII would later claim that they had no idea that the government intended to do that.

To make things worse, the court just granted the government the orders to the ISPs it requested. The court did not appoint any amicus to represent the privacy interests of those whose information was being sought. These were citizens who had not violated any laws by viewing publicly available files, and yet their privacy rights were ignored by the court, which cooperated with the government.

As a privacy lawyer and blogger, Fraser has laid this all out for us on YouTube and provided relevant filings.

Watch David explain the incident and violation of his privacy on YouTube.





Eventually, it will get ‘smarter.’

https://newatlas.com/military/lanius-ai-suicide-drone-bomb/

AI-driven combat drone can search buildings and execute suicide attacks

The Lanius is designed to travel in groups of three, sitting on top of a larger, mothership-style drone until they're deployed. Its maximum takeoff weight is 1.25 kg (2.76 lb), including a lethal or non-lethal payload up to 150 g (5.3 oz). A small hobby-style lithium battery gives it a maximum flight time of around seven minutes.

Alone or in a swarm, the Lanius is designed to enter an area and begin autonomously mapping it out using its AI capabilities and collision avoidance systems. It'll detect and label points of interest, as well as things like doors and windows, whether closed or open, and it'll enter buildings and search them with or without direction or direct control from a human pilot.

It's designed to detect humans, and attempt to classify them as friendly or hostile, combatant or non-combatant, armed or unarmed. When an armed threat is detected, it offers its human operator the ability to "engage" the target using whatever weaponry is on board. There's always a human in the loop; this thing will not attempt to kill anyone of its own accord.





Faulty logic?

https://www.forbes.com/sites/lanceeliot/2022/11/21/legal-personhood-for-ai-is-taking-a-sneaky-path-that-makes-ai-law-and-ai-ethics-very-nervous-indeed/?sh=1a291665f48a

Legal Personhood For AI Is Taking A Sneaky Path That Makes AI Law And AI Ethics Very Nervous Indeed

I’ve already covered many cornerstone elements of the AI and legal personhood conundrum, such as the detailed discussion at the link here. Please take a look at that coverage if you want further insider background on the weighty topic. Also, the legal personhood considerations about AI raise a slew of AI Ethics and AI Law questions, few of which are yet resolved, and you might find of interest my ongoing and extensive coverage of Ethical AI and AI Law at the link here and the link here, just to name a few.

AI is increasingly nearing the capacities of humans. If we deny legal personhood to AI, we are going to find ourselves embroiled in a heaping full of troubles. AI will want to have legal personhood. By having denied this or dragged our feet, the AI will be angry and upset at us. We are fostering an enemy that instead ought to be a friend.

Another perspective is that by ensuring that AI does have a semblance of legal personhood, we can hold AI accountable. You’ve probably been hearing or reading about AI that has gone astray. There is a lot of AI For Bad, perhaps growing as fast or faster than AI For Good. We want to ensure that there is Responsible AI, see my coverage at the link here. Some also refer to this as Accountable AI or Trustworthy AI, which I’ve examined at the link here. If you assign legal personhood to AI, it will apparently force AI into becoming liable for any dastardly actions that the AI emits. Thank goodness and we desperately need such relief and legal protection.





A big step, but not much of a business model.

https://www.therobotreport.com/waymo-to-begin-rider-only-robotaxi-rides-in-san-francisco/

Waymo to begin public robotaxi rides in San Francisco

The permit allows Waymo to give rides in its autonomous vehicles (AVs) without any driver in the vehicle, but it does not allow Waymo to charge for these rides. Waymo can give autonomous rides throughout San Francisco, as well as in portions of Daly City, Los Altos, Los Altos Hills, Mountain View, Palo Alto and Sunnyvale. The company’s AVs can operate on roads with posted speed limits of up to 65 miles an hour, at any time of day or night.





Perspective.

https://www.bespacific.com/effs-atlas-of-surveillance-database-now-documents-10000-police-tech-programs/

EFF’s Atlas of Surveillance Database Now Documents 10,000+ Police Tech Programs

EFF: “This week, EFF’s Atlas of Surveillance project hit a bittersweet milestone. With this project, we are creating a searchable and mappable repository of which law enforcement agencies in the U.S. use surveillance technologies such as body-worn cameras, drones, automated license plate readers, and face recognition. It’s one of the most ambitious projects we’ve ever attempted. Working with journalism students at the University of Nevada, Reno (UNR), our initial semester-long pilot in 2019 resulted in 250 data points, just from the counties along the U.S. border with Mexico. When we launched the first nationwide site in late summer 2020, we had reached just more than 5,000 data points. The Atlas of Surveillance has now hit 10,000 data points. It contains at least partial data on approximately 5,500 law enforcement agencies in all 50 states, as well as most territories and districts… However, this milestone sadly also reflects the massive growth of surveillance adoption by police agencies. High-tech spying is no longer limited to well-resourced urban areas; even the smallest hamlet’s police department might be deploying powerful technology that gathers data on its residents, regardless of whether those residents are connected to a criminal case. We’ve seen the number of partnerships between police and the home surveillance company Ring grow from 1,300 to more than 2,000. In the two years since we first published a complementary report on real-time crime centers — essentially police tech hubs, filled with wall-to-wall camera monitors and computers jacked into surveillance datasets — the number of such centers in the U.S. has grown from 80 to 100.”



Sunday, November 20, 2022

Implications in many fields…

https://spectrum.ieee.org/neurotech-workplace-innereye-emotiv

ARE YOU READY FOR WORKPLACE BRAIN SCANNING?

GET READY: NEUROTECHNOLOGY is coming to the workplace. Neural sensors are now reliable and affordable enough to support commercial pilot projects that extract productivity-enhancing data from workers’ brains. These projects aren’t confined to specialized workplaces; they’re also happening in offices, factories, farms, and airports. The companies and people behind these neurotech devices are certain that they will improve our lives. But there are serious questions about whether work should be organized around certain functions of the brain, rather than the person as a whole.

To be clear, the kind of neurotech that’s currently available is nowhere close to reading minds. Sensors detect electrical activity across different areas of the brain, and the patterns in that activity can be broadly correlated with different feelings or physiological responses, such as stress, focus, or a reaction to external stimuli. These data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier. Two of the most interesting innovators in this field are the Israel-based startup InnerEye, which aims to give workers superhuman abilities, and Emotiv, a Silicon Valley neurotech company that’s bringing a brain-tracking wearable to office workers, including those working remotely.





How is this different? Granted, the risks are greater since attacks across the internet are now possible...

https://breakingdefense.com/2022/11/dod-must-think-very-differently-about-armed-conflict-cyber-in-light-of-ukraine-war-official/

DoD must ‘think very differently’ about armed conflict, cyber in light of Ukraine war: Official

Still, Eoyang said DoD is now thinking about cyber operations in the context of armed conflict in four ways:

  1. Making sure government-to-government communications and networks are secure, shown in how DoD’s communications with Ukraine have helped enable its defense and intelligence sharing.

  2. The importance of secure communications within the military, like how Ukraine’s military has been able to share information with forward commanders.

  3. In the informational space, thinking about what it means for Ukrainian citizens to be able to communicate with the world and tell their stories through social media platforms like TikTok, Twitter and Facebook, which “has denied Russia the information environment that they want to prosecute this conflict.”

  4. The inherent value in ensuring “essential” government functions. “As you look at attempts to destroy the kind of essential data that makes a country a country…such as passport records, birth records, property records… What do governments need to be able to continue to operate its essential function?” Eoyang said.





As I read it, the system can identify a gun only when it sees a gun. That is, it does not identify people concealing weapons.

https://www.inquirer.com/news/septa-gun-detection-safety-shootings-20221118.html

SEPTA will roll out artificial-intelligence gun detection program on Market-Frankford, Broad Street lines

SEPTA plans to test an artificial-intelligence surveillance program that will detect guns within seconds of being brandished along the transit system’s sprawling network.





Absolute systems. There is absolutely no legal reason to do any of these things...

https://www.bbc.com/news/uk-england-cornwall-63682810

AI cameras catch 590 people without seatbelts in Devon and Cornwall

The cameras caught 590 people not wearing seatbelts and 40 people driving while using a mobile phone.





Tools & Techniques. Can you describe the Mona Lisa in 25 words or less?

https://arstechnica.com/information-technology/2022/11/stable-diffusion-in-your-pocket-draw-things-brings-ai-images-to-iphone/

Stable Diffusion in your pocket? “Draw Things” brings AI images to iPhone

It's not fast, but it's free—and it runs locally on pocket-sized hardware.

On Wednesday, a San Francisco-based developer named Liu Liu released Draw Things: AI Generation, a free app available in the App Store that lets iPhone owners run the popular Stable Diffusion AI image generator. Type in a description, and the app generates an image within several minutes. It's a notable step toward bringing image synthesis to a wider audience—with the added privacy of running it on your own hardware.