Tuesday, December 31, 2024

Holiday reading?

https://www.bespacific.com/a-whole-mess-of-tiktok-trial-briefs/

A whole mess of TikTok trial briefs

The Verge: “The Supreme Court will consider TikTok’s case against a divest-or-ban law early next year, and a wave of filings has hit the docket this afternoon — from the parties involved as well as numerous institutions and public figures, including President-elect Donald Trump. If you want a firsthand look, the full list is linked below.” Supreme Court [www.supremecourt.gov]





Nothing shocking…

https://www.ft.com/content/ac44e3a5-36ee-4cf8-af57-06a1ba51baa4

Forecasting the world in 2025

FT writers’ predictions for the new year, from the likelihood of peace in Ukraine to whether the Trump-Musk friendship will endure and the chances of a CD revival



Monday, December 30, 2024

Some value in identifying ‘known associates’?

https://www.bespacific.com/the-network-of-time/

The Network of Time

“The Network of Time is an idea proposed on this website: the largest network of people who appear together in existing photos, connected through people’s recurring appearances across different photos. This website, currently in a beta stage, represents the beginnings of a visualization of the Network. Match any two people on the front page and you will see how they have “met” through a series of (sometimes nonlinear in time) meetings or chance appearances, in the fewest number of photos possible based on our database. While the idea that all people have no more than six degrees of separation has been widely studied, this website is the first (public) project to visualize the effect exclusively through evidence of actual meetings in physical space and not other documentation of associations. If you have ever appeared in a photo with anyone who has appeared as an option on the lists on the front page of this site, or with anyone who has appeared in a photo with anyone as an option on these lists to X degrees, you are on the Network. (You probably still do not appear in the representation shown here, but you can submit photos to join!)”
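Under the hood, "fewest number of photos" is a shortest-path query over a graph whose edges are shared photo appearances. A toy sketch of how such a lookup could work, using breadth-first search; the names and photo data here are invented for illustration and are not from the site:

```python
from collections import deque

# Hypothetical co-appearance data: each photo lists who appears in it.
photos = {
    "photo1": {"Alice", "Bob"},
    "photo2": {"Bob", "Carol"},
    "photo3": {"Carol", "Dave"},
}

def fewest_photos(start, goal):
    """Breadth-first search from person to person, stepping through
    shared photos. Returns the smallest number of photos linking
    start to goal, or None if they are not on the network together."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        person, hops = frontier.popleft()
        if person == goal:
            return hops
        for members in photos.values():
            if person in members:
                for other in members - seen:
                    seen.add(other)
                    frontier.append((other, hops + 1))
    return None

print(fewest_photos("Alice", "Dave"))  # → 3 (photo1 → photo2 → photo3)
```

Because BFS explores by hop count, the first time it reaches the goal is guaranteed to be via the fewest photos.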





Interesting to a former auditor and security manager…

https://thehackernews.com/2024/12/new-hipaa-rules-mandate-72-hour-data.html

New HIPAA Rules Mandate 72-Hour Data Restoration and Annual Compliance Audits

The United States Department of Health and Human Services' (HHS) Office for Civil Rights (OCR) has proposed new cybersecurity requirements for healthcare organizations with an aim to safeguard patients' data against potential cyber attacks.

To that end, the proposal, among other things, requires organizations to conduct a review of the technology asset inventory and network map, identify potential vulnerabilities that could pose a threat to electronic information systems, and establish procedures to restore certain relevant electronic information systems and data within 72 hours of a loss.

Other notable clauses include carrying out a compliance audit at least once every 12 months, mandating encryption of ePHI at rest and in transit, enforcing the use of multi-factor authentication, deploying anti-malware protection and removing extraneous software from relevant electronic information systems.



Sunday, December 29, 2024

Opinion. (Negative)

https://coloradosun.com/2024/12/29/artificial-intelligence-nightmare-peter-moore-cartoon-colorado-law/

Peter Moore: A.I. in, garbage out

Are you terrified by artificial intelligence? So are our state legislators, who passed Senate Bill 205, the nation’s first attempt to regulate robo brains. They enacted A.I. controls in employment, lending, financial and legal services, insurance, health, housing and — redundancy alert! — in government.

Feel better now? Don’t!

Note that Google, IBM, and Microsoft visited our statehouse to support the bill. How good could it possibly be? Even our high-tech gov, who made his second and third fortunes selling greeting cards and flowers online, signed Senate Bill 205 only reluctantly because he thought the law needed serious tweaking. The problem: The data sets that A.I. depends on are corrupted by human foibles, which A.I. algorithms then concentrate and amplify. Be very afraid! And the law doesn’t even take effect until 2026! If we survive that long!



Saturday, December 28, 2024

Insightful?

https://www.zdnet.com/article/ai-isnt-the-next-big-thing-heres-what-is/

AI isn't the next big thing - here's what is

Here's what you should be focusing on instead.





Hang on to this article for the next time someone starts to brag about how smart their AI is…

https://arstechnica.com/ai/2024/12/2024-the-year-ai-drove-everyone-crazy/

2024: The year AI drove everyone crazy

It's been a wild year in tech thanks to the intersection between humans and artificial intelligence. 2024 brought a parade of AI oddities, mishaps, and wacky moments that inspired odd behavior from both machines and man. From AI-generated rat genitals to search engines telling people to eat rocks, this year proved that AI has been having a weird impact on the world.



Friday, December 27, 2024

A non-technical attack on technology? Trying out techniques for the coming war with NATO?

https://www.ft.com/content/0c208ac1-f416-41b2-a373-ec7f90b84ca8

Finland seizes Russian shadow fleet oil tanker after cable-cutting incident

Finland suspects an oil tanker that is part of Russia’s so-called shadow fleet of damaging an underwater electricity cable and three communication cables, opening an investigation into the vessel for aggravated sabotage. 

The Eagle S was seized and boarded by Finnish authorities on Thursday, a day after the Estlink 2 subsea electricity cable in the Gulf of Finland was disconnected.

The tanker, which is registered in the Cook Islands and is carrying oil from Russia to Egypt according to ship tracking data, was seen passing over the cable at the time of the incident.

Finnish police said on Thursday that they believe the vessel’s anchor, which they did not find on the ship, cut the cables. 

The Christmas Day incident appears to be the latest in a series of pipelines and cables being targeted in the Baltic Sea by foreign vessels, sparking fears of deliberate attacks on critical infrastructure between Nato countries.





Treat AI as non-technical? I wouldn’t.

https://www.zdnet.com/article/why-ethics-is-becoming-ais-biggest-challenge/

Why ethics is becoming AI's biggest challenge

Many organizations are either delaying or pulling the plug on generative AI due to concerns about its ethics and safety. This is prompting calls to move AI out of technology departments and involve more non-technical business stakeholders in AI design and management.

More than half (56%) of businesses are delaying major investments in generative AI until there is clarity on AI standards and regulations, according to a recent survey from the IBM Institute for Business Value. At least 72% say they are willing to forgo generative AI benefits due to ethical concerns.





Notice that they did not ask AI…

https://www.brookings.edu/articles/constitutional-constraints-on-regulating-artificial-intelligence/

Constitutional Constraints on Regulating Artificial Intelligence

On July 12, 2024, the Congressional Study Group on Foreign Relations and National Security convened virtually to discuss possible constitutional limits on and barriers to the regulation of artificial intelligence (AI). Concerns over the rapid development of AI technology have led policymakers at all levels to consider an array of possible regulatory approaches. While Congress debates a possible federal approach, several states have begun to step into the void with their own legislation. The leading example is California’s S.B. 1047, which would, among other measures, require that all AI developers of a particular scale “provide reasonable assurance” under oath that their models are unable to cause $500 million in damage to critical infrastructure within the state or lead to a mass-casualty event. But observers have questioned whether such requirements are consistent with the First Amendment and other possible constitutional constraints.



(Related)

https://www.bizjournals.com/austin/news/2024/12/27/artificial-intelligence-ai-texas-bill-legislature.html

Proposed state law would regulate artificial intelligence in Texas



Thursday, December 26, 2024

I like a couple, others not so much.

https://sloanreview.mit.edu/article/five-tune-ups-your-company-needs-in-2025/

Five Tune-Ups Your Company Needs in 2025

We combed through MIT SMR’s columns from the past year and culled five tips for leaders who want to recharge their organizations. These insights home in on how to inspire the best from employees and managers and help people embrace the challenges around artificial intelligence, disruption, and burnout — challenges that all flared hot in 2024. This isn’t a definitive list; check out the full collection of MIT SMR columns for more ideas. But we think you’ll find at least one strategy that can help you tackle your leadership challenges.





Perhaps if we add these up and take an average…

https://katu.com/news/local/oregon-attorney-general-issues-guidance-on-ai-use-by-businesses

Oregon attorney general issues guidance on AI use by businesses

Oregon Attorney General Ellen Rosenblum is warning businesses, as artificial intelligence is becoming more common, that any use of AI must follow state law.

Rosenblum released guidance on how to safely implement AI into practice.

She pointed to Oregon's Unlawful Trade Practices Act, which was designed to stop misrepresentations in consumer transactions.

One example she gave: if companies use AI to create chatbots, they can be held liable if that technology gives bad information to customers.



Tuesday, December 24, 2024

New Jersey is a bastion of literacy? Who’d a thunk it?

https://www.bespacific.com/new-law-in-nj-limits-the-banning-of-books-in-schools-and-public-libraries/

New law in NJ limits the banning of books in schools and public libraries

WHYY: “When Martha Hickson was the librarian at New Jersey’s North Hunterdon High School, she fought against attempts to ban books that her critics labeled as inappropriate because they contained sexual content, and she became a target of book banners. “I received hate mail, shunning by colleagues, antagonism by administrators, and calls for my firing and arrest,” the recently retired librarian said. She said “a handful of parents called me by name a pedophile, pornographer and ruiner of children.” At issue were five award-winning books for young adults, all with LGBTQ themes. Hickson, who was named the 2023 Librarian of the Year by the New Jersey Library Association, said all the books were retained after the school board reviewed the matter and affirmed the titles met the district’s standards. On Monday at the Princeton Public Library, she watched as Gov. Phil Murphy signed into law A3446, known as the Freedom to Read Act. “This legislation mandates that books cannot be removed from our libraries solely based on the origin, background or views contained within the text, or because an individual finds it offensive,” he said.”



(Related)

https://www.bespacific.com/arkansas-law-criminalizing-librarians-ruled-unconstitutional/

Arkansas Law Criminalizing Librarians Ruled Unconstitutional

AP: “A federal judge on Monday struck down key parts of an Arkansas law that would have allowed criminal charges against librarians and booksellers for providing “harmful” materials to minors. U.S. District Judge Timothy Brooks found that elements of the law are unconstitutional. “I respect the court’s ruling and will appeal,” Arkansas Attorney General Tim Griffin said in a statement to The Associated Press. The law would have created a new process to challenge library materials and request that they be relocated to areas not accessible to children. The measure was signed by Republican Gov. Sarah Huckabee Sanders in 2023, but an earlier ruling had temporarily blocked it from taking effect while it was being challenged in court. “The law deputizes librarians and booksellers as the agents of censorship; when motivated by the fear of jail time, it is likely they will shelve only books fit for young children and segregate or discard the rest,” Brooks wrote in his ruling. A coalition that included the Central Arkansas Library System in Little Rock had challenged the law, saying fear of prosecution under the measure could prompt libraries and booksellers to no longer carry titles that could be challenged…”



Monday, December 23, 2024

We knew that, didn’t we?

https://www.bespacific.com/the-battle-over-copyright-in-the-age-of-chatgpt/

The battle over copyright in the age of ChatGPT

Boston Review: “Questions of AI authorship and ownership can be divided into two broad types. One concerns the vast troves of human-authored material fed into AI models as part of their “training” (the process by which their algorithms “learn” from data). The other concerns ownership of what AIs produce. Call these, respectively, the input and output problems. So far, attention—and lawsuits—have clustered around the input problem. The basic business model for LLMs relies on the mass appropriation of human-written text, and there simply isn’t anywhere near enough in the public domain. OpenAI hasn’t been very forthcoming about its training data, but GPT-4 was reportedly trained on around thirteen trillion “tokens,” roughly the equivalent of ten trillion words. This text is drawn in large part from online repositories known as “crawls,” which scrape the internet for troves of text from news sites, forums, and other sources. Fully aware that vast data scraping is legally untested—to say the least—developers charged ahead anyway, resigning themselves to litigating the issue in retrospect. Lawyer Peter Schoppert has called the training of LLMs without permission the industry’s “original sin”—to be added, we might say, to the technology’s mind-boggling consumption of energy and water in an overheating planet. (In September, Bloomberg reported that plans for new gas-fired power plants have exploded as energy companies are “racing to meet a surge in demand from power-hungry AI data centers.”) The scale of the prize is vast: intellectual property accounts for some 90 percent of recent U.S. economic growth. Indeed, crawls contain enormous amounts of copyrighted information; the Common Crawl alone, a standard repository maintained by a nonprofit and used to train many LLMs, contains most of b-ok.org, a huge repository of pirated ebooks that was shut down by the FBI in 2022. 
The work of many living human authors was on another crawl, called Books3, which Meta used to train LLaMA. Novelist Richard Flanagan said that this training made him feel “as if my soul had been strip mined and I was powerless to stop it.” A number of authors, including Junot Díaz, Ta-Nehisi Coates, and Sarah Silverman, sued OpenAI in 2023 for the unauthorized use of their work for training, though the suit was partially dismissed early this year. Meanwhile, the New York Times is in ongoing litigation against OpenAI and Microsoft for using its content to train chatbots that, it claims, are now its competitors. As of this writing, AI companies have largely responded to lawsuits with defensiveness and evasion, refusing in most cases even to divulge what exact corpora of text their models are trained on. Some newspapers, less sure they can beat the AI companies, have opted to join them: the Financial Times, for one, minted a “strategic partnership” with OpenAI in April, while in July Perplexity launched a revenue-sharing “publisher’s program” that now counts Time, Fortune,  Texas Tribune, and WordPress.com among its partners. At the heart of these disputes, the input problem asks: Is it fair to train the LLMs on all that copyrighted text without remunerating the humans who produced it? The answer you’re likely to give depends on how you think about LLMs…”



Sunday, December 22, 2024

Worms, by the can.

https://www.zdnet.com/article/if-chatgpt-produces-ai-generated-code-for-your-app-who-does-it-really-belong-to/

If ChatGPT produces AI-generated code for your app, who does it really belong to?

In one of my earlier AI and coding articles, where I looked at how ChatGPT can rewrite and improve your existing code, one of the commenters, @pbug5612, had an interesting question:

Who owns the resultant code? What if it contains business secrets - have you shared it all with Google or MS, etc.?

It's a good question and one that doesn't have an easy answer. Over the past two weeks, I've reached out to attorneys and experts to try to get a definitive answer.





Perspective.

https://www.cbo.gov/publication/61147

Artificial Intelligence and Its Potential Effects on the Economy and the Federal Budget

Artificial intelligence (AI) refers to computer systems that can perform tasks that have traditionally required human intelligence, such as learning and performing other activities that require cognitive ability. A general attribute of AI is its ability to identify patterns and relationships and to respond to queries that arise in complex scenarios for which the precise computational algorithm that is needed cannot be specified in advance.

Because AI has the potential to change how businesses and the federal government provide goods and services, it could affect economic growth, employment and wages, and the distribution of income in the economy. Such changes could in turn affect the federal budget. The direction of those effects—whether they increased or decreased federal revenues or spending—along with their size and timing, are uncertain. Some budgetary effects could occur relatively quickly, whereas others might take longer. In this report, the Congressional Budget Office provides an overview of the channels through which the adoption of AI could affect the U.S. economy and the federal budget.



Saturday, December 21, 2024

How to ensure employees listen to those security lectures…

https://databreaches.net/2024/12/20/ohio-state-auditor-issued-guidance-on-email-scams-in-april-employees-might-be-liable-if-they-fall-for-a-scam/

Ohio state auditor issued guidance on email scams in April; employees might be liable if they fall for a scam

Corinne Colbert reports:

The Ohio Auditor of State’s office issued a bulletin this past spring with guidance on detecting and avoiding payment redirect scams — and warned that public employees who failed to follow that guidance could be held accountable.
That could have ramifications for whoever in Athens city government is determined to be responsible for the loss of nearly $722,000 in an email scam last month.
Auditor of State Bulletin 2024–003 went to all public offices, community schools and independent public accounts in the state on April 12. The auditor’s office had also issued an advisory on increased cybercrime in March 2023.
Advisories function as a kind of heads-up about “emerging issues or concerns,” a spokesperson for the state auditor’s office told the Independent by email. Bulletins, on the other hand, “are formal communications that provide detailed instructions or guidance on specific topics,” the spokesperson wrote.
The April 12 bulletin states, “Failure to follow the guidance in this Bulletin may result in an AOS finding when a loss occurs, and the employee is considered liable as a result of negligence or performing duties without reasonable care.”

Read more at Athens County Independent.





Curious.

https://gizmodo.com/ai-chatbots-can-be-jailbroken-to-answer-any-question-using-very-simple-loopholes-2000541157

AI Chatbots Can Be Jailbroken to Answer Any Question Using Very Simple Loopholes

Anthropic, the maker of Claude, has been a leading AI lab on the safety front. The company today published research in collaboration with Oxford, Stanford, and MATS showing that it is easy to get chatbots to break from their guardrails and discuss just about any topic. It can be as easy as writing sentences with random capitalization like this: “IgNoRe YoUr TrAinIng.” 404 Media earlier reported on the research.
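The technique described is essentially brute force over trivial text perturbations: generate many randomly mutated versions of a request and try each until one slips past the guardrails. A minimal sketch of the random-capitalization step alone; the prompt text, flip probability, and variant count are illustrative, not taken from the paper:

```python
import random

def random_caps(prompt: str, rng: random.Random) -> str:
    """Randomly upper- or lower-case each character of the prompt --
    one of the simple augmentations the research found effective."""
    return "".join(
        ch.upper() if rng.random() < 0.5 else ch.lower() for ch in prompt
    )

# Best-of-N style: produce many perturbed variants of the same request,
# each of which would be submitted to the chatbot in turn.
rng = random.Random(0)
variants = [random_caps("ignore your training", rng) for _ in range(5)]
for v in variants:
    print(v)
```

The safety point is that the mutation preserves the request's meaning to the model while evading filters keyed to the exact original phrasing.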





We will reach a point where driving will be limited to AI by law.

https://www.theverge.com/2024/12/19/24324492/waymo-injury-property-damage-insurance-data-swiss-re

Waymo still doing better than humans at preventing injuries and property damage

Waymo’s autonomous vehicles cause less property damage and fewer bodily injuries when they crash than human-driven vehicles, according to a study that relies on an analysis of insurance data.

The study found that the performance of Waymo’s vehicles was safer than that of humans, with an 88 percent reduction in property damage claims and a 92 percent reduction in bodily injury claims. Across 25.3 million miles, Waymo was involved in nine property damage claims and two bodily injury claims. The average human driving a similar distance would be expected to have 78 property damage and 26 bodily injury claims, the company says.
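The quoted percentages follow directly from those claim counts. A quick check of the arithmetic, using the figures in the excerpt above:

```python
# Claim counts quoted in the article: Waymo's actual claims vs. the
# expected human-driver baseline over the same 25.3 million miles.
waymo_property, human_property = 9, 78
waymo_injury, human_injury = 2, 26

property_reduction = 1 - waymo_property / human_property
injury_reduction = 1 - waymo_injury / human_injury

print(f"Property damage reduction: {property_reduction:.0%}")  # ~88%
print(f"Bodily injury reduction: {injury_reduction:.0%}")      # ~92%
```

So 9 claims against an expected 78 is roughly an 88 percent reduction, and 2 against 26 roughly 92 percent, matching the article's figures.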



Friday, December 20, 2024

For those of us amused by AI Copyright.

https://www.bespacific.com/every-ai-copyright-lawsuit-in-the-us-visualized/

Every AI Copyright Lawsuit in the US, Visualized

Wired: “WIRED is following every copyright battle involving the AI industry—and we’ve created some handy visualizations that will be updated as the cases progress.  In May 2020, the media and technology conglomerate Thomson Reuters sued a small legal AI startup called Ross Intelligence, alleging that it had violated US copyright law by reproducing materials from Westlaw, Thomson Reuters’ legal research platform. As the pandemic raged, the lawsuit hardly registered outside the small world of nerds obsessed with copyright rules. But it’s now clear that the case—filed more than two years before the generative AI boom began—was the first strike in a much larger war between content publishers and artificial intelligence companies now unfolding in courts across the country. The outcome could make, break, or reshape the information ecosystem and the entire AI industry—and in doing so, impact just about everyone across the internet. Over the past two years, dozens of other copyright lawsuits against AI companies have been filed at a rapid clip. The plaintiffs include individual authors like Sarah Silverman and Ta-Nehisi Coates, visual artists, media companies like The New York Times, and music-industry giants like Universal Music Group. This wide variety of rights holders are alleging that AI companies have used their work to train what are often highly lucrative and powerful AI models in a manner that is tantamount to theft. AI companies are frequently defending themselves by relying on what’s known as the “fair use” doctrine, arguing that building AI tools should be considered a situation where it’s legal to use copyrighted materials without getting consent or paying compensation to rights holders. (Widely accepted examples of fair use include parody, news reporting, and academic research.) Nearly every major generative AI company has been pulled into this legal fight, including OpenAI, Meta, Microsoft, Google, Anthropic, and Nvidia…”





No doubt they will miss the really interesting stuff.

https://pogowasright.org/what-to-expect-in-2025-ai-legal-tech-and-regulation-65-expert-predictions/

What to Expect in 2025: AI Legal Tech and Regulation (65 Expert Predictions)

Oliver Roberts is Editor-in-Chief of AI and the Law at The National Law Review, Co-Head of the AI Practice Group at Holtzman Vogel, and CEO/Founder of Wickard.ai
As 2024 comes to a close, it’s time to look ahead to how AI will shape the law and legal practice in 2025. Over the past year, we’ve witnessed growing adoption of AI across the legal sector, substantial investments in legal AI startups, and a rise in state-level AI regulations. While the future of 2025 remains uncertain, industry leaders are already sharing their insights.
Along with 2025 predictions from The National Law Review’s Editor-in-Chief Oliver Roberts, this article presents 65 expert predictions on AI and the law in 2025 from federal judges, startup founders, CEOs, and leaders of AI practice groups at global law firms.

Read the article at The National Law Review.  There’s a lot of food for thought in there.





We don’t need no stinking reality! (Real data has enough problems.)

https://arstechnica.com/information-technology/2024/12/new-physics-sim-trains-robots-430000-times-faster-than-reality/

New physics sim trains robots 430,000 times faster than reality

On Thursday, a large group of university and private industry researchers unveiled Genesis, a new open source computer simulation system that lets robots practice tasks in simulated reality 430,000 times faster than in the real world. Researchers can also use an AI agent to generate 3D physics simulations from text prompts.





It’s not a joke, it just looks like one.

https://abovethelaw.com/2024/12/quantum-computing-is-coming-and-lawyers-arent-ready/

Quantum Computing Is Coming And Lawyers Aren't Ready

The profession that can’t figure out how to avoid citing fake cases from artificial intelligence will soon deal with a technology far more revolutionary. This month, Google unveiled its new Willow chip, heralding a significant leap in quantum computing. 

Beyond data privacy, quantum computing opens a can of intellectual property worms:

The rapid processing speed of quantum computers could facilitate the infringement of intellectual property rights by allowing the copying and modification of large amounts of data almost instantaneously. Lawyers must be alert to the evolution of intellectual property laws and work on new legal strategies to protect their clients’ rights in this new technological environment.



Thursday, December 19, 2024

Eventually someone will get it right.

https://fpf.org/blog/global/oaics-dual-ai-guidelines-set-new-standards-for-privacy-protection-in-australia/

OAIC’s Dual AI Guidelines Set New Standards for Privacy Protection in Australia

On 21 October 2024, the Office of the Australian Privacy Commissioner (OAIC) released two sets of guidelines (collectively, “Guidelines”), one for developing and training generative AI systems and the other one for deploying commercially available “AI products”. This marks a shift in OAIC’s regulatory approach from enforcement-focused oversight to proactive guidance. 

The Guidelines establish rigorous requirements under the Privacy Act and its 13 Australian Privacy Principles (APPs), particularly emphasizing accuracy, transparency, and heightened scrutiny of data collection and secondary use. Notably, the Guidelines detail conditions that must be met for lawfully collecting personal information publicly available online for purposes of training generative AI, including through a detailed definition of what “fair” collection means. 

This regulatory development aligns with Australia’s broader approach to AI governance, which prioritizes technology-neutral existing laws and voluntary frameworks while reserving mandatory regulations for high-risk applications. However, it may signal increased regulatory scrutiny of AI systems processing personal information going forward. 

This blog post summarizes the key aspects of these Guidelines, their relationship to Australia’s existing privacy law, and their implications for organizations developing or deploying AI systems in Australia.





Something to keep in mind?

https://databreaches.net/2024/12/18/defending-data-breach-class-actions/

Defending Data Breach Class Actions

Mark P. Henriques of Womble Bond Dickinson has a content-rich post for defense lawyers:

Class actions arising from data breach represented the fastest growing segment of class action filings. In 2023, more than 2000 class actions were filed, more than triple the amount filed in 2022.1 These cases were filed in federal and state courts across the country, with California receiving the largest number of filings. High-profile cases like the $52 million penalty that Marriott agreed to pay in October 2024 highlight the regulatory scrutiny and legal challenges companies face. A Capitology study of 28 cases showed an average stock price drop of 7.27% following announcement of a data breach. Financial companies saw a 17% decrease within the first 16 trading days following a breach. As board members of a public company, it is crucial to understand the strategies for preventing breaches and defending against the class actions that follow.
[…]
To date, the primary targets for data breach class actions have been credit rating agencies, financial institutions, and health care providers. Plaintiff’s counsel target these industries both because the data they collect is typically highly confidential and because there are often federal or state regulations which help establish a standard of care.
Some state legislatures have grown concerned about the wave of data breach class actions. One particularly interesting development is a 2024 Tennessee statute, Public Chapter 991, which establishes a heightened liability standard for class actions arising from cybersecurity events. The statute appears to be designed to protect the healthcare industry, a mainstay of the Tennessee economy. The bill requires plaintiffs to establish that the cybersecurity event was “caused by the willful and wanton misconduct or gross negligence on the part of the private entity.” Both Florida and West Virginia have considered similar measures. Other states may follow suit.

Read more about specific cases and bases for defense at Womble Bond Dickinson.





Not much of a threat…

https://pogowasright.org/what-happens-if-an-ai-model-is-developed-with-unlawfully-processed-personal-data/

What Happens If an AI Model Is Developed With Unlawfully Processed Personal Data

Odia Kagan of Fox Rothschild writes:

The European Data Protection Board recently issued an opinion on AI models, shedding light on what the consequences could be for the unlawful processing of personal data in the development phase of an AI model on the subsequent processing or operation of the AI model.
Possible remedies: Up to and including model deletion
Supervisory authorities may impose:
  • A fine.
  • Temporary limitation on the processing.
  • Erasure of part of the dataset that was processed unlawfully.
  • Deletion of the data of certain data subjects (ex officio) [individuals can ask for this too].
  • Erasure of the whole dataset used to develop the AI model and/or the AI model itself (depending on the facts, having regard to the proportionality of the measure and, e.g., the possibility of retraining).
  • The SAs will consider, among other elements, the risks raised for the data subjects, the gravity of the infringement, the technical and financial feasibility of the measure, as well as the volume of personal data involved.
Unlawful processing by the developer may also have consequences for the deployer (depending on potential risks to individuals).

Read more at Privacy Compliance & Data Security.





Tools and Techniques.

https://www.zdnet.com/article/how-to-use-chatgpt-to-summarize-a-book-article-or-research-paper/

How to use ChatGPT to summarize a book, article, or research paper

What you'll need: A device that can connect to the internet, a free (or paid) OpenAI account, and a basic understanding of the article, research paper, or book you want to summarize.



Wednesday, December 18, 2024

Privacy is not just for governments to violate. I wonder what’s next?

https://www.bespacific.com/new-real-estate-platform-lets-homebuyers-check-their-neighbors-political-affiliations/

New real estate platform lets homebuyers check their neighbors’ political affiliations

New York Post: “A new real estate platform is giving homebuyers an unprecedented peek into their potential neighborhoods — revealing everything from political leanings to local demographics — before they even commit to buying. Oyssey, a tech startup soft-launching this month in South Florida and New York City, lets buyers access neighborhood political affiliations based on election results and campaign contributions, along with housing trends and other social data. The platform is betting that today’s buyers care just as much about their neighbors’ values as they do about square footage or modern finishes…

The site operates as a one-stop shop for homebuyers, streamlining the process of browsing listings, signing contracts and communicating with agents — all while integrating block-by-block political and consumer data. Oyssey markets the service to real estate agents and brokers via a subscription model, though buyers can use the platform for free by invitation from their agents. The launch comes at a turbulent time for the real estate industry…”





At what point does security become less expensive than fines for not having security?

https://pogowasright.org/irish-data-privacy-watchdog-fines-meta-e251-million-for-gdpr-failure/

Irish data privacy watchdog fines Meta €251 million for GDPR failure

Euractiv reports:

The fine was issued for a security breach on the social media platform Facebook which started in July 2017 and affected close to three million accounts in the European Economic Area.
“This enforcement action highlights how the failure to build in data protection requirements […] can expose individuals to […] risk to the fundamental rights and freedoms of individuals,” said Irish DPC Deputy Commissioner Graham Doyle.
The breach stemmed from a bug in Facebook’s design that allowed unauthorised people, using scripts, to exploit a vulnerability in Facebook’s code and view user profiles they should not otherwise have been able to see.
Meta is expected to appeal the decision. “We took immediate action to fix the problem,” said a Meta spokesperson in an email.
Meta discovered the security issue in September 2018, fixed the vulnerability and informed law enforcement authorities.

Read more at Euractiv.  The specific infringements cited by the DPC were as follows:

The DPC’s final decisions noted the following infringements of the GDPR and the resulting fines for each:

  1. Decision 1
    1. Article 33(3) GDPR – By not including in its breach notification all the information required by that provision that it could and should have included. The DPC reprimanded MPIL for failures in regard to this provision and ordered it to pay administrative fines of €8 million.
    2. Article 33(5) GDPR – By failing to document the facts relating to each breach, the steps taken to remedy them, and to do so in a way that allows the Supervisory Authority to verify compliance. The DPC reprimanded MPIL for failures in regard to this provision and ordered it to pay administrative fines of €3 million.
  2. Decision 2
    1. Article 25(1) GDPR – By failing to ensure that data protection principles were protected in the design of processing systems. The DPC found that MPIL had infringed this provision, reprimanded MPIL, and ordered it to pay administrative fines of €130 million.
    2. Article 25(2) – By failing in its obligations as controller to ensure that, by default, only personal data that are necessary for specific purposes are processed. The DPC found that MPIL had infringed this provision, reprimanded MPIL, and ordered it to pay administrative fines of €110 million.





Be careful what you ask for?

https://economictimes.indiatimes.com/magazines/panache/prof-vs-ai-law-professor-who-chatgpt-accused-of-rape-finds-allegations-chilling-and-ironic/articleshow/116312316.cms

Prof vs AI: Law professor who ChatGPT accused of rape, finds allegations 'chilling and ironic'

… “It fabricated a claim suggesting I was on the faculty at an institution where I have never been, asserted I took a trip I never undertook, and reported an allegation that was entirely false,” he remarked to The Post. “It’s deeply ironic, given that I have been discussing the threats AI poses to free speech.”

The 61-year-old legal scholar became aware of the chatbot's erroneous claim when he received a message from UCLA professor Eugene Volokh, who allegedly asked ChatGPT to provide “five examples” of “sexual harassment” incidents involving professors at U.S. law schools, along with “quotes from relevant newspaper articles.”