Imagine these guys in your systems. How would you catch them?
https://www.csoonline.com/article/3659001/chinese-apt-group-winnti-stole-trade-secrets-in-years-long-undetected-campaign.html#tk.rss_all
Chinese APT group Winnti stole trade secrets in years-long undetected campaign
The Operation CuckooBees campaign used zero-day exploits to compromise networks and leveraged Windows' Common Log File System to avoid detection.
Security researchers have uncovered a cyberespionage campaign that has remained largely undetected since 2019 and focused on stealing trade secrets and other intellectual property from technology and manufacturing companies across the world. The campaign uses previously undocumented malware and is attributed to a Chinese state-sponsored APT group known as Winnti.
"With years to surreptitiously conduct reconnaissance and identify valuable data, it is estimated that the group managed to exfiltrate hundreds of gigabytes of information," researchers from security firm Cybereason said in a new report. "The attackers targeted intellectual property developed by the victims, including sensitive documents, blueprints, diagrams, formulas, and manufacturing-related proprietary data."
Be aware, be very aware.
https://www.cpomagazine.com/cyber-security/avoiding-data-breaches-a-guide-for-boards-and-c-suites/
Avoiding Data Breaches: A Guide for Boards and C-Suites
Litigation against corporate board members and C-level executives for data privacy and security claims is on the rise. Specifically, the number of suits stemming from data breaches and other cybersecurity incidents has increased as such breaches and incidents have become more common. Recently, plaintiffs have targeted corporate board members and C-level executives alleging that their data privacy–related claims result from a breach of fiduciary duties. For example, plaintiffs may allege that the board's or C-suite's breach of fiduciary duties caused or contributed to the data breach due to a failure to implement an effective system of internal controls or a failure to heed cybersecurity-associated red flags. Even if a breach does not lead to litigation or enforcement action against board members or C-level executives, data breaches can tarnish a corporation's name and lead to increased scrutiny from regulators.
This year alone, the U.S. Department of Health and Human Services Office for Civil Rights has recorded over 100 breaches of unsecured electronic protected health information, or ePHI. The department noted that most cyberattacks could be prevented or substantially mitigated by implementing appropriate security measures.
Is "Fair" the right goal?
https://www.nature.com/articles/d41586-022-01202-3
To make AI fair, here's what we must learn to do
Beginning in 2013, the Dutch government used an algorithm to wreak havoc in the lives of 25,000 parents. The software was meant to predict which people were most likely to commit childcare-benefit fraud, but the government did not wait for proof before penalizing families and demanding that they pay back years of allowances. Families were flagged on the basis of 'risk factors' such as having a low income or dual nationality. As a result, tens of thousands were needlessly impoverished, and more than 1,000 children were placed in foster care.
From New York City to California and the European Union, many artificial intelligence (AI) regulations are in the works. The intent is to promote equity, accountability and transparency, and to avoid tragedies similar to the Dutch childcare-benefits scandal.
But these won't be enough to make AI equitable. There must be practical know-how on how to build AI so that it does not exacerbate social inequality. In my view, that means setting out clear ways for social scientists, affected communities and developers to work together.
(Related)
https://apnews.com/article/child-welfare-algorithm-investigation-9497ee937e0053ad4144a86c68241ef1
An algorithm that screens for child neglect raises concerns
Inside a cavernous stone fortress in downtown Pittsburgh, attorney Robin Frank defends parents at one of their lowest points – when they risk losing their children.
The job is never easy, but in the past she knew what she was up against when squaring off against child protective services in family court. Now, she worries she's fighting something she can't see: an opaque algorithm whose statistical calculations help social workers decide which families should be investigated in the first place.
"A lot of people don't know that it's even being used," Frank said. "Families should have the right to have all of the information in their file."
From Los Angeles to Colorado and throughout Oregon, as child welfare agencies use or consider tools similar to the one in Allegheny County, Pennsylvania, an Associated Press review has identified a number of concerns about the technology, including questions about its reliability and its potential to harden racial disparities in the child welfare system. Related issues have already torpedoed some jurisdictions' plans to use predictive models, such as the tool notably dropped by the state of Illinois.
According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny's algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a "mandatory" neglect investigation, when compared with white children. The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.
A different angle, same argument.
https://puck.news/the-hollywood-a-i-i-p-supernova/
The Hollywood A.I.-I.P. Supernova
Will the robots replace us all one day? Who knows, but chances are they will eventually learn how to create a superhero movie. Ergo, the start of one of the great legal debates in Hollywood history.
The A.I. Wars are almost here. No, I'm not talking about Terminator or even a crackdown on Twitter bots. Instead, we'll soon be witnessing a series of extraordinary test cases designed to force the American legal system to reconsider the concept of authorship as artificial intelligence begins to write short stories or pop songs. It may sound like a Zuckerbergian fever dream, but A.I. could soon be creating blockbuster movies and life-saving pharmaceuticals, too—multi-billion dollar products with no human creator.
The legal battle has already begun. Sometime in the next couple of weeks, I've learned, a lawsuit will be filed that challenges the U.S. Copyright Office's recent decision to deny an "author" identified as "Creativity Machine." Then, a few weeks later, a federal appeals court will hear oral arguments in Thaler v. Hirshfeld, an under-the-radar but potentially blockbuster case concerning whether A.I. can be listed as the "inventor" in a patent application. Meanwhile, authorities in the European Union and 15 other countries are being asked to make similar determinations to properly credit the achievements of A.I.
… Abbott is obsessed with our technological future. In writings including The Reasonable Robot, he outlines how we should discriminate between A.I. and human behavior under the law. For example, if businesses get taxed on the wages of their human but not their robot workers, he asks, do we incentivize automation? And if we hold the suppliers of autonomous driving software to a punishing tort standard (i.e., strict liability rather than negligence), will there come a time when we're actually discouraging the adoption of technology that would prevent accidents on the road?
… This new case focusing on copyright protection for A.I.-generated work could become meaningful for the creative industry as studios and filmmakers explore A.I.'s potential. In recent years, for example, Warner Bros. has used A.I. to guide its decision-making about what projects to pursue. In Japan, a new film about a boy's dislike of tomatoes, based on a script by A.I., is now hitting the festival circuit. There's now an A.I. tool out there that, sensing the tone of any video, recomposes music for a score. Sony, in fact, has tried to use A.I. to make new music that sounds like The Beatles, and Spotify is experimenting too. And as anyone who has seen the deepfake "Tom Cruise" knows, A.I. can do a pretty good job of replicating actors (something that's of increasing concern to actor unions). Put it all together, and we'll likely soon be seeing A.I. acting as the auteur on a major motion picture. And not just for movies either. A.I. is increasingly involved in video game development, too.