Dechert Cyber Bits – Issue 1 | Dechert LLP

We are delighted to introduce Dechert Cyber Bits, brought to you by members of our top-ranked, global Privacy & Cybersecurity practice. Cyber Bits is a bi-weekly publication that will provide a short summary of key developments from around the globe, along with practical takeaways designed to reduce risk for your organization.

We hope you will find Dechert Cyber Bits useful and informative. Of course, we will continue to provide more in-depth analysis of developments through Dechert’s periodic client alert, OnPoint. We are always available to discuss how we can help you achieve your goals in this fast-changing, highly sophisticated area.

Brenda Sharton and Karen Neuman
Co-Chairs, Privacy & Cybersecurity

EU/US Announce Plans to Work Together on Tech Trade Issues

On September 29, 2021, the EU/U.S. Trade and Technology Council (the TTC) met for the first time in Pittsburgh. The TTC was established to coordinate the EU and U.S. approaches to key global technology, economic and trade issues; and to deepen transatlantic trade and economic relations.

The TTC put together 10 working groups, each focusing on a specific topic. The Data Governance and Technology Platforms working group is tasked with exchanging information on the respective approaches to data governance and technology platform governance to ultimately adopt a more unified approach. In relation to AI technologies, the TTC’s announcement recognized the potential benefits of AI as well as the need to develop AI with respect for universal human rights and shared democratic values. The TTC intends to discuss measurement and evaluation tools for trustworthy AI, e.g., bias mitigation, and to explore cooperation on AI technologies to enhance privacy protection.

Takeaway: Many regulatory initiatives are pending in the EU and the U.S. on data and platform governance, data privacy and AI. The TTC may facilitate a better understanding of the different regimes and possibly more alignment of new laws. The TTC does not necessarily imply that the U.S. will have to eventually adopt the EU rules, but companies can anticipate that there will need to be common minimum standards on both sides of the Atlantic.

California Starts CPRA Rulemaking Process

The California Privacy Protection Agency, established with the November 2020 passage of the California Privacy Rights Act (CPRA) in order to implement and enforce the law, began the CPRA rulemaking process on September 22, 2021 by issuing a public invitation for preliminary comments on proposed regulations.

The invitation notes that the Agency is “particularly interested in comments on new and undecided issues not already covered by the existing CCPA regulations” and highlights several areas of focus:

  1. Business processing of personal information (“PI”) that presents a significant risk to consumers’ privacy or security;
  2. Automated decision-making;
  3. Audits to be performed by the Agency under the CPRA and the scope thereof;
  4. Consumers’ right to delete, correct and know their PI;
  5. Consumers’ rights to opt out of the selling or sharing of their PI, and to limit the use and disclosure of their sensitive PI;
  6. Information to be provided in response to a consumer’s request to know specific PI; and
  7. Definition of terms and categories of information or activities covered by the CPRA.

The deadline to submit comments is November 8, 2021.

Takeaway: Those that may be affected by the specific issues highlighted by the Agency or by the CPRA more generally should consider submitting comments on the proposed regulations. Companies, including those that have already taken steps to comply with the CCPA, should continue to carefully consider and plan for CPRA compliance and should stay informed as the scope and impact of the CPRA are formed through the Agency’s rulemaking process.

Bill Introduced in US Congress That Would Require Companies to Disclose Ransomware Payments

On October 5, 2021, Senator Elizabeth Warren (D-Mass.) and Representative Deborah Ross (D-NC) introduced the Ransom Disclosure Act. The Act would require ransomware attack victims to disclose certain information to the Department of Homeland Security within 48 hours of making a ransomware payment, including the type of cryptocurrency used for payment, the amount demanded, the amount paid and “any known information regarding the identity of the actor demanding such ransom.” The Act would also require DHS to publish information reported pursuant to the Act on a public website without identifying the reporting entity.

Senator Warren has stated that the Act would provide DHS with “critical data on ransomware payments in order to bolster our understanding of how cybercriminal enterprises operate and develop a fuller picture of the ransomware threat.”

Takeaway: Ransomware attacks were up 150% in 2020, and ransom payments were up some 300%. Those trends have only continued in 2021, with demands in the tens of millions of dollars. The Ransom Disclosure Act is the latest in a series of proposed rules and legislation introduced in response to these attacks. The Act is in the early stages of the legislative process, and it remains to be seen whether it will become law. Companies also should be vigilant in taking steps to reduce the risk of a ransomware attack, including employing multi-factor authentication and increased employee phishing training, among other things. See our article on ransomware in the Harvard Business Review.

U.S. Dept. of Justice Says It Will Go After Contractors That Fail to Report Cyberattacks

On October 6, 2021, Deputy Attorney General Lisa Monaco stated in remarks at a cybersecurity conference hosted by the Aspen Institute that the Department of Justice will require government contractors and recipients of U.S. government grants to disclose cyberattacks to the government, and is prepared to hold such entities accountable if they fail to comply. The DOJ’s primary enforcement mechanism will be the False Claims Act, which both permits the government to sue over misused federal funds and protects whistleblowers who report government contractor violations of cybersecurity standards. Deputy Attorney General Monaco emphasized in her remarks that the DOJ is “going to go after that behavior and extract very hefty fines.”

Deputy Attorney General Monaco’s remarks came the same day that she published an op-ed on CNBC calling for Congress to enact legislation “to create a national standard for reporting cyber incidents that pose significant risk, including ransomware and incidents that affect critical infrastructure and their supply chains.”

Takeaway: As highlighted elsewhere in this report, ransomware attacks have increased at a startling rate in the past two years. These statements by Deputy Attorney General Monaco are the latest in a string of statements, guidance and proposed legislation aimed at encouraging (or requiring) private sector companies to report cyberattacks to the government.

EU Parliament Votes to Ban Use of Facial Recognition

On October 6, 2021, the European Parliament called for a ban on law enforcement’s use of artificial intelligence for predictive policing in public spaces, and more broadly on facial recognition databases such as the controversial AI system created by U.S. company Clearview AI. The Parliament also said that predictive policing based on behavioral data should be outlawed, as should social scoring systems that seek to rate the trustworthiness of citizens based on their behavior or personality.

While the Parliament’s resolution is non-binding, this vote indicates its position and is expected to influence the discussions on the European Commission’s Artificial Intelligence Act, which proposes to restrict the use of remote biometric identification (including facial recognition technology) in public places unless it is to fight specific crimes, such as kidnappings and terrorism.

Takeaway: The Parliament’s resolution sends a strong signal to the European Commission about what the Parliament will accept in the negotiations between the EU institutions to finalize the details of the Artificial Intelligence Act. The final Act may significantly limit the use of facial recognition databases and social or behavioral data analysis tools in Europe.

With A Stroke of the Pen, California Genetic Privacy Act Becomes Law

Earlier this month, California Governor Gavin Newsom signed SB 41, the Genetic Information Privacy Act (“GIPA”), into law. GIPA imposes new notice and consent obligations on companies that offer direct-to-consumer genetic testing products and services. The law will take effect on January 1, 2022.

Companies must also implement and maintain reasonable security measures to protect consumers’ genetic data against unauthorized access, destruction, use, modification or disclosure.

Notably, GIPA does not apply to deidentified data, which it defines as “data that cannot be used to infer information about, or otherwise be linked to, a particular individual” or household. To rely on this exemption, companies must publicly commit to maintaining and using the information only in deidentified form, attempt to reidentify the information only in limited circumstances, and contractually bind recipients of the information to adhere to the same requirements.

GIPA provides for a private right of action for persons who have suffered injury-in-fact and have lost money or property because of a GIPA violation. Additionally, regulatory penalties of up to $10,000 per violation can be imposed, plus court costs, depending on whether the violation was a result of negligence or willful conduct.

Takeaway: The processing of consumer genetic data is regulated under a growing patchwork of laws, but compliance requirements will often overlap. For example, companies that are subject to applicable laws in the EU and UK should consider whether privacy protections implemented to comply with those regimes could also suffice for compliance with GIPA. Similarly, GIPA’s incentives to anonymize data may ultimately reduce the challenges of honoring CCPA (and GDPR) consumer rights requests. GIPA compliance will likely arise in diligence of companies whose businesses are driven by processing this data. Those seeking funding from third-party investors or anticipating an acquisition should be particularly aware of and attuned to the new set of obligations.

EDPB Launches Cookie Banner Task Force

The European Center for Digital Rights, also known as NOYB (None of Your Business), the privacy watchdog organization founded by privacy activist Max Schrems, launched a new campaign against the use of allegedly unlawful cookie banners.

NOYB sent nearly 600 draft complaints to companies across Europe, giving them one month to fix the alleged violations before it files formal complaints with the relevant regulators. The draft complaints target websites of large global companies or local websites with a wide audience. According to NOYB, those websites failed to provide their users with a clear option to decline cookies and thereby lured users into accepting them. In response, the European Data Protection Board (EDPB) decided to set up a task force to coordinate the answers to potentially hundreds of complaints.

Takeaway: The privacy watchdog wants websites not merely to comply strictly with the law but to adopt a best-practice approach: a design in which users can clearly and simply refuse cookies. Given the scope of the action and the EDPB’s response, enforcement on cookies is likely to become tougher in the coming months. Accordingly, companies should ensure that their cookie banners comply with the regulations governing cookies, particularly the GDPR and the ePrivacy Directive.

DeepMind Faces Class Action-Type Suit in UK for Obtaining Health Data Without Users’ Consent from The National Health Service

DeepMind, the British artificial intelligence lab that was acquired by Google, caught the public eye in 2017 when the UK data protection authority ruled that the National Health Service (NHS) had breached data protection regulations when it shared confidential medical records of approximately 1.6 million individuals with the company during the development phase of the company’s patient-monitoring application. The ruling did not directly target Google or DeepMind.

Recently, a class-action style claim was filed before the High Court on behalf of an NHS patient against Google and DeepMind. The claim will include all concerned patients unless some specifically request not to be part of it.

The action seeks to address public concerns about large-scale access to, and use of, private health data by technology companies, and to raise issues regarding the precise status and responsibilities of such companies.

Takeaway: This action sheds light on the multiple consequences that a violation of data protection laws may have. While the breach may be investigated by a data protection authority, individuals may also launch damages claims through class action litigation. This is not the first of these suits to be filed, and the UK may see more of these class-action type claims in the future.

UK Adopts 10-Year Strategy for Becoming One of the World’s Leaders in Developing Artificial Intelligence

A group of UK government agencies jointly published a National AI Strategy designed to make the country “a global AI superpower” within the next ten years. The Strategy was announced on the heels of the release of the EU’s proposed regulation on AI (See our Dechert OnPoints here and here). Bringing together a wide collection of initiatives, the Strategy is intended to build the resource and talent capacities necessary to achieve international leadership in the AI field, to ensure that these development efforts benefit all sectors of the British economy, and to structure a regulatory framework that satisfactorily addresses the liability and other legal issues raised by AI deployment while still encouraging innovation. Like the proposed EU Regulation on Artificial Intelligence, the Strategy highlights that trusted AI systems are essential to an innovation economy, including addressing bias, among other risks, in AI systems.

On the regulatory front, the Strategy acknowledges the “unique questions” raised by AI’s use, concluding that these questions are best addressed by regulations that are “sector-specific,” rather than “a blanket AI-specific” regulation. The Strategy’s regulatory initiatives also include reviews of current UK law governing AI decision-making by public actors, and efforts to work with other nations in developing international standards for the development, manufacture and use of the technology.

Takeaway: Data protection law and the accelerated deployment of AI across sectors cannot be seen as distinct developments. The UK is currently assessing its data protection regime through a public consultation. In relation to AI, the two initiatives are in key respects indissolubly linked. For example, current requirements involving automated decision-making could be updated to enable the achievement of the UK’s “AI superpower” role and economic objectives. Relatedly, the Strategy contemplates the potential availability of technology to enable the kind of data sharing required for AI, including sensitive information. The Strategy offers organizations a critical opportunity to shape the UK’s AI ecosystem through public consultations and opportunities for standard setting. At the same time, it will be crucial to be mindful of the European Commission’s parallel initiative in order to avoid a restrictive patchwork of AI regulation.

