A plaintiff filed a complaint against an online university, alleging claims under the Telephone Consumer Protection Act and the Illinois Consumer Fraud and Deceptive Business Practices Act (“ICFA”) relating to the defendant’s alleged repeated and unsolicited calls to the plaintiff’s cell phone.
The defendant, Ashford University, LLC, allegedly called plaintiff Melissa Nelson’s cell phone on at least 50 different occasions in an attempt to solicit her business. Nelson claimed that “her life and well-being were disrupted by the constant calls to her cell phone,” and that the repeated calls resulted in “emotional distress, mental anguish, invasion of privacy, increased anxiety, increased depression, general aggravation, increased usage of her cell service, and diminished data storage on her cell [phone].”
Ashford University filed a motion to dismiss Nelson’s ICFA claims. The ICFA requires a plaintiff to show they suffered “actual damage” due to a defendant’s violation of the Act, and demands that “‘[a]ctual damages’ must arise from ‘purely economic injuries.’” Ashford University argued that Nelson did not allege any actual damages in her complaint. The Court agreed.
With respect to the plaintiff’s claim of increased usage of her telephone service and diminished space for data storage, the Court pointed out that Nelson had not alleged that she suffered any monetary cost that would not otherwise have occurred, such as overage charges for telephone or data services. As to her claim of emotional distress, the ICFA provides that emotional distress damages are compensable only “when they are part of a total award that includes actual economic damages.” Because Nelson failed to allege any purely economic damages, her emotional distress claims likewise failed.
The Court therefore granted Ashford University’s motion to dismiss the ICFA claim. Nelson’s TCPA claim is still pending. Troutman Sanders will continue to monitor the developments in this case.
September 7, 2016
The Federal Trade Commission announced on August 29 that it is seeking public comment on its Standards for Safeguarding Customer Information, commonly known as the Safeguards Rule, as part of the FTC’s periodic retrospective review of its rules. The Safeguards Rule, effective May 23, 2003, was issued under the Gramm-Leach-Bliley Act and places certain requirements on financial institutions to safeguard their customer information. Financial institutions are those entities significantly engaged in activities that the Federal Reserve Board has determined to be financial in nature, such as lending or investing money, providing financial advice, and brokering, underwriting, or servicing loans. Financial activities do not include activities the FRB has determined to be merely incidental to financial activity, or activities determined to be financial in nature only after enactment of the GLBA – two issues the FTC has suggested may require reconsideration, as reflected in its request for comment on whether the Rule’s scope should be expanded.
The Safeguards Rule requires that a financial institution develop, implement, and maintain a comprehensive written information security program consisting of administrative, technical, and physical safeguards that the financial institution uses in all stages of the customer information lifecycle. In developing a written information security program, financial institutions must inventory customer information in their possession and identify any reasonably foreseeable internal and external risks which could compromise the security, confidentiality, or integrity of the information. Once the risks have been identified, a financial institution should then design and implement safeguards appropriate to the size and complexity of the financial institution, the nature and scope of its activities, and the sensitivity of the customer information at issue. Financial institutions are required to test and monitor the effectiveness of implemented safeguards and make adjustments as necessary to continuously combat developing threats.
The FTC has asked for comment, and corresponding evidence, on several general and specific issues posed by the Safeguards Rule as provided in the Federal Register Notice. Two issues raised by the FTC are particularly interesting. First is continued consideration of the impact the Safeguards Rule has on small businesses. Comments to the original Safeguards Rule suggested that it would impose burdens, financial and otherwise, on small businesses that potentially lack the expertise needed to develop, implement, and maintain the required safeguards – expertise that larger entities arguably have. The FTC addressed these comments in 2003 by taking a flexible approach with the final Safeguards Rule, allowing businesses to implement safeguards appropriate to the size and complexity of the business. It is clear from the questions posed in this periodic review that the FTC remains interested in how small businesses are coping with the requirements, from both financial and compliance perspectives.
The second issue is whether the Safeguards Rule should be modified to include more detailed requirements for information security programs. Specifically, the FTC asked about whether the rule should require information security programs to include a response plan in the event of a breach to the security, integrity, or confidentiality of customer information, and whether the rule should rely on other information security standards or frameworks, such as the National Institute of Standards and Technology’s Cybersecurity Framework or the Payment Card Industry Data Security Standards. The questions raised by the FTC account for the overall impact such prescriptive changes could have on the costs imposed on and benefits to consumers and businesses, including small businesses. It is a logical question to then ask, if the rule is modified to contain more detailed requirements, whether the FTC might consider including a safe harbor under the rule to balance out any increase in cost to comply – a question that remains unanswered for now.
The FTC will be accepting public comment until November 7.
Troutman Sanders’ cybersecurity, information governance, and privacy team monitors developments in various information security standards, and advises clients on compliance with such standards and how to address new and emerging threats.
August 31, 2016
OCR Settles With Illinois Nonprofit Medical Group for $5.55 Million in Medical Data Breach Investigation
The United States Department of Health and Human Services, Office for Civil Rights (“OCR”), has assessed a $5.55 million fine against an Illinois healthcare provider for alleged HIPAA data privacy violations. The settlement is the largest to date between the OCR and any single entity, and is one of several multi-million dollar settlements obtained by the OCR this year.
Advocate Health Care Network (“Advocate”), a nonprofit organization and the largest healthcare organization in Illinois, came under OCR scrutiny in 2013 after it submitted breach notification reports relating to three distinct data security incidents involving its subsidiary, Advocate Medical Group (“AMG”). According to the OCR, the three incidents affected the electronic protected health information (EPHI) of over four million individuals. Advocate first reported that four desktop computers containing the EPHI of approximately four million users were stolen from an administrative office building. In the second incident, Advocate notified HHS that the EPHI of approximately 2,000 patients had been potentially exposed to an unauthorized third party via an associated billing services provider. The third incident consisted of the theft of a laptop containing the unencrypted EPHI of approximately 2,000 individuals from an unlocked employee vehicle.
According to the OCR, the EPHI included individuals’ demographic information, clinical information, health insurance information, patient names, addresses, credit card numbers and their expiration dates, and dates of birth. As a result of its investigation, the OCR found, among other things, that Advocate had failed to conduct an accurate and thorough risk analysis of its facilities, IT equipment, applications, and its data systems handling EPHI, that it failed to limit physical access to certain electronic information systems, and that it failed to obtain an adequate assurance from its associated billing services provider regarding the safeguarding of EPHI.
As a “covered entity” under HIPAA, Advocate is subject to OCR regulation. Under the terms of the settlement, Advocate admits no liability, and in addition to the fine, Advocate has entered into a mandatory corrective action plan set forth by the OCR.
August 15, 2016
Reversing the findings of an Administrative Law Judge, the FTC has found that LabMD, Inc., a former provider of clinical laboratory testing services to physicians, violated Section 5 of the FTC Act by failing to maintain proper data security practices. The final order, issued on July 29, is notable in its position suggesting that the FTC has broad power to regulate even the extremely limited disclosure of personal medical information.
LabMD operated as a provider of laboratory testing services for physicians from 2001 to 2014. The company maintained sensitive patient samples and testing information. In 2013, the FTC issued a complaint against LabMD, which alleged that LabMD failed to provide reasonable and appropriate security for personal information stored on its computer network. The complaint was based on an alleged vulnerability identified in 2008 by a forensic analyst working for Tiversa, a data security company. While the Office for Civil Rights might be expected to take charge had the event happened today, the FTC asserted jurisdiction.
The Tiversa analyst allegedly located a copy of a LabMD insurance aging report via a peer-to-peer (P2P) application. The file, referred to in the opinion as the “1718” file, supposedly contained “1,718 pages of sensitive personal information for approximately 9,300 consumers, including their names, dates of birth, social security numbers, ‘CPT’ codes designating specific medical tests and procedures for lab tests conducted by LabMD, and, in some instances, health insurance company names, addresses, and policy numbers.” The forensic analyst alleged that he was also able to download other shared files from the same LabMD IP address. The 1718 file was allegedly exposed because a LabMD billing manager was given administrator rights and downloaded a P2P application to her computer. The billing manager had allowed the P2P application to share the entire contents of her “My Documents” folder with other users.
The ALJ held that under Section 5(n), LabMD’s computer data security practices had not been shown to have “caused” or have been “likely to cause” “substantial consumer injury” sufficient to invoke the FTC’s jurisdiction. In pertinent part, the ALJ found that the limited disclosure of the 1718 file to Tiversa (and to an affiliated academic researcher) did not constitute sufficient injury under Section 5(n). The ALJ also noted that Complaint Counsel relied on unsubstantiated evidence provided by Tiversa in bringing its original complaint.
In reversing the ALJ, the Commission determined that the ALJ improperly interpreted Section 5(n) of the FTC Act, and it disagreed with the ALJ’s findings. Specifically, the Commission found that LabMD’s unauthorized disclosure of the 1718 file itself caused substantial injury under Section 5(n), even though the 1718 file disclosure was limited to only Tiversa and one other researcher. The Commission noted that “substantial” consumer injury under Section 5(n) could include “an intangible but very real harm like a privacy harm resulting from the disclosure of sensitive health or medical information.” The mere disclosure of the 1718 file itself was therefore sufficient injury under Section 5(n).
Further, the Commission concluded that the disclosure of the 1718 file via a peer-to-peer file sharing application “was likely to cause substantial injury and that the disclosure of sensitive medical information did cause substantial injury” under Section 5(n). The opinion noted that physical or economic harm was not required, at least when medical information is at issue. “[T]he disclosure of sensitive health or medical information causes additional harms that are neither economic nor physical in nature but are nonetheless real and substantial and thus cognizable under Section 5(n).” Finally, as to whether substantial injury was “likely” to occur, the Commission stated that “a practice may be unfair if the magnitude of the potential injury is large, even if the likelihood of the injury occurring is low.”
The Commission also pointed to specific shortcomings in the company’s data security procedures. Those issues included LabMD’s failure to employ adequate risk assessment tools, including intrusion detection, file integrity monitoring, and penetration testing. The opinion also noted that LabMD failed to provide data security training to its employees, and that it failed to adequately restrict or monitor employee administrator access. The Commission also stated that the security tools LabMD had used to mitigate risk were inadequate under the circumstances, and that its “antivirus programs, firewall logs, and manual computer inspections … could identify only a limited scope of vulnerabilities” and were often used ineffectively.
The problem with the Commission’s ruling is that it turned the “likely to cause substantial consumer injury” test on its head, finding unfairness where an unlikely risk may be theoretically large in potential scope. This conclusion is at odds with the statutory requirement of actual – or at least likely – harm. The test for jurisdiction under Section 5 in no way suggests that the likelihood-of-harm requirement (causation) is relaxed where the consumer injury is somehow potentially more “substantial.”
LabMD has 60 days in which to file a petition for review of the FTC’s decision with the U.S. Court of Appeals. Michael Daugherty, president and CEO of now-defunct LabMD, recently expressed his desire to take the legal battle to federal court on appeal.
August 8, 2016
In Cour v. Life360, Inc., the United States District Court for the Northern District of California granted a defendant’s motion to dismiss a claim under the Telephone Consumer Protection Act, finding that the defendant’s system for sending text messages did not constitute “making” a call under the statute. In reaching its decision, the Court advanced a narrow interpretation of what it means to “make” a call under the TCPA.
Cour involved allegedly unsolicited text messages. According to the plaintiff, he received a text message from Life360 saying “TJ, check this out….”, despite not being a Life360 user and never downloading the Life360 app. Because he claimed that this text message was unwelcome, the plaintiff sued Life360 for allegedly “mak[ing]” a call without express consent – a practice generally restricted under the TCPA.
For purposes of the Court’s decision, it presumed that Life360 works in the following manner: (1) Life360 asks users for permission to access their phone’s contacts; (2) users who allow such access are brought to a screen giving them the option to “add members;” (3) users are then given the option to “invite” specific members of their contacts to join Life360; and (4) Life360 sends text messages to those contacts “invited” by a member.
In deciding to dismiss the plaintiff’s claims, the Court’s analysis turned on whether Life360 “makes” calls under the TCPA. It held that Life360 does not. According to the Court, the fact that Life360 requires users to choose which of their contacts should receive an invitation, and then requires users to press the “invite” button before the text message is sent, means that Life360 is not making “calls” under the TCPA.
In reaching this conclusion, the Court was guided by the Federal Communication Commission’s July 10, 2015 order, wherein the FCC analyzed whether two companies, TextMe and Glide, “make” calls under the TCPA. The Court found the FCC’s analysis “[o]f particular relevance” because it clarified for the Court the type of actions that constitute “making” calls in the context of apps sending invitational text messages. For example, the Court noted the FCC’s conclusion in an analogous situation that the “app user’s actions and choices effectively program the cloud-based dialer to such an extent that he or she is so involved in the making of the call as to be deemed the initiator of the call … .”
Ultimately, in the Court’s view, the goal of the TCPA is to prevent the invasion of privacy. When considering the facts of the case and the FCC’s interpretation of the TCPA, the Court concluded that the person who chooses to send an unwanted invitation through Life360, and not Life360 itself, “is responsible for invading the recipient’s privacy.” As a result, the Court dismissed the plaintiff’s TCPA claims.
August 3, 2016
The Court of Appeals for the District of Columbia shot down a putative class action brought against Urban Outfitters, Inc., and Anthropologie, Inc., which had alleged that the companies violated D.C. consumer protection statutes by collecting customer ZIP code information during in-store checkout. The July 26 ruling remanded the suit for dismissal, and held that Plaintiffs failed to establish Article III standing under the Supreme Court’s recent decision in Spokeo. The ruling highlights the continuing obstacles facing would-be class action plaintiffs under Spokeo.
Plaintiffs Whitney Hancock and Jamie White brought the action against the retailers for alleged violations of the District of Columbia’s Use of Consumer Identification Information Act (the “Identification Act”) and its Consumer Protection Procedures Act (the “Protection Act”). Specifically, Plaintiffs alleged that Defendants’ request for ZIP codes at checkout violated the Identification Act’s ban on obtaining addresses as a condition of a credit card transaction. Plaintiffs further claimed that the request for ZIP codes violated the Protection Act by, among other things, falsely implying to consumers that disclosure of the ZIP codes is required to complete a credit card transaction.
Quoting Spokeo, the court dismissed Plaintiffs’ statutory claims. “The complaint here does not get out of the starting gate. It fails to allege that Hancock or White suffered any cognizable injury as a result of the [ZIP] code disclosures.” The court noted that Plaintiffs’ counsel admitted that the only alleged injury was that Plaintiffs were asked for a ZIP code, when under D.C. law they should not have been. “The Supreme Court’s decision in Spokeo thus closes the door on Hancock and White’s claim that the Stores’ mere request for a [ZIP] code, standing alone, amounted to an Article III injury. Spokeo held that plaintiffs must have suffered an actual (or imminent) injury that is both particularized and ‘concrete … even in the context of a statutory violation.’”
In the wake of Spokeo, it is clear that plaintiffs cannot allege mere “bare procedural” statutory harm to establish Article III standing. Spokeo mandates allegations of a concrete harm. The Spokeo decision continues to aid defendants in dodging putative class actions before they get out of “the starting gate.” This blog’s further discussion of the Spokeo decision can be found here.
August 1, 2016
Federal courts continue to interpret and analyze the Supreme Court’s decision in Spokeo, Inc. v. Robins. Recently, a federal judge in New York permitted a lawsuit against Hearst Communications, Inc., to move forward after considering supplemental briefing on Article III standing.
Plaintiffs Suzanne Boelter and Josephine Edwards subscribe to magazines published by Hearst. Plaintiffs claim that Hearst sold their personal information to third parties, without their consent, in violation of Michigan’s Video Rental Privacy Act (“VRPA”). Hearst asked the court to dismiss the complaint for lack of Article III standing, arguing that Plaintiffs failed to allege any concrete injury-in-fact and instead relied on bare procedural violations of the law. In response, Plaintiffs argued that Hearst’s disclosure of information implicated their right to privacy and personal security. Plaintiffs also claimed that as a result of Hearst’s actions, they suffered actual injury because they overpaid for magazine subscriptions and received junk mail and telephone solicitations.
Judge Analisa Torres denied Hearst’s motion to dismiss, holding that Plaintiffs’ allegations qualified as particularized and concrete harm, and that they adequately alleged “injury-in-fact.” Taking the allegations as true, the Court held that Hearst’s sale and disclosure of personal information to third parties violated Plaintiffs’ right to keep their information private, subjected Plaintiffs to unwanted solicitations, and resulted in Hearst’s unjust retention of economic benefits. Judge Torres also denied the motion to dismiss on other grounds, including that the VRPA was constitutional and that the complaint stated a plausible claim for relief.
There is no doubt that litigants will continue to disagree about the implications of Spokeo. However, as Judge Torres acknowledged, “violation of a statute by itself is insufficient to confer standing to sue,” and it is clear that to satisfy Article III standing plaintiffs must allege a concrete and consequential harm beyond a mere technical violation of a statute. Accordingly, defendants should anticipate that clever plaintiffs will continue to craft theories of harm that attempt to sidestep the lack of tangible injury – such as the “unjust enrichment” and “invasion of privacy” theories advanced by Plaintiffs in Hearst.
July 29, 2016
Microsoft prevailed in its appeal to the Second Circuit from an order denying its motion to quash a warrant seeking a Microsoft user’s email stored on the company’s servers in Ireland. The ruling sets important precedent limiting the extraterritorial reach of the federal government in seeking to compel disclosure of private company data under the Stored Communications Act (“SCA”). Microsoft received high profile support in its appeal, with the likes of Apple, AT&T, Amazon, Verizon Communications, Cisco, and the country of Ireland joining as amici curiae.
The ruling may also help bolster the credibility of the fledgling EU-U.S. Privacy Shield data transfer agreement, which has been criticized by European regulators for not adequately safeguarding EU personal data from U.S. government scrutiny. Privacy Shield’s predecessor, Safe Harbor, was struck down by the European Court of Justice over similar concerns. European regulators have so far signaled reluctant acceptance of Privacy Shield, but issues like automated data profiling continue to cause worries. The ruling by the Second Circuit may help to allay some fears over the staying power of Privacy Shield.
The July 14 ruling by Judge Susan L. Carney of the United States Court of Appeals for the Second Circuit reversed the denial by the District Court for the Southern District of New York of Microsoft’s motion to quash, and vacated the court’s finding of civil contempt for Microsoft’s failure to comply with the warrant.
Judge Carney’s ruling emphasized the SCA’s intended focus on safeguarding privacy in stored electronic communications. “Contrary to the government’s contention, this section does more than merely protect against the disclosure of information by third parties. By prohibiting the alteration or blocking of access to stored communications, this section also shelters the communications’ integrity.” Importantly, Judge Carney held that a “warrant” issued under the SCA is subject to traditional territorial limitations and constitutional requirements, including the presumption against extraterritoriality, and is not akin to a subpoena.
The warrant served on Microsoft was issued by a United States magistrate judge as part of a narcotics investigation into an unnamed individual. The warrant directed Microsoft to seize and produce the contents of the individual’s Microsoft Outlook “@msn.com” email account. The individual’s non-content information was stored on servers in the United States. The individual’s content information, however, was stored on servers in Ireland, as Microsoft generally stores content at datacenters located near the physical location identified by the user.
Microsoft complied with the warrant in part and produced the individual’s U.S.-based non-content information. Microsoft refused to produce the customer content stored on its servers in Ireland, however, and moved to quash the warrant. Microsoft’s motion subsequently was denied by the District Court, and the company was eventually held in civil contempt.
In presenting its case, the federal government argued that similar to a subpoena, an SCA warrant requires the recipient to deliver records to the government regardless of where the records are located, so long as they are in the recipient’s custody and control. Microsoft swayed the court in asserting that an SCA warrant is subject to the same territorial boundaries as a traditional warrant. Judge Carney also noted that the federal government conceded that the warrant provisions of the SCA do not contemplate or permit extraterritorial application. The court further pointed out that the SCA itself draws a distinction between “subpoena” and “warrant”, with the latter providing a greater degree of privacy protection.
The federal government also contended that preventing SCA warrants from reaching data stored abroad would seriously impede law enforcement efforts, and that the current process for obtaining such information, using Mutual Legal Assistance Treaties (“MLATs”), is overly cumbersome. Judge Carney dismissed this argument, noting that international comity and the text of the SCA supported limiting the scope of a warrant under the SCA.
The Second Circuit’s ruling can be seen as a win for companies concerned about maintaining user privacy and curbing law enforcement’s reach into private user data. The ruling limits law enforcement’s ability to compel host companies like Microsoft to produce private user data stored abroad.
July 25, 2016
Most organizations understand the importance of timely implementing software updates and patches. However, open platforms have permitted a level of customization such that a patch in one application may have unintended consequences in other parts of the overall system architecture, including customization of the software being updated. A good example is the recent Microsoft security patch released in June that resulted in problems with many users’ Group Policy objects (“GPOs”). While Microsoft issued guidance on July 5 as to how to repair the Group Policy problems caused by the patch, the experience is an example of unintended consequences that can arise during routine product security updates.
Group Policy is Microsoft’s tool for managing user and computer settings on certain networks. In other words, Group Policy determines which users and devices get access to the sensitive data of the company (and the applications), or have the authority to make changes to the system (the “keys to the kingdom”). Microsoft reportedly was beset with a bevy of complaints from users reporting network and user access issues caused by the patch.
The patch, released on June 14, resolved a vulnerability that could allow elevation of privilege in the event of a “man-in-the-middle” (“MiTM”) attack against traffic passing between a domain controller and a target machine. Generally speaking, a MiTM attack is an attack on authentication protocol in which the attacker positions itself between two parties so as to intercept (and possibly alter) the data traveling between them. According to Microsoft, if a MiTM attack were underway, an attacker could create a group policy to grant administrator rights to a standard user.
Microsoft’s June patch addressed the vulnerability by enforcing Kerberos authentication for certain calls over the Lightweight Directory Access Protocol (“LDAP”), but it had the additional effect of breaking many users’ Group Policy Objects. In other words, the patch blocked an exploit by an outside attacker, but in doing so it potentially gave internal users with only limited permissions unfettered access to system controls. In simplified terms, where a user normally would have only “read” rights, a broken GPO could grant that user read, write, and edit rights.
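The security logic behind the patch can be illustrated in miniature. The following Python sketch is a simplified, hypothetical model – the shared key, message format, and policy strings are invented for illustration, and real Kerberos uses tickets and session keys negotiated with the domain controller rather than a static secret – but it shows the core idea: without an integrity check on the channel, an in-path attacker can silently rewrite a policy in transit, whereas an authenticated channel lets the receiving machine reject the tampered copy.

```python
import hmac
import hashlib

# Hypothetical secret shared between domain controller and target machine.
SHARED_KEY = b"domain-machine-shared-secret"

def sign(message: bytes) -> bytes:
    """Compute an HMAC tag binding the message to the shared key."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Accept the message only if its tag checks out -- defeats silent tampering."""
    return hmac.compare_digest(sign(message), tag)

# Legitimate policy response from the domain controller.
policy = b"user=standard; rights=read-only"
tag = sign(policy)

# An in-path ("man-in-the-middle") attacker rewrites the payload in transit.
tampered = b"user=standard; rights=administrator"

# Without authentication the target machine cannot tell the two apart;
# with the integrity check, the altered policy is rejected.
assert verify(policy, tag)
assert not verify(tampered, tag)
```

In the unpatched configuration, the retrieval was analogous to accepting `tampered` without any `verify` step; enforcing Kerberos authentication is analogous to requiring the tag check, which is why unauthenticated (but previously working) Group Policy retrievals broke.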
While initial debate centered on whether the unintended consequence of the patch was the fault of Microsoft or of its users, in the event of a breach that distinction makes little difference to the affected company. Careful and thoughtful consideration is required to balance the complexities of an information security program. Understanding the implications of an update and patch policy – which to most may seem simple – is just the beginning.
July 18, 2016
The FTC issued warning letters to 28 companies that allegedly advertised participation in the Asia-Pacific Economic Cooperative Cross-Border Privacy Rules system (“APEC CBPRs”), but had not received the requisite certification. A company seeking to participate in the CBPR system must first have its compliance established by an APEC-recognized accountability agent.
The APEC CBPRs is a voluntary, self-regulated system developed by participating APEC countries, including the United States. The system requires participating businesses to develop and implement data privacy policies consistent with the APEC Privacy Framework. The framework is based on nine data privacy principles: preventing harm, notice, collection limitation, use, choice, integrity, security safeguards, access and correction, and accountability. Companies certified under the system appear on the CBPRs website.
In the United States, the FTC enforces the APEC CBPR system under the FTC Act. The FTC has demanded that the 28 companies immediately remove the claims regarding APEC CBPR participation from their websites, and confirm with the FTC either that they have done so or that they are, in fact, certified.
This is not the first time that the FTC has targeted companies over false APEC CBPRs representations. In May of this year, a San Francisco-based manufacturer of hand-held vaporizers settled with the FTC over charges that it deceived consumers about its participation in APEC CBPRs. Under the terms of the settlement, the company is prohibited from misrepresenting its participation, membership, or certification in any privacy or security program sponsored by a government or self-regulatory or standard-setting organization.
FTC enforcement actions have not been limited to the APEC CBPRs. In August 2015, the FTC charged 13 U.S. companies with misrepresenting that they were compliant with the US-EU Safe Harbor framework, when company certifications had lapsed or had not been applied for at all.
July 18, 2016