In March 2015, I wrote about the ongoing dispute between the FTC and LabMD, an Atlanta-based cancer screening laboratory, and looked at whether the FTC has the authority to take enforcement action over data-security practices alleged to be insufficient and therefore “unfair” under section 5(n) of the Federal Trade Commission Act (“FTCA”). On November 13, 2015, an administrative law judge ruled that the FTC had failed to prove its case.
In 2013, the FTC filed an administrative complaint against LabMD, alleging it had failed to secure personal, patient-sensitive information on its computer networks. The FTC alleged that LabMD lacked a comprehensive information-security program, and had therefore failed to (i) implement measures to prevent or detect unauthorized access to the company’s computer networks, (ii) restrict employee access to patient data, and (iii) test for common security risks.
The FTC linked this absence of protocol to two security breaches. First, an insurance aging report containing personal information about thousands of LabMD customers was leaked from the billing manager’s computer onto the peer-to-peer file-sharing platform LimeWire, where it was available for download for at least eleven months. Second, Sacramento police reportedly discovered hard copies of LabMD records in the hands of unauthorized individuals, who were later charged with identity theft in an unrelated case of fraudulent billing and pleaded no contest.
Incriminating as it all might seem, Administrative Law Judge D. Michael Chappell dismissed the FTC’s complaint entirely, citing a failure to show that LabMD’s practices had caused substantial consumer injury in either incident.
Section 5(n) of the FTCA requires the FTC to show that LabMD’s acts or practices caused, or were likely to cause, substantial injury to consumers. The ALJ held that “substantial injury” means financial harm or unwarranted risks to health and safety. It does not cover embarrassment, stigma, or emotional suffering. As for “likely to cause,” the ALJ held that the FTC was required to prove “probable” harm, not simply “possible” or speculative harm. The ALJ noted that the statute authorizes the FTC’s regulation of future harm (assuming all statutory criteria are met), but that unfairness liability, in practice, applies only to cases involving actual harm.
In the case of the insurance aging report, the evidence showed that the file had been downloaded just once—by a company named Tiversa, which did so to pitch its own data-security services to LabMD. As for the hard copy records, their discovery could not be traced to LabMD’s data-security measures, said the ALJ. Indeed, the FTC had not shown that the hard copy records were ever on LabMD’s computer network.
The FTC had not proved—either with respect to the insurance aging report or the hard copy documents—that LabMD’s alleged security practices caused or were likely to cause consumer harm.
The FTC has appealed the ALJ’s decision to a panel of FTC Commissioners, who will render the agency’s final decision on the matter. The FTC’s attorneys argue that the ALJ took too narrow a view of harm and that a substantial injury occurs whenever an act or practice poses a significant risk of concrete harm. According to the FTC’s complaint counsel, LabMD’s data-security measures posed a significant risk of concrete harm to consumers when the billing manager’s files were accessible via LimeWire, and that risk amounts to an actual, substantial consumer injury covered by section 5(n) of the FTCA.
The Commissioners heard oral arguments in early March and will probably issue a decision in the next several months. On March 20th, LabMD filed a related suit in district court seeking declaratory and injunctive relief against the Commission for its “unconstitutional abuse of government power and ultra vires actions.”
Although a right to privacy is not explicitly mentioned in the Constitution, the Supreme Court has firmly held that such a right for all Americans is found in several amendments to the Constitution, with almost 100 years of case law providing precedent for many personal privacy rights that have become a cornerstone of American culture. However, in this new digital age of rapid technological change, with real-time access to information and the global exchange of information at the push of a button, new privacy protection questions arise almost daily. The extent to which an individual’s private information shared online is subject to privacy protection varies depending on which side of the pond you stand.
European nations generally take a more restrictive approach than the U.S. as to how companies can use personal data. EU nations often go head-to-head with U.S. digital companies over differing interpretations of privacy rights. Both Google and Microsoft have faced multiple investigations outside the United States.
Facebook seems to be a particularly popular target. As the world’s largest social network, with 1.6 billion monthly users, Facebook earns its revenues from advertising aimed at users, after gathering information from users’ social connections and activities in their posts. Late last month, a German court fined Facebook 100,000 euros for failing to follow an order, issued by a German court four years ago, that required the social media site to revise a clause in its terms regarding intellectual property content posted by users on or in connection with Facebook. The German court had found that the clause violated consumer rights. While Facebook modified the wording slightly for German users, the court found that the revised wording still carried the same underlying message as the original. Europe’s highest court also recently ruled against Facebook over the way data was transferred between the European Union and the United States. And just yesterday, a German court ruled that domestic websites could not transfer user data to Facebook via its “like” button without the specific consent of the user.
In a novel link between privacy protection and antitrust, the German competition authority known as the Federal Cartel Office (BKA) opened an investigation on March 2 into whether Facebook abused its dominant position in social networking in order to collect its users’ digital information, including placing unfair constraints on the users, who were forced to sign complicated terms and conditions in order to use the network. The investigation seeks to discover whether Facebook users were properly informed about how their personal data would be obtained through the site, including the type of data collected, as well as the extent of the data collected.
One might ask why the BKA would get involved with this novel approach of linking privacy protection to antitrust law. First, under antitrust law, the maximum fines are much greater than those under privacy law. For a tech giant like Facebook, the fines imposed by data protection authorities can seem negligible, even in the most egregious cases, while antitrust fines pose a much more significant deterrent. Second, Facebook has claimed that it falls only within the jurisdiction of the data protection authority in Ireland, where its international headquarters are situated. By bringing the investigation under the auspices of the antitrust authority, this argument is avoided. The President of the BKA, Andreas Mundt, remarked that “[d]ominant companies are subject to special obligations,” and he went on to say that such obligations include adequate terms of service, as far as they are relevant to the market. He also noted the importance of user data where Internet services are financed by advertising. The BKA noted, “. . . if there is a connection between infringement and market dominance, it could constitute an abusive practice under competition law.”
While some question the BKA’s position as ambitious and vague, others fear that this case could open the door to other investigations and cases using data protection violations to claim antitrust violations. Whether the BKA is successful or not, this should be a forewarning to other big U.S. technology companies: it is probably not enough to rely on U.S. privacy rules when playing in a global arena.
Exploiting consumers and exploiting consumer data were popular themes in the FTC’s October 30th workshop on lead generation, “Follow the Lead.” The day-long workshop explored the mechanics of lead generation and its role in the online marketplace. With a focus on the lending and education spaces, panelists discussed the many layers of marketing involved in lead generation—and importantly—how those many layers can add confusion to how consumer data gets collected, sold, used … and misused.
Panelists of the five workshop sessions hailed from industry, government, advocacy groups, and research institutions. They offered insights into both the vulnerabilities and opportunities flowing from the extensive “behind the scenes” market of lead generation. But unsurprisingly, the benefits of lead generation were overshadowed largely by attendant concerns: why is so much consumer data collected, what is done with it, and are consumers aware of how their personal information is being traded and used?
The workshop included two “case study” panels on lending and education. For the panel on lead generation in lending, Tim Madsen of PartnerWeekly provided an overview of how the “ping tree” model works. Connecting prospective borrowers with lenders through a reverse auction of borrower leads, the “ping tree” model may be an efficient way of matching borrowers and lenders. However, Pam Dixon, Executive Director of the World Privacy Forum, highlighted her concern that lenders are receiving consumer data that would otherwise be protected under the Equal Credit Opportunity Act and that the online process is therefore circumventing important consumer protection laws. For instance, the online lending process may require certain personal information from borrowers in order to filter fraudulent requests. But that personal information (e.g., gender or marital status) otherwise could not be part of the loan application process. Dixon felt the disclosure of protected information was an issue that needed to be addressed from both a technical and a policy standpoint. And it is an issue she raised on subsequent panels during the conference, indicating a possible pressure point for future regulatory action.
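In essence, the “ping tree” works like a reverse auction: a lead is offered down a tree of lenders in descending bid order until one accepts. The sketch below is a hypothetical illustration of that flow; the lender names, bid amounts, and acceptance criteria are invented for the example and are not PartnerWeekly’s actual system.

```python
# Hypothetical sketch of a "ping tree" reverse auction for borrower leads.

def run_ping_tree(lead, lenders):
    """Offer `lead` to each lender, highest bid first; the first lender
    whose own screening criteria accept the lead buys it."""
    for lender in sorted(lenders, key=lambda l: l["bid"], reverse=True):
        if lender["accepts"](lead):  # lender's own underwriting filter
            return lender["name"], lender["bid"]
    return None, 0.0  # no lender wanted the lead

# Illustrative lenders: a premium buyer with strict criteria and a
# lower bidder willing to accept riskier leads.
lenders = [
    {"name": "LenderA", "bid": 75.0,
     "accepts": lambda lead: lead["credit_score"] >= 680},
    {"name": "LenderB", "bid": 40.0,
     "accepts": lambda lead: lead["credit_score"] >= 600},
]

buyer, price = run_ping_tree({"credit_score": 640}, lenders)
# LenderA declines (score below 680), so LenderB buys the lead for $40.
```

Note how the model depends on each lender applying its own filter to the borrower’s personal data before buying — which is exactly where Dixon’s concern arises: screening criteria may rely on information that could not lawfully be part of a formal loan application.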
The panel on lead generation in education was highly charged, due both to the controversial nature of marketing higher education and to the negative attention on for-profit education. Despite many people’s assumption that online marketing in education is largely a tool of the for-profit industry, Amy Sheridan, CEO of Blue Phoenix Media, provided some surprising statistics: state and private institutions represent roughly forty percent of her business in the education vertical. Even renowned schools like Harvard and Yale employ lead generation to attract students to their programs.
But given the extensive access to federal funds through higher education, consumer advocates highlighted concerns over students being preyed upon by unscrupulous educators. Jeff Appel, Deputy Undersecretary of Education at the Department of Education, attributed the problem in part to the lack of underwriting in federal student loans. [Query: Wouldn’t it make sense to add underwriting to the federal student loan process? Statistically, private student loan repayment fares much better thanks to this preliminary screening.]
In support of responsible advertising for educational programs, Jonathan Gillman, CEO of Omniangle Technologies, identified the need for clear guidance on appropriate marketing tactics, which may better address problems than resorting to law enforcement. He pointed out the adverse consequences of clamping down on educators’ online advertising: educators are now afraid to advertise online and that space is being filled by affiliates who are more apt to cross the line into deceptive advertising.
Appel provided some general guidance for schools working with lead generators. Schools should (1) monitor how lead generators are representing programs and ensure their ads are not deceptive, (2) make sure payment for advertising does not implicate regulations against incentive-based compensation, and (3) be aware that the actions of lead generators may come under the Education Department’s purview if they are providing additional assistance (e.g., processing student applications).
Both Appel and consumer advocates seemed to agree, though, that laws and regulations already in place were sufficient to address consumer protection concerns in the education marketing space. It is only a matter of having the resources to enforce those laws and regulations. Appel also suggested that state regulators could curb issues by better screening schools.
Throughout the day and across the panels, FTC representatives returned to the concept of “remnant information,” i.e., consumer information that is no longer being used. FTC attorney Katherine Worthman asked panelists various questions about what ultimately happens to this information. R. Michael Waller, another FTC attorney and panelist, noted his concern that companies have an economic interest in maintaining and possibly selling remnant information, and that such information is increasingly vulnerable to fraudsters. The FTC attorneys thus pressed panelists about their policies on consumer data retention. Aaron Rieke of Upturn supported the FTC’s concerns and noted that nothing in the company privacy policies he has reviewed prevents the sale of consumer data: “privacy policies are shockingly permissive when you look at how much information is being provided.”
Another popular issue was whether and to what extent disclosures to consumers are sufficient: are consumers aware of how their information is being traded? The general consensus among panelists was that consumers remain largely unaware of the sale and use of the personal information they provide online.
Upshot from the workshop: Lead generators, and the companies using them, should be aware of the growing interest by federal regulators in (1) how consumer data is being collected, retained, and sold and (2) the extent to which people up and down the online marketing supply chain are vetting the buyers and sellers of consumer data. Other takeaways from the conference: Companies should ensure their data collection and retention policies comply with applicable state and federal law. Finally, it is important for companies to ensure their practices comply with both their policies and their disclosures.
Every week, we learn about new data breaches affecting consumers across the country. Federal government workers and retirees recently received the unsettling news that a breach compromised their personal information, including social security numbers, job history, pay, race, and benefits. Amid a host of other public relations issues, the Trump Organization recently discovered a potential data breach at its hotel chain. If you visited the Detroit Zoo recently, you may want to check your credit card statements, as the zoo’s third-party vendor detected malware that allowed access to customers’ credit and debit card numbers. And, certainly, none of us can forget the enormous data breach at Target, and the associated data breach notifications and subsequent lawsuits.
For years, members of Congress have stressed the need for national data breach standards and data security requirements. Aside from mandates in particular laws, such as HIPAA, movement on data breach requirements had stalled in Congress. Years ago, however, the states picked up the slack, establishing data breach notification laws requiring notifications to consumers and, in many instances to attorneys general and consumer protection offices when certain defined “personal information” was breached. California led the pack, passing its law in 2003. Today, 47 states have laws requiring organizations to notify consumers when a data breach has compromised consumers’ personal information. Several states’ laws also mandate particular data security practices, including Massachusetts, which took the lead on establishing “standards for protection of personal information.”
Many businesses and their lobbying organizations have urged Congress to preempt state laws and establish a national standard. Most companies have employees or customers in multiple states. Thus, under current laws, organizations must address a multitude of differing state requirements: triggering events, the types of personal information covered, how quickly notification must be made, who gets notified, and what information must be included in the notification, among others. State Attorneys General, on the other hand, assert that, irrespective of these inconveniences, their oversight of data breaches through the supervision of notifications and enforcement has played a critical role in consumer protection.
This week, the Attorneys General of the 47 states wrote to Congressional leaders, urging Congress to preserve the states’ authority in any federal law by requiring data breach notifications and maintaining the states’ enforcement authority.
The AGs’ key points are:
- State AG offices have played critical roles in investigating and enforcing data security lapses for more than a decade.
- States have been able to respond to constant changes in data security by passing “significant, innovative laws related to data security, identity theft, and privacy.” This includes addressing new categories of information, such as biometric data and login credentials for online accounts.
- States are on the “front lines” of helping consumers deal with the fallout of data breaches and have the most experience in guiding consumers through the process of removing fraudulent charges and repairing their credit. By way of example, the Illinois AG helped nearly 40,000 Illinois residents remove more than $27 million in unauthorized charges from their accounts.
- Forty states participate in the “Privacy Working Group,” where state AGs coordinate to investigate data breaches affecting consumers across multiple states.
- Consumers keep asking for more protection. Any preemption of state law “would make consumers less protected than they are right now.”
- States are better equipped to “quickly adjust to the challenges presented by a data-driven economy.”
- Adding enforcement and regulatory authority at the federal level could hamper the effectiveness of state laws. Some breaches will be too small to receive priority at the federal level; however, these breaches may have a large impact at the state or regional level.
Interestingly, just this week, Rep. David Cicilline (D-RI) introduced a House bill mandating that companies inform consumers within 30 days of a data breach. The bill also requires minimum security standards. Representative Cicilline’s bill would not preempt stricter state-level data breach security laws. The bill also contains a broad definition of “personal information” to include data that could lead to “dignity harm” – such as personal photos and videos, in addition to the traditional categories of banking information and social security numbers. The proposed legislation would also impose civil penalties upon organizations that failed to meet the standards.
Without a doubt, data breaches will continue – whether from bad actors, technical glitches, or common employee negligence. The states have certainly “picked up the slack” for over a decade while Congressional action stalled. Understandably, the state AGs do not want Congress taking over the play in their large and established “privacy sandbox.” Preemption will continue to be a key issue for any federal data breach legislation before Congress.

As someone who has guided companies through multi-state data breach notifications, I have seen firsthand that requiring businesses to deal with dozens of differing state requirements is costly and extremely burdensome. Small businesses, in particular, must grapple with a data security incident while trying to understand and comply with a multitude of state requirements. Those businesses do not have the resources of a “Target,” and complying with a patchwork of laws significantly and adversely impacts them. While consumer protection is paramount, a federal standard for data breach notification would provide a common, clear-cut standard for all organizations and reduce regulatory burdens. Even if a federal standard preempted state notification laws, states could continue to play critical roles as enforcement authorities.
In the interim, companies must ensure that they comply with the information security requirements and data breach notification laws of applicable states. An important and often overlooked point is that while an organization may think of itself as, say, a “Vermont” or “Virginia” company, it likely holds personal information on residents of various states – for instance, employees who telecommute from neighboring states, or former employees who have moved to a different state. Even a “local” or “regional” company can face a host of state requirements. As part of its data security planning, an organization should periodically survey the personal information it holds and the states affected. In addition to data breach notification requirements in the event of a breach, organizations need to address applicable state data security standards.
The FTC’s complaint stated that Nomi’s technology (called its “Listen” service) allows retailers to track consumers’ movements through stores. The company places sensors in its clients’ stores, which collect the MAC addresses of consumers’ mobile devices as the devices search for WiFi networks. While Nomi “hashes” the MAC addresses prior to storage in order to hide the specific MAC addresses, the process results in identifiers unique to consumers’ mobile devices which can be tracked over time. Nomi provided its retail clients with aggregated information, such as how long consumers stayed in the store, the types of devices used by consumers, and how many customers had visited a different location in a chain of stores. Between January and September 2013, Nomi collected information on approximately 9 million mobile devices, according to the FTC’s complaint.
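The complaint’s point about hashing is worth unpacking: a hash hides the raw MAC address, but because the same input always yields the same output, the result is still a stable, device-specific identifier. The snippet below is a simplified sketch of that property; the unsalted SHA-256 scheme and the lowercasing step are assumptions for illustration, not Nomi’s actual implementation.

```python
import hashlib

def pseudonymize_mac(mac: str) -> str:
    """Hash a MAC address with unsalted SHA-256 (an illustrative
    stand-in for whatever scheme Nomi actually used)."""
    return hashlib.sha256(mac.lower().encode("utf-8")).hexdigest()

# The same device always produces the same hash, so repeat visits can
# be linked over time even though the raw MAC is never stored.
monday = pseudonymize_mac("AA:BB:CC:DD:EE:FF")
friday = pseudonymize_mac("aa:bb:cc:dd:ee:ff")
assert monday == friday

# Because the MAC address space is small (2^48 values, and far fewer in
# practice given known vendor prefixes), an unsalted hash can often be
# reversed by brute force -- hashing alone is not true anonymization.
```

This persistence is precisely what allowed Nomi to report metrics such as repeat visits across store locations, and it explains why the FTC treated the hashed identifiers as consumer tracking data rather than anonymous data.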
Nomi’s settlement does not require any monetary payment but prohibits Nomi from misrepresenting the options through which consumers can exercise control over the collection, use, disclosure or sharing of information collected from or about them or their devices. The settlement also bars Nomi from misrepresenting the extent to which consumers will be provided notice about how data from or about a particular consumer or device is collected, used, disclosed or shared. Nomi is required to maintain certain supporting records for five years. As is typical with FTC consent orders, this agreement remains in force for 20 years.
What can companies learn from Nomi’s settlement, even those not in the retail tracking business?
- While this is the first FTC action against a retail tracking company, the FTC has repeatedly stated that it will enforce the FTC Act and other laws under its jurisdiction against emerging as well as traditional technologies.
- The FTC noted that Nomi had about 45 clients. Most of those clients did not post a disclosure or notify consumers regarding their use of the Listen service, and Nomi did not mandate such disclosures by its clients. The FTC did not address what, if any, obligation, these businesses may have to make such disclosures. Will it become common/mandated to see a sign in a retail location warning that retail tracking via mobile phones is occurring (similar to signs about video surveillance)? One industry group’s self-regulatory policy requires retail analytics firms to take “reasonable steps to require that companies using their technology display, in a conspicuous location, signage that informs consumers about the collection and use of MLA [mobile location analytics] Data at that location.” This issue will become more prevalent as more retailers and other businesses use tracking technology.
- Interestingly, the FTC brought this action even though traditional “personal information” was not collected (such as name, address, social security number, etc.). Organizations should not assume that collecting IP addresses, MAC addresses, or other less personalized information presents no issues. The FTC takes privacy statements seriously, whatever the information collected (though certainly there is more sensitivity toward certain categories such as health, financial, and children’s information).
The bottom line is “do what you say” when it comes to privacy practices. All companies should evaluate their privacy policies at least every six months to ensure that they remain accurate and complete, have working links (if any), and reflect a company’s current practices.
In e-commerce, user reviews can make or break a business. Review sites such as Yelp are a double-edged sword for merchants and service providers: on one hand, satisfied customers can generate buzz about the company and bring in new customers; on the other, dissatisfied customers can use the site as a very public platform to air their grievances and discourage new business.
Review sites such as Yelp maintain policies protecting users’ anonymity, a major source of frustration among business owners. By remaining anonymous, users can make potentially defamatory statements and leave the businesses with little recourse to hold the individuals accountable. A recent ruling by the Virginia Supreme Court has demonstrated the long and tortured road that businesses must take to challenge the anonymity of these unnamed users.
In 2012 a small Virginia company, Hadeed Carpet Cleaning Inc., brought suit against unnamed Doe defendants for allegedly defamatory statements published about Hadeed on the Yelp review website. According to Hadeed, a number of negative reviews did not match up to records of the company’s existing customers, and therefore the company suspected that the false statements were published by individuals who had never used the company’s services. The Circuit Court for the City of Alexandria, Virginia, issued a subpoena to Yelp requiring it to provide identifying information about the anonymous users. Yelp refused to comply, and the Circuit Court held Yelp in contempt.
Yelp appealed, arguing that the court’s order violated the First Amendment by forcing the company to identify the anonymous users. In January 2014, the Court of Appeals upheld the Circuit Court’s order, applying the six-prong procedure of Virginia’s “unmasking statute,” which provides that a court may issue a subpoena to unveil the identity of an individual speaking anonymously over the internet where (1) notice of the subpoena was served on the anonymous speaker through his internet service provider, (2) the plaintiff has a legitimate, good faith basis to contend that the communications may be tortious or illegal, (3) other efforts to identify the speaker have been fruitless, (4) the identity of the communicator is important, (5) there is no pending motion challenging the viability of the lawsuit, and (6) the entity to whom the subpoena is addressed is likely to have responsive information.
The Court of Appeals noted that Hadeed had followed the proper procedure in requesting the subpoena. The court found that the company’s evidence that the reviews did not match customer records was sufficient to establish they were not published by actual customers of the company, and were therefore likely to be false.
Yelp then appealed to Virginia’s Supreme Court. Last month, the Virginia Supreme Court issued an anticlimactic ruling dismissing the case on jurisdictional grounds, stating that the case should have been brought in California, where Yelp is headquartered and where the responsive records are located.
If Hadeed chooses to resume the case in California, it will face a somewhat higher burden in obtaining the names of the users. Notably, Virginia is the only state in the country to have enacted an unmasking statute. In most states, the courts will not issue a subpoena until the plaintiff has established a prima facie case for defamation – a significantly higher showing than the “legitimate, good faith basis” required in Virginia.
A class action lawsuit recently instituted in federal court in the Northern District of California, Hunter v. Lenovo et al., alleges that Lenovo Inc., a computer manufacturer, violated its customers’ rights by selling computers which came preinstalled with alleged spyware manufactured by Superfish Inc., another named defendant. The purported class alleges that the Superfish software monitors user activity and displays pop-up ads, among other things, as part of an “image-based search” function which identifies images on the user’s screen and seeks out similar images on the web. The complaint states causes of action for violations of the Electronic Communications Privacy Act and the Stored Communications Act, as well as unjust enrichment.
The Stored Communications Act (“SCA”), 18 U.S.C. §§ 2701-2712 provides criminal penalties for anyone who “intentionally accesses without authorization a facility through which an electronic communication service is provided” or “intentionally exceeds an authorization to access that facility.” The SCA has been cited by plaintiffs in other class actions in which users allege that a technology company has overstepped its bounds. For instance, in Perkins v. LinkedIn Corp., No. 13-CV-04303-LHK, 2014 WL 2751053 (N.D. Cal. June 12, 2014), a putative class of LinkedIn users alleged that the social networking company violated the SCA by collecting contacts from users’ external email accounts. The court granted LinkedIn’s motion to dismiss the SCA claims, noting that the users consented to the collection of email addresses in a prominent disclosure, and therefore LinkedIn was “authorized” to collect the information, an exception to the SCA pursuant to 18 U.S.C. §2701(c).
Although the suit is still pending, Lenovo has reversed course on the Superfish software. Lenovo has disabled Superfish on computers which came pre-installed with the software, its websites offer instructions for users to uninstall the software altogether, and Lenovo computers no longer come preinstalled with the program. While these remedial actions may be an appropriate response to user concerns, they do not constitute an admission of legal liability in the class action suit. The defendants may still argue that users consented to the software, even as they remove it from the computers.
The law of unintended consequences – a distant cousin of Murphy’s Law – states that the actions of human beings will always have effects that are unanticipated and unintended. The law could prove a perfect fit for recent efforts by class action counsel to rely upon the Federal Wiretap Act in lawsuits arising from adware installed on personal home computers.
Take, for example, the recently filed case of Bennett v. Lenovo (United States), Inc. In that case, the plaintiff seeks to represent a class of purchasers of Lenovo laptop computers complaining that “Superfish” software that was preloaded on the laptops directed them to preferred advertisements based on their internet browsing behavior. The most interesting claim included in the complaint is the assertion that Lenovo and Superfish violated the Federal Wiretap Act.
Wiretap? What wiretap?
The Federal Wiretap Act was originally passed as Title III of the Omnibus Crime Control and Safe Streets Act of 1968. These provisions were included, at least in part, as a result of concerns about investigative techniques used by the FBI and other law enforcement agencies that threatened the privacy rights of individuals. In passing the Wiretap Act, Congress was clearly focused on the need to protect communications between individuals by telephone, telegraph and the like. The Electronic Communications Privacy Act of 1986 (ECPA) broadened the application of the statute by expanding the kinds of communications to which the statute applied. But the focus was still on communications between individuals.
As is often the case, technology is testing the boundaries of this nearly 50-year-old law. The Bennett case is not the first in which a plaintiff has argued that software on his or her computer that reads the user’s behavior violates the Wiretap Act. In some cases, the software in question has been so-called “keylogging” software that captures every one of a user’s keystrokes. Cases considering such claims (or similar claims under state statutes modeled after the federal Act) have been split – some based on the specifics of when and how the software actually captured the information, and others based possibly on differences in the law in different parts of the country.
One of the more interesting cases, Klumb v. Gloan, 2-09-CV 115 (ED Tenn 2012), involved a husband who sued his estranged wife when he discovered that she had placed spyware on his computer. At trial, the husband demonstrated that during his marriage, his wife installed eBlaster, a program capable of not only recording key strokes, but also intercepting emails and monitoring websites visited. The husband alleged that once intercepted, the wife altered the emails and other legal documents to make it appear as if the husband was having an affair. The motive? Money, of course. Adultery was a basis to void the pre-nuptial agreement that the parties had executed prior to their ill-fated marriage. The wife – who was a law school graduate – argued that the installation was consensual. Although consent is a recognized defense to a claim of violating the Federal Wiretap Act, for a variety of reasons, the court discredited the wife’s testimony regarding the purported consent and awarded damages and attorney’s fees to the husband plaintiff.
The Bennett plaintiffs may or may not succeed in proving the facts and arguing the law sufficient to prevail on their claim; we know too little at this stage to predict the result. But we can state with confidence that the continued expansion of how the Wiretap Act is applied will, at some point, require Congress to step in and update the statute to make clear how it applies in the internet-based world in which we now live.
Employers Running Background Checks: Top 10 Tips to Avoid Joining the Fair Credit Reporting Act Litigation “Club”
What do Whole Foods, Chuck E. Cheese, Michael’s Stores, Dollar General, Panera, Publix, and K-Mart have in common? Each of these companies has faced lawsuits (including class actions) under the Fair Credit Reporting Act (“FCRA”). Although Congress passed the FCRA way back in 1970 and litigation has historically focused on credit reporting agencies’ duties under the law, class action plaintiff firms have recently turned to the FCRA’s employer-related provisions. Several large settlements (such as Publix’s $6.8 million class action settlement, Dollar General’s $4 million, and K-Mart’s $3 million) have spurred further litigation. While some of the alleged FCRA violations may appear minor or technical in nature, these “technical violations” still result in costly lawsuits. Employers should re-familiarize themselves with the FCRA to avoid becoming class action defendants.
The FCRA’s Employer-Related Provisions
Many employers understandably want to conduct background checks on prospective employees, or current employees who may be obtaining new responsibilities or accessing sensitive information. In particular, companies in the retail and restaurant sectors, whose employees have access to cash receipts and credit card account numbers, want to guard against employees whose background checks may reveal issues of concern. Further, organizations whose employees enter homes and businesses (such as service providers – e.g., carpet cleaners, plumbers, contractors) have additional concerns about potential liability.
The FCRA is usually thought of as a federal law that regulates consumer reporting agencies, like credit bureaus. However, the FCRA also prescribes certain requirements for employers who use consumer reports. The FCRA broadly defines the term “consumer reports” as information prepared by a consumer reporting agency “bearing on a consumer’s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living which is used or expected to be used or collected in whole or in part for the purpose of serving as a factor in establishing the consumer’s eligibility for—credit or insurance to be used primarily for personal, family, or household purposes; employment purposes” or other permitted purposes. This definition draws in more than a traditional credit report. It can include driving records, civil lawsuits, and reference checks, among other information.
Disclosure and Consent
Employers may not obtain a consumer report from a consumer reporting agency unless they first make a “clear and conspicuous” written disclosure to the prospective employee/employee. The disclosure document must consist “solely” of the disclosure that a consumer report may be obtained. The job applicant/employee must provide written permission for the employer to obtain a consumer report. The FTC has indicated the disclosure form may include a signature line for the individual’s consent. (In 2001, the FTC also issued an opinion letter stating it believes such consent can be obtained electronically, consistent with the federal E-Sign law.) The employer must further certify to the consumer reporting agency that it has a permissible purpose for the report and that it has complied with the FCRA and applicable equal opportunity laws.
These steps sound simple enough; however, litigation has ensued based upon employers’ alleged failures to comply. For instance, in the Whole Foods case in federal court in California, the plaintiffs claim the online application process included a liability waiver in the disclosure form for the background check, allegedly violating the FCRA requirement that a disclosure form not include other information. In a separate case in federal court in Florida involving retailer Nine West, the plaintiff alleges he did not receive a separate form, and that the background check authorization was on a web page with various other types of information.
Adverse Action Based on Report
If the employer intends to take “adverse action” against the prospective employee/employee (based even in part on the information in the report), the FCRA requires the employer to follow certain additional steps. The term “adverse action” includes “a denial of employment or any other decision for employment purposes that adversely affects any current or prospective employee.”
Before the employer takes the adverse action, it must provide a “pre-adverse action” notice to the affected person. This notice must include a copy of the consumer report and a statutory “Summary of Rights.” (This is an updated form, required since January 2013 by the Consumer Financial Protection Bureau, which now has responsibility for FCRA rulemaking.) The purpose of this notice requirement is to permit the individual to discuss the report with the employer before the employer implements the adverse action.
Next, if the employer intends to take the adverse action, the FCRA requires the employer to provide an adverse action notice to the individual. This notice must contain certain information, including:
the name, address, and telephone number of the consumer reporting agency that furnished the report;
a statement that the consumer reporting agency did not make the adverse decision and cannot explain why the decision was made;
a statement setting forth the applicant’s or employee’s right to obtain a free disclosure of his or her report from the consumer reporting agency if the individual requests the disclosure within 60 days; and
a statement setting forth the individual’s right to dispute directly with the consumer reporting agency the accuracy or completeness of the information in the report.
In a case involving Domino’s Pizza employees, the company settled a class action that included allegations that it took adverse employment actions against certain individuals based on information contained in consumer reports without providing those individuals the required notice and a copy of such reports in advance. K-Mart settled a class action suit based upon allegations that the statement of consumer rights provided to individuals after a background check contained outdated disclosures, among other alleged FCRA failures.
Liability and Enforcement
Plaintiffs can pursue a private right of action against employers for negligently or willfully violating the FCRA. Claims regarding negligent violations allow actual damages and reasonable attorneys’ fees and costs. Willful violations can result in actual damages or statutory damages ranging between $100 and $1,000, plus punitive damages and attorneys’ fees and costs. The Federal Trade Commission (“FTC”) has also brought actions against employers for FCRA violations.
10 Steps to Avoid Becoming a FCRA Defendant When Using Employment Background Checks
1. Review your current background check practices for prospective and current employees, including any online application materials.
2. Review disclosure/consent forms for compliance. Ensure you are presenting applicants or current employees with a simple, one-page disclosure form. The form should inform individuals that you intend to obtain a consumer report for employment purposes.
3. You must obtain consent from the prospective employee/employee. You may include a line on the disclosure form for the individual to acknowledge and grant consent. Do not include other material, such as liability waivers, confirmations of at-will employment, or requests for other consents.
4. If your application process is online, ensure the disclosure/consent is displayed separately, on one screen, without other content.
5. If you intend to conduct background checks periodically during an individual’s employment, state that in the disclosure and consent form.
6. Do not seek consent verbally. The FCRA requires “written” consent (though the FTC has stated it may be obtained electronically).
7. Maintain backup of the disclosure and consent forms for at least 5 years from the date they were provided. (Lawsuits must be brought by the earlier of two years after the date of the plaintiff’s discovery of the violation, or five years after the date on which the violation occurred).
8. If you intend to take adverse action based on information in the consumer report, you should be providing the individual with a pre-adverse action notice, a copy of the consumer report, and the “Summary of Rights.” Ensure you are using the most updated “Summary of Rights.”
9. You should wait a reasonable amount of time (at least 5 days) before issuing an adverse action notice. Your company’s adverse action notice must contain the information required under the FCRA (see bulleted information, above).
10. Check state law regarding background checks for the states in which you operate/solicit employees. Some states have similar requirements to FCRA; others may further restrict the types of information you can request.
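For teams building the retention schedule described in tip 7, the limitations math can be expressed mechanically: a suit must be brought by the earlier of two years after discovery of the violation or five years after the violation occurred. Below is an illustrative Python sketch; the function name and inputs are ours for illustration, not anything prescribed by the FCRA, and it ignores edge cases such as leap-day dates.

```python
from datetime import date

def fcra_limitations_deadline(violation: date, discovery: date) -> date:
    """Latest filing date under the FCRA limitations rule: the EARLIER of
    (a) two years after the plaintiff discovered the violation, or
    (b) five years after the date the violation occurred.
    Note: date.replace() raises ValueError for Feb 29 source dates."""
    two_years_after_discovery = discovery.replace(year=discovery.year + 2)
    five_years_after_violation = violation.replace(year=violation.year + 5)
    return min(two_years_after_discovery, five_years_after_violation)

# Example: a violation on Jan 15, 2014, discovered June 1, 2017.
# Five years after the violation (2019-01-15) comes before two years
# after discovery (2019-06-01), so the earlier date controls.
deadline = fcra_limitations_deadline(date(2014, 1, 15), date(2017, 6, 1))
```

This is also why tip 7 recommends keeping consent forms for at least five years from the date they were provided: that is the outer bound of the limitations window.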
* * *
The FTC and EEOC have issued a joint statement on background checks. While many employers need to conduct background checks to avoid liability and risks to their businesses, employers also need to follow the FCRA’s mandates to avoid the deep end of the litigation “pool.”
It’s International Data Privacy Day! Every year on January 28, the United States, Canada and 27 countries of the European Union celebrate Data Privacy Day. This day is designed to raise awareness of and generate discussion about data privacy rights and practices. Indeed, each day new reports surface about serious data breaches, data practice concerns, and calls for legislation. How can businesses manage data privacy expectations and risk amid this swirl of activity?
Here, we share some tips from our firm’s practice and some recent FTC guidance. We don’t have a cake to celebrate International Data Privacy Day but we do have our “Top 10 Data Privacy Tips”:
3. Ensure Your U.S.-E.U. Safe Harbor Is Up-to-Date. Last year, the FTC took action against several companies, including the Atlanta Falcons and Level 3 Communications, for stating in their privacy policies that they were U.S.-E.U. Safe Harbor Certified by the U.S. Department of Commerce when, in fact, the companies had failed to keep their certification current by reaffirming their compliance annually. While your organization is not required to participate in Safe Harbor, don’t say you are Safe Harbor Certified if you haven’t filed with the U.S. Department of Commerce. And, remember that your company needs to reaffirm compliance annually, including payment of a fee. You can check your company’s status here.
4. Understand Your Internal Risks. We’ve said this before – while malicious outside attacks are certainly out there, a significant percentage of breaches (around 30 percent, according to one recent study) occurs due to accidents or malicious acts by employees. Common lapses include missing firewalls, lack of encryption on devices (such as laptops and flash drives), and failure to change authentication credentials when employees leave or are terminated. While you are at it, review who has access to confidential information and whether proper restrictions are in place.
5. Educate Your Workforce. While today is International Data Privacy Day, your organization should educate your workforce on privacy issues throughout the year. Depending on the size of the company and the type of information handled (for instance, highly sensitive health information versus standard personal contact details), education efforts may vary. You should review practices like keeping passwords confidential, creating secure passwords and changing them frequently, and avoiding downloading personal or sensitive company information in unsecured forms. Just last week, a security firm reported that the most popular passwords for 2014 were “123456” and “password.” At a minimum, these easily guessed passwords should not be allowed in your system.
6. Understand Specific Requirements of Your Industry/Customers/Jurisdiction. Do you have information on Massachusetts residents? Massachusetts requires that your company have a Written Information Security Program. Does your company collect personal information from kids under 13? The organization must comply with the federal Children’s Online Privacy Protection Act and the FTC’s rules. The FTC has taken many actions against companies deemed to be collecting children’s information without properly seeking prior express parental consent.
7. Maintain a Data Breach Response Plan. If there were a potential data breach, who would get called? Legal? IT? Human Resources? Public relations? Yes, likely all of these. The best defense is a good offense – plan ahead. Representatives from in-house and outside counsel, IT/IS, human resources, and your communications department should be part of this plan. State data breach notification laws require prompt reporting. Some companies have faced lawsuits for alleged “slow” response times. If there is a potential breach, your company needs to gather resources, investigate, and, if required, disclose the breach to governmental authorities, affected individuals, credit reporting agencies, etc.
8. Consider Contractual Obligations. Before your company commits to data security obligations in contracts, ensure that a knowledgeable party, such as in-house or outside counsel, reviews these commitments. If there is a breach of a contracting party’s information, assess the contractual requirements in addition to those under data breach notification laws. The laws generally require notice to be given promptly when a company’s data is compromised while under the “care” of another company. On the flip side, consider the service providers your company uses and what type of access the providers have to sensitive data. You should require service providers to adhere to reasonable security standards, with more stringent requirements if they handle sensitive data.
9. Review Insurance Coverage. While smaller businesses may think “we’re not Target” and don’t need cyber insurance, that’s a false assumption. In fact, smaller businesses usually have less sophisticated protections and can be more vulnerable to hackers and employee negligence. Data breaches – requiring investigations, hiring of outside experts such as forensics, paying for credit monitoring, and potential loss of goodwill – can be expensive. Carriers are offering policies that do not break the bank. Cyber insurance is definitely worth exploring. If you believe you have coverage for a data incident, your company should promptly notify the carrier. Notice should be part of the data breach response plan.
10. Remember the Basics! Many organizations have faced the wrath of the FTC, state attorneys general or private litigants because the companies or their employees failed to follow basic data security procedures. The FTC has settled 53 data security law enforcement actions. Many involve the failure to take common sense steps with data, such as transmitting sensitive data without encryption, or leaving documents with personal information in a dumpster. Every company must have plans to secure physical and electronic information. The FTC looks at whether a company’s practices are “reasonable and appropriate in light of the sensitivity and amount of consumer information you have, the size and complexity of your business, and the availability and cost of tools to improve security and reduce vulnerabilities.” If the FTC calls, you want to have a solid explanation of what you did right, not be searching for answers, or offering excuses. Additional information on the FTC’s guidance can be found here.
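Tip 5’s point about easily guessed passwords can be enforced in software rather than left to training alone. Here is a minimal, hypothetical Python sketch of such a check; the tiny blocklist and the 10-character minimum are our illustrative choices, and a real deployment would use a much larger list of known-common passwords plus the complexity rules your policy requires.

```python
# Tiny sample blocklist for illustration; production systems should load
# a much larger list of known-common or breached passwords.
COMMON_PASSWORDS = {"123456", "password", "12345678", "qwerty", "abc123"}

def is_acceptable_password(candidate: str, min_length: int = 10) -> bool:
    """Reject passwords that are too short or appear on the blocklist."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    return True
```

A check like this runs at account creation and at password change, so the “123456”-style passwords the security firm flagged never enter the system in the first place.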
* * *
Remember, while it may be International Data Privacy Day, data privacy isn’t a one day event. Privacy practices must be reviewed and updated regularly to protect data as well as enable your company to act swiftly and responsively in the event of a data breach incident.