In March 2015, I wrote about the ongoing dispute between the FTC and LabMD, an Atlanta-based cancer screening laboratory, and looked at whether the FTC has the authority to take enforcement action over data-security practices alleged to be insufficient and therefore “unfair” under section 5(n) of the Federal Trade Commission Act (“FTCA”). On November 13, 2015, an administrative law judge ruled that the FTC had failed to prove its case.
In 2013, the FTC filed an administrative complaint against LabMD, alleging it had failed to secure personal, patient-sensitive information on its computer networks. The FTC alleged that LabMD lacked a comprehensive information-security program, and had therefore failed to (i) implement measures to prevent or detect unauthorized access to the company’s computer networks, (ii) restrict employee access to patient data, and (iii) test for common security risks.
The FTC linked this absence of protocol to two security breaches. First, an insurance aging report containing personal information about thousands of LabMD customers was leaked from the billing manager’s computer onto the peer-to-peer file-sharing platform LimeWire, where it was available for download for at least eleven months. Second, Sacramento police reportedly discovered hard copies of LabMD records in the hands of unauthorized individuals, who were charged with identity theft in an unrelated case of fraudulent billing and pleaded no contest.
Incriminating as it all might seem, Administrative Law Judge D. Michael Chappell dismissed the FTC’s complaint entirely, citing a failure to show that LabMD’s practices had caused substantial consumer injury in either incident.
Section 5(n) of the FTCA requires the FTC to show that LabMD’s acts or practices caused, or were likely to cause, substantial injury to consumers. The ALJ held that “substantial injury” means financial harm or unwarranted risks to health and safety. It does not cover embarrassment, stigma, or emotional suffering. As for “likely to cause,” the ALJ held that the FTC was required to prove “probable” harm, not simply “possible” or speculative harm. The ALJ noted that the statute authorizes the FTC’s regulation of future harm (assuming all statutory criteria are met), but that unfairness liability, in practice, applies only to cases involving actual harm.
In the case of the insurance aging report, the evidence showed that the file had been downloaded just once—by a company named Tiversa, which did so to pitch its own data-security services to LabMD. As for the hard copy records, their discovery could not be traced to LabMD’s data-security measures, said the ALJ. Indeed, the FTC had not shown that the hard copy records were ever on LabMD’s computer network.
The FTC had not proved—either with respect to the insurance aging report or the hard copy documents—that LabMD’s alleged security practices caused or were likely to cause consumer harm.
The FTC has appealed the ALJ’s decision to a panel of FTC Commissioners, who will render the agency’s final decision on the matter. The FTC’s attorneys argue that the ALJ took too narrow a view of harm and that a substantial injury occurs whenever an act or practice poses a significant risk of concrete harm. According to the FTC’s complaint counsel, LabMD’s data-security measures posed a significant risk of concrete harm to consumers when the billing manager’s files were accessible via LimeWire, and that risk amounts to an actual, substantial consumer injury covered by section 5(n) of the FTCA.
The Commissioners heard oral arguments in early March and will probably issue a decision in the next several months. On March 20th, LabMD filed a related suit in district court seeking declaratory and injunctive relief against the Commission for its “unconstitutional abuse of government power and ultra vires actions.”
Every week, we learn about new data breaches affecting consumers across the country. Federal government workers and retirees recently received the unsettling news that a breach compromised their personal information, including Social Security numbers, job history, pay, race, and benefits. Amid a host of other public relations issues, the Trump Organization recently discovered a potential data breach at its hotel chain. If you visited the Detroit Zoo recently, you may want to check your credit card statements: the zoo’s third-party vendor detected malware that allowed access to customers’ credit and debit card numbers. And, certainly, none of us can forget the enormous data breach at Target, and the associated data breach notifications and subsequent lawsuits.
For years, members of Congress have stressed the need for national data breach standards and data security requirements. Aside from mandates in particular laws, such as HIPAA, movement on data breach requirements had stalled in Congress. Years ago, however, the states picked up the slack, establishing data breach notification laws that require notice to consumers and, in many instances, to attorneys general and consumer protection offices when certain defined “personal information” is breached. California led the pack, passing its law in 2002, effective in 2003. Today, 47 states have laws requiring organizations to notify consumers when a data breach has compromised consumers’ personal information. Several states’ laws also mandate particular data security practices, including Massachusetts, which took the lead by establishing “standards for protection of personal information.”
Many businesses and their lobbying organizations have urged Congress to preempt state laws and establish a national standard. Most companies have employees or customers in multiple states, so under current law they must navigate a multitude of differing state requirements: what events trigger notification, which types of personal information are covered, how quickly notification must be made, who must be notified, and what information the notification must include, among others. State Attorneys General, on the other hand, assert that, irrespective of these inconveniences, their oversight of data breaches through the supervision of notifications and enforcement has played a critical role in consumer protection.
This week, the Attorneys General of the 47 states wrote to Congressional leaders, urging Congress, in any federal law, to maintain the states’ authority to require data breach notifications and to preserve the states’ enforcement authority.
The AGs’ key points are:
- State AG offices have played critical roles in investigating and enforcing data security lapses for more than a decade.
- States have been able to respond to constant changes in data security by passing “significant, innovative laws related to data security, identity theft, and privacy.” This includes addressing new categories of information, such as biometric data and login credentials for online accounts.
- States are on the “front lines” of helping consumers deal with the fallout of data breaches and have the most experience in guiding consumers through the process of removing fraudulent charges and repairing their credit. By way of example, the Illinois AG helped nearly 40,000 Illinois residents remove more than $27 million in unauthorized charges from their accounts.
- Forty states participate in the “Privacy Working Group,” where state AGs coordinate to investigate data breaches affecting consumers across multiple states.
- Consumers keep asking for more protection. Any preemption of state law “would make consumers less protected than they are right now.”
- States are better equipped to “quickly adjust to the challenges presented by a data-driven economy.”
- Adding enforcement and regulatory authority at the federal level could hamper the effectiveness of state laws. Some breaches will be too small to be a priority at the federal level, yet those same breaches may have a large impact at the state or regional level.
Interestingly, just this week, Rep. David Cicilline (D-RI) introduced a House bill mandating that companies inform consumers within 30 days of a data breach. The bill also requires minimum security standards. Representative Cicilline’s bill would not preempt stricter state-level data breach security laws. The bill also contains a broad definition of “personal information” to include data that could lead to “dignity harm” – such as personal photos and videos, in addition to the traditional categories of banking information and social security numbers. The proposed legislation would also impose civil penalties upon organizations that failed to meet the standards.
Without a doubt, data breaches will continue – whether from bad actors, technical glitches, or common employee negligence. The states have certainly “picked up the slack” for over a decade while Congressional action stalled. Understandably, the state AGs do not want Congress taking over the play in their large and established “privacy sandbox.” Preemption will continue to be a key issue for any federal data breach legislation before Congress. As someone who has guided companies through multi-state data breach notifications, I have seen firsthand that requiring businesses to deal with dozens of differing state requirements is costly and extremely burdensome. Small businesses, in particular, must grapple with a data security incident while simultaneously trying to understand and comply with a multitude of state requirements. Those businesses do not have the resources of a “Target,” and complying with a patchwork of laws significantly and adversely impacts them. While consumer protection is paramount, a federal standard for data breach notification would provide a common and clear-cut standard for all organizations and reduce regulatory burdens. Even if a federal standard preempted state notification laws, states could continue to play critical roles as enforcement authorities.
In the interim, companies must ensure that they comply with the information security requirements and data breach notification laws of applicable states. An important, and often overlooked, point is that while an organization may think of itself as, say, a “Vermont” or “Virginia” company, it likely holds personal information on residents of various states – for instance, employees who telecommute from neighboring states, or former employees who have moved to a different state. Even a “local” or “regional” company can face a host of state requirements. As part of its data security planning, a company should periodically survey the personal information it holds and the states affected. In addition to the notification requirements triggered by a breach, organizations need to address applicable state data security standards.
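The periodic survey described above can be partially automated. The sketch below is illustrative only: the CSV layout, column name, and file name are assumptions, and a record count by state is a starting point for counsel, not a substitute for reviewing each state’s statute.

```python
# Illustrative sketch: tally how many records containing personal
# information an organization holds per U.S. state, so counsel can
# check each implicated state's breach-notification requirements.
# The file name and the "state" column are assumptions about your data.
import csv
from collections import Counter

def states_with_personal_info(path):
    """Return a Counter of record counts keyed by two-letter state code."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            state = row.get("state", "").strip().upper()
            if state:
                counts[state] += 1
    return counts

# Example usage (hypothetical file):
# counts = states_with_personal_info("personnel.csv")
# Every state that appears in `counts` may trigger that state's
# notification law in the event of a breach.
```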
FTC seems more confident than ever in its authority to go after companies with insufficient data security measures. As of January 2015, FTC had settled 53 data-security enforcement actions, and FTC Senior Attorney Lesley Fair expects that number to increase.
Not everyone is sanguine about FTC’s enforcement efforts. Companies targeted for administrative action complain that the Commission is acting beyond its delegated powers under the Federal Trade Commission Act (the “FTCA”). So far, courts have declined to intervene in any administrative action that is not yet resolved at the agency level.
One such case involves LabMD, Inc., an Atlanta-based cancer-screening laboratory. At least nine years ago, someone downloaded onto the billing department manager’s computer a peer-to-peer file-sharing application called LimeWire. Hundreds of files on the computer were designated for sharing on the network, including an insurance aging report that contained personal information for more than 9,000 LabMD customers. In 2008, a third party notified LabMD that the aging report was available on LimeWire. The application was promptly removed from the billing department manager’s computer, but the damage allegedly had been done. According to FTC, authorities discovered in October 2012 that data from the aging report and other LabMD files were being used to commit identity theft against LabMD’s customers.
Ten months later, FTC filed an administrative complaint against LabMD alleging that it had failed to employ reasonable and appropriate data security measures. FTC further alleged that LabMD could have corrected the problems at relatively low cost with readily available security measures. By contrast, LabMD’s customers had no way of knowing about the failures and could not reasonably avoid the potential harms, such as identity theft, medical identity theft, and disclosure of sensitive, private, medical information. On these facts, FTC alleged that LabMD had committed an unfair trade practice in violation of the FTCA.
LabMD tried to get the administrative action dismissed on several grounds, including that the FTCA does not give the Commission express authority to regulate data-security practices. The Commission denied LabMD’s motion, explaining that Congress gave FTC broad jurisdiction to regulate unfair and deceptive practices that meet a three-factor test: section 5(n) provides that, in enforcement actions or rulemaking proceedings, the Commission has authority to determine that an act or practice is “unfair” if (i) it causes or is likely to cause substantial injury to consumers which is (ii) not reasonably avoidable by consumers themselves and (iii) not outweighed by countervailing benefits to consumers or competition. The Commissioners noted that the FTCA as passed in 1914 granted FTC the authority to regulate unfair methods of competition. When courts took a narrow view of that authority, Congress responded by amending the FTCA to clarify that the Commission has authority to regulate unfair acts or practices that injure the public, regardless of whether they injure one’s competitors. According to the Commission, the statutory delegation is intentionally broad, giving FTC discretionary authority to define unfair practices on a flexible, incremental basis. For these and other reasons, the administrative action against LabMD would proceed.
Having failed to get the case dismissed, LabMD sought relief from the federal courts to no avail. On January 20, 2015, the U.S. Court of Appeals for the Eleventh Circuit dismissed LabMD’s suit for lack of subject-matter jurisdiction. The court explained that it lacked the power to decide LabMD’s claims in the absence of final agency action. FTC had filed a complaint and issued an order denying LabMD’s motion to dismiss. But neither was a reviewable agency action because neither represented a “consummation of the agency’s decision-making process.” Moreover, “no direct and appreciable legal consequences” flowed from the actions and “no rights or obligations had been determined” by them.
LabMD can challenge FTC’s data-security jurisdiction only after the Commission’s proceedings against it are final. That may well be too late. As a result of FTC’s enforcement action, the company was forced to wind down its operations more than a year ago.
LabMD is one of very few companies to test FTC’s data-security jurisdiction. In 2007, a federal court in Wyoming sided with FTC in holding that the defendant’s unauthorized disclosure of customer phone records was an unfair trade practice in violation of the FTCA. The Tenth Circuit affirmed that decision on appeal.
More recently, a district court in New Jersey gave FTC a preliminary victory against Wyndham Worldwide Corporation. In that case, the court held that FTC’s unfairness jurisdiction extends to data-security practices that meet the three-factor test under Section 5(n). That decision is currently on appeal before the Third Circuit. During oral argument on March 3rd, the three-judge panel signaled little doubt that FTC has authority to regulate unreasonable cybersecurity practices. Instead, the panel was concerned with how the Commission exercises that authority—specifically, whether and how it has given notice as to what data security measures are considered to be “unfair.”
The law of unintended consequences – a distant cousin of Murphy’s Law – states that the actions of human beings will always have effects that are unanticipated and unintended. The law could prove a perfect fit for recent efforts by class action counsel to rely upon the Federal Wiretap Act in lawsuits arising from adware installed on personal home computers.
Take, for example, the recently filed case of Bennett v. Lenovo (United States), Inc. In that case, the plaintiff seeks to represent a class of purchasers of Lenovo laptop computers complaining that “Superfish” software that was preloaded on the laptops directed them to preferred advertisements based on their internet browsing behavior. The most interesting claim included in the complaint is the assertion that Lenovo and Superfish violated the Federal Wiretap Act.
Wiretap? What wiretap?
The Federal Wiretap Act was originally passed as Title III of the Omnibus Crime Control and Safe Streets Act of 1968. These provisions were included, at least in part, as a result of concerns about investigative techniques used by the FBI and other law enforcement agencies that threatened the privacy rights of individuals. In passing the Wiretap Act, Congress was clearly focused on the need to protect communications between individuals by telephone, telegraph and the like. The Electronic Communications Privacy Act of 1986 (ECPA) broadened the application of the statute by expanding the kinds of communications to which the statute applied. But the focus was still on communications between individuals.
As is often the case, technology is testing the boundaries of this nearly 50-year-old law. The Bennett case is not the first in which a plaintiff has argued that software that reads the user’s behavior on his or her computer violates the Wiretap Act. In some cases, the software in question has been so-called “keylogging” software that captures every one of a user’s keystrokes. Courts considering such claims (or similar claims under state statutes modeled on the federal Act) have split – some decisions turning on the specifics of when and how the software actually captured the information, and others possibly reflecting differences in the law in different parts of the country.
One of the more interesting cases, Klumb v. Goan, No. 2:09-cv-115 (E.D. Tenn. 2012), involved a husband who sued his estranged wife when he discovered that she had placed spyware on his computer. At trial, the husband demonstrated that during the marriage, his wife installed eBlaster, a program capable of not only recording keystrokes but also intercepting emails and monitoring websites visited. The husband alleged that, once the emails were intercepted, the wife altered them and other legal documents to make it appear as if he were having an affair. The motive? Money, of course. Adultery was a basis to void the pre-nuptial agreement that the parties had executed prior to their ill-fated marriage. The wife – a law school graduate – argued that the installation was consensual. Although consent is a recognized defense to a claim under the Federal Wiretap Act, for a variety of reasons the court discredited the wife’s testimony regarding the purported consent and awarded damages and attorney’s fees to the husband.
The Bennett plaintiffs may or may not succeed in showing the facts and arguing the law sufficient to prevail in their claim, and we know too little about the facts in that case to express a prediction of the result in that case. But we can state with confidence that the continued expansion of how the Wiretap Act is applied will, at some point, require that Congress step in and update the statute to make clear how it applies in the new internet-based world in which we now live.
It’s International Data Privacy Day! Every year on January 28, the United States, Canada and 27 countries of the European Union celebrate Data Privacy Day. This day is designed to raise awareness of and generate discussion about data privacy rights and practices. Indeed, each day new reports surface about serious data breaches, data practice concerns, and calls for legislation. How can businesses manage data privacy expectations and risk amid this swirl of activity?
Here, we share some tips from our firm’s practice and some recent FTC guidance. We don’t have a cake to celebrate International Data Privacy Day but we do have our “Top 10 Data Privacy Tips”:
3. Ensure Your U.S.-E.U. Safe Harbor Is Up-to-Date. Last year, the FTC took action against several companies, including the Atlanta Falcons and Level 3 Communications, for stating in their privacy policies that they were U.S.-E.U. Safe Harbor Certified by the U.S. Department of Commerce when, in fact, the companies had failed to keep their certification current by reaffirming their compliance annually. While your organization is not required to participate in Safe Harbor, don’t say you are Safe Harbor Certified if you haven’t filed with the U.S. Department of Commerce. And, remember that your company needs to reaffirm compliance annually, including payment of a fee. You can check your company’s status here.
4. Understand Your Internal Risks. We’ve said this before – while malicious outside attacks are certainly out there, a significant percentage of breaches (around 30 percent, according to one recent study) occur due to accidents or malicious acts by employees. Common lapses include missing firewalls, lack of encryption on devices (such as laptops and flash drives), and failure to change credentials when employees leave or are terminated. While you are at it, review who has access to confidential information and whether proper restrictions are in place.
5. Educate Your Workforce. While today is International Data Privacy Day, your organization should educate your workforce on privacy issues throughout the year. Depending on the size of the company and the type of information handled (for instance, highly sensitive health information versus standard personal contact details), education efforts may vary. You should review practices such as keeping passwords confidential, creating secure passwords and changing them frequently, and avoiding downloads of personal or sensitive company information in unsecured forms. Just last week, a security firm reported that the most popular passwords for 2014 were “123456” and “password.” At a minimum, these easily guessed passwords should not be allowed in your systems.
6. Understand Specific Requirements of Your Industry/Customers/Jurisdiction. Do you have information on Massachusetts residents? Massachusetts requires that your company have a Written Information Security Program. Does your company collect personal information from kids under 13? The organization must comply with the federal Children’s Online Privacy Protection Act and the FTC’s rules. The FTC has taken many actions against companies deemed to be collecting children’s information without properly seeking prior express parental consent.
7. Maintain a Data Breach Response Plan. If there were a potential data breach, who would get called? Legal? IT? Human Resources? Public relations? Yes, likely all of these. The best defense is a good offense – plan ahead. Representatives from in-house and outside counsel, IT/IS, human resources, and your communications department should be part of this plan. State data breach notification laws require prompt reporting. Some companies have faced lawsuits for alleged “slow” response times. If there is a potential breach, your company needs to gather resources, investigate, and if required, disclose the breach to governmental authorities, affected individuals, credit reporting agencies, etc.
8. Consider Contractual Obligations. Before your company commits to data security obligations in contracts, ensure that a knowledgeable party, such as in-house or outside counsel, reviews these commitments. If there is a breach of a contracting party’s information, assess the contractual requirements in addition to those under data breach notification laws. The laws generally require notice to be given promptly when a company’s data is compromised while under the “care” of another company. On the flip side, consider the service providers your company uses and what type of access the providers have to sensitive data. You should require service providers to adhere to reasonable security standards, with more stringent requirements if they handle sensitive data.
9. Review Insurance Coverage. While smaller businesses may think “we’re not Target” and don’t need cyber insurance, that’s a false assumption. In fact, smaller businesses usually have less sophisticated protections and can be more vulnerable to hackers and employee negligence. Data breaches – requiring investigations, hiring of outside experts such as forensics, paying for credit monitoring, and potential loss of goodwill – can be expensive. Carriers are offering policies that do not break the bank. Cyber insurance is definitely worth exploring. If you believe you have coverage for a data incident, your company should promptly notify the carrier. Notice should be part of the data breach response plan.
10. Remember the Basics! Many organizations have faced the wrath of the FTC, state attorneys general, or private litigants because they or their employees failed to follow basic data security procedures. The FTC has settled 53 data security law enforcement actions. Many involve the failure to take common sense steps with data, such as transmitting sensitive data without encryption, or leaving documents with personal information in a dumpster. Every company must have plans to secure physical and electronic information. The FTC looks at whether a company’s practices are “reasonable and appropriate in light of the sensitivity and amount of consumer information you have, the size and complexity of your business, and the availability and cost of tools to improve security and reduce vulnerabilities.” If the FTC calls, you want to have a solid explanation of what you did right, not be searching for answers, or offering excuses. Additional information on the FTC’s guidance can be found here.
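The password guidance in tip 5 can be enforced mechanically. Below is a minimal, illustrative sketch, not a production policy: the tiny denylist, the function name, and the ten-character threshold are all assumptions, and real systems should use a much larger list of known-compromised passwords and established identity tooling.

```python
# Illustrative sketch: reject passwords that appear on a denylist of
# commonly used values (e.g., "123456", "password") or that are too short.
# The denylist below is a tiny sample for demonstration only.
COMMON_PASSWORDS = {"123456", "password", "12345678", "qwerty", "abc123"}

def is_acceptable_password(candidate, min_length=10):
    """Return True if the password meets the length floor and is not
    on the denylist of easily guessed passwords."""
    if len(candidate) < min_length:
        return False
    return candidate.lower() not in COMMON_PASSWORDS
```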
* * *
Remember, while it may be International Data Privacy Day, data privacy isn’t a one day event. Privacy practices must be reviewed and updated regularly to protect data as well as enable your company to act swiftly and responsively in the event of a data breach incident.
In August, the Federal Trade Commission (“FTC”) released a staff report concerning mobile shopping applications (“apps”). FTC staff reviewed some of the most popular apps consumers utilize to comparison shop, collect and redeem deals and discounts, and pay in-store with their mobile devices. This new report focused on shopping apps offering price comparison, special deals, and mobile payments. The August report is available here.
Popularity of Mobile Shopping Apps/FTC Interest
Shoppers can empower themselves in the retail environment by comparison shopping via their smartphones in real time. According to a 2014 report by the Board of Governors of the Federal Reserve System, 44% of smartphone owners report using their mobile phones to comparison shop while in a retail store, and 68% of those consumers changed where they made a purchase as a result. Consumers can also get instant coupons and deals to present at checkout. With a wave of a phone at the checkout counter, consumers can then make purchases.
While the shopping apps have surged in popularity, the FTC staff is concerned about consumer protection, data security and privacy issues associated with the apps. The FTC studied what types of disclosures and practices control in the event of unauthorized transactions, billing errors, or other payment-related disputes. The agency also examined the disclosures that apps provide to consumers concerning data privacy and security.
Apps Lack Important Information
FTC staff concluded that many of the apps they reviewed failed to provide consumers with important pre-download information. In particular, only a few of the in-store purchase apps gave consumers information describing how the app handled payment-related disputes and consumers’ liability for charges (including unauthorized charges).
FTC staff determined that fourteen of the thirty in-store purchase apps did not disclose, prior to download, whether they had any dispute-resolution or liability-limit policies. And of the sixteen apps that provided pre-download information about dispute resolution procedures or liability limits, only nine provided written protections for users. Some apps disclaimed all liability for losses.
Data Security Information Vague
FTC staff focused particular attention on data privacy and security because, more than other technologies, mobile devices are personal to a user, always on, and frequently with the user. These features enable an app to collect a huge amount of information, such as location, interests, and affiliations, which could be shared broadly with third parties. Staff noted that, “while almost all of the apps stated that they share personal data, 29 percent of price comparison apps, 17 percent of deal apps, and 33 percent of in-store purchase apps reserved the right to share users’ personal data without restriction.”
Staff concluded that while privacy disclosures are improving, they tend to be overly broad and confusing. In addition, app developers may not be considering whether they even have a business need for all the information they are collecting. As to data security, staff noted it did not test the services to verify the security promises made. However, FTC staff reminded companies that it has taken enforcement actions against mobile apps it believed to have failed to secure personal data (such as Snapchat and Credit Karma). The report states, “Staff encourages vendors of shopping apps, and indeed vendors of all apps that collect consumer data, to secure the data they collect. Further those apps must honor any representations about security that they make to consumers.”
FTC Staff Recommends Better Disclosures and Data Security Practices
The report urges companies to disclose to consumers their rights and liability limits for unauthorized, fraudulent, or erroneous transactions. Organizations offering these shopping apps should also explain to consumers what protections they have based on their methods of payment and what options are available for resolving payment and billing disputes. Companies should provide clear, detailed explanations for how they collect, use and share consumer data. And, apps must put promises into practice by abiding by data security representations.
Consumer Responsibility Plays Role, Too
Importantly, the FTC staff report does not place the entire burden on companies offering the mobile apps. Rather, FTC staff urge consumers to be proactive when using these apps. The staff report recommends that consumers look for and consider the dispute resolution and liability limits of the apps they download. Consumers should also analyze what payment method to use when purchasing via these apps. If consumers cannot find sufficient information, they should consider an alternative app, or make only small purchases.
While a great “deal” could be available with a click on a smartphone, the FTC staff urges consumers to review available information on how their personal and financial data may be collected, used, and shared while they get that deal. If consumers are not satisfied with the information provided regarding data privacy and security, then staff recommends that they choose a different app or limit the personal and financial data they provide. (Though that last piece of advice may not be practical, considering most shopping apps require a certain level of personal and financial information simply to complete a transaction.)
Deal or No Deal? FTC Will be Watching New Shopping Apps
FTC Staff has concerns about mobile payments and will continue to focus on consumer protections. The agency has taken several enforcement actions against companies for failing to secure personal and payment information and it does not appear to be slowing down. While the FTC recognizes the benefits of these new shopping and payment technologies, it is also keenly aware of the enormous amount of data obtained by companies when consumers use these services. Thus, companies should anticipate that the FTC will continue to monitor shopping and deal apps with particular attention on disclosures and data practices.
Last week the Federal Trade Commission (“FTC”) charged the operators of Jerk.com with harvesting personal information from Facebook to create profiles for an estimated 73 million people, on which they could be labeled a “Jerk” or “not a Jerk.”
In the complaint, the FTC charged the defendants, Jerk, LLC and the operator of the website, John Fanning, with violating the FTC Act by allegedly misleading consumers into believing that the content on Jerk.com had been created by registered users of the site, when most of it had been harvested from Facebook. The FTC alleged that the operators of Jerk.com falsely claimed that consumers could revise their online profiles by paying a $30 membership fee. Additionally, the FTC asserted that the defendants misled consumers into believing that a paid membership would give them access to features allowing them to change their profiles on the site.
Facebook profile pictures and profile names generally are public, and Facebook’s rules allow developers to upload names and pictures in bulk. However, Jerk.com allegedly violated Facebook’s policies in the way it mined data from people’s profiles; at the time, Facebook’s rules allowed an app developer to keep a person’s profile picture for only 24 hours. The complaint stated that Fanning registered several websites with Facebook and used Facebook’s application programming interface to download the data needed to create the fake profiles on Jerk.com. The FTC is also seeking an order barring the defendants from using the personal information that was obtained and requiring them to delete it.
This action is another indication that the FTC is closely monitoring companies that it believes are scraping consumer data from other sites and deceiving customers in their business practices. The complaint notes that Jerk.com profiles often appear high in search engine results when a person’s name is searched. “In today’s interconnected world, people are especially concerned about their reputation online, and this deceptive scheme was a brazen attempt to exploit those concerns,” said Jessica Rich, Director of the FTC’s Bureau of Consumer Protection, in a statement.
Companies should monitor their practices for obtaining data from other websites to ensure that they are in compliance with the terms and conditions of websites where they obtain data. Organizations should be cautious about how they use this data, including being careful about making any representations and disclosures that could be viewed as deceptive by the FTC or a state attorney general.
By Michelle Cohen, CIPP-US
Just as we recovered from the high-profile data breaches at Target and Neiman Marcus, signed up for free credit monitoring, and analyzed our credit reports, a new Internet villain emerged: the “Heartbleed Bug.” The Heartbleed Bug is a security flaw in OpenSSL, popular open-source software that runs on most web servers and is widely used to encrypt web communications. The Heartbleed Bug affects approximately 500,000 websites, reportedly including Yahoo, OkCupid, and Tumblr. And, in addition to websites, the Bug may impact networked devices such as video conferencing systems, smartphones, and work phones.
The danger of the Heartbleed Bug lies in its ability to reveal the contents of a server’s memory, allowing an attacker to grab sensitive data stored there, including passwords, user names, and credit card numbers. Adding insult to injury, the Bug has existed for at least two years, giving hackers a huge head start. News reports and some websites have urged users to change their passwords; others have warned individuals not to change their passwords until a website has indicated it has installed the security patch that “cures” the Bug. Several sites, including one by McAfee, offer tools to “test” whether a given website is vulnerable to the Heartbleed Bug. In terms of priorities, users should focus on the sites where they bank, shop online, e-mail, and store files.
Further intrigue comes from the fact that a recent Bloomberg report alleged that the National Security Agency (“NSA”) knew about the Bug for at least two years, but may have utilized the vulnerabilities to access information. The NSA has denied it had knowledge of the Bug.
While we have yet to see a “rush to the courthouse” following the announcement of the Heartbleed Bug, we anticipate lawsuits and enforcement could follow where organizations fail to respond to the Bug by installing the necessary security patch. Companies (including our clients in the Internet marketing and I-gaming industries) should investigate whether their websites, apps, or other services (such as cloud services) use OpenSSL – and then move immediately to oversee the installation of the security patch. Organizations should also advise users of the status of the Heartbleed Bug fix and encourage them to change their passwords, using different passwords across different services.
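As a first step in that investigation, administrators can check which OpenSSL build their software stack is linked against. The sketch below is a minimal illustration only, not a substitute for a full audit: it assumes the publicly reported vulnerable range (OpenSSL 1.0.1 through 1.0.1f, fixed in 1.0.1g) and simply flags a version string that falls inside it, using the OpenSSL version compiled into the local Python interpreter as an example input.

```python
import re
import ssl

# Heartbleed (CVE-2014-0160) affected OpenSSL 1.0.1 through 1.0.1f;
# 1.0.1g and later, and the older 0.9.8/1.0.0 branches, were not affected.
VULNERABLE = re.compile(r"OpenSSL 1\.0\.1([a-f])?(\s|$)")

def openssl_vulnerable(version_string: str) -> bool:
    """Return True if the reported OpenSSL version falls in the Heartbleed range."""
    return bool(VULNERABLE.search(version_string))

# Example: check the OpenSSL build linked into this Python interpreter.
status = "vulnerable" if openssl_vulnerable(ssl.OPENSSL_VERSION) else "patched/unaffected"
print(ssl.OPENSSL_VERSION, "->", status)
```

A version-string check like this only shows what a host reports; servers patched by recompiling, or sitting behind unpatched load balancers, still need direct verification and certificate/key rotation.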
After the FTC secured a $163 million judgment against Kristy Ross in the U.S. District Court for the District of Maryland, the Fourth Circuit affirmed, ending the FTC’s six-year “scareware” enforcement action. From beginning to end, this odyssey has been quite colorful, to say the least. The nine-figure judgment against Ross is no exception.
Originally, there were eight codefendants: Innovative Marketing, Inc., ByteHosting Internet Services, LLC, and six of the companies’ officers and directors, including Ms. Ross. The case was based on FTC allegations that their massive “scareware” scheme was deceptive in violation of Section 5 of the FTC Act. Specifically, the FTC alleged that the defendants falsely warned consumers that (imaginary) scans of their computers had detected security or privacy issues (e.g., viruses, spyware, system errors, and pornography). After receiving the fraudulent security alerts, consumers were prompted to purchase the defendants’ software to remedy the (imaginary) problems. More than one million consumers purchased the scareware – of them, roughly three thousand filed complaints with the FTC.
Ross was the only co-defendant remaining at trial, and the judgment was entered against her individually and as a member of Innovative Marketing, Inc. (IMI). Four of the eight original defendants settled with the FTC in February 2010. The same month, the trial court entered default judgments against the remaining three – IMI, Mr. Jain, and Mr. Sundin – for their failure to appear and participate in the litigation. Ross retained counsel but failed to file an answer, respond to the FTC’s discovery requests, or appear at trial. As such, the lone defendant Ross was tried in absentia. Though not explicitly expressed in the trial judge’s opinion, one can only imagine that the optics worked against Ms. Ross at trial.
Before trial, the FTC moved for summary judgment. In her opposition, Ross argued that she was just an employee at IMI (not a “control person”) without requisite knowledge of the misconduct and that she could not therefore be held individually liable under the FTC Act. The court found there to be no issues of material fact with regard to whether the scareware scheme was deceptive in violation of the FTC Act. And a bench trial was ordered to determine the extent of Ross’ control over, participation in, and knowledge of IMI’s deceptive practices.
At trial, Judge Bennett found that Ross had actual knowledge of the marketing scheme, was fully aware of many of the complaints from customers, and was in charge of remedying the problems. The court issued a permanent injunction (as authorized by the FTC Act) and held her individually liable for the total amount of consumer injury (calculated by the FTC at $163,167,539.95), finding that to be the proper measure for consumer redress.
On appeal, Ross asked the court to apply the SEC standard for individual liability, which essentially requires a showing of specific intent/subjective knowledge. The Fourth Circuit declined, finding that such a standard would leave the FTC “with a futile gesture of obtaining an order directed to the lifeless entity of a corporation, while exempting from its operation the living individuals who were responsible for the illegal practices in the first place.” The appeals court also rejected Ross’ arguments that district courts do not have authority to award consumer redress, noting that “[a] ruling in favor of Ross would forsake almost thirty years of federal appellate decisions and create a circuit split,” an outcome that it refused to countenance.
The factual and procedural history of this case is pretty outlandish, and it is not clear why Ross opted to take the FTC to the mat (in absentia) in a case with so much weighing against her. Had she settled with the others back in 2010, she might only have been on the hook for the gross revenues she received from the alleged scam – and the FTC almost certainly would have followed its common practice of suspending all but the amount she was able to pay. But, alas, she did not.
Attorney General Holder Calls on Congress to Establish Strong National Data Breach Notification Standard
By Michelle Cohen, CIPP-US
Yesterday, in his weekly video address, Attorney General Eric Holder urged Congress to create a national data breach notification standard requiring companies to quickly notify consumers of a breach of their personal or financial information. In the wake of the high-profile holiday season data breaches at retailers Target and Neiman Marcus, Holder stated that the Department of Justice and the U.S. Secret Service continue to work to investigate hacking and cybercrimes. However, Holder believes that Congress should act to establish a federal notification requirement to protect consumers. Holder’s video address is available here.
Currently, at least forty-six states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have laws requiring private or government entities to notify individuals of security breaches of information involving personally identifiable information. As might be expected, the laws vary widely from state to state, particularly in the timing requirement for the breach notifications. Most laws allow delay to accommodate a law enforcement investigation.
Some states require notification as soon as reasonably practicable. Others require notification within 45 days. Yet organizations have faced lawsuits for failing to notify on a timely basis, even where there is no set standard. This presents a difficult situation for companies. Organizations need to investigate a data breach and determine the type of information affected, who was affected (and thus needs to be notified), and importantly, whether the breach is ongoing such that the company must immediately implement remedial measures.
Attorney General Holder believes Congress should set a national standard that will better protect consumers. Holder asserts that a federal requirement would enable law enforcement to investigate data breaches quickly and to hold organizations accountable when they fail to protect personal and financial information. Holder’s video message did note that any such requirement should include “reasonable exemptions” to avoid creating unnecessary burdens for companies.
The Target and Neiman Marcus data breaches have certainly raised the profile of cybersecurity issues on Capitol Hill, with several bills addressing data breaches introduced in recent weeks. While the states certainly took the lead in protecting consumers by enacting data breach laws over the past several years, a properly crafted national standard could provide more consistent guidance for industry and a uniform rule for consumers irrespective of their home states. Should Congress move forward on a data breach law, reasonable accommodations should be made so that companies have time to investigate data breaches and determine their scope, the persons affected, and the type of information affected. A national standard setting forth a notification deadline would also presumably alleviate the “rush to the courthouse” by the plaintiffs’ bar over the timing of data breach notifications.