GDPR. If you see those letters and think it is an acronym for Gosh Darned Pain in the Rear (or an edgier equivalent), you are in large part correct. But if you don’t know any more than that, and you are a company with any ties to Europe, then you need to read further.
GDPR, the General Data Protection Regulation, is an extensive and broad-reaching regulation issued by the European Union dealing with how companies (including U.S. companies) process the data of people living in the E.U. It replaces the E.U. Data Protection Directive and is slated to take effect May 25, 2018.
Companies that fall under the regulation’s requirements need to ensure (1) that the individuals’ data they process is secure in their hands, (2) that they have individuals’ consent to process it (or an enumerated reason they don’t need consent), and (3) that they keep individuals informed of their rights and of developments surrounding the use of their data.
If you are a U.S.-based company with little European presence, you may shrug off the idea of getting into GDPR compliance. You may have analyzed the GDPR’s predecessor (the Data Protection Directive), decided that it didn’t implicate you, and assumed the GDPR won’t implicate you either. Or you may have relied upon the Safe Harbor and assumed you can continue to operate under it. Don’t make assumptions. Don’t ignore the regulation. If you do, and you are ultimately found to have violated it, you could face some hefty penalties. Under the GDPR, there are two thresholds for administrative fines:
- Up to €10 million (almost US$12 million) or up to 2% of global revenue, whichever is higher, for certain violations, including failure to implement data protection by design, to maintain written records, or to report breaches when required; and
- Up to €20 million (almost US$24 million) or up to 4% of global revenue, whichever is higher, for other violations, including failure to adhere to basic processing principles such as consent, notification of individuals’ rights, and international transfers.
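For readers who like to see the arithmetic, the two-tier cap described above can be sketched in a few lines of code. This is an illustration only: the function name and revenue figure are hypothetical, and actual fines are set case by case by supervisory authorities.

```python
# Illustrative sketch of the GDPR's two administrative fine ceilings.
# These are statutory maximums; regulators set actual fines case by case.

def fine_ceiling(global_revenue_eur: float, severe: bool) -> float:
    """Return the maximum possible fine for a given violation tier.

    Tier 1 (e.g., record-keeping failures): up to EUR 10M or 2% of revenue.
    Tier 2 (e.g., consent/processing violations): up to EUR 20M or 4% of revenue.
    In each tier, whichever amount is higher applies.
    """
    if severe:
        return max(20_000_000.0, 0.04 * global_revenue_eur)
    return max(10_000_000.0, 0.02 * global_revenue_eur)

# A hypothetical company with EUR 2 billion in global revenue:
print(fine_ceiling(2_000_000_000, severe=False))  # 40000000.0 (2% exceeds EUR 10M)
print(fine_ceiling(2_000_000_000, severe=True))   # 80000000.0 (4% exceeds EUR 20M)
```

Note that for smaller companies the fixed amounts dominate: the same calculation for a company with EUR 100 million in revenue yields the EUR 10 million and EUR 20 million floors.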
These fines are meant to catch attention. Hopefully, they caught yours. They may inspire you to do a double take to see whether your business will be subject to the GDPR. The GDPR has a broader reach than the earlier Data Protection Directive. Moreover, the Safe Harbor is no longer valid. It has been replaced by a “Privacy Shield” regime, which applies to data that companies transfer from the E.U. to the U.S. But even the Privacy Shield is on shaky ground, and it may not be enough to shield companies (so to speak) from liability for GDPR violations. The GDPR is broader, covering information on E.U. residents even if the data is not transferred across borders, and instituting stricter measures in terms of how data should be handled.
Here are some questions you should ask to help you determine whether you need to prepare for the GDPR:
- Do you have an E.U. office, or even a company representative who operates out of Europe?
If you have any “real and effective” European activity through “stable arrangements” (the quoted terms are those used by E.U. courts to define implicated businesses), then you will be subject to the GDPR even if you do not process personal data in the E.U. So long as the data is processed in the context of the European activities, the GDPR applies.
- Are you outside of the E.U., but process data about E.U.-based individuals in connection with offering goods or services?
It does not matter whether any payment is involved in the offer. Your offers can be free of charge and you are still covered by the rule. So long as your company envisions activity directed at E.U. individuals (e.g., you price items in E.U. currency or pay a search engine to increase access for E.U.-based people), you are implicated.
- Are you outside of the E.U., but monitor the behavior of individuals in the E.U.?
If you track E.U.-based individuals online to create profiles, or to analyze or predict preferences, you are implicated.
The long and short of it is that, if your operations touch Europe, directly or remotely, and you process data concerning E.U. individuals, you should spend time assessing GDPR compliance.
- Review your actual or directed E.U.-focused activity
- Review the type of information you collect/use
- Review the types of consent obtained and notifications on data usage provided
- Review your service contracts to determine your company’s role in data processing and follow-on companies’ roles in data processing
We will address the E.U.’s ability to enforce these penalties in a later post, but assume it will be able to reach your assets.
Data Privacy and Cyber Security
Acting Chairman of the Federal Trade Commission, Maureen Ohlhausen, answered questions about the FTC’s current role in data privacy before a crowded audience at the April 2017 IAPP Global Privacy Summit in D.C. Below are some takeaways we wanted to share from Commissioner Ohlhausen’s talk:
- Even if out of ISP oversight, the FTC is actively engaged in data privacy enforcement through its consumer protection role.
Ohlhausen expressed disappointment that FTC had to step out of ISP oversight in 2015, when the FCC reclassified broadband as a common carrier service (the reclassification means the FCC, no longer the FTC, has authority over privacy and data security enforcement of ISPs). But she said that the FTC is still active through holding companies to their data privacy policies and claims: “We enforce promises. We hold companies to their promises, even in technologically advanced areas.” She noted that FTC enforcement actions derive not only from consumer complaints, but that the FTC is getting cases from computer researchers and marketplace competitors.
- FTC to present positive findings from its enforcement actions.
Ohlhausen and her staff are considering changing up what they present publicly on their investigation findings. Normally, the FTC publishes what it has found companies doing wrong, but Ohlhausen believes the public could benefit from what the FTC has found companies doing right. The FTC therefore may be bolstering its public messages on enforcement actions with this positive twist.
- How FCC and FTC oversight of ISPs differs.
Ohlhausen noted that the FCC has ended up with a different approach to data security oversight. For instance, it has taken a different view on what constitutes sensitive data and on what types of opt-ins and opt-outs are permissible. She expressed concern that, with the Open Internet Order, which removed ISPs from the FTC’s privacy jurisdiction, no one is really watching the henhouse. She hopes either Congress or the FCC will reconsider the FTC’s role: the FCC could rescind its reclassification, or Congress could rescind the FCC’s common carrier authority over broadband services.
- The Privacy Shield and the FTC’s role in working with Europe.
Ohlhausen noted that the current Administration seems committed to the Privacy Shield. She believes that the Privacy Shield meets Europe’s needs and further that the FTC has an important role to fill in (1) ensuring how information is disseminated and (2) enforcement. For instance, the FTC can provide guidance on how to inform EU consumers on the parameters of the Privacy Shield. Moreover, the FTC will enforce Privacy Shield violations—based on deception for failure to comply. She is optimistic that the Shield will withstand court challenges, in contrast to the Safe Harbor, which was negotiated in a different environment.
- Chinese forays into privacy.
Ohlhausen, who was heading to Beijing the day after her IAPP talk, expressed interest in Chinese developments in privacy regulation: where a communist country’s government controls so much, there still can be a real interest in privacy for the consumer. She noted that some international companies have concerns over whether they will be disadvantaged by Chinese privacy laws.
- Privacy and overlap with other areas of law
When asked whether privacy laws, such as anti-discrimination provisions contained in the GDPR, are carrying more water than just privacy, Ohlhausen noted that there is some overlap, such as with the Fair Credit Reporting Act and Civil Rights Act. She took the discussion as an opportunity to highlight the importance of balancing fear of the unknown against the benefits of innovation: it is good to identify the bad things that can happen. But we also need to weigh that against the good things. While consumer protection is important, we also want a competitive marketplace, and want to encourage innovation.
A side note on the FCC reclassification: a persistent theme in Ohlhausen’s talk was expressing hope that the FTC would get authority back over ISPs.
In March 2015, I wrote about the ongoing dispute between the FTC and LabMD, an Atlanta-based cancer screening laboratory, and looked at whether the FTC has the authority to take enforcement action over data-security practices alleged to be insufficient and therefore “unfair” under section 5(n) of the Federal Trade Commission Act (“FTCA”). On November 13, 2015, an administrative law judge ruled that the FTC had failed to prove its case.
In 2013, the FTC filed an administrative complaint against LabMD, alleging it had failed to secure personal, patient-sensitive information on its computer networks. The FTC alleged that LabMD lacked a comprehensive information-security program, and had therefore failed to (i) implement measures to prevent or detect unauthorized access to the company’s computer networks, (ii) restrict employee access to patient data, and (iii) test for common security risks.
The FTC linked this absence of protocol to two security breaches. First, an insurance aging report containing personal information about thousands of LabMD customers was leaked from the billing manager’s computer onto the peer-to-peer file-sharing platform LimeWire, where it was available for download for at least eleven months. Second, Sacramento police reportedly discovered hard copies of LabMD records in the hands of unauthorized individuals; those individuals were charged with identity theft in an unrelated case of fraudulent billing and pleaded no contest.
Incriminating as it all might seem, Administrative Law Judge D. Michael Chappell dismissed the FTC’s complaint entirely, citing a failure to show that LabMD’s practices had caused substantial consumer injury in either incident.
Section 5(n) of the FTCA requires the FTC to show that LabMD’s acts or practices caused, or were likely to cause, substantial injury to consumers. The ALJ held that “substantial injury” means financial harm or unwarranted risks to health and safety. It does not cover embarrassment, stigma, or emotional suffering. As for “likely to cause,” the ALJ held that the FTC was required to prove “probable” harm, not simply “possible” or speculative harm. The ALJ noted that the statute authorizes the FTC’s regulation of future harm (assuming all statutory criteria are met), but that unfairness liability, in practice, applies only to cases involving actual harm.
In the case of the insurance aging report, the evidence showed that the file had been downloaded just once—by a company named Tiversa, which did so to pitch its own data-security services to LabMD. As for the hard copy records, their discovery could not be traced to LabMD’s data-security measures, said the ALJ. Indeed, the FTC had not shown that the hard copy records were ever on LabMD’s computer network.
The FTC had not proved—either with respect to the insurance aging report or the hard copy documents—that LabMD’s alleged security practices caused or were likely to cause consumer harm.
The FTC has appealed the ALJ’s decision to a panel of FTC Commissioners who will render the agency’s final decision on the matter. The FTC’s attorneys argue that the ALJ took too narrow a view of harm, and a substantial injury occurs when any act or practice poses a significant risk of concrete harm. According to the FTC’s complaint counsel, LabMD’s data-security measures posed a significant risk of concrete harm to consumers when the billing manager’s files were accessible via LimeWire, and that risk amounts to an actual, substantial consumer injury covered by section 5(n) of the FTCA.
The Commissioners heard oral arguments in early March and will probably issue a decision in the next several months. On March 20th, LabMD filed a related suit in district court seeking declaratory and injunctive relief against the Commission for its “unconstitutional abuse of government power and ultra vires actions.”
Every week, we learn about new data breaches affecting consumers across the country. Federal government workers and retirees recently received the unsettling news that a breach compromised their personal information, including social security numbers, job history, pay, race, and benefits. Amid a host of other public relations issues, the Trump organization recently discovered a potential data breach at its hotel chain. If you visited the Detroit Zoo recently, you may want to check your credit card statements, as the zoo’s third party vendor detected “malware” which allowed access to customers’ credit and debit card numbers. And, certainly, none of us can forget the enormous data breach at Target, and the associated data breach notifications and subsequent lawsuits.
For years, members of Congress have stressed the need for national data breach standards and data security requirements. Aside from mandates in particular laws, such as HIPAA, movement on data breach requirements had stalled in Congress. Years ago, however, the states picked up the slack, establishing data breach notification laws requiring notifications to consumers and, in many instances to attorneys general and consumer protection offices when certain defined “personal information” was breached. California led the pack, passing its law in 2003. Today, 47 states have laws requiring organizations to notify consumers when a data breach has compromised consumers’ personal information. Several states’ laws also mandate particular data security practices, including Massachusetts, which took the lead on establishing “standards for protection of personal information.”
Many businesses and their lobbying organizations have urged Congress to preempt state laws and establish a national standard. Most companies have employees or customers in multiple states. Thus, under current laws, organizations have to address a multitude of state requirements, including triggering events, the types of personal information covered, how quickly the notification must be made, who gets notified, and what information should be included in the notification. State Attorneys General, on the other hand, assert that, irrespective of these inconveniences, their oversight of data breaches through the supervision of notifications and enforcement has played a critical role in consumer protection.
This week, the Attorneys General from the 47 states wrote to Congressional leaders, urging Congress to maintain states’ authority in any federal law, by requiring data breach notifications, and preserving the states’ enforcement authority.
The AGs’ key points are:
- State AG offices have played critical roles in investigating and enforcing data security lapses for more than a decade.
- States have been able to respond to constant changes in data security by passing “significant, innovative laws related to data security, identity theft, and privacy.” This includes addressing new categories of information, such as biometric data and login credentials for online accounts.
- States are on the “front lines” of helping consumers deal with the fallout of data breaches and have the most experience in guiding consumers through the process of removing fraudulent charges and repairing their credit. By way of example, the Illinois AG helped nearly 40,000 Illinois residents remove more than $27 million in unauthorized charges from their accounts.
- Forty states participate in the “Privacy Working Group,” where state AGs coordinate to investigate data breaches affecting consumers across multiple states.
- Consumers keep asking for more protection. Any preemption of state law “would make consumers less protected than they are right now.”
- States are better equipped to “quickly adjust to the challenges presented by a data-driven economy.”
- Adding enforcement and regulatory authority at the federal level could hamper the effectiveness of state laws. Some breaches will be too small to have priority at the federal level; however, these breaches may have a large impact at the state or regional level.
Interestingly, just this week, Rep. David Cicilline (D-RI) introduced a House bill mandating that companies inform consumers within 30 days of a data breach. The bill also requires minimum security standards. Representative Cicilline’s bill would not preempt stricter state-level data breach security laws. The bill also contains a broad definition of “personal information” to include data that could lead to “dignity harm” – such as personal photos and videos, in addition to the traditional categories of banking information and social security numbers. The proposed legislation would also impose civil penalties upon organizations that failed to meet the standards.
Without a doubt data breaches will continue – whether from bad actors, technical glitches, or common employee negligence. The states have certainly “picked up the slack” for over a decade while Congressional actions stalled. Understandably, the state AGs do not want Congress taking over the play in their large and established “privacy sandbox.” Preemption will continue to be a key issue for any federal data breach legislation before Congress. As someone who has guided companies through multi-state data breach notifications, I have seen firsthand that requiring businesses to deal with dozens of differing state requirements is costly and extremely burdensome. Small businesses, in particular, are faced with having to grapple with a data security incident while trying to understand and comply with a multitude of state requirements. Those businesses do not have the resources of a “Target” and complying with a patchwork of laws significantly and adversely impacts those businesses. While consumer protection is paramount, a federal standard for data breach notification would provide a common and clear-cut standard for all organizations and reduce regulatory burdens. While the federal standard could preempt state notification laws, states could continue to play critical roles as enforcement authorities.
In the interim, companies must ensure that they comply with the information security requirements and data breach notifications of applicable states. An important, and overlooked aspect is to remember that while an organization may think of itself as, say a “Vermont” or “Virginia” company, it is likely that the company has personal information on residents of various states – for instance, employees who telecommute from neighboring states, or employees who left the company and moved to a different state. Even a “local” or “regional” company can face a host of state requirements. As part of an organization’s data security planning, companies should periodically survey the personal information they hold and the affected states. In addition to data breach requirements in the event of a breach, organizations need to address applicable state data security standards.
The FTC’s complaint stated that Nomi’s technology (called its “Listen” service) allows retailers to track consumers’ movements through stores. The company places sensors in its clients’ stores, which collect the MAC addresses of consumers’ mobile devices as the devices search for WiFi networks. While Nomi “hashes” the MAC addresses prior to storage in order to hide the specific MAC addresses, the process results in identifiers unique to consumers’ mobile devices which can be tracked over time. Nomi provided its retail clients with aggregated information, such as how long consumers stayed in the store, the types of devices used by consumers, and how many customers had visited a different location in a chain of stores. Between January and September 2013, Nomi collected information on approximately 9 million mobile devices, according to the FTC’s complaint.
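The technical point about hashing is worth pausing on. A deterministic hash hides the raw MAC address, but its output is stable, so the same device produces the same identifier on every visit and can still be tracked over time. A minimal sketch follows; the hash function, normalization, and truncation here are our own illustrative assumptions, as the FTC’s complaint does not specify Nomi’s actual scheme.

```python
import hashlib

def pseudonymize(mac_address: str) -> str:
    """Hash a MAC address deterministically.

    The raw address is no longer stored, but the output is identical
    every time the same device is seen -- a stable, trackable identifier.
    (SHA-256 and 16-hex-digit truncation are illustrative choices only.)
    """
    normalized = mac_address.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]

# The same hypothetical device observed on two different days:
monday = pseudonymize("AA:BB:CC:DD:EE:FF")
friday = pseudonymize("aa:bb:cc:dd:ee:ff")
print(monday == friday)  # True: visits remain linkable, so hashing alone
                         # is pseudonymization, not anonymization
```

This is why the FTC could describe the hashed values as “identifiers unique to consumers’ mobile devices which can be tracked over time”: the transformation removes the address, not the ability to recognize the device.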
Nomi’s settlement does not require any monetary payment but prohibits Nomi from misrepresenting the options through which consumers can exercise control over the collection, use, disclosure or sharing of information collected from or about them or their devices. The settlement also bars Nomi from misrepresenting the extent to which consumers will be provided notice about how data from or about a particular consumer or device is collected, used, disclosed or shared. Nomi is required to maintain certain supporting records for five years. As is typical with FTC consent orders, this agreement remains in force for 20 years.
What can companies learn from Nomi’s settlement, even those not in the retail tracking business?
- While this is the first FTC action against a retail tracking company, the FTC has repeatedly stated that it will enforce the FTC Act and other laws under its jurisdiction against emerging as well as traditional technologies.
- The FTC noted that Nomi had about 45 clients. Most of those clients did not post a disclosure or notify consumers regarding their use of the Listen service, and Nomi did not mandate such disclosures by its clients. The FTC did not address what obligation, if any, these businesses may have to make such disclosures. Will it become common or mandated to see a sign in a retail location warning that retail tracking via mobile phones is occurring (similar to signs about video surveillance)? One industry group’s self-regulatory policy requires retail analytics firms to take “reasonable steps to require that companies using their technology display, in a conspicuous location, signage that informs consumers about the collection and use of MLA [mobile location analytics] Data at that location.” This issue will become more prevalent as more retailers and other businesses use tracking technology.
- Interestingly, the FTC brought this action even though traditional “personal information” was not collected (such as name, address, social security number, etc.). Organizations should not assume that collecting IP addresses, MAC addresses, or other less personalized information presents no issues. The FTC takes privacy statements seriously, whatever the information collected (though certainly there is more sensitivity toward certain categories such as health, financial, and children’s information).
The bottom line is “do what you say” when it comes to privacy practices. All companies should evaluate their privacy policies at least every six months to ensure that they remain accurate and complete, have working links (if any), and reflect a company’s current practices.
The law of unintended consequences – a distant cousin of Murphy’s Law – states that the actions of human beings will always have effects that are unanticipated and unintended. The law could prove a perfect fit for recent efforts by class action counsel to rely upon the Federal Wiretap Act in lawsuits arising from adware installed on personal home computers.
Take, for example, the recently filed case of Bennett v. Lenovo (United States), Inc. In that case, the plaintiff seeks to represent a class of purchasers of Lenovo laptop computers complaining that “Superfish” software that was preloaded on the laptops directed them to preferred advertisements based on their internet browsing behavior. The most interesting claim included in the complaint is the assertion that Lenovo and Superfish violated the Federal Wiretap Act.
Wiretap? What wiretap?
The Federal Wiretap Act was originally passed as Title III of the Omnibus Crime Control and Safe Streets Act of 1968. These provisions were included, at least in part, as a result of concerns about investigative techniques used by the FBI and other law enforcement agencies that threatened the privacy rights of individuals. In passing the Wiretap Act, Congress was clearly focused on the need to protect communications between individuals by telephone, telegraph and the like. The Electronic Communications Privacy Act of 1986 (ECPA) broadened the application of the statute by expanding the kinds of communications to which the statute applied. But the focus was still on communications between individuals.
As is often the case, technology is testing the boundaries of this nearly 50-year-old law. The Bennett case is not the first case in which a plaintiff has argued that software on his or her computer that reads the user’s behavior violates the Wiretap Act. In some cases, the software in question has been so-called “keylogging” software that captures every one of a user’s keystrokes. Cases considering such claims (or similar claims under state statutes modeled after the federal Act) have been split: some turned on the specifics of when and how the software actually captured the information, and others possibly on differences in the law in different parts of the country.
One of the more interesting cases, Klumb v. Gloan, 2-09-CV 115 (E.D. Tenn. 2012), involved a husband who sued his estranged wife when he discovered that she had placed spyware on his computer. At trial, the husband demonstrated that during his marriage, his wife installed eBlaster, a program capable of not only recording keystrokes, but also intercepting emails and monitoring websites visited. The husband alleged that once intercepted, the wife altered the emails and other legal documents to make it appear as if the husband was having an affair. The motive? Money, of course. Adultery was a basis to void the pre-nuptial agreement that the parties had executed prior to their ill-fated marriage. The wife, who was a law school graduate, argued that the installation was consensual. Although consent is a recognized defense to a claim of violating the Federal Wiretap Act, for a variety of reasons, the court discredited the wife’s testimony regarding the purported consent and awarded damages and attorney’s fees to the husband plaintiff.
The Bennett plaintiffs may or may not succeed in showing the facts and arguing the law sufficient to prevail in their claim, and we know too little about the facts in that case to express a prediction of the result in that case. But we can state with confidence that the continued expansion of how the Wiretap Act is applied will, at some point, require that Congress step in and update the statute to make clear how it applies in the new internet-based world in which we now live.
In August, the Federal Trade Commission (“FTC”) released a staff report concerning mobile shopping applications (“apps”). FTC staff reviewed some of the most popular apps consumers utilize to comparison shop, collect and redeem deals and discounts, and pay in-store with their mobile devices. This new report focused on shopping apps offering price comparison, special deals, and mobile payments. The August report is available here.
Popularity of Mobile Shopping Apps/FTC Interest
Shoppers can empower themselves in the retail environment by comparison shopping via their smartphones in real time. According to a 2014 Report by the Board of Governors of the Federal Reserve System, 44% of smartphone owners report using their mobile phones to comparison shop while in a retail store, and 68% of those consumers changed where they made a purchase as a result. Consumers can also get instant coupons and deals to present at checkout. With a wave of a phone at the checkout counter, consumers can then make purchases.
While the shopping apps have surged in popularity, the FTC staff is concerned about consumer protection, data security and privacy issues associated with the apps. The FTC studied what types of disclosures and practices control in the event of unauthorized transactions, billing errors, or other payment-related disputes. The agency also examined the disclosures that apps provide to consumers concerning data privacy and security.
Apps Lack Important Information
FTC staff concluded that many of the apps they reviewed failed to provide consumers with important pre-download information. In particular, only a few of the in-store purchase apps gave consumers information describing how the app handled payment-related disputes and consumers’ liability for charges (including unauthorized charges).
FTC staff determined that fourteen out of thirty in-store purchase apps did not disclose whether they had any dispute resolution or liability limits policies prior to download. And, out of sixteen apps that provided pre-download information about dispute resolution procedures or liability limits, only nine of those apps provided written protections for users. Some apps disclaimed all liability for losses.
Data Security Information Vague
FTC staff focused particular attention on data privacy and security because, more than other technologies, mobile devices are personal to a user, always on, and frequently with the user. These features enable an app to collect a huge amount of information, such as location, interests, and affiliations, which could be shared broadly with third parties. Staff noted that, “while almost all of the apps stated that they share personal data, 29 percent of price comparison apps, 17 percent of deal apps, and 33 percent of in-store purchase apps reserved the right to share users’ personal data without restriction.”
Staff concluded that while privacy disclosures are improving, they tend to be overly broad and confusing. In addition, app developers may not be considering whether they even have a business need for all the information they are collecting. As to data security, staff noted it did not test the services to verify the security promises made. However, FTC staff reminded companies that it has taken enforcement actions against mobile apps it believed to have failed to secure personal data (such as Snapchat and Credit Karma). The report states, “Staff encourages vendors of shopping apps, and indeed vendors of all apps that collect consumer data, to secure the data they collect. Further those apps must honor any representations about security that they make to consumers.”
FTC Staff Recommends Better Disclosures and Data Security Practices
The report urges companies to disclose to consumers their rights and liability limits for unauthorized, fraudulent, or erroneous transactions. Organizations offering these shopping apps should also explain to consumers what protections they have based on their methods of payment and what options are available for resolving payment and billing disputes. Companies should provide clear, detailed explanations for how they collect, use and share consumer data. And, apps must put promises into practice by abiding by data security representations.
Consumer Responsibility Plays Role, Too
Importantly, the FTC staff report does not place the entire burden on companies offering the mobile apps. Rather, FTC staff urge consumers to be proactive when using these apps. The staff report recommends that consumers look for and consider the dispute resolution and liability limits of the apps they download. Consumers should also analyze what payment method to use when purchasing via these apps. If consumers cannot find sufficient information, they should consider an alternative app, or make only small purchases.
While a great “deal” could be available with a click on a smartphone, the FTC staff urges consumers to review available information on how their personal and financial data may be collected, used and shared while they get that deal. If consumers are not satisfied with the information provided regarding data privacy and security, then staff recommends that they choose a different app, or limit the personal and financial data they provide. (Though that last piece of advice may not be practical, considering most shopping apps require a certain level of personal and financial information simply to complete a transaction.)
Deal or No Deal? FTC Will be Watching New Shopping Apps
FTC Staff has concerns about mobile payments and will continue to focus on consumer protections. The agency has taken several enforcement actions against companies for failing to secure personal and payment information and it does not appear to be slowing down. While the FTC recognizes the benefits of these new shopping and payment technologies, it is also keenly aware of the enormous amount of data obtained by companies when consumers use these services. Thus, companies should anticipate that the FTC will continue to monitor shopping and deal apps with particular attention on disclosures and data practices.
In an important decision in a federal court case in New Jersey, In Re Nickelodeon Privacy Litigation, Google and Viacom obtained a dismissal of a claim against them under the Video Privacy Protection Act (“VPPA”). The decision narrows the scope of who can be liable under the VPPA and what information is within the scope of the statute.
Congress passed the VPPA in 1988 after Robert Bork, a nominee for the U.S. Supreme Court, had his video rental history published during the nomination process. While Judge Bork’s viewing habits were unremarkable, members of Congress became understandably concerned that any individual’s private viewing information could easily be made public. The VPPA makes any “video tape service provider” that discloses rental information outside the ordinary course of business liable for $2,500 in damages per person, in addition to attorneys’ fees and punitive damages. There is no cap on the damages that plaintiffs can be awarded under the statute and cases are typically brought as class actions with large groups of plaintiffs.
In 2013, Congress passed and President Obama signed the first major change to the VPPA since it was enacted, the Video Privacy Protection Act Amendments Act of 2012. These amendments made it easier for companies to obtain consent from consumers to share their video viewing history. The amendment removed the requirement that video service providers obtain written consent from users every time a user’s viewing choice is disclosed. Additionally, the amendment allowed a provider to obtain a user’s consent online, with that consent applying on an ongoing basis for up to two years, as long as the user is given the opportunity to withdraw it. The amendments were enacted in response to consumers’ interest in sharing videos on social media platforms.
Viacom owns and operates three websites through which users can stream videos and play video games. The plaintiffs in the lawsuit were registered users of those websites. When a user registered with the site, that individual would be assigned a code name based on that user’s gender and age. The plaintiffs alleged that the user code name would be combined with a code that identified which videos the user watched and that code was disclosed by Viacom to Google. The plaintiffs sued Viacom and Google alleging among other things that this disclosure was a violation of the VPPA.
The VPPA claim against Google was dismissed because the court found that Google was not a “video tape service provider” (“VTSP”) as required for liability under the statute. The court reasoned that Google is not “engaged in the business of renting, selling, or delivering either video tapes or similar audio visual materials.” Some courts have shown a willingness to extend the definition of a VTSP to companies such as Hulu and Netflix that offer video-streaming services, but the court in this case stopped short of extending it to Google, a company that does not offer video services as its main business.
The VPPA claim against Viacom failed because the court found that, even if Viacom were a VTSP, an issue the court did not reach, Viacom did not release personally identifiable information to Google, which is required to have occurred under the VPPA. The court concluded that “anonymous user IDs, a child’s gender and age, and information about the computer used to access Viacom’s websites” – even if disclosed by Viacom – were not personally identifiable information.
With its potential for large damages, the VPPA has seen a recent uptick in cases filed under it. Recently, plaintiffs have filed cases against well-known media companies including Hulu, Netflix, ESPN, the Cartoon Network, and The Wall Street Journal. These cases show a trend of shifting away from the statute’s intended defendants, companies whose main line of business is renting and selling videos, and toward companies that provide streaming video as part of their business.
The line drawn by the court in this case regarding who can be considered a VTSP could be a significant win for companies that offer mobile apps with streaming video capabilities, as it limits the definition of a VTSP to companies in the business of renting or selling videos. Such a limitation would be welcomed by many operators of new technologies. Given the vast number of devices and platforms that deliver video content of some kind, an expansion of the definition of a VTSP could lead to a flood of litigation involving companies that are not in the business of renting or selling videos and were not the intended defendants under the statute.
While this decision will not stop the recent uptick in VPPA litigation, it will provide courts with guidance on how to determine who should be liable under the VPPA. The text of the VPPA was written in a way that did not anticipate the current environment, where streaming video is available on a multitude of devices. As more cases are filed, the limits of the statute’s scope will be tested. This court’s decision, however, provides precedent for a common-sense approach to determining who should be held liable under the VPPA.
Last week the Federal Trade Commission (“FTC”) charged the operators of Jerk.com with harvesting personal information from Facebook to create profiles for an estimated more than 73 million people, where they could be labeled a “Jerk” or “not a Jerk.”
In the complaint, the FTC charged the defendants, Jerk, LLC and the operator of the website, John Fanning, with violating the FTC Act by allegedly misleading consumers into believing that the content on Jerk.com had been created by registered users of the site, when most of it had been harvested from Facebook. The FTC also alleged that the operators of Jerk.com falsely claimed that consumers could revise their online profiles by paying a $30 membership fee, misleading consumers into believing that a paid membership would give them access to features allowing them to change their profiles on the site.
Facebook profile pictures and profile names generally are public, and Facebook’s rules allow developers to upload names and pictures in bulk. However, Jerk.com allegedly violated Facebook’s policies in the way it mined data from people’s profiles; at the time, Facebook’s rules only allowed an app developer to keep a person’s profile picture for 24 hours. The complaint stated that Fanning registered several websites with Facebook and used Facebook’s application programming interface to download the data needed to create the fake profiles on Jerk.com. The FTC is also seeking an order barring the defendants from using the personal information that was obtained and requiring them to delete the information.
This action is another indication that the FTC is closely monitoring companies that the FTC believes are scraping consumer data from other sites and deceiving customers in their business practices. The complaint notes how Jerk.com profiles often appear high in search engine results when a person’s name is searched. “In today’s interconnected world, people are especially concerned about their reputation online, and this deceptive scheme was a brazen attempt to exploit those concerns,” said Jessica Rich, Director of the FTC’s Bureau of Consumer Protection, in a statement.
Companies should monitor their practices for obtaining data from other websites to ensure that they are in compliance with the terms and conditions of websites where they obtain data. Organizations should be cautious about how they use this data, including being careful about making any representations and disclosures that could be viewed as deceptive by the FTC or a state attorney general.
By Michelle Cohen, CIPP-US
After recovering from high-profile data breaches at Target and Neiman Marcus, signing up for free credit monitoring, and analyzing our credit reports, we now face a new Internet villain: the “Heartbleed Bug.” The Heartbleed Bug is a security flaw in OpenSSL, popular open-source software run on most web servers and widely used to encrypt web communications. The Heartbleed Bug affects approximately 500,000 websites, reportedly including Yahoo, OK Cupid, and Tumblr. And, in addition to websites, the Bug may impact networking devices such as video conferencing systems, smartphones, and work phones.
The danger of the Heartbleed Bug lies in its ability to reveal the contents of a server’s memory, exposing sensitive data stored there, including passwords, user names, and credit card numbers. Adding insult to injury, the Bug has existed for at least two years, giving hackers a huge head start. News reports and some websites have urged users to change their passwords. Others have warned individuals not to change their passwords until a website has indicated it has installed the security patch that “cures” the Bug. Several sites offer tools to “test” whether a given website is vulnerable to the Heartbleed Bug, including one by McAfee. In terms of priorities, users should focus on the sites where they bank, conduct e-commerce, e-mail, and store files.
Further intrigue comes from the fact that a recent Bloomberg report alleged that the National Security Agency (“NSA”) knew about the Bug for at least two years, but may have utilized the vulnerabilities to access information. The NSA has denied it had knowledge of the Bug.
While we have yet to see a “rush to the courthouse” following the announcement of the Heartbleed Bug, we anticipate lawsuits and enforcement could follow where organizations do not act in response to the Bug by installing the necessary security patch. Companies (including our clients in the Internet marketing and I-gaming industries) should investigate whether their websites, apps, or other services (such as cloud services) use OpenSSL, then take immediate steps to oversee the installation of the security patch. Organizations should also advise users of the status of the Heartbleed Bug fix and encourage users to change their passwords, using different passwords across different services.
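For administrators triaging the step above, the first question is whether a server runs an affected OpenSSL release. Heartbleed (CVE-2014-0160) affects the OpenSSL 1.0.1 branch from 1.0.1 through 1.0.1f; version 1.0.1g and the older 0.9.8 and 1.0.0 branches are not vulnerable. The sketch below is a minimal, illustrative version-string check (the function name and parsing approach are our own, not from any official tool), and a clean version string alone does not prove a system is safe if a vulnerable library was previously exposed:

```python
import re

def is_heartbleed_vulnerable(version_banner: str) -> bool:
    """Return True if an OpenSSL version banner (e.g. the output of
    `openssl version`) falls in the Heartbleed-affected range:
    1.0.1 through 1.0.1f. Other branches are unaffected."""
    match = re.match(r"OpenSSL 1\.0\.1([a-z]?)\b", version_banner)
    if not match:
        # Not on the 1.0.1 branch (e.g. 0.9.8, 1.0.0) -- not affected.
        return False
    patch_letter = match.group(1)
    # Bare "1.0.1" and letters a-f are vulnerable; "g" and later are fixed.
    return patch_letter == "" or patch_letter <= "f"

# Example: a server reporting 1.0.1e (vulnerable) vs. 1.0.1g (patched).
print(is_heartbleed_vulnerable("OpenSSL 1.0.1e 11 Feb 2013"))  # True
print(is_heartbleed_vulnerable("OpenSSL 1.0.1g 7 Apr 2014"))   # False
```

Even after patching, organizations should assume any secrets held in server memory (private keys, session cookies, passwords) may have been exposed, which is why re-keying certificates and prompting password changes accompany the patch itself.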