In August, the Federal Trade Commission (“FTC”) released a staff report concerning mobile shopping applications (“apps”). FTC staff reviewed some of the most popular apps consumers use to comparison shop, collect and redeem deals and discounts, and pay in-store with their mobile devices. The August report is available here.
Popularity of Mobile Shopping Apps/FTC Interest
Shoppers can empower themselves in the retail environment by comparison shopping on their smartphones in real time. According to a 2014 report by the Board of Governors of the Federal Reserve System, 44% of smartphone owners report using their mobile phones to comparison shop while in a retail store, and 68% of those consumers changed where they made a purchase as a result. Consumers can also get instant coupons and deals to present at checkout. With a wave of a phone at the checkout counter, consumers can then make purchases.
While the shopping apps have surged in popularity, the FTC staff is concerned about consumer protection, data security and privacy issues associated with the apps. The FTC studied what types of disclosures and practices control in the event of unauthorized transactions, billing errors, or other payment-related disputes. The agency also examined the disclosures that apps provide to consumers concerning data privacy and security.
Apps Lack Important Information
FTC staff concluded that many of the apps they reviewed failed to provide consumers with important pre-download information. In particular, only a few of the in-store purchase apps gave consumers information describing how the app handled payment-related disputes and consumers’ liability for charges (including unauthorized charges).
FTC staff determined that fourteen out of thirty in-store purchase apps did not disclose whether they had any dispute resolution or liability limits policies prior to download. And, out of sixteen apps that provided pre-download information about dispute resolution procedures or liability limits, only nine of those apps provided written protections for users. Some apps disclaimed all liability for losses.
Data Security Information Vague
FTC staff focused particular attention on data privacy and security, because more than other technologies, mobile devices are personal to a user, always on, and frequently with the user. These features enable an app to collect a huge amount of information, such as location, interests, and affiliations, which could be shared broadly with third parties. Staff noted that, “while almost all of the apps stated that they share personal data, 29 percent of price comparison apps, 17 percent of deal apps, and 33 percent of in-store purchase apps reserved the right to share users’ personal data without restriction.”
Staff concluded that while privacy disclosures are improving, they tend to be overly broad and confusing. In addition, app developers may not be considering whether they even have a business need for all the information they are collecting. As to data security, staff noted it did not test the services to verify the security promises made. However, FTC staff reminded companies that it has taken enforcement actions against mobile apps it believed to have failed to secure personal data (such as Snapchat and Credit Karma). The report states, “Staff encourages vendors of shopping apps, and indeed vendors of all apps that collect consumer data, to secure the data they collect. Further those apps must honor any representations about security that they make to consumers.”
FTC Staff Recommends Better Disclosures and Data Security Practices
The report urges companies to disclose to consumers their rights and liability limits for unauthorized, fraudulent, or erroneous transactions. Organizations offering these shopping apps should also explain to consumers what protections they have based on their methods of payment and what options are available for resolving payment and billing disputes. Companies should provide clear, detailed explanations for how they collect, use and share consumer data. And, apps must put promises into practice by abiding by data security representations.
Consumer Responsibility Plays Role, Too
Importantly, the FTC staff report does not place the entire burden on companies offering the mobile apps. Rather, FTC staff urge consumers to be proactive when using these apps. The staff report recommends that consumers look for and consider the dispute resolution and liability limits of the apps they download. Consumers should also analyze what payment method to use when purchasing via these apps. If consumers cannot find sufficient information, they should consider an alternative app, or make only small purchases.
While a great “deal” could be available with a click on a smartphone, the FTC staff urges consumers to review available information on how their personal and financial data may be collected, used and shared while they get that deal. If consumers are not satisfied with the information provided regarding data privacy and security, then staff recommends that they choose a different app, or limit the financial and personal data they provide. (Though that last piece of advice may not be practical, considering most shopping apps require a certain level of personal and financial information simply to complete a transaction.)
Deal or No Deal? FTC Will be Watching New Shopping Apps
FTC Staff has concerns about mobile payments and will continue to focus on consumer protections. The agency has taken several enforcement actions against companies for failing to secure personal and payment information and it does not appear to be slowing down. While the FTC recognizes the benefits of these new shopping and payment technologies, it is also keenly aware of the enormous amount of data obtained by companies when consumers use these services. Thus, companies should anticipate that the FTC will continue to monitor shopping and deal apps with particular attention on disclosures and data practices.
In an important decision in a federal court case in New Jersey, In Re Nickelodeon Privacy Litigation, Google and Viacom obtained a dismissal of a claim against them under the Video Privacy Protection Act (“VPPA”). The decision narrows the scope of who can be liable under the VPPA and what information is within the scope of the statute.
Congress passed the VPPA in 1988 after Robert Bork, a nominee for the U.S. Supreme Court, had his video rental history published during the nomination process. While Judge Bork’s viewing habits were unremarkable, members of Congress became understandably concerned that any individual’s private viewing information could easily be made public. The VPPA makes any “video tape service provider” that discloses rental information outside the ordinary course of business liable for $2,500 in damages per person, in addition to attorneys’ fees and punitive damages. There is no cap on the damages that plaintiffs can be awarded under the statute and cases are typically brought as class actions with large groups of plaintiffs.
In 2013, Congress passed and President Obama signed the first major change to the VPPA since it was enacted, the Video Privacy Protection Act Amendments Act of 2012. These amendments made it easier for companies to obtain consent from consumers to share their video viewing history. The amendment removed the requirement that video service providers obtain written consent from users every time a user’s viewing choice is disclosed. Additionally, the amendment allowed a provider to obtain a user’s consent online, and that consent can apply on an ongoing basis for up to two years as long as the user is given the opportunity to withdraw it. The amendments were enacted in response to consumers’ interest in sharing videos on social media platforms.
Viacom owns and operates three websites through which users can stream videos and play video games. The plaintiffs in the lawsuit were registered users of those websites. When a user registered with the site, that individual would be assigned a code name based on that user’s gender and age. The plaintiffs alleged that the user code name would be combined with a code that identified which videos the user watched and that code was disclosed by Viacom to Google. The plaintiffs sued Viacom and Google alleging among other things that this disclosure was a violation of the VPPA.
The VPPA claim against Google was dismissed because the court found that Google was not a “video tape service provider” (“VTSP”) as required for liability under the statute. The court reasoned that Google is not “engaged in the business of renting, selling, or delivering either video tapes or similar audio visual materials.” Some courts have shown a willingness to extend the definition of a VTSP to companies such as Hulu and Netflix that offer video-streaming services, but the court in this case stopped short of extending it to Google, a company that does not offer video services as its main business.
The VPPA claim against Viacom failed because the court found that, even if Viacom were a VTSP, an issue the court did not reach, Viacom did not release personally identifiable information to Google, which is required to have occurred under the VPPA. The court concluded that “anonymous user IDs, a child’s gender and age, and information about the computer used to access Viacom’s websites” – even if disclosed by Viacom – were not personally identifiable information.
Given its potential for large damages, the VPPA has seen a recent uptick in case filings. Recently, plaintiffs have filed cases against well-known media companies including Hulu, Netflix, ESPN, the Cartoon Network, and The Wall Street Journal. These cases show a trend of shifting away from the intended defendants, companies whose main line of business is renting and selling videos, and toward companies that provide streaming video as part of their business.
The line the court drew in this case regarding who can be considered a VTSP could be a significant win for companies that offer mobile apps with streaming video capabilities, because it limits the definition of a VTSP to companies that are in the business of renting or selling videos. Such a limitation would be welcomed by many operators of new technologies. Given the vast number of devices and platforms that deliver video content of some kind, an expansion of the definition of a VTSP could lead to a flood of litigation involving companies that are not in the business of renting or selling videos and were not the intended defendants under the statute.
While this decision will not stop the recent uptick in VPPA litigation, it will provide courts with guidance as to how to determine who should be liable under the VPPA. The text of the VPPA was written in a way that did not anticipate the current environment, where streaming video is available on a multitude of devices. As more cases are filed, the limits of the statute’s scope will be tested. However, this court’s decision provides precedent for a common sense approach to determining who should be held liable under the VPPA.
Recently, the Maryland Attorney General’s Office announced that it reached a settlement with Snapchat, Inc. over alleged deceptive trade practices in violation of Maryland law and violations of federal laws intended to protect children’s online privacy. This is another reminder that state attorneys general will continue to be vigilant in addressing consumer privacy issues under both state and federal laws, where the federal laws permit state attorney general action.
Snapchat is a photo and video messaging app that allows users to take photos and videos, add text and drawings, and send them to selected contacts. The sent images are commonly referred to as “snaps” and users can set a time limit of up to ten seconds for how long the image will be visible to the contact. According to Snapchat, its app’s users were sending 700 million photos and videos per day in May 2014.
Maryland’s Attorney General asserted that Snapchat misled consumers when it represented that snaps are temporary and disappear after they are opened by a recipient. The Attorney General claimed that, in fact, the snaps could be copied or captured by recipients. Additionally, the Maryland Attorney General alleged that Snapchat collected and maintained the names and phone numbers from contact lists on consumers’ electronic devices, which was a practice that Snapchat had not always disclosed to consumers and to which consumers did not always consent. Lastly, the Attorney General alleged that Snapchat was aware that some users were under the age of 13, but it failed to comply with the federal Children’s Online Privacy Protection Act (“COPPA”), when it collected personal information from children without verifiable parental consent. COPPA has a provision that empowers state attorneys general to bring enforcement actions under the statute on behalf of residents of their states.
Snapchat agreed to pay the state of Maryland $100,000 to settle this case. Additionally, as part of its settlement, Snapchat agreed to not make false representations or material omissions in connection with its app. Furthermore, Snapchat is specifically enjoined from misrepresenting the temporary nature of the snaps and must disclose to users that recipients of snaps have the ability to copy the image they receive. Snapchat must also obtain affirmative consent from consumers before it collects and saves any contact information. In response to the COPPA allegations, Snapchat agreed to comply with COPPA for a period of ten years and to take specific steps to ensure that children under the age of 13 are not creating Snapchat accounts.
Snapchat has faced other actions as well. Last month, Snapchat reached a settlement with the Federal Trade Commission (“FTC”) on charges that it deceived consumers with promises about the disappearing nature of messages sent through the service. According to the FTC, Snapchat promised users that messages and images sent through the app would self-destruct and disappear in ten seconds or less despite there being ways for recipients to save the snaps. The FTC case also alleged that Snapchat told users that it did not collect information about their location when one version of the app did collect location information.
The FTC case did not include any accusation of violating COPPA, nor did it include any financial penalty. As part of the settlement, Snapchat agreed to implement a privacy program that will be subject to monitoring for 20 years and agreed not to misrepresent the confidentiality, privacy, and security of user information, including how it maintains the privacy and confidentiality of that information.
On its official blog, Snapchat emphasized that its app does not retain users’ snaps and that both investigations largely revolved around how well users understood that recipients of their snaps could save them. In response to the COPPA claims, Snapchat pointed out that its terms of service have always provided that the app is intended for users who are 13 years of age or older and that it has instituted controls to ensure this.
Mobile app companies need to be aware of the fact that they are being closely monitored by both the FTC and state attorneys general offices. In particular, any claim made by an app about consumer privacy may be scrutinized by regulators. Companies need to be prepared to justify their claims and must be forthcoming about any data that is collected from consumers. In other words: if you say you do something then you need to do it; if you say that you do not do something, do not do it. Your company does not want the FTC or a state attorney general “snapping” at your privacy practices.
Sprint Gets a Wallop of a Reminder – Company-Specific Do Not Call Lists Still Matter – $7.5 Million Record Do Not Call Consent Decree
Yesterday, the Federal Communications Commission (“FCC”) announced a consent decree with Sprint Corporation for federal do not call violations. Specifically, under the terms of the agreement, Sprint will make a $7.5 million “voluntary contribution” to the United States Treasury. This payment represents the largest do not call settlement reached by the FCC. Sprint also agreed to various ongoing compliance initiatives, including enhanced training and reporting requirements. The action also serves as an important reminder of an often overlooked section of the do not call rules – the requirement that companies maintain and abide by “company-specific” or internal do not call lists.
Under the federal do not call rules, organizations making telemarketing calls to residential customers (including mobile phones) are required to scrub the federal do not call database before initiating those calls, unless the calls meet certain exceptions – the called party has an existing business relationship (“EBR”) with the caller or has provided prior express consent for the calls, or the call is from a tax-exempt non-profit. Of course, as we have written before, there are additional requirements for autodialed or prerecorded calls to mobile phones and prerecorded telemarketing calls to residential lines.
Another, sometimes overlooked requirement is that companies making permissible calls (for instance, after scrubbing the do not call database or with an existing business relationship or prior express consent) must maintain an internal, company-specific do not call list where companies log individuals’ subsequent requests not to be called. In other words, even if a consumer has an existing business relationship or has given prior express consent to be called, once the consumer tells the company not to call again, that request trumps the existing business relationship/prior consent or the do not call scrub. This company-specific do not call request must be implemented within 30 days and honored for five years from the date the consumer made the request. (The federal do not call registration, in contrast, lasts indefinitely). A company must also have a do not call policy, available upon request.
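The timing rules above (implement the request within 30 days, honor it for five years) can be sketched in code. The following is a minimal, illustrative Python sketch; the class and method names are our own invention for illustration, not part of any FCC rule or real telemarketing system, and real compliance systems handle far more (campaign types, vendors, text preferences):

```python
from datetime import date, timedelta

# Illustrative constants drawn from the company-specific rules described above:
# a request must be implemented within 30 days and honored for five years.
IMPLEMENT_WITHIN_DAYS = 30
HONOR_FOR_YEARS = 5

class CompanyDncList:
    """Hypothetical internal (company-specific) do not call list."""

    def __init__(self):
        # phone number -> date the consumer asked not to be called
        self._requests = {}

    def record_request(self, number: str, requested_on: date) -> None:
        # Log the consumer's request; it trumps any EBR or prior express consent.
        self._requests[number] = requested_on

    def implement_deadline(self, number: str) -> date:
        # The rules give the company up to 30 days to put the request into effect.
        return self._requests[number] + timedelta(days=IMPLEMENT_WITHIN_DAYS)

    def may_call(self, number: str, today: date) -> bool:
        requested_on = self._requests.get(number)
        if requested_on is None:
            return True  # no internal do not call request on file
        # Honor the request for five years from the request date
        # (leap days ignored in this simplified sketch).
        expires = date(requested_on.year + HONOR_FOR_YEARS,
                       requested_on.month, requested_on.day)
        return today > expires
```

Even this toy version captures the key point of the rule: the check applies regardless of any existing business relationship or prior consent, and the block only lapses after the five-year honor period.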
In 2009, the FCC investigated Sprint for do not call violations relating to the company-specific do not call list. Sprint subsequently settled that enforcement action in 2011 through a consent decree (which included a $400,000 payment). The decree required Sprint to report to the FCC’s Enforcement Bureau, for two years, any noncompliance with the consent decree or the FCC’s company-specific do not call rules.
In March 2012, Sprint disclosed to the FCC that it had discovered additional issues involving human error and technical malfunctions relating to Sprint’s or its vendor’s do not call processes that caused potential noncompliance with consumers’ do not call or do not text preferences, or prevented the timely capture of the preferences. Sprint represented that it had subsequently implemented improvements in its do not call data management systems. It had also ceased telemarketing and text campaigns to investigate the issues. The FCC investigated Sprint’s do not call compliance and ultimately entered into this record-setting $7.5 million settlement.
Under the terms of the consent decree, in addition to the settlement payment, Sprint will designate a Compliance Officer to administer a new compliance plan and to comply with the consent decree. Sprint also must implement a compliance manual which will instruct “covered personnel” (including Sprint personnel and independent contractors who provide telemarketing services for Sprint) on Sprint’s do not call policies. The consent decree further requires Sprint to establish and maintain an annual compliance training program, and to file several compliance reports with the FCC at designated time frames. Significantly, Sprint acknowledges that actions or inactions of any independent contractors, subcontractors, or agents that result in a violation of the company-specific do not call rules or the consent decree constitute an act or inaction by Sprint – in other words, Sprint is specifically on the hook for third parties’ actions.
The consent decree and $7.5 million payment serve as a useful reminder of the company-specific do not call rules. Once a consumer indicates he or she does not wish to receive further telemarketing calls or texts, the FCC’s rules require that the telemarketer place that consumer on its internal, company-specific do not call list. This consumer request trumps even an established business relationship or prior express consent. It can only be revoked by subsequent express consent – which we would recommend be in writing. Even if a consumer does business with your company every day, if he or she has asked not to receive telemarketing calls – don’t call! Compliance with the company-specific do not call rule means your organization does not call someone who has indicated they do not want to be called. And, it can save your company significant time, resources, and money spent defending private litigation or an FCC enforcement action. Further, if your organization utilizes third parties for telemarketing campaigns, your company should make sure the third parties are taking do not call requests, logging them, and passing them to your company for future campaigns.
Mobile payments have become so commonplace that consumers rarely stop to think about whether their online payment is secure. Mobile app developers can fall into a similar trap of assuming that the necessary security measures are enabled without performing the necessary audits to assure security on a regular basis. A recent settlement between the FTC and two companies offering unsecured mobile application products gives cause to think again.
The FTC alleged that the movie ticketing service Fandango and the credit monitoring company Credit Karma failed to adequately protect consumers’ sensitive personal information in their mobile apps because they failed to use the Secure Sockets Layer (“SSL”) protocol to establish authentic, encrypted connections with consumers. Generally, an online service will present an SSL certificate to the app on the consumer’s device to vouch for its identity. The app then verifies the certificate to ensure that it is connecting to the genuine online service. When companies fail to use this protocol—especially if consumers use the app over a public Wi-Fi network—third-party attackers can present an invalid certificate to the app, thus establishing a connection between the app and the attacker rather than the online service. As a result, any information that the consumer enters into the app will be sent directly to the attacker, including credit card numbers and other sensitive and personally identifying information.
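The verification step described above is exactly what disabled certificate validation removes. As a rough illustration (not the apps’ actual code), Python’s standard ssl module shows the difference between a client context that validates certificates and one that silently accepts any certificate an attacker presents:

```python
import ssl

# A properly configured client context: Python's default verifies the server's
# certificate chain against trusted roots and checks the hostname against it.
secure_ctx = ssl.create_default_context()
print(secure_ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate must validate
print(secure_ctx.check_hostname)                    # hostname is checked

# What "disabling SSL validation" effectively amounts to: any certificate,
# including an attacker's self-signed one, is accepted. A client shipped this
# way is vulnerable to the man-in-the-middle attack described above.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False   # must be disabled before verify_mode
insecure_ctx.verify_mode = ssl.CERT_NONE
print(insecure_ctx.verify_mode == ssl.CERT_NONE)    # no validation at all
```

The ssl calls here are standard Python; the point of the sketch is simply that secure validation is the default, and an app has to actively opt out of it to end up in the position Fandango and Credit Karma were alleged to be in.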
The FTC alleged that Fandango and Credit Karma left their applications vulnerable to interception by third parties by failing to use SSL protocol. The FTC alleged that Fandango misrepresented the security of its application by stating that consumers’ credit card information would be stored and transmitted securely, despite the fact that the SSL protocol was disabled on the app from March 2009 to March 2013. The FTC alleged that Credit Karma’s app failed to validate SSL certificates from July 2012 to January 2013, leaving the app susceptible to attackers which could gather personal identifying information such as passwords, security questions and answers, birthdates, and “out of wallet” verification answers regarding things like mortgages and loan amounts.
In both cases, the online services received warnings of the vulnerabilities from both users and the FTC. In December 2012, a security researcher used Fandango’s online customer service form to submit a warning regarding the vulnerability. However, Fandango mistakenly flagged the email as a password reset request, sent the researcher a stock response on password resetting, and then marked the complaint as resolved. A user sent a similar notice to Credit Karma about the SSL certificates in January 2013. Credit Karma responded by issuing a fix in an update to its iOS app that same month; however, one month later Credit Karma released an Android app that contained the same vulnerability.
In both cases, the online services performed a more thorough internal audit of the apps only when issued a warning by the FTC. The FTC issued complaints against the companies for their deceptive representations regarding the security of their systems. While the complaints noted that the apps were vulnerable to third party attacks, they did not allege that any such attacks were made or that any consumer information was in fact compromised. Perhaps due to the lack of consumer harm, the FTC entered into consent agreements with Fandango and Credit Karma in which the services did not have to pay a monetary judgment, but did agree to establish comprehensive security programs and undergo security assessments every other year for the next 20 years. Fandango and Credit Karma are additionally prohibited from misrepresenting the level of privacy and security in their products or services.
SSL certificate validation is the default behavior of the application programming interfaces that the iOS and Android operating systems provide to developers. Therefore, mobile app developers can protect themselves and their users from this vulnerability simply by leaving the default SSL validation enabled. What’s more, app developers can test for and identify SSL certificate validation vulnerabilities using free or low-cost tools. All app developers should therefore take the necessary precautions to ensure the security of their systems and prevent harm to consumers (and potential lawsuits) down the road.
Attorney General Holder Calls on Congress to Establish Strong National Data Breach Notification Standard
By Michelle Cohen, CIPP-US
Yesterday, in his weekly video address, Attorney General Eric Holder urged Congress to create a national data breach notification standard requiring companies to quickly notify consumers of a breach of their personal or financial information. In the wake of the high profile holiday season data breaches at retailers Target and Neiman Marcus, Holder stated that the Department of Justice and the U.S. Secret Service continue to work to investigate hacking and cybercrimes. However, Holder believes that Congress should act to establish a federal notification requirement to protect consumers. Holder’s video address is available here.
Currently, at least forty-six states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have laws requiring private or government entities to notify individuals of security breaches of information involving personally identifiable information. As might be expected, the laws vary widely from state to state, particularly in the timing requirement for the breach notifications. Most laws allow delay to accommodate a law enforcement investigation.
Some states require notification as soon as reasonably practicable. Others require notification within 45 days. Yet organizations have faced lawsuits for failing to notify on a timely basis, even where there is no set standard. This presents a difficult situation for companies. Organizations need to investigate a data breach and determine the type of information affected, who was affected (and thus needs to be notified), and importantly, whether the breach is ongoing such that the company must immediately implement remedial measures.
Attorney General Holder believes Congress should set a national standard that will better protect consumers. Holder asserts that a federal requirement should enable law enforcement to investigate the data breaches quickly and to hold organizations accountable when they fail to protect personal and financial information. Holder’s video message did include a reference that this requirement should create “reasonable exemptions” for companies to avoid creating unnecessary burdens.
The Target and Neiman Marcus data breaches have certainly raised the profile of cybersecurity issues on Capitol Hill, with several bills having been introduced in recent weeks addressing data breaches. While the states certainly took the lead in protecting consumers by enacting data breach laws over the past several years, a properly-crafted national standard could provide more consistent guidance for industry and a uniform rule for consumers irrespective of their home states. Should Congress move forward on a data breach law, reasonable accommodations need to be made for companies to have time to investigate data breaches, to determine scope, persons affected, and the type of information affected. A national standard setting forth a notification deadline would also presumably alleviate the “rush to the courthouse” from the plaintiff’s bar with data breach notification timing allegations.
By Michelle Cohen, CIPP-US
On January 28th, in an effort to raise awareness of privacy and data protection, the United States, Canada, and 27 countries of the European Union celebrate International Data Privacy Day. Many organizations use Data Privacy Day as an opportunity to educate their employees and stakeholders about privacy-related topics. With the recent, high-profile data breaches at Target, Neiman Marcus, and, potentially, Michaels, the need for training and instruction on data security is more critical than ever before. In this vein, we’ve set forth our views on the year ahead in legal developments relating to data security and what companies can do to prepare.
Legislation Introduced, but Is It on the Move?
Data security and data breaches will continue to be the focus of regulators and Congress through 2014. In fact, Congress summoned Target’s Chief Financial Officer to appear before the Senate Judiciary Committee on February 4th and a House committee is seeking extensive documents from Target about its security program. Meanwhile, Senator Leahy re-introduced data breach legislation which would set a federal standard for data breach notifications (most states now require notifications, though the requirements differ state-to-state).
Senators Carper and Blunt introduced a separate bipartisan bill intended to establish national data security standards, set a federal breach notification requirement, and also require notification to federal agencies, police, and consumer reporting agencies when breaches affect more than 5,000 persons. Many companies have suffered data breaches and then faced civil lawsuits under various causes of actions, including allegations that they did not notify customers promptly. As a result, there may be strong support for federal standards rather than facing a patchwork of state laws. While the Target breach has certainly renewed interest in data security, and we expect Congress will conduct numerous hearings, ultimate passage of data breach legislation this Congress is still probably a longshot.
Watching Wyndham Take on FTC
As covered in this blog, various Wyndham entities have struck back at the FTC, challenging the FTC’s authority to bring an action against Wyndham for alleged data security failures. The Wyndham entities claim that the FTC may not set data security standards absent specific authority from Congress. Yet, with Congress having not set data security standards thus far, the court in oral arguments seemed concerned about leaving a void in the data security area. Wyndham’s motion to dismiss remains pending in federal court in New Jersey. Most observers think the court will be hard pressed to limit the FTC’s authority under Section 5 of the FTC Act, which broadly prohibits “unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce” and provides the FTC with administrative and civil litigation enforcement authority. The agency has used this administrative authority with great success, bringing numerous data privacy actions that usually result in settlements by companies rather than risk further litigation expenses, penalties, and reputational damage. We think the FTC will remain vigilant in this space, including attention on the security of mobile apps.
Class Actions Jump on Breaches
Whether breaches affect Sony PlayStation, Adobe, Target, or some other company, the class action firms have been busy filing lawsuits based upon data breaches. For example, by year end, at least 40 suits had already been filed against Target, with seven filed the day Target disclosed the breach. The plaintiffs use various theories – including violations of consumer protection statutes, negligence, fraud, breach of contract, breach of fiduciary duty, invasion of privacy, and conversion. But, if a consumer’s information was potentially breached, yet nothing happened to the consumer as a result, does that consumer have cognizable damages? That has been a huge sticking point for these lawsuits. Yet, the class action lawyers will continue to file these suits, and some companies will settle to avoid further reputational damage and litigation expenses.
Don’t Count out the States
States have taken the lead in setting data breach notification standards and, in some cases, data security requirements. For instance, in March 2010, Massachusetts enacted strict data security regulations. Organizations that own or license personal information of Massachusetts residents are required to develop and implement a written comprehensive information security program (“CISP”) to protect that information. Almost all of the states have standards setting forth what types of information are covered by data breaches, who gets notified, what content goes in the notifications, and the timing of the notifications. Multiple states are investigating the Target breach; less well-known breaches certainly get state regulators’ attention as well. We predict the states will continue to be active regulators and enforcers of data security and data breaches, and will likely continue to “rule the roost” while federal legislation lags behind.
Preparation and Training Still Key
We’ve said before that, unfortunately, no company is immune from data breaches. Companies cannot assume that they have the best anti-malware or security features and that these other newsworthy breaches resulted from lapses that would not apply to them. Whether it is a sophisticated hacker or, more commonly, a well-meaning but negligent employee, data loss and data breaches will occur. All organizations should have procedures in place NOW to prevent data loss and to prepare for a breach. This includes IT, human resources, legal, and communications resources. Companies should designate a “data security/data breach” team with representatives from these key departments (working with outside counsel and other privacy breach specialists when needed). The team should meet periodically to review procedures, recommend improvements, and engage in periodic training on data security.
We can’t stress enough the importance of employee training. An employee who, for instance, wants to finish a project at home after stopping by the gym might download sensitive personal information onto a flash drive. Let’s say the gym bag gets stolen, along with the flash drive. Well, the employee’s unlucky company may now have a huge data breach situation on its hands, requiring notices to customers and state attorneys general, and raising the prospect of litigation and other expenses (such as paying for credit monitoring, now industry standard). Employees need training about securing sensitive information – from shredding documents instead of putting them in the dumpster, to encrypting information that is being taken offsite, to avoiding “phishing” scams, to having unique passwords they change periodically. According to recent reports, “password” and “123456” are still among the most popular passwords. While data breaches cannot be avoided completely, we can ameliorate some risks with better practices in our organizations.
As the Federal Trade Commission (“FTC”) continues to flex its consumer protection muscles by bringing numerous administrative lawsuits, industry and members of Congress are questioning whether there is a level playing field that allows companies to properly defend themselves against FTC charges. Or, as some say, does the FTC have the “home court advantage” in its role as investigator and prosecutor, armed with very broad authority under Section 5 of the FTC Act – leaving many companies to decide simply to settle rather than face the Goliath FTC? However, some companies have recently been bucking that trend, challenging the FTC’s authority (particularly in the area of regulating data security) and FTC officials’ impartiality.
As background, the FTC may begin an enforcement action if it has “reason” to believe that the FTC Act is being or has been violated. Section 5(a) of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce.” The FTC also enforces several other consumer protection statutes, including the Fair Credit Reporting Act, the Do-Not-Call Implementation Act of 2003, and the Children’s Online Privacy Protection Act.
Under Section 5(b) of the FTC Act, the FTC can challenge “unfair or deceptive acts or practices” or violations of certain other laws (such as those listed above) in an administrative adjudication. The way this works is the FTC issues a complaint putting forth its charges. Many companies faced with such complaints inevitably settle with the FTC, rather than endure an administrative trial. Those companies that contest the charges face a trial-type proceeding before an FTC administrative law judge. FTC staff counsel “prosecute” the complaint. The administrative law judge later issues an initial decision. Either party can appeal the initial decision to the full FTC for review.
Many observers, including the American Bar Association, have criticized this situation — where the FTC acts as both prosecutor and judge — as inherently unfair. After the FTC’s decision, the respondent organization (or individual) may appeal to a federal court of appeals. However, at that point an extensive record has already been made, and this assumes the organization or individual has the resources to devote to a federal appeal. (In addition, the FTC can also bring consumer protection enforcement actions directly in court rather than through administrative litigation.)
The FTC’s winning record in these administrative proceedings has many observers questioning the process and the FTC’s impartiality. House antitrust chairman Spencer Bachus (R-Ala.) called out the FTC’s apparent lack of impartiality and fairness, stating “a company might wonder whether it is worth putting up a defense at all.”
Just a couple weeks ago, however, medical testing company LabMD went on the offense and sought the disqualification of an FTC Commissioner. Facing an administrative proceeding relating to its alleged failure to secure patient information data, LabMD moved to disqualify Commissioner Julie Brill from consideration of its case. LabMD claimed that the Commissioner made numerous statements at industry conferences prejudging its ongoing litigation. Specifically, LabMD claimed Commissioner Brill stated that LabMD had violated the law, rather than indicating that LabMD was under investigation or in litigation. The FTC opposed the disqualification. However, Commissioner Brill voluntarily recused herself from the case on Christmas Eve to avoid “undue distraction” from the administrative litigation.
As the FTC litigates in several key areas – data privacy, financial services, credit repair, telemarketing – we expect administrative litigation will increase in 2014. While some companies will continue to settle to avoid continued litigation expenses and possible further detrimental outcomes, we think others will take the LabMD route and seek relief when they believe the processes are not transparent or the FTC is exceeding its authority.
FTC Vigilant on Children’s Privacy – Rejects Proposal for Collecting Verifiable Parental Consent Under COPPA
On November 12, 2013, the Federal Trade Commission (“FTC”), in a 4-0 vote, denied AssertID’s application for approval of a proposed verifiable parental consent (“VPC”) method under the Children’s Online Privacy Protection Rule (“COPPA”). Under the FTC’s COPPA rule, covered online websites and services must obtain VPC before collecting personal information from children under 13. The agency’s revised COPPA rule became effective in July; among other changes, it expanded the categories of data that can constitute “personal information.” The FTC’s COPPA rule sets forth several acceptable methods of obtaining parental consent. Notably, the rule also allows parties to seek FTC approval of other VPC methods.
The FTC’s approval process allows organizations to present innovative VPC methods, thereby permitting flexibility and accommodating new technologies, while still ensuring that parents provide consent on behalf of their children as required under COPPA. The FTC requires that applicants seeking approval of a new VPC method provide: (1) a detailed description of the proposed parental consent method; and (2) an analysis of how the method is reasonably calculated, in light of available technology, to ensure that the person providing consent is the child’s parent.
The FTC reviewed AssertID’s proposed VPC method following a public comment period. AssertID’s product, “ConsentID,” would ask a parent’s “friends” on a social network to verify the identity of the parent and the existence of the parent-child relationship (“social-graph verification”). The FTC concluded that “ConsentID” did not meet the criteria to ensure that the person providing consent is the child’s parent. The agency determined that it is premature to approve ConsentID, since AssertID did not present sufficient research or marketplace evidence demonstrating the efficacy of social-graph verification.
The FTC also questioned the efficacy of social-graph verification in the “real world.” The agency noted that relying upon social network users to confirm parental consent posed many problems, including the fact that many profiles are fabricated (noting that Facebook’s SEC 10-Q indicates it has approximately 83 million fake accounts). In conclusion, the agency found that “identity verification via social-graph is an emerging technology and further research, development, and implementation is necessary to demonstrate that it is sufficiently reliable to verify that individuals are parents authorized to consent to the collection of children’s personal information.”
The FTC has approved and denied other VPC methods. The agency’s denial of AssertID’s application signals that while the FTC encourages the use of new technologies to obtain VPC under COPPA, it will review new methods carefully, requiring research results and demonstrable success in a “real world” scenario rather than just a beta test. Website operators collecting personal information of children under 13 (and “personal information” now includes geolocation information, as well as photos, videos, and audio files that contain a child’s image or voice) should review their COPPA compliance, including their methods of obtaining VPC. The FTC continues to be especially vigilant in protecting certain categories of personal information, including children’s information, financial information, and health information.
A lawsuit filed in Massachusetts state court recently raised the issue of whether a former employee’s LinkedIn post announcing a new job could violate an anti-solicitation clause of a non-compete contract with the former employer.
In KNF&T Inc. v. Muller, staffing company KNF&T filed suit against its former vice president, Charlotte Muller, for violating a non-compete contract in a number of ways, one of which was a LinkedIn update notifying Ms. Muller’s 500+ contacts of her new job. Among those contacts were Ms. Muller’s former clients at KNF&T. KNF&T alleged that the update notification violated her one-year non-compete contract by soliciting business from current KNF&T clients.
The court issued a narrow ruling stating that the posting did not violate the non-compete agreement because Ms. Muller’s new position in information technology recruiting did not directly compete with KNF&T’s work in recruiting administrative support specialists.
Since the court was able to resolve the case based on a differentiation in practice areas, it did not have to resolve the issue of whether a LinkedIn notification could violate the terms of a non-competition agreement. Such a determination will always depend on the particular facts of the case, such as whether the new position directly competes with the former employer, whether the individual is connected with former clients on LinkedIn, and the content of the notification.
Employees subject to a non-competition agreement should exercise caution when using social media to announce a new position. If they do make an announcement, they should consult the terms of their non-compete agreement to determine what could constitute a violation. For instance, if the non-compete only prohibits solicitation of the former employer’s current clients, the employee should be sure to exclude any such clients from the notification by selecting which groups receive the message. The time spent paring down the list of recipients is well worth avoiding a potential lawsuit.