Manufacturers and marketers know that the more consumer data they have, the better they can tailor and target their advertising, their products, and their product placement. This helps them maximize sales and minimize costs. Thanks to the combination of cheap data storage and ubiquitous data-capture channels (e.g., smartphones, credit cards, the Web), the amount of consumer data out there to mine is astounding. Hence the recently popularized term, “Big Data.”
But the misuse of data could result in government enforcement actions and, more importantly, serious privacy violations that can affect everyone.
Some of the practical challenges and concerns flowing from the use of big data were addressed recently by FTC Commissioner Julie Brill at the 23rd Computers, Freedom and Privacy conference on June 26. Issues raised included noncompliance with the Fair Credit Reporting Act (FCRA) and consumer privacy matters such as transparency, notice and choice, and deidentification (scrubbing consumer data of personal identifiers).
The FCRA: Those whose business includes data collection or dissemination should determine whether their practices fall within the boundaries of the FCRA. As Brill pointed out, “entities collecting information across multiple sources and providing it to those making employment, credit, insurance and housing decisions must do so in a manner that ensures the information is as accurate as possible and used for appropriate purposes.” If Brill’s comments are any indication of enforcement actions to come, businesses should be aware that the FTC is on the lookout for big data enterprises that don’t adhere to FCRA requirements.
Consumer Privacy: Brill gave some credit to big data giant Acxiom for its recent announcement that it plans to allow consumers to see what information the company holds about them, but she noted that this access is of limited use when consumers have no way of knowing who the data brokers are or how their information is being used. Brill also pointed to a questionable use of consumer data by national retailer Target: the somewhat funny yet disturbing news story about Target identifying a teen’s pregnancy from her purchase history. It is a classic example of why consumers ought to have notice of what data is being collected about them and how that information is being used.
Consumers also need to have, as Brill suggested, the opportunity to correct information about themselves. This makes sense. Data collection is imperfect: different individuals’ information may be inaccurately combined; someone’s information may have been hacked; someone could be the victim of cyber-bullying; and other mishaps and errors can occur. Consumers should be able to review their information and correct errors. Finally, Brill highlighted concerns that current efforts to scrub consumer data may be ineffective, as companies are getting better at combining disparate data points to accurately identify an individual. “Scrubbed” data in the wrong hands could be as harmful as a direct security breach.
Brill encouraged companies to follow the “privacy by design” recommendations issued by the FTC in order to build more protections into their products and services. She further emphasized her initiative, “Reclaim Your Name,” which aims to promote consumer knowledge of and access to collected data. Companies in the business of data collection, mining, and analytics should take note of the FTC’s efforts to empower the consumer against the overuse or misuse of consumer data. If you want to stay on the good side of the FTC – and on the good side of the informed consumer – work with the consumer, and provide meaningful notice, choice, and consent.
Following a public comment period, the Federal Trade Commission recently approved a final order settling charges against mobile device manufacturer HTC America, Inc. HTC develops and manufactures mobile devices based on the Android, Windows Mobile, and Windows Phone operating systems. This case, which focuses on device security, is the FTC’s first case against a device manufacturer.
The FTC alleged that HTC failed to take reasonable steps to secure the software it developed for its smartphones and tablet computers. According to the FTC, HTC’s failures introduced various security flaws that placed consumers’ sensitive information at risk. The FTC’s action against HTC signals the agency’s continued focus on data security and data privacy issues and use of its broad “Section 5” authority, which the FTC has repeatedly asserted against various organizations, including its ongoing litigation with Wyndham Hotels. The HTC case also reiterates the agency’s strong interest in securing mobile networks, now that mobile phones, which are full of sensitive contact, financial, and other personal information, have become so prevalent.
Companies may be asking what HTC actually did to warrant this FTC action. The FTC claims that HTC, when customizing the software on mobile devices, failed to provide its staff with sufficient security training, failed to review or test the software on its mobile devices for potential security vulnerabilities, failed to follow commonly accepted secure coding practices, and did not have a process for receiving and addressing vulnerability reports from third parties.
In particular, the FTC asserted that HTC devices potentially permitted malicious applications to send text messages, record audio, and install additional malware onto a consumer’s device, without the user’s consent or even knowledge. These malicious applications allegedly could access financial and medical information and other sensitive information such as a user’s geolocation and text message content.
For example, in the case of Android devices, the FTC claimed that HTC pre-installed a custom application that could download and install applications outside the normal Android installation process. However, HTC did not include appropriate permission-check code to restrict which applications could invoke this pre-installed application. Consequently, a third-party application could command the pre-installed application to download and install additional applications onto the device without the user’s knowledge or consent.
The FTC further charged that HTC’s actions actually undermined Android consent mechanisms that, but for HTC’s actions, would have prevented unauthorized access and transmission of sensitive information. The FTC’s complaint alleged that the vulnerabilities have been present on approximately 18.3 million HTC devices running Android. The complaint further alleged that HTC could have prevented these vulnerabilities through readily available, low-cost measures, such as adding a few lines of permission check code when programming its pre-installed applications.
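The “few lines of permission check code” the FTC describes can be illustrated with a simple sketch. This is a hypothetical model, not HTC’s actual code or the Android API; the names (`PERMISSION_INSTALL`, `InstallBroker`) are invented for illustration. The point is that a privileged pre-installed component must verify that the requesting caller actually holds the install permission before acting on its command:

```python
# Hypothetical model of the missing check described in the FTC complaint.
# All names here are illustrative, not actual HTC or Android code.

PERMISSION_INSTALL = "android.permission.INSTALL_PACKAGES"


class InstallBroker:
    """Models a pre-installed app that installs packages on request."""

    def __init__(self, granted_permissions):
        # caller_id -> set of permissions the platform has granted it
        self.granted_permissions = granted_permissions
        self.installed = []

    def request_install(self, caller_id, package):
        # This is the permission check HTC allegedly omitted: without it,
        # any third-party app could trigger a silent install.
        if PERMISSION_INSTALL not in self.granted_permissions.get(caller_id, set()):
            raise PermissionError(f"{caller_id} lacks {PERMISSION_INSTALL}")
        self.installed.append(package)
        return True


broker = InstallBroker({"trusted.updater": {PERMISSION_INSTALL}})
broker.request_install("trusted.updater", "patch.apk")  # permitted caller

try:
    broker.request_install("malicious.app", "malware.apk")
except PermissionError:
    pass  # blocked, as the check intends
```

With the check in place, the unprivileged caller’s request is rejected; without it, the broker would install anything it was told to, which is the vulnerability the complaint describes.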
In a precedent-setting remedy, the FTC’s final order requires HTC to develop and release software patches within 30 days of service of the FTC’s final order on HTC. The patches must fix vulnerabilities in millions of HTC’s devices, including every covered device having an operating system version released on or after December 2010. HTC must also establish a comprehensive security program designed to address security risks during the development of HTC devices. The FTC requires the program to include consideration of employee training and management; product design, development and research; secure software design and testing; and review, assessment, and response to third party security vulnerability reports.
Further, HTC must undergo independent security assessments every other year for the next 20 years. Among other requirements, the independent, professional assessment must certify that HTC’s security program operates with sufficient effectiveness to provide reasonable assurance that the security of covered device functionality and the security, confidentiality, and integrity of covered information are protected, and that the program has so operated throughout the reporting period. HTC is also barred from making false or misleading statements about the security and privacy of consumers’ data on HTC devices.
The FTC’s action against HTC has broad application beyond the mobile device and software marketplace. The agency’s action further solidifies the FTC’s role as the leading enforcer of data security standards. Once again the FTC has demonstrated that it is setting data security standards and will continue to monitor and police the marketplace when it believes companies have not incorporated what it believes are commonly accepted security features or when organizations have failed to take steps to prevent vulnerabilities.
Beta testing is underway for Google Glass, a new technology that provides the functionality of a smartphone in a headset worn like glasses. Much like a smartphone, the Glass headset is able to exchange messages with other mobile devices, take pictures, record videos, and access search engines to respond to user queries. But unlike a smartphone, the device’s optical head-mounted display, voice recognition, and front-facing camera give users hands-free access to its features, including the ability to capture photographs and video recordings of the world in front of them.
For now, Glass is only available to developers and a small group of test users known as Google Explorers. The device is expected to go on sale to the public in 2014. In the meantime, public speculation swells, and the blogosphere is full of conflicting reports about what the device can and cannot do. Some suggest that the device will utilize facial recognition and eye-tracking software to show icons and statistics above people whom the user recognizes. A more common concern is that the device will be able to photograph and record what the user sees and then share that data with third parties without permission from the user or those whose likenesses are being captured.
Because of this lack of clarity, lawmakers around the world are urging Google to affirmatively address the myriad of privacy concerns raised by this new technology. Last month, an international group of privacy regulators – including representatives from Australia, Canada, Israel, Mexico, New Zealand, Switzerland, and a European Commission panel – signed off on a letter to Google’s CEO Larry Page asking for more information regarding the company’s plans to ensure compliance with their data protection laws.
Here in the United States, the House Bipartisan Privacy Caucus issued a similar letter of its own. In addition to a variety of questions regarding the device’s capabilities, the letters reference some of Google’s recent data privacy mishaps and ask whether Google intends to take proactive steps to ensure the protection of user and nonuser privacy.
Google’s Vice President of Public Policy and Governmental Relations (and former New York Congresswoman) Susan Molinari issued a formal response to the House Bipartisan Privacy Caucus. According to Molinari, Google “recognize[s] that new technology is going to bring up new types of questions, so [they] have been thinking carefully about how [they] design Glass from its inception.”
To address concerns about the picture and video capabilities, Molinari highlighted several features designed to “give users control” and “help other people understand what Glass users are doing.” For example, specific user commands are required to search the Internet or find directions, and the user must either press a button on the arm of the Glass or say “Take a photo” or “Record a video” in order to access those features.
While Google is already subject to commitments it made to the FTC regarding the requirement to afford advertisers non-discriminatory access to its search engine, the FTC’s latest guidance makes clear that Google and other search engines must also maintain clear disclosures to the public about sponsored content in search results.
On June 24, 2013, in a series of letters to general search engines such as Google, Yahoo, and Ask.com, as well as to specialized search engines, the FTC issued updated guidance concerning disclosures regarding paid advertisements in search results.
This latest FTC action follows on the heels of the Commission’s recent updates to the Dot Com Disclosures and the updated Endorsements and Testimonials Guides. The FTC’s letters came in response to industry and consumer organizations’ requests to the Commission to update its policies on search engine results, last released in 2002. The FTC also noted that it has observed a decline in search engines’ compliance since 2002.
The FTC’s central concern, first articulated in 2002, remains the problem that consumers may be deceived in violation of Section 5 of the FTC Act unless search engines clearly and prominently distinguish advertising from natural search results.
Consumers assume that search results reflect the most relevant results. When results appear because the advertiser has paid a search engine for, say, prominent placement, that placement could be deceptive to consumers if they are unaware of the commercial relationship between the advertiser and the search engine.
The growth of mobile commerce in particular has spurred the FTC to issue new guidelines. Mobile phone screens are, by their nature, small, and consumers can easily be confused by paid search results if the “paid” nature of those results is not clear.
In the new guidance, the FTC states that if search engines continue to distinguish advertising results by giving a different background color or shading combined with a text label (such as “sponsored” or “ad”), the search engines should consider multiple factors to ensure that any labels and visual cues are sufficiently “noticeable and understandable” to consumers. The agency clarified that there is no “one size fits all” and that search engines may use various methods, provided the disclosures are noticeable and understandable.
Proper disclosures, according to the FTC, include the following:
• Visual Cues – Search engines must select hues of sufficient luminosity to account for varying monitor types, technology settings, and lighting conditions. The FTC notes that search engines should consider using web pages of different luminosities for mobile devices and desktop computers. Further, the FTC recommends that search engines should use:
o more prominent shading that has a clear outline;
o a prominent border that distinctly sets off advertising from the natural search results; or
o both prominent shading and a border.
• Text Labels – The FTC asserts that text labels must be used in addition to the visual cues a search engine may use to distinguish advertising. Text labels must:
o use language that explicitly and unambiguously conveys that a search result is advertising;
o be large and visible enough for consumers to notice it;
o be located near the search results (or group of search results) that it qualifies and where consumers will see it; and
o be placed immediately in front of an advertising result, or in the upper-left hand corner of an ad block, including any grouping of paid specialized results in adequately sized and colored font.
The new guidance also recognizes that technology will continue to evolve, such as voice assistants on mobile devices (e.g., the iPhone’s “Siri”). While technology may change, the new guidance makes clear that the FTC Act’s Section 5 prohibition on deceptive practices remains. Therefore, businesses must make sure that they differentiate advertising from other information. For instance, if a voice interface is used to deliver search results (for example, “find me a Mexican restaurant”) the search engine should disclose audibly any paid advertisements in adequate volume and cadence for ordinary listeners to hear and comprehend.
The FTC continues to be vigilant in monitoring the online marketplace. Search engines and advertisers need to review their practices, keeping in mind that disclosures that may be readily apparent on a desktop may be hidden on a mobile screen. As with the “Dot Com Disclosures,” the agency is providing guidance to businesses; however, FTC enforcement remains vigilant and companies that do not clearly disclose paid advertising in search results could face an FTC investigation.
Recently, the Consumer Financial Protection Bureau, the watchdog agency of the financial industry, has proved that it has considerable bite. Created under the Dodd-Frank Act to fill gaps in regulatory coverage, the CFPB’s mandate is to enforce federal regulations that, among other things, restrict “unfair, deceptive, or abusive acts or practices” in consumer finance. The CFPB in recent months announced two major debt relief crackdowns, the most recent of which permanently shut the doors of a Florida company.
Last month, the CFPB announced that it had filed a complaint against a Florida debt-relief company that misled consumers across the country by charging upfront fees for debt-relief services without actually settling most of the consumers’ debts. The complaint charged American Debt Settlement Solutions, Inc. (ADSS) and its owner, Michael DiPanni, with actions that were not just unfair and deceptive, but also abusive: according to the complaint, the defendants knowingly enrolled vulnerable consumers whose incomes were inadequate to complete the debt-relief programs. Indeed, this case is the first time in the CFPB’s short history that it has enforced the prohibition on “abusive” acts or practices.
While “unfair” and “deceptive” are familiar terms to anyone who follows the Federal Trade Commission, the term “abusive” is new to Dodd-Frank and has been the subject of much consternation among Republicans in Congress, who consider it too vague. With this complaint, the CFPB provided what may be its first example of the type of conduct it will consider “abusive.” ADSS allegedly collected about $500,000 in fees from hundreds of consumers in multiple states, charging illegal upfront fees for debt-relief services and “falsely promising them it would begin to settle their debts within three to six months” when, in reality, services rarely materialized.
The CFPB said the actions were “abusive” because consumers reasonably relied on the company to “act in their interest by enrolling them in a debt-relief program that they can be reasonably expected to complete, and which will therefore result in the negotiation, settlement, reduction, or alteration of the terms of their debts.” The CFPB simultaneously filed a proposed consent order that would settle the matter by halting the company’s operations and imposing a $15,000 fine.
ADSS and its owner may have walked away relatively unscathed, with only a civil penalty, but others caught in the CFPB’s cross hairs have not been as fortunate. Earlier this year, the CFPB filed suit against two lawyers and two debt relief companies in New York, alleging that they charged thousands of consumers illegal advance fees and left some worse off financially, while illegally profiting themselves. One of the lawyers, Michael Levitis, also faces mail and wire fraud charges brought by the Manhattan U.S. Attorney’s Office – the first-ever criminal charges stemming from a CFPB referral. What’s notable in this complaint is that the acts are described as both deceptive and unfair, but not as abusive.
Although a relatively new agency, the CFPB is proving that it has the chops to take down offenders in the financial industry. Both the Florida and New York cases are signs of future enforcement, and they send a stern warning to offenders – if you prey on vulnerable consumers, be prepared for a fight.
Ignorance of the law is no excuse; nor is (willful) ignorance of a business partner’s illegal activities.
That’s a lesson to be learned from a recent amended complaint filed by the FTC, which named a payment processor in its case against a telemarketer that allegedly engaged in a credit card interest rate reduction scam. The Commission originally filed suit against the telemarketing company, Innovative Wealth Builders, Inc., and its owners in January for misrepresenting the debt relief service they were selling, charging a fee before providing debt relief services, and billing consumers without their express informed consent. The January action resulted in the temporary shutdown of the company’s operations pending the outcome of the suit.
But earlier this month, the Commission filed an amended complaint, adding charges against Independent Resources Network Corp., IWB’s payment processor. The payment processor was accused of assisting and facilitating IWB’s deceptive practices, in violation of the Telemarketing Sales Rule (TSR). 16 C.F.R. § 310.3(b).
The TSR was created by the FTC in 1995 at the direction of Congress, which directed the Commission to prescribe rules to address abusive and deceptive telemarketing acts. The FTC has amended the TSR several times in order to respond to developments in telemarketing schemes. The amendments allow for liability for third parties such as payment processors and lead generators that have provided “substantial assistance or support” to any seller or telemarketer while knowing, or consciously avoiding knowing, that the seller or telemarketer is engaged in activity in violation of the TSR.
In the FTC’s complaint against IRN, the Commission notes that IRN “processed millions of dollars of credit card transactions for IWB, thereby earning considerable fees for itself.” The complaint identifies several indicators that would have, or should have, put IRN on notice of IWB’s practices in violation of the TSR, including the following:
• IWB sent IRN copies of company documents including, but not limited to, telemarketing scripts and samples of the IWB defendants’ “financial plan”;
• IRN was aware that IWB had drawn a variety of complaints on consumer websites;
• IWB had an “F” rating with the Better Business Bureau, and IRN accessed the BBB website several times;
• IRN received thousands of copies of chargeback disputes initiated by dissatisfied consumers; and
• IRN received multiple fraud alerts from Discover regarding IWB.
Instead of ceasing to process transactions for IWB, IRN responded to these “red flags” by increasing the percentage it withheld from transactions processed for the telemarketer and holding such sums in a reserve account.
Some takeaways from the complaint and other FTC developments: if you are going to invest in basic due diligence to determine the risk level and credibility of a prospective account, you should act on your findings and follow the letter of the law. The FTC also recently issued additional proposed rule changes to the TSR to address further payment processing concerns.
Some affiliate marketers have recently gotten involved in the risky world of online trading. Online trading, particularly the trading of binary options, has become an attractive alternative for some affiliate marketers to traditional forms of online marketing.
However, those companies that do get involved in this market must be aware of the presence of the U.S. Commodity Futures Trading Commission (CFTC), which regulates these markets.
Simply put, a binary option presents “two options”: the trader takes a simple yes/no position on whether an asset will close above a certain price (a “call option”) or below it (a “put option”) at the end of the day. Lately, there seems to be a great deal of confusion regarding the legality of binary options trading in the United States.
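Mechanically, a binary option is all-or-nothing: it pays a fixed amount if the yes/no proposition comes true at expiration, and nothing otherwise. A minimal sketch of that payoff logic (the $100 payout figure is illustrative only, not any exchange’s actual contract terms):

```python
def binary_option_payout(option_type, strike, closing_price, payout=100.0):
    """Fixed payout if the proposition is true at expiration, else zero.

    option_type: "call" (asset closes above the strike) or
                 "put"  (asset closes below the strike).
    The payout amount is illustrative, not any exchange's actual terms.
    """
    if option_type == "call":
        wins = closing_price > strike
    elif option_type == "put":
        wins = closing_price < strike
    else:
        raise ValueError("option_type must be 'call' or 'put'")
    return payout if wins else 0.0


# A "call" betting that gold closes above $1,300 pays in full or not at all:
in_the_money = binary_option_payout("call", 1300, 1325)   # proposition true
out_of_money = binary_option_payout("call", 1300, 1280)   # proposition false
```

Unlike a conventional option, the trader’s upside and downside are both fixed in advance, which is what makes the product so simple to market and, regulators worry, so easy to abuse.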
The question is not so much whether binary options are legal in the United States but whether the firms offering them are listed on a proper U.S. exchange and are properly registered with and regulated by the CFTC. Nadex, for example, is a regulated U.S. exchange designated by the CFTC and permitted to accept U.S. residents as members.
In a recent lawsuit, the CFTC charged the Ireland-based “Intrade The Prediction Market Limited” and “Trade Exchange Network Limited” with offering commodity option contracts to U.S. customers for trading, including option contracts on whether certain U.S. economic numbers or the prices of gold and currencies would reach a certain level by a certain future date, all in violation of the CFTC’s ban on off-exchange options trading.
For now, it seems that regulators like the CFTC have focused their attention on the actual firms offering these trading options. However, the CFTC has been sending cease and desist letters to affiliates in this space as well. Affiliates working in such risky markets must know the firms for which they are working. Some online trading firms may say they do not accept U.S. customers, but saying it is very different from actually representing and warranting that fact in a contractual document with their affiliates and indemnifying the affiliates from liability.
For further information, see my article in the April 2013 issue of FeedFront, a magazine for affiliate marketers.
Over the past decade the Federal Trade Commission has brought cybersecurity enforcement actions against various private companies, imposing tens of millions of dollars in monetary penalties and requiring companies to maintain more stringent data-security practices. No company has ever challenged the FTC’s authority to regulate cybersecurity in this way in court – until now. On June 17, 2013, a federal court will finally get a chance to weigh in on whether the scope of the FTC’s regulatory jurisdiction is so broad as to include setting standards for cybersecurity.
In FTC v. Wyndham Worldwide Corporation, et al., the FTC launched a civil action against the parent company of the Wyndham hotels and three of its subsidiaries for data security failures that led to three major data breaches in less than two years. The Commission’s complaint charges that Wyndham’s security practices were unfair and deceptive in violation of the FTC Act.
Unlike many other data-security FTC enforcement actions, in which the defendant has chosen to cut its losses and settle out of court, Wyndham has decided to stand and fight with a motion to dismiss. Judge Esther Salas of the U.S. District Court for the District of New Jersey is expected to rule on Wyndham’s motion on June 17.
With respect to the FTC’s unfairness claim, Wyndham’s motion asserts that the FTC is attempting to circumvent the legislative process by acting as if “it has the statutory authority to do that which Congress has refused: establish data-security standards for the private sector and enforce those standards in federal court.”
According to Wyndham, “on multiple occasions in the 1990s and early 2000s the FTC publicly acknowledged that it lacked authority to prescribe substantive data-security standards under the [FTC Act]. For that very reason, the FTC has repeatedly asked Congress to enact legislation giving it such authority.” Further, Wyndham highlights the Senate’s failure to pass the Cybersecurity Act of 2012, which sought to address the need for specific data-security standards for the private sector, and President Obama’s February 2013 Executive Order on cybersecurity that was issued in response to the Congressional stalemate.
On its face, Wyndham’s motion to dismiss seems quite strong. However, the facts that the FTC is alleging do not cut in Wyndham’s favor. The Commission’s complaint alleges that Wyndham’s failure to “adequately limit access between and among the Wyndham-branded hotels’ property management systems, [Wyndham] Hotels and Resorts’ corporate network, and the Internet” allowed intruders to use weak access points (e.g., a single hotel’s local computer network) to hack into the entire Wyndham Hotels and Resorts’ corporate network. From there, the intruders were able to gain access to the payment management systems of scores of Wyndham-branded hotels.
According to the FTC, Wyndham failed to remedy known security vulnerabilities, employ reasonable measures to detect unauthorized access, and follow proper incident response procedures following the first breach in April 2008. Thus, the corporation remained vulnerable to attacks that took place the following year. All told, the intruders compromised over 600,000 consumer payment card accounts, exported hundreds of thousands of payment card account numbers to a domain registered in Russia, and used them to make over $10.6 million in fraudulent purchases.
Unfortunately – as Wyndham notes in its motion to dismiss – hacking has become an endemic problem. There has been no shortage of stories about major cyber-attacks on private companies and governmental entities alike: from Google and Microsoft to NASA and the FBI. And the FTC has not been shy about bringing enforcement actions against private companies with inadequate security measures.
If Wyndham prevails, the case could usher in a major reduction in FTC enforcement efforts. However, if the court sides with the FTC, the commission will be further empowered to regulate data security practices. With such high stakes on both sides, any decision is likely to result in an appeal. In the meantime, companies in various industry sectors that maintain personal consumer information are awaiting next week’s decision.
On May 6, 2013, the U.S. Senate passed the “Marketplace Fairness Act,” which allows states to collect sales tax on online purchases, whether or not the online retailer has a physical presence in the state. If this bill becomes law, it would change the structure that has been in place since the 1992 Supreme Court ruling in Quill Corp. v. North Dakota, 504 U.S. 298 (1992), which held that states could require a retailer to collect sales tax only if the retailer had a physical presence in the state, such as a warehouse, a store, or, in some cases, an online affiliate.
The act would allow states to require all retailers with more than $1 million in sales to collect and remit sales taxes to state and local jurisdictions. Retailers would collect the tax at the point of purchase, code each sale by zip code, and remit the taxes to the eligible states and local municipalities. Although states would not be required to implement a tax on online sales, many would probably choose to do so as they look for ways to generate much-needed revenue to compensate for budget shortfalls. By taxing online sales, states could generate an estimated $23 billion a year in local and state sales taxes. Additionally, states are likely to receive pressure from local businesses seeking to level the playing field for brick-and-mortar retailers, who feel they are at an unfair disadvantage for having to charge tax on goods that customers can often buy tax-free online.
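The collect-at-purchase, code-by-zip, remit-by-jurisdiction mechanics the act contemplates can be sketched simply. The rate table below is hypothetical; in practice the act requires states to supply free rate-lookup software, which retailers would use instead of a hand-maintained table:

```python
from collections import defaultdict

# Hypothetical zip-to-jurisdiction rate table, for illustration only.
# Real rates would come from state-provided lookup software.
RATES_BY_ZIP = {
    "60601": ("IL", 0.1025),  # illustrative combined state + local rate
    "10001": ("NY", 0.08),    # illustrative combined state + local rate
}


def collect_sale(remittances, zip_code, amount):
    """Compute tax at the point of purchase and accrue it per jurisdiction."""
    state, rate = RATES_BY_ZIP[zip_code]
    tax = round(amount * rate, 2)
    remittances[state] += tax  # accrued for periodic remittance to the state
    return tax


remittances = defaultdict(float)
collect_sale(remittances, "60601", 200.00)  # tax owed to IL
collect_sale(remittances, "10001", 100.00)  # tax owed to NY
```

The compliance burden opponents cite is visible even in this toy version: a real retailer must track thousands of overlapping state and local rates and file returns in every participating jurisdiction.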
As Internet sales taxes become more common, one group likely to benefit is Internet affiliates. Prior to this bill, states such as Illinois sought to circumvent Quill by stating that Internet affiliates created the requisite “nexus” of a physical presence within a state. This caused online stores, including retailer behemoth Amazon, to cease using affiliates in any states where the affiliate would constitute a nexus. If a physical nexus is no longer required, affiliates would no longer be singled out and terminated due to their presence in any particular state.
Considerable support for a bill of this sort was likely inevitable. When online shopping was still new, online sales were minimal and most people did their shopping locally, meaning that the loss of state and local tax revenue was minimal. However, the dramatic increase in the choices available online, along with quick and free shipping, means that by some estimates up to 85 percent of Internet users do at least some shopping online. The corresponding decrease in patronage at local stores meant that states were missing out on taxes from those purchases. As a result, this bill would give states the opportunity to collect what they see as lost revenue.
That is not to say, however, that the bill will eventually become law. The bill faces stiff opposition in the Republican-controlled House, where some lawmakers see it as a tax increase. Its proponents face additional pressure from the Conservative Action Project, which has gathered more than 50 signatures from business and political leaders on a letter opposing the Marketplace Fairness Act on the premise that “retailers would be subject to laws imposed by states with which they have no direct connection, and in whose political system they have no voice. It is regulation without representation, allowing politicians to raise revenue, without fear of a public backlash.”
Currently, it appears that the bill is unlikely to become law. However, states will continue to seek revenue regardless: if the federal law does not pass, they will likely continue to issue broad and increasingly strained interpretations of what constitutes a “presence” in the state in order to collect taxes from online merchants.
The Federal Trade Commission has made it quite clear that it is serious about advising mobile app developers that the rules of the road will be changing very soon. Since 2011, the Commission has been working to update the rules governing the collection of children’s personal information by mobile apps. The relevant law is the Children’s Online Privacy Protection Act (COPPA), and the rules are set to change in just over a month, on July 1.
As part of its effort to encourage compliance, the Commission recently issued more than 90 warning letters to app developers, both foreign and domestic, whose online services appear to collect data from children under the age of 13. The letters alert the recipients about the upcoming COPPA rule change and encourage them to review their apps, policies, and procedures for compliance. According to the letters, the Commission did not evaluate whether the recipients’ apps or company practices are in compliance. Therefore, we view this move as a public warning to all app developers that may be collecting personal information from children.
Until now, COPPA, which was originally enacted in 1998, defined “personal information” to include only the basics such as a child’s name, contact information, and social security number. Over the past decade, it has become antiquated by the development of mobile apps and other technological advancements affecting data collection. Unfortunately but understandably, COPPA’s original incarnation failed to account for the proclivities of today’s children, who – reared in the age of smartphones, Facebook, and Google-everything – routinely use mobile apps to share their lives with their friends, their family, and the world.
The FTC has expressed major concerns that, unbeknownst to many users, mobile app developers also collect and disseminate their users’ persistent identifiers (things such as cookies, IP addresses, and mobile device IDs). This information, which can recognize users over time and across different websites and online services, is often used by developers and third parties to market products to children based on each child’s specific online behavior. Come July 1, this practice will be illegal.
Under the revised rule, the definition of “personal information” has been expanded to include persistent identifiers, photos and videos with a child’s image, and recordings of a child’s voice. Additionally, developers of apps directed to children under 13 – or that knowingly collect personal information from children under 13 – will be required to post accurate privacy policies, provide notice, and obtain verifiable parental consent before collecting, using, or disclosing such information. However, there are some exceptions for developers that use the information only to support internal operations (e.g., analyzing the app’s functionality or authenticating app users).
Protecting children’s privacy continues to be one of the Commission’s major initiatives, and the FTC has levied some hefty penalties for COPPA violations over the past year. That said, the Commission has indicated that it may be more lenient in cases where a small business has violated the rule despite well-intentioned attempts to comply. As we mentioned back in February, developers should beware of increased data privacy enforcement on the state level, as well. We encourage all mobile app developers to be proactive and review/update their policies to ensure compliance and avoid costly penalties.