FTC Beat
Oct 14, 2013

After Google Action, Those Who Dig for Dirt Must Dig a Little Harder

Google recently announced that it would be taking action to demote websites that profit from the use of mugshot photos. These mugshot sites compile booking photographs taken after people’s arrests and publish them along with the arrestees’ names and information concerning the charges against them. Individuals who want their mugshot and arrest record deleted from such a site usually must pay a fee ranging from $10 to $400. Until recently, when a Google user searched the Internet for the name of a recent arrestee, the search hits would include, and often prioritize, mugshot sites. Owners of those sites were content with that outcome; many others were not.

New York Times writer David Segal was one of the latter. In a recent article, Segal took Google to task for not penalizing mugshot sites, which many believe traffic in exploitation. Segal argued that Google should take corrective action because it had prioritized the sites in contravention of its own stated corporate goal of favoring original web content. Mugshot sites do not offer original content; instead, they gather and republish images and text from third-party sources.

Before his article ran, Segal contacted Google to discuss the issue. Google responded that it had been working to address the problem in a consistent way. Days later, a Google spokesperson confirmed that mugshot sites do not comply with one of the search giant’s guidelines. To address the problem, Google amended its algorithm, presumably to disfavor sites without original content.

Consequently, mugshot sites are now pushed off the front page of Google search results. People digging for dirt now have to look a little bit harder.

Others who object to mugshot sites have taken the fight to regulators and legislators. On October 7, the Maryland Consumer Protection Division settled its case against the owner of Joomsef.net for false and deceptive advertising. Joomsef’s owner, Stanislav Komsky, published information on the site about traffic offenses, but added statements falsely suggesting there had been an arrest. Persons identified on the site had to pay $40 to $90 to have the information removed. As part of the settlement, Komsky must take down the site, return all payments to consumers, and pay a penalty of $7,500.

Other states are addressing the problem through legislation. Segal points out that Oregon and Georgia have passed laws this year giving site owners 30 days to take down an image, free of charge, if an individual proves that he or she was exonerated or that the individual’s record has been expunged. Utah attacked the problem another way. There, sheriffs are prohibited from giving out headshots to websites that charge for deleting them. Lawmakers in other states, like Florida Representative Carl Zimmerman, have introduced legislation targeting the sites, but many of those bills died from lack of support.

These acts of government are constrained, as they should be, in view of free-speech guarantees under the First Amendment. By contrast, the private sector is not so limited and, therefore, may end up striking the decisive blow against mugshot sites. Things are heading in that direction. MasterCard, Discover, American Express, and PayPal recently pledged to sever all ties with mugshot sites, and Visa has asked merchant banks to investigate the practices of the sites.

posted in: Fraud, Internet Law
Oct 11, 2013

FTC Takes Tough Action Against ‘Scareware’ Tactics

A great way to make money is to develop a product or service that responds to a consumer want or demand, and then to stay ahead of prospective competitors by offering better pricing or quality. A not-so-great way to make money is to convince consumers to buy a product or service that they don’t really want or need, at inflated rates. A highly dubious way to make money is to trick consumers into paying for something they didn’t want and didn’t mean to buy.

Businesses operating in this third category, which may include a scareware marketer or two, have to consider risk versus reward. Is the reward of temporary profits worth the risk of legal action? How likely is such action, and what would it cost?

Someone who operates on tricks over treats, or by pure scareware tactics, may expect business to dry up as consumers learn to avoid their traps. Such an operator must also face the looming threat of consumer legal action, government intervention, or run-ins with credit card companies alarmed by high chargeback rates.

For these types of businesses in the mobile marketing space, the cost of potential government intervention is going up. A recent settlement between the Federal Trade Commission and Jesta Digital LLC points to the severe penalties a business may face for operating on the sidelines of fair play. The consequences include a hefty fine, consumer refunds, restricted billing practices and stringent compliance measures for years to come.

Jesta (which also does business as Jamster) is known mostly for its marketplace of ringtones, photos, videos and apps. Starting in 2011, it ran a scareware campaign, purportedly for anti-virus software, that the FTC asserts crossed the line into deceptive advertising. The ads ran on the free version of the Angry Birds app for Android. Using a graphic that looks like the Android robot logo, the banner ad displayed a warning that viruses had been detected on the device – even though no virus scan was conducted. According to the FTC, when consumers clicked on the “remove [virus]” button, or similar “warning” buttons, Jesta directed them through a number of pages about virus protection that disclosed, only in very fine print, a monthly service fee for ringtones and other content.

The FTC alleges that consumers were even charged at the instant of pressing a “Protect Your Android Today” button. Through the use of Wireless Application Protocol (WAP) billing, the company was able to charge consumers through their cell phone numbers without needing to obtain express authorization. (It may be that this billing practice is what actually spurred the FTC into action, as wireless carriers initiated their own penalties against Jesta over the large number of consumers demanding refunds.) The FTC also alleges that the anti-virus software often failed at download (apparently, at one point only 372 out of 100,000 subscribers actually received some sort of anti-virus app download link).
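The gap the FTC describes – posting a charge to a phone number with no express authorization on file – can be sketched in a few lines. The following Python sketch is purely illustrative (the class and method names are hypothetical, not Jesta’s or any carrier’s actual billing system); it shows the kind of consent gate a compliant WAP-billing flow would enforce before any charge is posted.

```python
# Hypothetical sketch of an express-authorization gate for carrier billing.
# All names are illustrative; this is not any real billing API.

class BillingError(Exception):
    """Raised when a charge is attempted without consent on file."""


class CarrierBilling:
    def __init__(self):
        self.authorizations = set()  # phone numbers that gave express consent
        self.charges = []            # (phone_number, amount) records

    def record_consent(self, phone_number):
        # e.g., after a double opt-in (a PIN sent by SMS and entered by the user)
        self.authorizations.add(phone_number)

    def charge(self, phone_number, amount):
        # The FTC's complaint describes charges posted without this check.
        if phone_number not in self.authorizations:
            raise BillingError("no express authorization on file")
        self.charges.append((phone_number, amount))
        return amount
```

Under such a flow, the "Protect Your Android Today" button press alone could not trigger a charge; the consent step would have to complete first.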

The FTC describes numerous deceptive practices: mimicking the Android logo to confuse consumers into believing the virus warnings were credible, charging consumers without their knowledge or consent, and failing to provide the services it charged for. The company apparently was aware that its scareware tactics crossed the line; email correspondence among company executives noted that the chief marketing officer was “anxious to move our business out of being a scam and more into a valued service.”

So now the company must pay the FTC a $1.2 million penalty and offer to refund consumers. The process of identifying and notifying consumers of their refund options, and tracking all of this to show to the FTC, will be a costly undertaking. Another major cost will be the stringent and detailed billing practices that the company – and all participants, including principals and agents – must follow, the disclosures it must make, and the compliance monitoring and recordkeeping requirements it must satisfy, for 20 years. The settlement agreement is far more than a hand slap; its terms keep Jesta (and its principals!) beholden to the FTC for the foreseeable future.

Mobile marketers who may calculate risk versus reward and decide that a get-rich-quick scheme is worth the risk should think again. The FTC is making deceptive marketing tactics, like many scareware campaigns, a priority. We have seen strong action from the agency in the recent past, including hefty penalties for the company Innovative Marketing and its principal Marc D’Souza. Moreover, the newly appointed head of consumer protection at the FTC, Jessica Rich, has noted that the FTC is expanding digital enforcement, increasing the risk of getting caught in the agency’s crosshairs.

Sep 29, 2013

FTC Takes First Enforcement Action on ‘Internet of Things’

A company that markets video cameras that are designed to allow consumers to monitor their homes remotely has agreed to settle charges with the FTC that it failed to properly protect consumers’ privacy. This marks the FTC’s first enforcement action against a marketer of a product with connectivity to the Internet and other mobile devices, commonly referred to as the “Internet of Things.”

The FTC’s complaint alleges that TRENDnet marketed its cameras for uses ranging from baby monitoring to home security and that TRENDnet told customers that its products were “secure.” In fact, however, the devices were compromised by a hacker who posted links on the Internet to live feeds from over 700 cameras. Additionally, TRENDnet stored and transmitted user credentials in clear, unencrypted text.
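The difference between the alleged clear-text practice and a basic mitigation is easy to illustrate. The Python sketch below (function names are hypothetical, not TRENDnet’s code) contrasts storing a password as-is with storing only a salted, slow hash, so that a captured record does not reveal the password itself.

```python
import hashlib
import hmac
import os

def store_credential_cleartext(password):
    # What the complaint alleged: the secret is recoverable by anyone
    # who can read the stored value or intercept the transmission.
    return password

def store_credential_hashed(password, salt=None):
    # A basic mitigation: keep only a random salt and a PBKDF2 digest.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_credential(password, salt, digest):
    # Recompute the digest and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

This is the kind of readily available, standard-library measure that regulators have in mind when they fault a vendor for clear-text credential handling.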

Under the terms of its settlement with the FTC, TRENDnet is prohibited from misrepresenting the security of its cameras or the security, privacy, confidentiality, or integrity of the information that its cameras or devices transmit. The company must also establish a comprehensive security program and notify customers about security issues with the cameras and must provide a software update to customers to address security issues.

“The Internet of Things holds great promise for innovative consumer products and services,” FTC Chairwoman Edith Ramirez said. “But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet.”

The FTC’s authority to regulate and penalize companies that the agency claims do not protect consumers with sufficient data security is being challenged in federal court in New Jersey by The Wyndham Hotel Group. Wyndham has argued, among other things, that the FTC has not published any formal rules on data security and therefore cannot penalize companies that it deems have not protected consumer information. That case is pending.

This is the first time the FTC has brought an enforcement action involving the “Internet of Things,” but the FTC has already signaled it will be carefully watching how the Internet of Things develops. In particular, the FTC will be hosting a workshop in November to explore these new technologies. The agency previously sought comment from interested stakeholders on the Internet of Things – including the privacy and data security implications of interconnected devices. We expect that the FTC will continue to explore these issues, with a particular emphasis on how these devices collect and share information, especially sensitive personal information such as health information.

Sep 09, 2013

With Complaint Against LabMD, FTC Continues to Flex Enforcement Muscle on Data Security

The Federal Trade Commission recently filed another complaint against a company for alleged data security lapses. As readers of this blog know, the FTC has initiated numerous lawsuits against companies in various industries for data security and privacy violations, although it is facing a backlash from Wyndham and large industry organizations for allegedly lacking the appropriate authority to set data security standards in this way.

The FTC’s latest target is LabMD, an Atlanta-based cancer detection laboratory that performs tests on samples obtained from physicians around the country. According to an FTC press release, the FTC’s complaint (which is being withheld while the FTC and LabMD resolve confidentiality issues) alleges that LabMD failed to reasonably protect the security of the personal data (including medical information) of approximately 10,000 consumers, in two separate incidents.

Specifically, according to the FTC, LabMD billing information for over 9,000 consumers was found on a peer-to-peer (P2P) file-sharing network. The information included a spreadsheet containing insurance billing information with Social Security numbers, dates of birth, health insurance provider information, and standardized medical treatment codes.

In the second incident, the Sacramento, California Police Department found LabMD documents in the possession of identity thieves. The documents included names, Social Security numbers, and some bank account information. The FTC states that some of these Social Security numbers were being used by multiple individuals, indicating likely identity theft.

The FTC’s complaint alleges that LabMD did not implement or maintain a comprehensive data security program to protect individuals’ information, that it did not adequately train employees on basic security practices, and that it did not use readily available measures to prevent and detect unauthorized access to personal information, among other alleged failures.

The complaint includes a proposed order against LabMD that would require the company to implement a comprehensive information security program. The program would also require an evaluation every two years for 20 years by an independent certified security professional. LabMD would further be required to provide notice to any consumers whose information it has reason to believe was or could have been accessible to unauthorized persons and to consumers’ health insurance companies.

LabMD has issued a statement challenging the FTC’s authority to regulate data security, and stated that it was the victim of Internet “trolls” who presumably stole the information. This latest complaint is yet another sign that the FTC continues to monitor companies’ data security practices, particularly respecting health, financial, and children’s information. Interestingly, the LabMD data breaches were not huge, affecting only about 10,000 consumers. But the breach of, and potential unauthorized access to, sensitive health information and Social Security numbers tends to attract the FTC’s attention.

While industry awaits the district court’s decision on Wyndham’s motion to dismiss based on the FTC’s alleged lack of authority to set data security standards, companies should review and document their data security practices, particularly when it comes to sensitive personal information. Of course, in addition to the FTC, some states, such as Massachusetts, have their own data security standards, and most states require reporting of data breaches affecting personal information.

Aug 06, 2013

FTC Looking Closely at Impact on Consumers of ‘Big Data’

Manufacturers and marketers know that the more consumer data they have, the more they can tailor and direct their advertising, their products, and their product placement. This helps them to maximize sales and minimize costs. Thanks to the combination of cheap data storage and ubiquitous data-capture devices (e.g., smartphones, credit cards, the Web), the amount of consumer data out there to mine is astounding. Hence the recently popularized term, “Big Data.”

But the misuse of data could result in government enforcement actions and, more importantly, serious privacy violations that can affect everyone.

Some of the practical challenges and concerns flowing from the use of big data were addressed recently by FTC Commissioner Julie Brill at the 23rd Computers, Freedom and Privacy conference on June 26. The issues raised included noncompliance with the Fair Credit Reporting Act (FCRA) and consumer privacy matters such as transparency, notice and choice, and de-identification (scrubbing consumer data of personal identifiers).

The FCRA: Those whose business includes data collection or dissemination should determine whether their practices fall within the boundaries of the FCRA. As Brill pointed out, “entities collecting information across multiple sources and providing it to those making employment, credit, insurance and housing decisions must do so in a manner that ensures the information is as accurate as possible and used for appropriate purposes.” If Brill’s comments are any indication of enforcement actions to come, businesses should be aware that the FTC is on the lookout for big data enterprises that don’t adhere to FCRA requirements.

Consumer Privacy: Brill gave some credit to big data giant Acxiom for its recent announcement that it plans to allow consumers to see what information the company holds about them, but she noted that this access is of limited use when consumers have no way of knowing who the data brokers are or how their information is being used. Brill also pointed to a questionable use of consumer data by national retailer Target: the somewhat funny yet disturbing news story about the retailer identifying a teen’s pregnancy from her purchases. It is a classic example of why consumers ought to have notice of what data is being collected about them and how that information is being used.

Consumers also need to have, as Brill suggested, the opportunity to correct information about themselves. This makes sense. Data collection is imperfect: different individuals’ information may be inaccurately combined, someone’s information may have been hacked, someone could be the victim of cyber-bullying, and other mishaps and errors can occur. Consumers should be able to review and correct information for errors. Finally, Brill highlighted concerns that current efforts to scrub consumer data may be ineffective, as companies are getting better at combining disparate data points to accurately identify an individual. “Scrubbed” data in the wrong hands could be as harmful as a direct security breach.

Brill encouraged companies to follow the “privacy by design” recommendations issued by the FTC in order to build more protections into their products and services. She further emphasized her initiative “Reclaim Your Name,” which seeks to promote consumers’ knowledge of, and access to, the data collected about them. Companies that are in the business of data collection, mining and analytics should take note of the FTC’s efforts to empower the consumer against the overuse or misuse of consumer data. If you want to stay on the good side of the FTC – and on the good side of the informed consumer – work with the consumer, and provide meaningful notice, choice and consent.

posted in: Privacy
Jul 30, 2013

FTC Orders Mobile Device Maker to Patch Up Its Software Security

Following a public comment period, the Federal Trade Commission recently approved a final order settling charges against mobile device manufacturer HTC America, Inc. HTC develops and manufactures mobile devices based on the Android, Windows Mobile, and Windows Phone operating systems. This case, which focuses on device security, is the FTC’s first case against a device manufacturer.

The FTC alleged that HTC failed to take reasonable steps to secure the software it developed for its smartphones and tablet computers. According to the FTC, HTC’s failures introduced various security flaws that placed consumers’ sensitive information at risk. The FTC’s action against HTC signals the agency’s continued focus on data security and data privacy issues and use of its broad “Section 5” authority, which the FTC has repeatedly asserted against various organizations, including its ongoing litigation with Wyndham Hotels. The HTC case also reiterates the agency’s strong interest in securing mobile networks, now that mobile phones, which are full of sensitive contact, financial, and other personal information, have become so prevalent.

Companies may be asking what HTC actually did to warrant this FTC action. The FTC claims that HTC, when customizing the software on mobile devices, failed to provide its staff with sufficient security training, failed to review or test the software on its mobile devices for potential security vulnerabilities, failed to follow commonly accepted secure coding practices, and did not have a process for receiving and addressing vulnerability reports from third parties.

In particular, the FTC asserted that HTC devices potentially permitted malicious applications to send text messages, record audio, and install additional malware onto a consumer’s device, without the user’s consent or even knowledge. These malicious applications allegedly could access financial and medical information and other sensitive information such as a user’s geolocation and text message content.

In particular, in the case of Android devices, the FTC claimed that HTC pre-installed a custom application that could download and install applications outside the normal Android installation process. However, HTC did not include appropriate permission-check code to restrict which applications could invoke this pre-installed application. Consequently, a third-party application could command the pre-installed application to download and install any additional applications onto the device without a user’s knowledge or consent.

The FTC further charged that HTC’s actions actually undermined Android consent mechanisms that, but for HTC’s actions, would have prevented unauthorized access and transmission of sensitive information. The FTC’s complaint alleged that the vulnerabilities have been present on approximately 18.3 million HTC devices running Android. The complaint further alleged that HTC could have prevented these vulnerabilities through readily available, low-cost measures, such as adding a few lines of permission check code when programming its pre-installed applications.
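The "few lines of permission check code" the FTC describes amount to a simple caller check before a privileged action. The Python sketch below is a hypothetical simulation (the package names and function are illustrative, not HTC’s actual Android code): with the check disabled it mirrors the alleged flaw, where any caller can command an install; with the check enabled, untrusted callers are rejected.

```python
# Illustrative simulation of a caller permission check on a privileged
# installer. Names are hypothetical; real Android code would consult the
# platform's permission system rather than a hard-coded set.

TRUSTED_CALLERS = {"com.vendor.updater"}  # callers allowed to trigger installs

def install_app(caller_id, package, check_permission=True):
    """Install `package` on behalf of `caller_id`.

    With check_permission=False this mirrors the alleged vulnerability:
    any third-party app can command an install without the user's
    knowledge or consent.
    """
    if check_permission and caller_id not in TRUSTED_CALLERS:
        raise PermissionError(f"{caller_id} may not install applications")
    return f"installed {package}"
```

The contrast illustrates why the FTC characterized the fix as a readily available, low-cost measure: the guard clause is trivial to add, and its absence is what turned a convenience feature into an open door.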

In a precedent-setting remedy, the FTC’s final order requires HTC to develop and release software patches within 30 days of service of the FTC’s final order on HTC. The patches must fix vulnerabilities in millions of HTC’s devices, including every covered device having an operating system version released on or after December 2010. HTC must also establish a comprehensive security program designed to address security risks during the development of HTC devices. The FTC requires the program to include consideration of employee training and management; product design, development and research; secure software design and testing; and review, assessment, and response to third party security vulnerability reports.

Further, HTC must undergo independent security assessments every other year for the next 20 years. Among other requirements, the independent professional assessment must certify that HTC’s security program operates with sufficient effectiveness to provide reasonable assurance that the security of covered device functionality and the security, confidentiality, and integrity of covered information are protected, and that the program has so operated throughout the reporting period. HTC is also barred from making false or misleading statements about the security and privacy of consumers’ data on HTC devices.

The FTC’s action against HTC has broad application beyond the mobile device and software marketplace. The agency’s action further solidifies the FTC’s role as the leading enforcer of data security standards. Once again the FTC has demonstrated that it is setting data security standards and will continue to monitor and police the marketplace when it believes companies have not incorporated what it believes are commonly accepted security features or when organizations have failed to take steps to prevent vulnerabilities.

Jul 18, 2013

Google Glass Sounds Exciting — But What About Privacy?

Beta testing is underway for Google Glass, a new technology that provides the functionality of a smartphone in a headset worn like glasses. Much like a smartphone, the Glass headset is able to exchange messages with other mobile devices, take pictures, record videos, and access search engines to respond to user queries. But unlike a smartphone, the device’s optical head-mounted display, voice recognition, and front-facing camera give users hands-free access to its features, including the ability to capture photographs and video recordings of the world in front of them.

For now, Glass is only available to developers and a small group of test users known as Google Explorers. The device is expected to go on sale to the public in 2014. In the meantime, public speculation swells, and the blogosphere is full of conflicting reports about what the device can and cannot do. Some suggest that the device will utilize facial recognition and eye-tracking software to show icons and statistics above people whom the user recognizes. A more common concern is that the device will be able to photograph and record what the user sees and then share that data with third parties without permission from the user or those whose likenesses are being captured.

Because of this lack of clarity, lawmakers around the world are urging Google to affirmatively address the myriad of privacy concerns raised by this new technology. Last month, an international group of privacy regulators – including representatives from Australia, Canada, Israel, Mexico, New Zealand, Switzerland, and a European Commission panel – signed off on a letter to Google’s CEO Larry Page asking for more information regarding the company’s plans to ensure compliance with their data protection laws.

Here in the United States, the House Bipartisan Privacy Caucus issued a similar letter of its own. In addition to a variety of questions regarding the device’s capabilities, the letters reference some of Google’s recent data privacy mishaps and ask whether Google intends to take proactive steps to ensure the protection of user and nonuser privacy.

Google’s Vice President of Public Policy and Governmental Relations (and former New York Congresswoman) Susan Molinari issued a formal response to the House Bipartisan Privacy Caucus. According to Molinari, Google “recognize[s] that new technology is going to bring up new types of questions, so [they] have been thinking carefully about how [they] design Glass from its inception.”

To address concerns about the picture and video capabilities, Molinari highlighted several features designed to “give users control” and “help other people understand what Glass users are doing.” For example, specific user commands are required to search the Internet or find directions, and the user must either press a button on the arm of the Glass or say “Take a photo” or “Record a video” in order to access those features.

Molinari’s letter plainly states that Google’s privacy policies will not change for Glass. Instead, Glass will be governed by the terms of Google’s current privacy policy. However, the company has created policies for developers making Glass apps, also known as “Glassware.” For example, developers will not be allowed to incorporate facial recognition into their Glassware, and they are prohibited from disabling or turning off the display when using the camera. Glassware developers must also agree to terms of service for Glass’s application programming interfaces (APIs) and must have and follow their own privacy policies. We are carefully observing Google’s actions to ensure that they are in keeping with Google’s promises, and we encourage all Glassware developers, as well, to comply with Google’s policies.

posted in:
Privacy
Jun 30, 2013

FTC to Search Engines: Distinguish Paid Search Results or Risk FTC Action

While Google is already subject to commitments it made to the FTC regarding the requirement to afford advertisers non-discriminatory access to its search engine, the FTC’s latest guidance makes clear that Google and other search engines must also maintain clear disclosures to the public about sponsored content in search results.

On June 24, 2013, in a series of letters to general search engines such as Google, Yahoo, and Ask.com, as well as to specialized search engines, the FTC issued updated guidance concerning disclosures regarding paid advertisements in search results.

This latest FTC action follows on the heels of the Commission’s recent updates to the Dot Com Disclosures and the updated Endorsements and Testimonials Guides. The FTC’s letters came in response to industry and consumer organizations’ requests to the Commission to update its policies on search engine results, last released in 2002. The FTC also noted that it has observed a decline in search engines’ compliance since 2002.

The FTC’s central concern, first articulated in 2002, remains the problem that consumers may be deceived in violation of Section 5 of the FTC Act unless search engines clearly and prominently distinguish advertising from natural search results.

Consumers assume that search results reflect the most relevant results. When results appear because the advertiser has paid a search engine for, say, prominent placement, that placement could be deceptive to consumers if they are unaware of the commercial relationship between the advertiser and the search engine.

The growth of mobile commerce in particular has spurred the FTC to issue new guidelines. Search results on a mobile phone screen are, by their nature, small, and consumers could easily be confused by paid search results if the “paid” nature of those results is not clear.

In the new guidance, the FTC states that if search engines continue to distinguish advertising results by giving a different background color or shading combined with a text label (such as “sponsored” or “ad”), the search engines should consider multiple factors to ensure that any labels and visual cues are sufficiently “noticeable and understandable” to consumers. The agency clarified that there is no “one size fits all” and that search engines may use various methods, provided the disclosures are noticeable and understandable.

Proper disclosures, according to the FTC, include the following:

• Visual Cues – Search engines must select hues of sufficient luminosity to account for varying monitor types, technology settings, and lighting conditions. The FTC notes that search engines should consider using web pages of different luminosities for mobile devices and desktop computers. Further, the FTC recommends that search engines should use:

o more prominent shading that has a clear outline;
o a prominent border that distinctly sets off advertising from the natural search results; or
o both prominent shading and a border.

• Text Labels – The FTC asserts that text labels must be used in addition to the visual cues a search engine may use to distinguish advertising. Text labels must:

o use language that explicitly and unambiguously conveys that a search result is advertising;
o be large and visible enough for consumers to notice them;
o be located near the search results (or group of search results) that they qualify and where consumers will see them; and
o be placed immediately in front of an advertising result, or in the upper left-hand corner of an ad block (including any grouping of paid specialized results), in adequately sized and colored font.

The new guidance also recognizes that technology will continue to evolve, such as voice assistants on mobile devices (e.g., the iPhone’s “Siri”). While technology may change, the new guidance makes clear that the FTC Act’s Section 5 prohibition on deceptive practices remains. Therefore, businesses must make sure that they differentiate advertising from other information. For instance, if a voice interface is used to deliver search results (for example, “find me a Mexican restaurant”), the search engine should audibly disclose any paid advertisements, at an adequate volume and cadence for ordinary listeners to hear and comprehend.

The FTC continues to be vigilant in monitoring the online marketplace. Search engines and advertisers need to review their practices, keeping in mind that disclosures that may be readily apparent on a desktop may be hidden on a mobile screen. As with the “Dot Com Disclosures,” the agency is providing guidance to businesses; however, FTC enforcement remains vigilant and companies that do not clearly disclose paid advertising in search results could face an FTC investigation.

Jun 12
2013

Wyndham Case Challenges FTC’s Authority Over Cybersecurity

Over the past decade the Federal Trade Commission has brought cybersecurity enforcement actions against various private companies, imposing tens of millions of dollars in monetary penalties and requiring companies to maintain more stringent data-security practices. No company has ever challenged the FTC’s authority to regulate cybersecurity in this way in court – until now. On June 17, 2013, a federal court will finally get a chance to weigh in on whether the scope of the FTC’s regulatory jurisdiction is so broad as to include setting standards for cybersecurity.

In FTC v. Wyndham Worldwide Corporation, et al., the FTC launched a civil action against the parent company of the Wyndham hotels and three of its subsidiaries for data security failures that led to three major data breaches in less than two years. The Commission’s complaint charges that Wyndham’s security practices were unfair and deceptive in violation of the FTC Act.

Unlike many other data-security FTC enforcement actions, in which the defendant has chosen to cut its losses and settle out of court, Wyndham has decided to stand and fight with a motion to dismiss. Judge Esther Salas of the U.S. District Court for the District of New Jersey is expected to rule on Wyndham’s motion on June 17.

The FTC complaint alleges that Wyndham Hotels and Resorts (a Wyndham subsidiary and named defendant) had a privacy policy stating that they “safeguard customers’ personally identifiable information by using industry standard practices” and “make commercially reasonable efforts to make [their] collection of such information consistent with all applicable laws and regulations.”

The FTC argues that this policy covers the individual hotels and that it was deceptive because the defendants failed to implement “reasonable and appropriate” data-security measures. Wyndham’s motion to dismiss attacks the facts of the deception claim by quoting language from the Wyndham Hotels and Resorts privacy policy that expressly explains that the privacy policy does not apply to the individual hotels. Wyndham argues that, taken as a whole, Wyndham Hotels and Resorts’ privacy policy is not deceptive.

With respect to the FTC’s unfairness claim, Wyndham’s motion asserts that the FTC is attempting to circumvent the legislative process by acting as if “it has the statutory authority to do that which Congress has refused: establish data-security standards for the private sector and enforce those standards in federal court.”

According to Wyndham, “on multiple occasions in the 1990s and early 2000s the FTC publicly acknowledged that it lacked authority to prescribe substantive data-security standards under the [FTC Act]. For that very reason, the FTC has repeatedly asked Congress to enact legislation giving it such authority.” Further, Wyndham highlights the Senate’s failure to pass the Cybersecurity Act of 2012, which sought to address the need for specific data-security standards for the private sector, and President Obama’s February 2013 Executive Order on cybersecurity that was issued in response to the Congressional stalemate.

On its face, Wyndham’s motion to dismiss seems quite strong. However, the facts that the FTC is alleging do not cut in Wyndham’s favor. The Commission’s complaint alleges that Wyndham’s failure to “adequately limit access between and among the Wyndham-branded hotels’ property management systems, [Wyndham] Hotels and Resorts’ corporate network, and the Internet” allowed intruders to use weak access points (e.g., a single hotel’s local computer network) to hack into the entire Wyndham Hotels and Resorts’ corporate network. From there, the intruders were able to gain access to the payment management systems of scores of Wyndham-branded hotels.

According to the FTC, Wyndham failed to remedy known security vulnerabilities, employ reasonable measures to detect unauthorized access, and follow proper incident response procedures following the first breach in April 2008. Thus, the corporation remained vulnerable to attacks that took place the following year. All told, the intruders compromised over 600,000 consumer payment card accounts, exported hundreds of thousands of payment card account numbers to a domain registered in Russia, and used them to make over $10.6 million in fraudulent purchases.

Unfortunately – as Wyndham notes in its motion to dismiss – hacking has become an endemic problem. There has been no shortage of stories about major cyber-attacks on private companies and governmental entities alike: from Google and Microsoft to NASA and the FBI. And the FTC has not been shy about bringing enforcement actions against private companies with inadequate security measures.

If Wyndham prevails, the case could usher in a major reduction in FTC enforcement efforts. However, if the court sides with the FTC, the Commission will be further empowered to regulate data-security practices. With such high stakes on both sides, any decision is likely to result in an appeal. In the meantime, companies in various industry sectors that maintain personal consumer information are awaiting next week’s decision.

posted in:
Cybersecurity
Mar 15
2013

FTC Revises Online Advertising Disclosure Guidelines: Say It and Say It Clearly

This week, the FTC released updated guidance to its 2000 “Dot Com Disclosures,” a guide covering disclosures in online advertising. The online world has certainly changed in 13 years, and the new guidelines, available here, cover advances in online advertising, including mobile advertising.

One central theme still prevails: existing consumer protection laws and rules apply no matter where you offer products and services: newspapers, magazines, TV and radio commercials, websites, direct marketing, and mobile marketing. Thus, the basic principle applies that companies must ensure that their advertisements are truthful and accurate, including providing disclosures necessary to ensure that an advertisement is not misleading. Further, the disclosures should be clear and conspicuous – irrespective of the medium of the message.

In determining whether a disclosure is “clear and conspicuous” as the FTC requires, advertisers should consider the disclosure’s placement in the ad. Importantly, the 2000 guidelines defined proximity of disclosures to ads as “near, and when possible, on the same screen.” The new guidelines state that disclosures should be “as close as possible” to the relevant claim. The closer the disclosure is to the claim, the better it is for FTC compliance purposes.

Advertisers should also consider: the prominence of the disclosure; whether it is unavoidable (e.g., consumers must scroll past the disclosure before they can make a purchase); whether other parts of the ad distract attention from the disclosure; whether the disclosure should be repeated at different places on the website; whether audio disclosures are delivered at a sufficient volume and an appropriate cadence (e.g., not too fast); whether visual disclosures appear for long enough; and whether the language of the disclosure is appropriate for the intended audience. The FTC suggests avoiding “legalese” or technical jargon.

Mobile marketers should take note that the FTC provided some additional guidance regarding disclosure issues particular to mobile marketing. In particular, the FTC stated that the various devices and platforms upon which an advertisement appears or a claim is made should be considered. For example, if the advertiser cannot make necessary disclosures because of the limit of the space (e.g., in a mobile app), then the claim should not be made on the platform.

The FTC does permit hyperlinks for disclosures in certain circumstances. However, hyperlinks must:
– be obvious
– be labeled appropriately to convey the importance, nature and relevance of the information they lead to (such as “Service plan required. Get service plan prices here”)
– be used consistently
– be placed as close as possible to the relevant information the hyperlink qualifies and made noticeable
– take consumers directly to the disclosure after clicking

Companies should assess the effectiveness of the hyperlink by monitoring click-through rates and make changes accordingly. The agency also suggests that advertisers design ads so that scrolling is not necessary to find a disclosure. The FTC discourages hyperlinks for disclosures involving product costs or certain health and safety issues (similar to its 2000 guidelines).

The FTC suggests that companies display disclosures before a consumer chooses to buy a product or service – in other words, the disclosure should appear before a consumer adds a purchase to his or her “shopping cart” or before the consumer clicks “buy now.” The FTC also cautions that necessary disclosures should not be relegated to general “terms of use” and similar contractual terms on a website. Further, because so many consumers have set their computers to prevent “pop ups,” the FTC discourages companies from placing disclosures in “pop ups.”

Probably the most helpful part of the new guidelines is the set of 22 examples of proper and improper disclosures the FTC provides at the end. As companies move forward in promoting products and services online, particularly on mobile platforms, reviewing these examples along with the general principles of truthful and complete statements in advertising may save a company from an FTC enforcement action.

Organizations are increasingly marketing their products and services on mobile platforms. Advertisers should take note that special considerations apply in the mobile marketplace, especially the space and text size limitations. If a disclosure is necessary to prevent an advertisement from being deceptive, unfair, or otherwise violative of an FTC rule, it must be clear and placed next to the offer. If that can’t be done, the safest course would be to move the offer to another platform, such as a traditional website. The FTC and the states have demonstrated that they take a keen interest in mobile marketing and they will be watching claims and disclosures in the smartphone/tablet universe.


About Ifrah Law

Crime in the Suites is authored by the Ifrah Law Firm, a Washington DC-based law firm specializing in the defense of government investigations and litigation. Our client base spans many regulated industries, particularly e-business, e-commerce, government contracts, gaming and healthcare.

Ifrah Law focuses on federal criminal defense, government contract defense and procurement, healthcare, and financial services litigation and fraud defense. Further, the firm's E-Commerce attorneys and internet marketing attorneys are leaders in internet advertising, data privacy, online fraud and abuse law, and iGaming law.

The commentary and cases included in this blog are contributed by founding partner Jeff Ifrah, partners Michelle Cohen, David Deitch, and associates Rachel Hirsch, Jeff Hamlin, Steven Eichorn, Sarah Coffey, Nicole Kardell, Casselle Smith, and Griffin Finan. These posts are edited by Jeff Ifrah. We look forward to hearing your thoughts and comments!
