Google recently announced that it would be taking action to demote websites that profit from the use of mugshot photos. These mugshot sites compile booking photographs taken after people’s arrests and publish them along with the arrestees’ names and information concerning the charges against them. Individuals who want their mugshot and arrest record deleted from the site usually must pay a fee ranging anywhere from $10 to $400. Until recently, when a Google user searched the Internet for the name of a recent arrestee, the search hits would include, and often prioritize, mugshot sites. Owners of those sites were content with that outcome; many others were not.
New York Times writer David Segal was one of the latter. In a recent article, Segal took Google to task for not penalizing mugshot sites, which many believe traffic in exploitation. Segal argued that Google should take corrective action because it had prioritized the sites in contravention of its own stated corporate goal favoring original web content. Mugshot sites do not offer original content; instead, they gather and republish images and text from third-party sources.
Before his article ran, Segal contacted Google to discuss the issue. Google responded that it had been working to address the problem in a consistent way. Days later, a Google spokesperson confirmed that mugshot sites do not comply with one of the search giant’s guidelines. To address the problem, Google amended its algorithm, presumably to disfavor sites without original content.
Consequently, mugshot sites are now pushed off the front page of Google search results. People digging for dirt now have to look a little bit harder.
Others who object to mugshot sites have taken the fight to regulators and legislators. On October 7, the Maryland Consumer Protection Division settled its case against the owner of Joomsef.net for false and deceptive advertising. Joomsef’s owner, Stanislav Komsky, published information on the site about traffic offenses, but added statements falsely suggesting there had been an arrest. Persons identified on the site had to pay $40 to $90 to have the information removed. As part of the settlement, Komsky must take down the site, return all payments to consumers, and pay a penalty of $7,500.
Other states are addressing the problem through legislation. Segal points out that Oregon and Georgia have passed laws this year giving site owners 30 days to take down an image, free of charge, if an individual proves that he or she was exonerated or that the record has been expunged. Utah attacked the problem another way: there, sheriffs are prohibited from giving headshots to websites that charge for deleting them. In other states, lawmakers such as Florida Representative Carl Zimmerman have introduced legislation targeting the sites, but many of those bills died for lack of support.
These acts of government are constrained, as they should be, in view of free-speech guarantees under the First Amendment. By contrast, the private sector is not so limited and, therefore, may end up striking the decisive blow against mugshot sites. Things are heading in that direction. MasterCard, Discover, American Express, and PayPal recently pledged to sever all ties with mugshot sites, and Visa has asked merchant banks to investigate the practices of the sites.
A great way to make money is to develop a product or service that responds to a consumer want or demand, and then to stay ahead of prospective competitors by offering better pricing or quality. A not-so-great way to make money is to convince consumers to buy a product or service that they don’t really want or need, at inflated rates. A highly dubious way to make money is to trick consumers into paying for something they didn’t want and didn’t mean to buy.
Businesses operating in this third category, which may include a scareware marketer or two, have to weigh risk against reward. Is the reward of temporary profits worth the risk of legal action? What is the likelihood of such action, and what is its potential cost?
Someone who operates on tricks over treats, or by pure scareware tactics, can expect business to dry up as consumers learn to avoid the traps. Such an operator also faces the looming threat of consumer legal action, government intervention, and run-ins with credit card companies alarmed by high chargeback rates.
For these types of businesses in the mobile marketing space, the cost of potential government intervention is going up. A recent settlement between the Federal Trade Commission and Jesta Digital LLC points to the severe penalties a business may face for operating on the sidelines of fair play. The consequences include a hefty fine, consumer refunds, restricted billing practices and stringent compliance measures for years to come.
Jesta (which also does business as Jamster) is known mostly for its marketplace of ringtones, photos, videos and apps. Starting in 2011, it ran a scareware campaign, purportedly for anti-virus software, that the FTC asserts crossed the line into deceptive advertising. The ads ran on the free version of the Angry Birds app for Android. Using a graphic resembling the Android robot logo, the banner ad displayed a warning that viruses had been detected on the device, even though no virus scan had been conducted. According to the FTC, when consumers clicked the “remove [virus]” button, or similar “warning” buttons, Jesta directed them through a series of pages about virus protection that disclosed, only in very fine print, a monthly service fee for ringtones and other content.
The FTC alleges that consumers were charged the instant they pressed a “Protect Your Android Today” button. Through the use of Wireless Application Protocol (WAP) billing, the company was able to charge consumers through their cell phone numbers without obtaining express authorization. (It may be that this billing practice actually spurred the FTC into action, as wireless carriers had initiated their own penalties against Jesta over the large number of consumers demanding refunds.) The FTC also alleges that the anti-virus software often failed at download; apparently, at one point, only 372 out of 100,000 subscribers actually received some sort of anti-virus app download link.
The FTC describes numerous deceptive practices: mimicking the Android logo to make the virus warnings seem credible, charging consumers without their knowledge or consent, and failing to provide the services charged for. The company apparently was aware that its scareware tactics crossed the line; email correspondence among company executives noted that the chief marketing officer was “anxious to move our business out of being a scam and more into a valued service.”
So now the company must pay the FTC a $1.2 million penalty and offer refunds to consumers. Identifying and notifying consumers of their refund options, and tracking all of this to show the FTC, will be a costly undertaking. Another major cost will be 20 years of compliance: stringent, detailed billing practices that the company, and all participants, including principals and agents, must follow; disclosures it must make; and compliance monitoring and recordkeeping requirements. The settlement agreement is far more than a hand slap; its terms keep Jesta (and its principals!) beholden to the FTC for the foreseeable future.
Mobile marketers who calculate risk versus reward and decide that a get-rich-quick scheme is worth the risk should think again. The FTC is making deceptive marketing tactics, like many scareware campaigns, a priority. We have seen strong action from the agency in the recent past, including hefty penalties against Innovative Marketing and its principal Marc D’Souza. Moreover, the newly appointed head of consumer protection at the FTC, Jessica Rich, has noted that the FTC is expanding digital enforcement, increasing the risk of getting caught in the agency’s cross-hairs.
On October 3, 2013, the Consumer Financial Protection Bureau announced it had filed a complaint in federal district court in Washington state against a leading debt-settlement payment processor, Meracord LLC, and its CEO. The CFPB contends that Meracord helped third parties collect millions of dollars in illegal upfront fees from consumers.
The complaint alleged violations of the Federal Trade Commission’s Telemarketing Sales Rule (TSR) and the Consumer Financial Protection Act of 2010. The CFPB contended that Meracord maintained accounts and processed payments for consumers who had contracted with providers of debt-relief services and mortgage assistance relief services. As is often the case, when consumers enroll in a debt-relief program, they also enter into a separate agreement with a payment processor, which establishes and maintains a “dedicated account” for the consumer. At the time of enrollment, the debt-relief service provider instructs the consumer to stop paying his or her unsecured debts and, instead, to make monthly payments to the payment processor. The processor can later pay renegotiated debts to the creditor and also pay the debt-relief provider’s fees.
The CFPB alleged that, since October 27, 2010, Meracord processed payments for more than 250,000 consumers receiving debt-relief services from more than 250 debt-relief service providers. According to the agency, consumers paid debt-relief service providers before any debts were settled. The Telemarketing Sales Rule has special requirements for debt-relief services. In particular, providers may not request or collect fees before providing debt-relief services that result in an actual renegotiation or other settlement of a consumer’s debt and a payment by the consumer to a creditor. The CFPB asserted that Meracord processed payments for debt-relief service providers that routinely charged advance fees to consumers in violation of the TSR.
The TSR also makes it unlawful for third parties to assist others in violating the TSR, and the CFPB invoked this provision against Meracord. Because Meracord collected payments from consumers and knew whether and when those payments were disbursed to creditors and to the debt-relief service providers, it would have known that the providers were violating the TSR by collecting fees before delivering debt-relief services that resulted in payments to creditors.
Meracord and its CEO have agreed to settle the case. In the Stipulated Final Judgment and Order, Meracord and its CEO, Linda Remsberg, agree that they will be permanently enjoined from providing account-maintenance or payment-processing services to any provider of a debt-relief service or a mortgage assistance relief service. The proposed settlement (which must be approved by the court) also provides for a civil money penalty of $1.37 million, compliance reporting and monitoring, and ongoing recordkeeping requirements.
The CFPB’s action signals that it will use its authority to reach organizations that it believes provide substantial assistance to others violating consumer protection laws within its jurisdiction. CFPB Director Richard Cordray said, “By taking a stand against those who facilitate illegal activity, we can root out harmful behavior across the debt-settlement industry and better protect consumers.” Thus, it is not only companies dealing directly with consumers that need to be cognizant of the CFPB’s reach. In particular, organizations within the “chain” of industries such as debt settlement and credit repair should review their compliance with the laws and rules the CFPB may enforce (often sharing jurisdiction with other agencies such as the FTC), including the Fair Debt Collection Practices Act, the Fair Credit Reporting Act, the Telemarketing Sales Rule, the Business Opportunity Rule, and other consumer financial statutes.
It’s quite clear that the Federal Trade Commission and the Federal Communications Commission view existing federal consumer protection and communications statutes as fully applicable to new modes of communication such as texting. One excellent recent example is the FTC’s stipulated settlement, including a payment of $1 million, with a debt collection agency that had sent out text messages in order to collect debts.
The FTC had filed suit under the Fair Debt Collection Practices Act (FDCPA) against National Attorney Collection Services, Inc., National Attorney Services LLC, and Archie Donovan (as an individual). This appears to be the first FTC complaint alleging the illegal use of text messaging to collect consumer debts. In addition, the defendants were alleged to have violated the FDCPA in more traditional ways: publicly revealing consumer debts to family members and co-workers; sending mailings whose envelopes pictured an outstretched arm shaking money from the pockets of an upside-down consumer; and falsely portraying themselves as law firms or attorneys in phone calls and mailings, as well as in text messages. Of course, the older methods of violation were troublesome in and of themselves, but two specific points strike us as trend-setting in FTC enforcement.
The first point is the FTC’s emphasis that the medium of text messages does not change disclosure obligations under the FDCPA. The FTC has continued to crack down on illegal behavior that may be carried out by non-traditional means. As Jessica Rich, director of the FTC’s Bureau of Consumer Protection, has said, “No matter how debt collectors communicate with consumers — by mail, by phone, by text or some other way — they have to follow the law.”
The consumer protections in the FDCPA that require the disclosure in initial communications that the company is a debt collector and that any communications may be used to collect a debt apply equally to text messages, even though there may be significant space and size limitations. Likewise, any follow-up text message must state that the communication comes from a debt collector.
The second noteworthy point was the level of consent required by the stipulated order. The stipulated order provides that “express consent” shall mean that prior to sending a text message to a consumer’s mobile telephone: “(i) the Defendants . . . shall have clearly and prominently disclosed that the debtor may receive collection text messages on mobile phone numbers . . . in connection with the transaction that is the subject of the text message; and (ii) the individual has taken an additional affirmative step, including a signature or electronic signature, that indicates their agreement to receive such contacts.”
The FTC appears to have adopted a more stringent definition of consent (similar to the FCC) and is using the stipulated order as a means of notifying companies and consumers of the higher standard. Of course, it is possible to argue that the FTC is only requiring these particular defendants to meet the higher standard because of their alleged prior bad acts. However, we believe it more likely that the FTC is attempting to enforce a standard of express consent similar to that which the FCC has recently promulgated. Consequently, all companies are well advised to meet this higher standard of consent.
The FTC has now put companies on alert to ensure that their text messages comply with all applicable law. The idiosyncrasies of modern communication methods do not limit the compliance obligation. Ignorance is not a defense, even though Donovan’s attorney said that “the companies are now in compliance,” and that “nobody was intending to violate the law.”
The Federal Trade Commission recently filed another complaint against a company for alleged data security lapses. As readers of this blog know, the FTC has initiated numerous lawsuits against companies in various industries for data security and privacy violations, although it is facing a backlash from Wyndham and large industry organizations for allegedly lacking the appropriate authority to set data security standards in this way.
The FTC’s latest target is LabMD, an Atlanta-based cancer detection laboratory that performs tests on samples obtained from physicians around the country. According to an FTC press release, the FTC’s complaint (which is being withheld while the FTC and LabMD resolve confidentiality issues) alleges that LabMD failed to reasonably protect the security of the personal data (including medical information) of approximately 10,000 consumers, in two separate incidents.
Specifically, according to the FTC, LabMD billing information for over 9,000 consumers was found on a peer-to-peer (P2P) file-sharing network. The information included a spreadsheet containing insurance billing information with Social Security numbers, dates of birth, health insurance provider information, and standardized medical treatment codes.
In the second incident, the Sacramento, California Police Department found LabMD documents in the possession of identity thieves. The documents included names, Social Security numbers, and some bank account information. The FTC states that some of these Social Security numbers were being used by multiple individuals, indicating likely identity theft.
The FTC’s complaint alleges that LabMD did not implement or maintain a comprehensive data security program to protect individuals’ information, that it did not adequately train employees on basic security practices, and that it did not use readily available measures to prevent and detect unauthorized access to personal information, among other alleged failures.
The complaint includes a proposed order against LabMD that would require the company to implement a comprehensive information security program. The order would also require the program to be evaluated every two years for the next 20 years by an independent certified security professional. LabMD would further be required to provide notice to any consumers whose information it has reason to believe was or could have been accessible to unauthorized persons, and to those consumers’ health insurance companies.
LabMD has issued a statement challenging the FTC’s authority to regulate data security, and stated that it was the victim of Internet “trolls” who presumably stole the information. This latest complaint is yet another sign that the FTC continues to monitor companies’ data security practices, particularly respecting health, financial, and children’s information. Interestingly, the LabMD data breaches were not huge, affecting only about 10,000 consumers. But the breach of, and potential unauthorized access to, sensitive health information and Social Security numbers tends to attract the FTC’s attention.
While industry awaits the district court’s decision on Wyndham’s motion to dismiss based on the FTC’s alleged lack of authority to set data security standards, companies should review and document their data security practices, particularly when it comes to sensitive personal information. Of course, in addition to the FTC, some states, such as Massachusetts, have their own data security standards, and most states require reporting of data breaches affecting personal information.
Manufacturers and marketers know that the more consumer data they have, the more they can tailor and direct their advertising, their products, and their product placement. This helps them to maximize sales and minimize costs. Thanks to the combination of cheap data storage and ubiquitous data capturers (e.g., smart phones, credit cards, the Web), the amount of consumer data out there to mine is astounding. Hence the recently popularized term, “Big Data.”
But the misuse of data could result in government enforcement actions and, more importantly, serious privacy violations that can affect everyone.
Some of the practical challenges and concerns flowing from the use of big data were addressed recently by FTC Commissioner Julie Brill at the 23rd Computers, Freedom and Privacy conference on June 26. Issues she raised included noncompliance with the Fair Credit Reporting Act and consumer privacy matters such as transparency, notice and choice, and deidentification (scrubbing consumer data of personal identifiers).
The FCRA: Those whose business includes data collection or dissemination should determine whether their practices fall within the boundaries of the FCRA. As Brill pointed out, “entities collecting information across multiple sources and providing it to those making employment, credit, insurance and housing decisions must do so in a manner that ensures the information is as accurate as possible and used for appropriate purposes.” If Brill’s comments are any indication of enforcement actions to come, businesses should be aware that the FTC is on the lookout for big data enterprises that don’t adhere to FCRA requirements.
Consumer Privacy: Brill gave some credit to big data giant Acxiom for its recent announcement that it plans to allow consumers to see what information the company holds about them, but she noted that this access is of limited use when consumers have no way of knowing who the data brokers are or how their information is being used. Brill also highlighted a questionable use of consumer data by a national retailer: the somewhat funny yet disturbing news story about Target identifying a teen’s pregnancy from her purchasing patterns. It is a classic example of why consumers ought to have notice of what data is being collected about them and how that information is being used.
Consumers also need, as Brill suggested, the opportunity to correct information about themselves. This makes sense: data collection is imperfect. Different individuals’ information may be inaccurately combined, someone’s information may have been hacked, someone could be the victim of cyber-bullying, and other mishaps and errors can occur. Consumers should be able to review their information and correct errors. Finally, Brill highlighted concerns that current efforts to scrub consumer data may be ineffective, as companies are getting better at combining disparate data points to accurately identify an individual. “Scrubbed” data in the wrong hands could be as harmful as a direct security breach.
Brill encouraged companies to follow the “privacy by design” recommendations issued by the FTC in order to build more protections into their products and services. She further emphasized her initiative “Reclaim Your Name,” which aims to promote consumers’ knowledge of, and access to, the data collected about them. Companies in the business of data collection, mining and analytics should take note of the FTC’s efforts to empower the consumer against the overuse or misuse of consumer data. If you want to stay on the good side of the FTC – and on the good side of the informed consumer – work with the consumer, and provide meaningful notice, choice and consent.
Following a public comment period, the Federal Trade Commission recently approved a final order settling charges against mobile device manufacturer HTC America, Inc. HTC develops and manufactures mobile devices based on the Android, Windows Mobile, and Windows Phone operating systems. This case, which focuses on device security, is the FTC’s first case against a device manufacturer.
The FTC alleged that HTC failed to take reasonable steps to secure the software it developed for its smartphones and tablet computers. According to the FTC, HTC’s failures introduced various security flaws that placed consumers’ sensitive information at risk. The FTC’s action against HTC signals the agency’s continued focus on data security and data privacy issues and its use of its broad “Section 5” authority, which the FTC has repeatedly asserted against various organizations, including in its ongoing litigation with Wyndham Hotels. The HTC case also reiterates the agency’s strong interest in mobile security, now that mobile phones, which are full of sensitive contact, financial, and other personal information, have become so prevalent.
Companies may be asking what HTC actually did to warrant this FTC action. The FTC claims that HTC, when customizing the software on mobile devices, failed to provide its staff with sufficient security training, failed to review or test the software on its mobile devices for potential security vulnerabilities, failed to follow commonly accepted secure coding practices, and did not have a process for receiving and addressing vulnerability reports from third parties.
In particular, the FTC asserted that HTC devices potentially permitted malicious applications to send text messages, record audio, and install additional malware onto a consumer’s device, without the user’s consent or even knowledge. These malicious applications allegedly could access financial and medical information and other sensitive information such as a user’s geolocation and text message content.
For example, in the case of Android devices, the FTC claimed that HTC pre-installed a custom application that could download and install applications outside the normal Android installation process. However, HTC did not include appropriate permission-check code to restrict third-party access to this pre-installed application. Consequently, a third-party application could command the pre-installed application to download and install additional applications onto the device without the user’s knowledge or consent.
The FTC further charged that HTC’s actions actually undermined Android consent mechanisms that, but for HTC’s actions, would have prevented unauthorized access and transmission of sensitive information. The FTC’s complaint alleged that the vulnerabilities have been present on approximately 18.3 million HTC devices running Android. The complaint further alleged that HTC could have prevented these vulnerabilities through readily available, low-cost measures, such as adding a few lines of permission check code when programming its pre-installed applications.
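The “few lines of permission check code” the FTC describes can be illustrated with a minimal sketch. The class, method, and permission-set plumbing below are hypothetical, not HTC’s or Android’s actual code; the point is simply that a privileged component should verify a caller’s granted permissions before acting on its request.

```java
import java.util.Set;

// Hypothetical sketch of the missing safeguard the FTC described: before a
// privileged pre-installed component installs anything on behalf of another
// app, it checks that the caller actually holds the required permission.
public class PermissionCheckSketch {

    static final String INSTALL_PERMISSION = "android.permission.INSTALL_PACKAGES";

    // Returns true only if the calling app's granted permissions include
    // the permission required to trigger an install.
    static boolean mayInstall(Set<String> callerPermissions) {
        return callerPermissions.contains(INSTALL_PERMISSION);
    }

    public static void main(String[] args) {
        Set<String> untrustedApp = Set.of("android.permission.INTERNET");
        Set<String> systemApp = Set.of("android.permission.INTERNET", INSTALL_PERMISSION);

        // Without the check, both requests would proceed; with it, the
        // untrusted app's install request is refused.
        System.out.println(mayInstall(untrustedApp)); // prints: false
        System.out.println(mayInstall(systemApp));    // prints: true
    }
}
```

On a real device this check would go through Android framework calls that verify the caller’s identity and granted permissions, rather than a plain set lookup; the FTC’s point was that even such inexpensive checks were absent.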
In a precedent-setting remedy, the FTC’s final order requires HTC to develop and release software patches within 30 days of service of the FTC’s final order on HTC. The patches must fix vulnerabilities in millions of HTC’s devices, including every covered device having an operating system version released on or after December 2010. HTC must also establish a comprehensive security program designed to address security risks during the development of HTC devices. The FTC requires the program to include consideration of employee training and management; product design, development and research; secure software design and testing; and review, assessment, and response to third party security vulnerability reports.
Further, HTC must undergo independent security assessments every other year for the next 20 years. Among other requirements, the independent, professional assessment must certify that HTC’s security program operates with sufficient effectiveness to provide reasonable assurance that the security of covered device functionality and the security, confidentiality, and integrity of covered information are protected, and that the program has so operated throughout the reporting period. HTC is also barred from making false or misleading statements about the security and privacy of consumers’ data on HTC devices.
The FTC’s action against HTC has broad application beyond the mobile device and software marketplace. The agency’s action further solidifies the FTC’s role as the leading enforcer of data security standards. Once again the FTC has demonstrated that it is setting data security standards and will continue to monitor and police the marketplace when it believes companies have not incorporated what it believes are commonly accepted security features or when organizations have failed to take steps to prevent vulnerabilities.
Ignorance of the law is no excuse; nor is (willful) ignorance of a business partner’s illegal activities.
That’s a lesson to be learned from a recent amended complaint in which the FTC named a payment processor alongside a telemarketer that allegedly engaged in a credit card interest rate reduction scam. The Commission originally filed suit against the telemarketing company, Innovative Wealth Builders, Inc., and its owners in January for misrepresenting the debt relief service they were selling, charging a fee before providing debt relief services, and billing consumers without their express informed consent. The January action resulted in the temporary shutdown of the company’s operations pending the outcome of the suit.
But earlier this month, the Commission filed an amended complaint, adding charges against Independent Resources Network Corp., IWB’s payment processor. The payment processor was accused of assisting and facilitating IWB’s deceptive practices, in violation of the Telemarketing Sales Rule (TSR). 16 C.F.R. § 310.3(b).
The TSR was created by the FTC in 1995 at the direction of Congress, which instructed the Commission to prescribe rules addressing abusive and deceptive telemarketing acts. The FTC has amended the TSR several times to respond to developments in telemarketing schemes. The amendments impose liability on third parties, such as payment processors and lead generators, that provide “substantial assistance or support” to any seller or telemarketer while knowing, or consciously avoiding knowing, that the seller or telemarketer is engaged in activity that violates the TSR.
In the FTC’s complaint against IRN, the Commission notes that IRN “processed millions of dollars of credit card transactions for IWB, thereby earning considerable fees for itself.” The complaint identifies several indicators that would have, or should have, put IRN on notice of IWB’s practices in violation of the TSR, including the following:
• IWB sent IRN copies of company documents including, but not limited to, telemarketing scripts and samples of the IWB defendants’ “financial plan”
• IRN was aware that IWB had a variety of complaints on consumer websites
• IWB had an “F” rating with the Better Business Bureau, and IRN accessed the BBB website several times
• IRN received thousands of copies of chargeback disputes initiated by dissatisfied consumers
• IRN received multiple fraud alerts from Discover regarding IWB
Instead of ceasing to process transactions for IWB, IRN responded to these “red flags” by increasing the percentage it withheld from transactions processed for the telemarketer and holding such sums in a reserve account.
Some takeaways from the complaint and other FTC developments: if you invest in basic due diligence to determine the risk level and credibility of a prospective account, you should follow through on your findings and adhere to the letter of the law. The FTC recently issued additional proposed changes to the TSR to address further payment-processing concerns.
Over the past decade the Federal Trade Commission has brought cybersecurity enforcement actions against various private companies, imposing tens of millions of dollars in monetary penalties and requiring companies to maintain more stringent data-security practices. No company has ever challenged the FTC’s authority to regulate cybersecurity in this way in court – until now. On June 17, 2013, a federal court will finally get a chance to weigh in on whether the scope of the FTC’s regulatory jurisdiction is so broad as to include setting standards for cybersecurity.
In FTC v. Wyndham Worldwide Corporation, et al., the FTC launched a civil action against the parent company of the Wyndham hotels and three of its subsidiaries for data security failures that led to three major data breaches in less than two years. The Commission’s complaint charges that Wyndham’s security practices were unfair and deceptive in violation of the FTC Act.
Unlike many other data-security FTC enforcement actions, in which the defendant has chosen to cut its losses and settle out of court, Wyndham has decided to stand and fight with a motion to dismiss. Judge Esther Salas of the U.S. District Court for the District of New Jersey is expected to rule on Wyndham’s motion on June 17.
With respect to the FTC’s unfairness claim, Wyndham’s motion asserts that the FTC is attempting to circumvent the legislative process by acting as if “it has the statutory authority to do that which Congress has refused: establish data-security standards for the private sector and enforce those standards in federal court.”
According to Wyndham, “on multiple occasions in the 1990s and early 2000s the FTC publicly acknowledged that it lacked authority to prescribe substantive data-security standards under the [FTC Act]. For that very reason, the FTC has repeatedly asked Congress to enact legislation giving it such authority.” Further, Wyndham highlights the Senate’s failure to pass the Cybersecurity Act of 2012, which sought to address the need for specific data-security standards for the private sector, and President Obama’s February 2013 Executive Order on cybersecurity that was issued in response to the Congressional stalemate.
On its face, Wyndham’s motion to dismiss seems quite strong. However, the facts that the FTC is alleging do not cut in Wyndham’s favor. The Commission’s complaint alleges that Wyndham’s failure to “adequately limit access between and among the Wyndham-branded hotels’ property management systems, [Wyndham] Hotels and Resorts’ corporate network, and the Internet” allowed intruders to use weak access points (e.g., a single hotel’s local computer network) to hack into the entire Wyndham Hotels and Resorts’ corporate network. From there, the intruders were able to gain access to the payment management systems of scores of Wyndham-branded hotels.
According to the FTC, Wyndham failed to remedy known security vulnerabilities, employ reasonable measures to detect unauthorized access, and follow proper incident response procedures following the first breach in April 2008. Thus, the corporation remained vulnerable to attacks that took place the following year. All told, the intruders compromised over 600,000 consumer payment card accounts, exported hundreds of thousands of payment card account numbers to a domain registered in Russia, and used them to make over $10.6 million in fraudulent purchases.
Unfortunately – as Wyndham notes in its motion to dismiss – hacking has become an endemic problem. There has been no shortage of stories about major cyber-attacks on private companies and governmental entities alike: from Google and Microsoft to NASA and the FBI. And the FTC has not been shy about bringing enforcement actions against private companies with inadequate security measures.
If Wyndham prevails, the case could usher in a major reduction in FTC enforcement efforts. However, if the court sides with the FTC, the Commission will be further empowered to regulate data-security practices. With such high stakes on both sides, any decision is likely to be appealed. In the meantime, companies in various industry sectors that maintain personal consumer information are awaiting next week’s decision.
The Federal Trade Commission recently approved nine final orders that settle charges against seven rent-to-own stores and a software design firm and its principals. The charges stemmed from shocking allegations that the companies spied on consumers using computers that the consumers had rented from them. Among other things, the Commission’s complaint alleged that the computers were equipped with software (PC Rental Agent) that used the rented computer’s webcam to take “pictures of children, individuals not fully clothed, and couples engaged in sexual activities.”
PC Rental Agent was designed by one of the defendants, DesignerWare, LLC, a Pennsylvania-based software company that licenses software to rent-to-own companies to assist them in locating stolen merchandise and collecting late payments. PC Rental Agent has three critical features: a “kill switch,” geophysical location tracking, and a “Detective Mode.” Using the kill switch and geophysical location tracking, DesignerWare could remotely disable and locate the rented computers. But at the request of a rent-to-own store, DesignerWare would also remotely activate Detective Mode on an individual computer and “surreptitiously log the computer user’s keystrokes, capture screenshots and take pictures with the computer’s webcam and send the data to DesignerWare servers.”
DesignerWare did not review the data gathered; rather, it forwarded the data, unencrypted, directly to an email account designated by the particular rent-to-own store. In numerous instances the data included “private and confidential details about the computer users,” including user names and passwords for email, banking, and social media accounts, as well as users’ Social Security numbers, financial statements, and medical records.
In settling the complaint, the companies agreed to a ban on the use of monitoring software and deceptive methods to gather consumer information. This includes a bar on the use of fake software registration screens to collect personal consumer information and the use of geophysical location tracking without consumer notice and consent. The seven rent-to-own companies are also barred from using improperly gathered information to collect on customer accounts. DesignerWare and its principals are barred from providing others with the means to commit illegal acts. Additionally, all of the defendants are subject to recordkeeping requirements that will allow the FTC to monitor their compliance for the next 20 years.
In a case with such sensational facts, it is quite notable that, beyond the FTC monitoring requirement, the penalties are essentially a restatement of the rules by which all companies must regularly abide. It is unclear why no civil penalty was imposed for behavior as egregious as this. Perhaps it is because there were no allegations that the defendants acted with malicious intent, transferred the data to third parties, or used it for any purpose other than retrieving rented computers. Whatever the case may be, this is yet another reminder that companies should give proper notice before collecting customers’ personal information and avoid collecting more information than necessary.