On August 22, 2013, the U.S. Court of Appeals for the Third Circuit ruled unanimously that under the Telephone Consumer Protection Act (TCPA), consumers may withdraw their consent to have robo-callers call them.
The appeals court ruled in favor of Ashley Gager, whom Dell Financial Services contacted after she revoked her prior express consent to be contacted. In 2007, Gager applied for a line of credit from Dell, which she received and on which she later defaulted. The credit application required that she provide her home phone number; in that field, she listed her cell phone number. After she defaulted on her credit line, Dell began calling Gager using an automated telephone dialing system. In 2010, Gager sent Dell a letter listing her phone number, which she did not identify as a cell number, and asking Dell not to call her anymore. Gager alleged that after receiving her letter, Dell called her cell phone using an automated dialing system approximately 40 times over a three-week period. The TCPA, among other things, bars companies from using an automatic telephone dialing system or a prerecorded voice to call mobile phones, absent prior express consent or an emergency.
The district court granted Dell’s motion to dismiss the complaint for failure to state a claim, holding that Gager could not revoke her prior express consent to receive calls. The district court held that because Dell did not qualify as a “debt collector,” the revocation rules under the Fair Debt Collection Practices Act (FDCPA) did not apply. Thus, the court reasoned that since the revocation rules were inapplicable and the TCPA is silent on revocation of consent, such a right did not exist. The court also noted that the Federal Communications Commission, which has the power to implement rules and regulations under the TCPA, had not issued any advisory opinions at the time that specifically addressed the right to revoke consent.
The Third Circuit reversed the district court’s ruling and found that consumers do have a right to revoke consent. The court rejected Dell’s argument that because the TCPA is silent as to whether a consumer may revoke consent to be contacted via an autodialing system, such a right to revoke did not exist. The Third Circuit’s opinion emphasized that the TCPA is a remedial statute that was passed to protect consumers from unwanted calls and should be construed to benefit consumers. Preventing consumers from revoking their consent to receive calls would not be consistent with the purpose of the statute.
The Third Circuit also noted that, after the district court dismissed Gager’s claim, the FCC issued a declaratory ruling, In the Matter of Rules and Regulations Implementing the Telephone Consumer Protection Act of 1991, SoundBite Communications, Inc., which primarily addresses other issues under the TCPA but also touches on consumers’ right to revoke express consent. The SoundBite decision notes that neither the text of the TCPA nor its legislative history directly addresses how prior express consent can be revoked, but it also notes that “consumer consent to receive . . . messages is not unlimited.” The Third Circuit relied on the SoundBite decision in finding that a consumer may revoke consent after it has been given and that there is no temporal limitation on the revocation period.
Dell will still be able to contact Gager regarding her delinquent account, but because she has revoked her prior express consent, the TCPA prohibits Dell from using an automated dialing system or a prerecorded voice to do so. Presumably, Dell can still contact Gager via live calls or through technology that does not amount to an automatic telephone dialing system.
In light of this decision in the Third Circuit, businesses should review their TCPA policies to ensure that they are complying with all rules and regulations. Additionally, on October 16, two additional changes to the TCPA rules will go into effect that impose stricter requirements on claiming exceptions to TCPA liability, and all TCPA policies should be reviewed to account for these changes. Businesses should also specifically review their TCPA policies to ensure that there is a procedure in place for consumers to opt out of receiving calls and text messages, even if they have previously provided consent. Taking and respecting opt-out requests is an important compliance practice that, if not followed, can lead to significant litigation, with potential damages and penalties.
The credit reporting industry – dominated by Experian, Equifax and Transunion – maintains a precarious balance of obligations: On the one hand, these companies bear a responsibility to banks and other businesses at large to retain reliable information to ensure that the credit scores they report are a fair representation of the individual’s credit-worthiness. On the other hand, federal law, including the Fair Credit Reporting Act, imposes an obligation upon the credit reporting agencies and other related companies to conduct reasonable investigations to address disputes about errors in individuals’ credit files. In both instances, the companies bear a weighty responsibility.
For this reason, companies in the credit reporting industry are subject to intensive regulatory scrutiny – historically by the Federal Trade Commission and, more recently, by the Consumer Financial Protection Bureau. Both agencies have issued reports on their studies of the way in which credit reporting companies handle the information entrusted to them, and how they respond to consumer disputes.
This past Sunday, CBS’s 60 Minutes – a show that most people associate with responsible news reporting – ran a segment that unfairly distorted these reports about credit reporting agencies’ compliance with their obligations. The show, which was largely based on an advance copy of an FTC study, relied upon a selective interpretation of the data in that study, throwing out snippets of information without being specific about what the data meant.
The vast majority of the story can hardly be viewed as unbiased: interviews with a politically motivated state attorney general, two plaintiffs’ attorneys who spend their careers suing the credit reporting agencies, a handful of dissatisfied consumers, and several disgruntled former call center employees whose role in addressing consumer complaints was never really explained in a meaningful way. The result was a show clearly intended to convey a message that the credit data retained by these companies is riddled with errors, and that the credit reporting agencies fail to comply with their legal obligations to take steps when there is a claim of an inaccuracy.
In fact, as the Consumer Data Industry Association has pointed out, the FTC study shows that 98 percent of credit reports are materially accurate. In this regard, 60 Minutes missed the most critical point in the research – that the measure of accuracy is tied to the question of whether an error has consequences for consumers and not just whether there is an error that has little or no impact on credit scores. The FTC study actually concluded that only 2.2 percent of credit reports have an error that would lead to higher-priced credit for the consumer.
60 Minutes compounded its error by repeatedly asserting that it was “nearly impossible to expunge” an error in a credit report, and providing a forum for a state attorney general and two plaintiffs’ attorneys to assert that the credit reporting companies do not comply with their obligations under federal law. This one-sided treatment does not square with a 2011 study from the Political and Economic Research Council that showed that consumers were satisfied with the resolution of their disputes in 95 percent of the cases. It also does not square with the results of a year-long study of the dispute process by the FTC in which the agency found no violations of law.
It is not hard to understand what motivated 60 Minutes to run this story: Because everyone has a credit score, an inflammatory story about credit scores is likely to get everyone’s attention. But the one-sided and distorted way in which 60 Minutes presented this information was a disservice to the public. And even if credit reporting agencies are not perfect, they deserve better treatment at the hands of those who have the public’s ear.
On October 16, 2013, two changes will go into effect in the rules implementing the federal Telephone Consumer Protection Act (TCPA). Importantly, these rules impose stricter requirements on mobile messaging and prerecorded telemarketing calls. The rule changes, announced back in February 2012, may spur further litigation concerning the scope of the TCPA. All businesses should review the new requirements to ensure compliance or risk significant potential litigation expenses and negative publicity.
TCPA litigation has increased significantly in recent years. The number of TCPA-related cases filed in 2012 was 34 percent higher than in 2011 and more than three times the number filed in 2010. Part of what is fueling the uptick in TCPA litigation is the increasing use of mobile messaging, combined with the enormous potential damages available under the statute. Every individual text, call or fax found to violate the TCPA can result in damages of $500 to $1,500, and there is no limit on the number of violations that can be included in an individual suit. The Federal Communications Commission (FCC) and state attorneys general, as well as private litigants, may enforce the TCPA.
Some major companies have been hit with significant penalties under the TCPA. In May, Papa John’s International agreed to pay $16.5 million as part of a settlement of a TCPA class action stemming from claims that the company sent unsolicited text messages to more than 200,000 people through a third-party marketer. Steve Madden and Domino’s Pizza have also both reached settlements this year agreeing to fines of nearly $10 million to settle TCPA claims.
The two changes going into effect in October are as follows. First, one exception from TCPA liability for phone calls or text messages made using an autodialer or a prerecorded message is “prior express consent.” Under the FCC’s new interpretation of this exception, with limited exceptions, a business can invoke it for autodialed or prerecorded calls to a mobile phone, or for prerecorded telemarketing calls to a residential line, only if the called party has physically or electronically signed an agreement that clearly authorizes that particular sender to make calls or send texts to their phone number. Additionally, signing the agreement must be optional and cannot be tied to the purchase of any goods or services.
The other significant change to the TCPA rules is the elimination of the “established business relationship” exception for prerecorded telemarketing calls to residences. Previously, businesses could avoid TCPA liability for prerecorded telemarketing calls that otherwise were prohibited by claiming that they had an established business relationship with the consumer by virtue of a previous purchase or other business interactions. The new regulations have eliminated this exemption, meaning businesses are now required to obtain written consent for all prerecorded telemarketing to residential phone numbers, even those that are for previous customers. With this change, the FCC followed the Federal Trade Commission (FTC), which made a similar express consent requirement under the Telemarketing Sales Rule for prerecorded telemarketing calls a few years ago.
As some of the recent cases have shown, businesses can face enormous potential liability under the TCPA, including liability for actions of third-party marketers acting on behalf of them. The statistics demonstrate that plaintiffs’ lawyers are aggressively pursuing TCPA actions, and the changes in the rules may lead to yet more TCPA cases. Given the changes that will go into effect in October, businesses should review their TCPA policies to ensure that they are in compliance, so that they can avoid the possibility of paying onerous penalties.
This week the Federal Trade Commission entered into a consent decree with Certegy Check Services, one of the nation’s check authorization service companies, pursuant to which Certegy has agreed to pay $3.5 million to settle charges that it violated the Fair Credit Reporting Act (FCRA). This massive penalty – the second largest ever – reinforces the perception that the FTC will continue vigorous enforcement against what it perceives as violations of that venerable statute, first passed in 1970.
The FCRA establishes obligations not only for the three big consumer reporting agencies (CRAs) – Experian, Transunion, and Equifax – but also for “nationwide specialty consumer reporting agencies.” These are CRAs that compile and maintain files on consumers on a nationwide basis relating to medical records or payments, residential or tenant history, check writing history, employment history, or insurance claims. Certegy, which falls within this latter category of entities subject to the FCRA, was obligated to “follow reasonable procedures to assure maximum possible accuracy” in the reports it provided concerning consumers’ financial information. It was also obligated to investigate any consumer dispute regarding such information within a reasonable period of time, to report back to the consumer, and to delete any information that is inaccurate, incomplete, or unverifiable.
While Certegy is not as well known to consumers as the big three credit reporting agencies, it plays an important role in consumer transactions. When people want to pay by check, many businesses rely on Certegy for a check authorization recommendation that is based in part on information in its files about consumers’ check writing history. Certegy also furnishes information to other credit reporting agencies, which may multiply the effect of any inaccuracies.
The FTC alleged in its complaint that Certegy failed to comply with many of its obligations as a nationwide specialty consumer reporting agency. Among other things, the FTC asserted that Certegy would not undertake the required reinvestigation of allegedly inaccurate information and would place unfair burdens on consumers who requested such reinvestigations. The FTC states that this is its first case alleging a violation of the so-called “Furnisher Rule,” the regulations governing entities that furnish credit report information on consumers.
The stipulated order will certainly change the way that Certegy does business, but, perhaps even more important, the $3.5 million penalty should attract the attention of other entities whose businesses are subject to the FCRA. Such businesses would be wise to revisit their policies and procedures to ensure that they comply with their obligations under the statute and related regulations, so that they are not the next target of FTC enforcement in this area.
Manufacturers and marketers know that the more consumer data they have, the more they can tailor and direct their advertising, their products, and their product placement. This helps them to maximize sales and minimize costs. Thanks to the combination of cheap data storage and ubiquitous data capturers (e.g., smartphones, credit cards, the Web), the amount of consumer data out there to mine is astounding. Hence the recently popularized term, “Big Data.”
But the misuse of data could result in government enforcement actions and, more importantly, serious privacy violations that can affect everyone.
Some of the practical challenges and concerns flowing from the use of big data were addressed recently by FTC Commissioner Julie Brill at the 23rd Computers, Freedom and Privacy conference on June 26. Issues raised include noncompliance with the Fair Credit Reporting Act and consumer privacy matters such as transparency, notice and choice, and deidentification (scrubbing consumer data of personal identifiers).
The FCRA: Those whose business includes data collection or dissemination should determine whether their practices fall within the boundaries of the FCRA. As Brill pointed out, “entities collecting information across multiple sources and providing it to those making employment, credit, insurance and housing decisions must do so in a manner that ensures the information is as accurate as possible and used for appropriate purposes.” If Brill’s comments are any indication of enforcement actions to come, businesses should be aware that the FTC is on the lookout for big data enterprises that don’t adhere to FCRA requirements.
Consumer Privacy: Brill gave some credit to big data giant Acxiom for its recent announcement that it plans to allow consumers to see what information the company holds about them, but she noted that this access is of limited use when consumers have no way of knowing who the data brokers are or how their information is being used. Brill also highlighted a questionable use of consumer data by national retailer Target: the somewhat funny yet disturbing news story about Target identifying a teen’s pregnancy. It is a classic example of why consumers ought to have notice of what data is being collected about them and how that information is being used.
Consumers also need to have, as Brill suggested, the opportunity to correct information about themselves. This makes sense. Data collection is imperfect: different individuals’ information may be inaccurately combined, someone’s information may have been hacked, someone could be the victim of cyber-bullying, and other mishaps and errors can occur. Consumers should be able to review and correct information for errors. Finally, Brill highlighted concerns that current efforts to scrub consumer data may be ineffective, as companies are getting better at combining disparate data points to accurately identify an individual. “Scrubbed” data in the wrong hands could be as harmful as a direct security breach.
Brill encouraged companies to follow the “privacy by design” recommendations issued by the FTC in order to build more protections into their products and services. She further emphasized her initiative “Reclaim Your Name,” which aims to promote consumer knowledge of, and access to, collected data. Companies that are in the business of data collection, mining and analytics should take note of the FTC’s efforts to empower the consumer against the overuse or misuse of consumer data. If you want to stay on the good side of the FTC – and on the good side of the informed consumer – work with the consumer, and provide meaningful notice, choice and consent.
Following a public comment period, the Federal Trade Commission recently approved a final order settling charges against mobile device manufacturer HTC America, Inc. HTC develops and manufactures mobile devices based on the Android, Windows Mobile, and Windows Phone operating systems. This case, which focuses on device security, is the FTC’s first case against a device manufacturer.
The FTC alleged that HTC failed to take reasonable steps to secure the software it developed for its smartphones and tablet computers. According to the FTC, HTC’s failures introduced various security flaws that placed consumers’ sensitive information at risk. The FTC’s action against HTC signals the agency’s continued focus on data security and data privacy issues and its use of its broad “Section 5” authority, which the FTC has repeatedly asserted against various organizations, including in its ongoing litigation with Wyndham Hotels. The HTC case also reiterates the agency’s strong interest in securing mobile devices, now that mobile phones, which are full of sensitive contact, financial, and other personal information, have become so prevalent.
Companies may be asking what HTC actually did to warrant this FTC action. The FTC claims that HTC, when customizing the software on mobile devices, failed to provide its staff with sufficient security training, failed to review or test the software on its mobile devices for potential security vulnerabilities, failed to follow commonly accepted secure coding practices, and did not have a process for receiving and addressing vulnerability reports from third parties.
In particular, the FTC asserted that HTC devices potentially permitted malicious applications to send text messages, record audio, and install additional malware onto a consumer’s device, without the user’s consent or even knowledge. These malicious applications allegedly could access financial and medical information and other sensitive information such as a user’s geolocation and text message content.
For example, in the case of Android devices, the FTC claimed that HTC pre-installed a custom application that could download and install applications outside the normal Android installation process. However, HTC did not include appropriate permission check code to protect this pre-installed application from misuse. Consequently, a third-party application could command the pre-installed application to download and install additional applications onto the device without a user’s knowledge or consent.
The FTC further charged that HTC’s actions actually undermined Android consent mechanisms that, but for HTC’s actions, would have prevented unauthorized access and transmission of sensitive information. The FTC’s complaint alleged that the vulnerabilities have been present on approximately 18.3 million HTC devices running Android. The complaint further alleged that HTC could have prevented these vulnerabilities through readily available, low-cost measures, such as adding a few lines of permission check code when programming its pre-installed applications.
In a precedent-setting remedy, the FTC’s final order requires HTC to develop and release software patches within 30 days of service of the FTC’s final order on HTC. The patches must fix vulnerabilities in millions of HTC’s devices, including every covered device having an operating system version released on or after December 2010. HTC must also establish a comprehensive security program designed to address security risks during the development of HTC devices. The FTC requires the program to include consideration of employee training and management; product design, development and research; secure software design and testing; and review, assessment, and response to third party security vulnerability reports.
Further, HTC must undergo independent security assessments every other year for the next 20 years. Among other requirements, the independent, professional assessment must certify that HTC’s security program operates with sufficient effectiveness to provide reasonable assurance that the security of covered device functionality and the security, confidentiality, and integrity of covered information are protected, and that the program has so operated during the reporting period. HTC is also barred from making false or misleading statements about the security and privacy of consumers’ data on HTC devices.
The FTC’s action against HTC has broad application beyond the mobile device and software marketplace. The agency’s action further solidifies the FTC’s role as the leading enforcer of data security standards. Once again the FTC has demonstrated that it is setting data security standards and will continue to monitor and police the marketplace when it believes companies have not incorporated what it believes are commonly accepted security features or when organizations have failed to take steps to prevent vulnerabilities.
Beta testing is underway for Google Glass, a new technology that provides the functionality of a smartphone in a headset worn like glasses. Much like a smartphone, the Glass headset is able to exchange messages with other mobile devices, take pictures, record videos, and access search engines to respond to user queries. But unlike a smartphone, the device’s optical head-mounted display, voice recognition, and front-facing camera give users hands-free access to its features, including the ability to capture photographs and video recordings of the world in front of them.
For now, Glass is only available to developers and a small group of test users known as Google Explorers. The device is expected to go on sale to the public in 2014. In the meantime, public speculation swells, and the blogosphere is full of conflicting reports about what the device can and cannot do. Some suggest that the device will utilize facial recognition and eye-tracking software to show icons and statistics above people whom the user recognizes. A more common concern is that the device will be able to photograph and record what the user sees and then share that data with third parties without permission from the user or those whose likenesses are being captured.
Because of this lack of clarity, lawmakers around the world are urging Google to affirmatively address the myriad of privacy concerns raised by this new technology. Last month, an international group of privacy regulators – including representatives from Australia, Canada, Israel, Mexico, New Zealand, Switzerland, and a European Commission panel – signed off on a letter to Google’s CEO Larry Page asking for more information regarding the company’s plans to ensure compliance with their data protection laws.
Here in the United States, the House Bipartisan Privacy Caucus issued a similar letter of its own. In addition to a variety of questions regarding the device’s capabilities, the letters reference some of Google’s recent data privacy mishaps and ask whether Google intends to take proactive steps to ensure the protection of user and nonuser privacy.
Google’s Vice President of Public Policy and Governmental Relations (and former New York Congresswoman) Susan Molinari issued a formal response to the House Bipartisan Privacy Caucus. According to Molinari, Google “recognize[s] that new technology is going to bring up new types of questions, so [they] have been thinking carefully about how [they] design Glass from its inception.”
To address concerns about the picture and video capabilities, Molinari highlighted several features designed to “give users control” and “help other people understand what Glass users are doing.” For example, specific user commands are required to search the Internet or find directions, and the user must either press a button on the arm of the Glass or say “Take a photo” or “Record a video” in order to access those features.
While Google is already subject to commitments it made to the FTC regarding the requirement to afford advertisers non-discriminatory access to its search engine, the FTC’s latest guidance makes clear that Google and other search engines must also maintain clear disclosures to the public about sponsored content in search results.
On June 24, 2013, in a series of letters to general search engines such as Google, Yahoo, and Ask.com, as well as to specialized search engines, the FTC issued updated guidance concerning disclosures regarding paid advertisements in search results.
This latest FTC action follows on the heels of the Commission’s recent updates to the Dot Com Disclosures and the updated Endorsements and Testimonials Guides. The FTC’s letters came in response to industry and consumer organizations’ requests to the Commission to update its policies on search engine results, last released in 2002. The FTC also noted that it has observed a decline in search engines’ compliance since 2002.
The FTC’s central concern, first articulated in 2002, remains the problem that consumers may be deceived in violation of Section 5 of the FTC Act unless search engines clearly and prominently distinguish advertising from natural search results.
Consumers assume that search results reflect the most relevant results. When results appear because the advertiser has paid a search engine for, say, prominent placement, that placement could be deceptive to consumers if they are unaware of the commercial relationship between the advertiser and the search engine.
The growth of mobile commerce in particular has spurred the FTC to issue new guidelines. Search results on mobile phone screens are, by their nature, small, and consumers could easily be confused by paid search results if the “paid” nature of those results is not clear.
In the new guidance, the FTC states that if search engines continue to distinguish advertising results by giving a different background color or shading combined with a text label (such as “sponsored” or “ad”), the search engines should consider multiple factors to ensure that any labels and visual cues are sufficiently “noticeable and understandable” to consumers. The agency clarified that there is no “one size fits all” and that search engines may use various methods, provided the disclosures are noticeable and understandable.
Proper disclosures, according to the FTC, include the following:
• Visual Cues – Search engines must select hues of sufficient luminosity to account for varying monitor types, technology settings, and lighting conditions. The FTC notes that search engines should consider using web pages of different luminosities for mobile devices and desktop computers. Further, the FTC recommends that search engines should use:
o more prominent shading that has a clear outline;
o a prominent border that distinctly sets off advertising from the natural search results; or
o both prominent shading and a border.
• Text Labels – The FTC asserts that text labels must be used in addition to the visual cues a search engine may use to distinguish advertising. Text labels must:
o use language that explicitly and unambiguously conveys that a search result is advertising;
o be large and visible enough for consumers to notice it;
o be located near the search results (or group of search results) that it qualifies and where consumers will see it; and
o be placed immediately in front of an advertising result, or in the upper left-hand corner of an ad block (including any grouping of paid specialized results), in an adequately sized and colored font.
The new guidance also recognizes that technology will continue to evolve, such as voice assistants on mobile devices (e.g., the iPhone’s “Siri”). While technology may change, the new guidance makes clear that the FTC Act’s Section 5 prohibition on deceptive practices remains. Therefore, businesses must make sure that they differentiate advertising from other information. For instance, if a voice interface is used to deliver search results (for example, “find me a Mexican restaurant”) the search engine should disclose audibly any paid advertisements in adequate volume and cadence for ordinary listeners to hear and comprehend.
The FTC continues to be vigilant in monitoring the online marketplace. Search engines and advertisers need to review their practices, keeping in mind that disclosures that may be readily apparent on a desktop may be hidden on a mobile screen. As with the “Dot Com Disclosures,” the agency is providing guidance to businesses; however, FTC enforcement remains vigilant and companies that do not clearly disclose paid advertising in search results could face an FTC investigation.
Recently, the Consumer Financial Protection Bureau, the watchdog agency of the financial industry, has proved that it has considerable bite. Created under the Dodd-Frank Act to fill gaps in regulatory coverage, the CFPB’s mandate is to enforce federal regulations that, among other things, restrict “unfair, deceptive, or abusive acts or practices” in consumer finance. The CFPB in recent months announced two major debt relief crackdowns, the most recent of which permanently shut the doors of a Florida company.
Last month, the CFPB announced that it filed a complaint against a Florida debt-relief company that misled consumers across the country by charging upfront fees for debt-relief services without actually settling most of the consumers’ debts. According to the complaint, the defendants engaged in abusive practices by knowingly enrolling vulnerable consumers who had inadequate incomes to complete debt-relief programs. The complaint charged American Debt Settlement Solutions, Inc. (ADSS) and its owner, Michael DiPanni, with actions that were not just unfair and deceptive, but also abusive. Indeed, this case is the first in the CFPB’s short history in which it has enforced the prohibition on “abusive” acts or practices.
While “unfair” and “deceptive” are familiar terms to anyone who follows the Federal Trade Commission, the term “abusive” is new to Dodd-Frank and has been the subject of much consternation among Republicans in Congress, who consider it too vague. With this complaint, the CFPB provided what may be its first example of the type of conduct it will consider “abusive.” ADSS allegedly collected about $500,000 in fees from hundreds of consumers in multiple states, charging illegal upfront fees for debt-relief services and “falsely promising them it would begin to settle their debts within three to six months” when, in reality, services rarely materialized.
The CFPB said the actions were “abusive” because consumers reasonably relied on the company to “act in their interest by enrolling them in a debt-relief program that they can be reasonably expected to complete, and which will therefore result in the negotiation, settlement, reduction, or alteration of the terms of their debts.” The CFPB simultaneously filed a proposed consent order that would settle the matter by halting the company’s operations and imposing a $15,000 fine.
ADSS and its owner may have walked away relatively unscathed, with only a civil penalty, but others caught in the CFPB’s cross hairs have not been as fortunate. Earlier this year, the CFPB filed suit against two lawyers and two debt relief companies in New York, alleging that they charged thousands of consumers illegal advance fees and left some worse off financially, while illegally profiting themselves. One of the lawyers, Michael Levitis, also faces mail and wire fraud charges brought by the Manhattan U.S. Attorney’s Office – the first-ever criminal charges stemming from a CFPB referral. What’s notable in this complaint is that the acts are described as both deceptive and unfair, but not as abusive.
Although a relatively new agency, the CFPB is proving that it has the chops to take down offenders in the financial industry. Both the Florida and New York cases are signs of future enforcement, and they send a stern warning to offenders – if you prey on vulnerable consumers, be prepared for a fight.
Ignorance of the law is no excuse; nor is (willful) ignorance of a business partner’s illegal activities.
That’s a lesson to be learned from a recent amended complaint in which the FTC named a payment processor alongside a telemarketer that allegedly ran a credit card interest rate reduction scam. The Commission originally filed suit against the telemarketing company, Innovative Wealth Builders, Inc., and its owners in January for misrepresenting the debt relief service they were selling, charging a fee before providing debt relief services, and billing consumers without their express informed consent. The January action resulted in the temporary shutdown of the company’s operations pending the outcome of the suit.
But earlier this month, the Commission filed an amended complaint, adding charges against Independent Resources Network Corp., IWB’s payment processor. The payment processor was accused of assisting and facilitating IWB’s deceptive practices, in violation of the Telemarketing Sales Rule (TSR). 16 C.F.R. § 310.3(b).
The TSR was created by the FTC in 1995 at the direction of Congress, which directed the Commission to prescribe rules to address abusive and deceptive telemarketing acts. The FTC has amended the TSR several times in order to respond to developments in telemarketing schemes. The amendments allow for liability for third parties such as payment processors and lead generators that have provided “substantial assistance or support” to any seller or telemarketer while knowing, or consciously avoiding knowing, that the seller or telemarketer is engaged in activity in violation of the TSR.
In the FTC’s complaint against IRN, the Commission notes that IRN “processed millions of dollars of credit card transactions for IWB, thereby earning considerable fees for itself.” The complaint identifies several indicators that should have put IRN on notice of IWB’s practices in violation of the TSR, including the following:
• IWB sent IRN copies of company documents including, but not limited to, telemarketing scripts and samples of the IWB defendants’ “financial plan”;
• IRN was aware that IWB had drawn a variety of complaints on consumer websites;
• IWB had an “F” rating with the Better Business Bureau, whose website IRN accessed several times;
• IRN received thousands of copies of chargeback disputes initiated by dissatisfied consumers; and
• IRN received multiple fraud alerts from Discover regarding IWB.
Instead of ceasing to process transactions for IWB, IRN responded to these “red flags” by increasing the percentage it withheld from transactions processed for the telemarketer and holding such sums in a reserve account.
Some takeaways from the complaint and other FTC developments: if you invest in basic due diligence to determine the risk level and credibility of a prospective account, act on your findings and adhere to the letter of the law. The FTC also recently issued additional proposed rule changes to the TSR to address further payment processing concerns.