The rise of social media has led to the application of old law to new forms of communication. For instance, an effort by the National Labor Relations Board to educate workers on their right to engage in protected concerted activity has left some employers feeling that the NLRB went too far in supporting employees’ rights – particularly their right to post disparaging work-related comments on social media forums without reprisal.
Section 7 of the National Labor Relations Act (NLRA) protects all private-sector employees’ right to engage in protected concerted activity, including the right to discuss among themselves their wages, hours, benefits, and other terms and conditions of their employment. Generally, this requires two or more employees acting together to improve wages or working conditions, but the action of a single employee may be considered concerted if he or she involves co-workers before acting or acts on behalf of others. It also requires that the improvement sought benefit more than just the employee taking action, so as to distinguish protected concerted activity from a mere individual complaint.
Last year, the NLRB launched a website seeking to educate workers on their right to engage in protected concerted activity. The site provides several examples of cases in which employers violated an employee’s right to engage in protected concerted activity over the Internet. For example, in one case the NLRB issued a complaint against an employer that terminated an employee who criticized her supervisor on Facebook. The Board also found that the employer’s Internet policy, which prohibited employees from making negative statements about the company or supervisors, interfered with the right to engage in concerted activity.
The NLRB has in fact ruled in workers’ favor in a number of social media cases. For example, in Hispanics United of Buffalo, the NLRB considered a case in which an employer discharged five employees because of their Facebook posts. In that case, an employee went on Facebook to solicit her coworkers’ thoughts on work-related criticism she received from a fellow employee. In response, four coworkers weighed in about working conditions, work load and staffing issues at the company. All of the employees’ posts were made off-duty on the employees’ personal computers. The employer terminated all five employees, claiming that their comments constituted harassment of the employee mentioned in the initial post.
An NLRB administrative law judge reviewed the case and found that the employees had been unlawfully discharged. The ALJ found that the NLRA protects employees in “circumstances where individual employees seek to initiate or to induce or to prepare for group action, as well as individual employees bringing truly group complaints to the attention of management,” even if that action takes place online. Since the employees were discussing the terms and conditions of their employment, the discussion was protected concerted activity within the meaning of Section 7 of the NLRA.
While cases like Hispanics United of Buffalo have served as a rallying cry for employers concerned about the NLRB’s perceived overreach in support of workers, a recent report on NLRB social media cases reveals that the Board actually sided with employers slightly more than half of the time, finding that employees’ statements on Facebook or Twitter did not constitute “protected concerted activity” under the NLRA. For example, in Karl Knauz Motors, Inc., the NLRB found that an employee was lawfully terminated for his Facebook postings about an accident that took place at a car dealership owned by his employer. The NLRB found that these comments were not protected because they were not related to the terms and conditions of his employment.
Similarly, in another case brought before the Board, an employee who had just been reprimanded by her supervisor posted a Facebook status that consisted of an expletive and the name of the company that employed her. One of her coworkers “liked” that status. Half an hour later the same employee posted a comment expressing her belief that the company did not value its employees. None of the employee’s coworkers responded to that posting. The company terminated the employee for her postings.
On review, the NLRB upheld the employee’s termination, finding that the posts were merely the expression of a personal gripe. The NLRB’s Associate General Counsel summarized the Board’s reasoning by stating, “The Charging Party had no particular audience in mind when she made that post, the post contained no language suggesting that she sought to initiate or induce coworkers to engage in group action, and the post did not grow out of a prior discussion about terms and conditions of employment with her coworkers. Moreover, there is no evidence that she was seeking to induce or prepare for group action or to solicit group support for her individual complaint. Although one of her coworkers offered her sympathy and indicated some general dissatisfaction with her job, she did not engage in any extended discussion with the Charging Party over working conditions or indicate any interest in taking action with the charging party.”
Despite the uproar over the NLRB’s application of “protected concerted activities” to social media, this does not represent a shift from the NLRB’s previous decisions. It merely applies existing policy to a new set of facts brought about by technological changes in how workers communicate. As before, employers may set limits on employees’ social media activities as long as those limits do not impinge on the employees’ protected concerted activities.
As part of its aggressive program to protect consumers in financial matters, the Consumer Financial Protection Bureau (CFPB) has announced that it is prepared to adopt a controversial “disparate impact” theory of liability against lenders. A case that the U.S. Supreme Court may agree to hear could have a major impact on whether the CFPB will actually be able to do so.
The “disparate impact” theory was first articulated by the Supreme Court and further addressed by the Civil Rights Act of 1991 in the employment discrimination context. In a 1971 decision, Griggs v. Duke Power Co., the Court held that Title VII “proscribes not only overt discrimination but also practices that are fair in form, but discriminatory in operation.”
In the employment context, under Griggs, even though an employer may not intend to discriminate against a protected group, it may still be found liable under anti-discrimination laws for practices that disproportionately disadvantage such a group.
The theory was administratively adopted for federal fair lending laws in the 1990s, as laid out in a 1994 Interagency Policy Statement on Fair Lending. This statement from the Department of Justice and other federal agencies says that lenders may be liable for fair lending law violations if their policies or practices are shown to have a disparate impact on protected groups – even if there was no intent to discriminate. The statement, however, does not have the force of law.
In addition, the federal government, in practice, had not aggressively pursued fair lending cases in the absence of intentional discrimination against a protected group — until the Obama Administration’s CFPB announced its intention to use the “disparate impact” theory.
That is where the pending Supreme Court case, Mount Holly v. Mount Holly Gardens Citizens in Action, Inc. comes in. In that case, the Township of Mount Holly, N.J., made plans to redevelop a blighted residential area that was primarily inhabited by low- and moderate-income minority residents. Under the plan, the neighborhood would be demolished, and significantly more-expensive housing would be built. Many of the residents objected to the redevelopment, saying that their neighborhood would be destroyed and that they would not be able to afford to live in the new neighborhood. They sued under the Fair Housing Act, alleging that although the plan was not specifically targeted against minorities, it would have a disparate impact on them. The U.S. Court of Appeals for the Third Circuit allowed the case to proceed, and the Supreme Court is now considering it.
The issue is whether “disparate impact” is cognizable under the Fair Housing Act, as it is in the employment context. If the Court holds that impact as well as intent leads to a cause of action under the Fair Housing Act, the CFPB will go ahead and act under the theory. It will bring cases, for example, against banks that make loans only in areas that happen to be inhabited by high-income people and decline to make loans in areas where low-income people (many of whom are minorities) live. It will use geography as a proxy for racial or ethnic discrimination: Where were loans made, and where were they denied?
The Supreme Court has not yet decided whether it will hear the Mount Holly case. The most recent activity was the Court’s request, at the end of October, that the U.S. solicitor general formally express the views of the U.S. government on the issue. The solicitor general has not yet filed, and it will probably be a few weeks until he does file and the justices consider the SG’s arguments and decide whether to grant certiorari.
Consumer advocacy groups have actively pushed the disparate impact theory. The National Fair Housing Alliance has filed administrative complaints against Bank of America, Wells Fargo, and U.S. Bancorp, alleging that bank practices in maintaining foreclosed properties discriminate against people in predominantly black and Hispanic neighborhoods. Bank of America, Wells Fargo and SunTrust have recently paid some $500 million to settle claims: Since the banks opted to settle these cases, there was no formal legal ruling on the theory of liability.
Thus, “disparate impact” has been slowly taking hold in the lending context – without any real statutory basis or judicial clarification. The theory is still being used only by extension or analogy to the employment context. A high court ruling would clarify this very important area of law. Lenders, developers, and borrowers are waiting for clarification.
The Federal Trade Commission announced on December 19, 2012, that it has adopted final amendments to the rule implementing the Children’s Online Privacy Protection Act (COPPA) that strengthen privacy protections online and give parents greater control over their children’s personal information. FTC officials said that they updated the rules to keep pace with the increasing use of mobile phones and tablets by children.
The original rules have not seen significant changes since they went into effect in 2000. The FTC has been examining possible changes to the COPPA rules since March 2010 and has received hundreds of comments from interested parties through multiple comment periods.
“Congress enacted COPPA in the desktop era and we live in an era of smartphones and mobile marketing,” FTC Chairman Jon Leibowitz said. “This is a landmark update of a seminal piece of legislation.”
The new rules go into effect on July 1, 2013. The amendments were approved by a 3-1 vote, with one commissioner abstaining. Commissioner Maureen Ohlhausen dissented on the ground that a core provision of the new rules – extending the statutory definition of “operator” to impose obligations on certain websites or online services that do not themselves collect personal information from children or have access to or control of such information collected by a third party – exceeds the scope of the authority granted by Congress in COPPA.
The new rules significantly increase the types of companies that are required to obtain parental permission before knowingly collecting personal details from children, as well as the types of information that will require parental consent to collect.
Under the new amendments, the FTC said companies must seek permission from parents to collect a child’s photographs, videos, audio files, and geo-location information.
The new rules also expand the definition of personal information to include persistent identifiers, such as a unique serial number on a mobile phone or the IP address of a browser, if they are used to serve a child behavior-based ads. The rules further require third parties such as advertising networks and social media networks that know they are operating on children’s sites to notify parents and obtain their consent before collecting personal information. Additionally, the rules make children’s sites responsible for notifying parents about data collection by third parties that are integrated into their services.
The FTC said that the new amendments will now require apps and websites that are targeted at children and that incorporate third-party plug-ins, such as those from Twitter and Facebook, to obtain parental consent before collecting personal information. Those third parties must themselves obtain parental consent when they have “actual knowledge” that they are collecting information through a website or service targeted at children.
In a departure from the rule changes that were proposed by the government in August, the FTC explicitly exempted app stores, such as those run by Google and Apple, from responsibility for privacy violations by games and software sold in their stores. The government also reversed a prior proposal by agreeing to continue to allow parental consent to be obtained by email as long as apps and websites only collect the data for internal usage.
Now that these new guidelines have been issued, all operators need to review their policies to ensure compliance. The revisions have significantly expanded both the type of information that is considered personal and the number of companies that will need to comply. The FTC has previously brought enforcement actions against companies in violation of COPPA, and the new rules will allow more actions to be brought in the future.
On December 18, 2012, the Federal Trade Commission issued orders requiring nine data brokerage companies to provide the agency with information on how they collect and use consumer data. The nine companies asked to provide this data to the FTC include Acxiom, Datalogix, Intelius, and Peekyou.
Data brokers are companies that collect personal information about consumers from a variety of sources, both public and non-public, and then package the information and sell it to companies. As the FTC noted in its announcement, in many ways this data can benefit consumers and the economy by enabling companies to prevent fraud or allowing customers to see ads that interest them.
However, the FTC seems concerned that much of the data brokerage industry operates unregulated. No current laws require data brokers to maintain the privacy of an individual’s data unless it is used for employment, credit, insurance, housing, or another similar purpose. Some estimates indicate that these data brokers have several thousand details on the majority of adults in the United States.
The FTC is specifically seeking details about:
1. The nature and sources of the consumer information that data brokers collect.
2. How data brokers use, maintain, and disseminate the information they collect.
3. The extent to which the data brokers allow consumers to access and correct their information or to opt out of having their personal information sold.
The FTC said that it will use the responses to prepare a study and to make recommendations on whether and how the industry could improve its privacy practices.
The FTC has already called on Congress to address data brokers’ practices through legislation. In March, the FTC advocated for legislation to “address the invisibility of, and consumers’ lack of control over, data brokers’ collection and use of consumer information.” The FTC has also urged Congress to pass a law that would require data brokers to let individuals examine the data contained in files on them, similar to the way that federal laws allow for consumers to get free credit reports every year.
In July, Rep. Edward Markey (D-MA) and Rep. Joe Barton (R-TX), co-chairs of the Bipartisan Congressional Privacy Caucus, opened an investigation into the practices of the industry. The Privacy Caucus has expressed concerns that many Americans do not know how the industry operates and that controls may be lacking for individuals over their own information.
In October, Sen. John D. Rockefeller IV (D-WV) opened his own investigation into the data broker industry. Rockefeller said he was struck by the amount of personal, medical, and financial information that could be collected and sold.
This week’s announcement provides further notice that the FTC has intensified its scrutiny of the data brokerage industry. Companies in the data compilation business should continue to monitor their practices to ensure that they are complying with all regulations and should stay abreast of any forthcoming changes in regulations and laws.
The Federal Trade Commission released a report on December 10, 2012, that concluded that mobile apps targeted at children were collecting large amounts of data from children and sharing their information with advertisers without disclosing their practices.
The FTC report examined 400 leading apps designed for kids that were sold in the mobile stores run by Apple and Google. The agency said it is launching an investigation to determine whether certain mobile app developers have violated the Children’s Online Privacy Protection Act (COPPA) or engaged in unfair or deceptive trade practices.
The FTC’s authority over children’s mobile apps comes from laws that prohibit unfair or deceptive acts or practices in commerce, as well as from COPPA, which requires operators of online services directed at children under 13 to obtain consent from parents before collecting and sharing personal information, among other requirements.
The report itself does not call for regulatory changes. However, the FTC is reviewing COPPA to determine whether it needs to be updated and is expected to announce updates soon. COPPA was enacted in 1998, and FTC officials say the law needs to be changed to reflect the growing prominence of mobile apps and social networking sites used by children. The regulations under COPPA have not been substantially revised since their introduction. COPPA sets forth specific requirements for websites aimed at children, but its guidance on mobile technology is far less clear.
The FTC’s proposed updates to COPPA have thus far met with pushback from technology companies. The proposed changes could significantly increase the need for children’s sites and apps to obtain parental permission to collect certain types of data, including device IDs, photos, and voice recordings. FTC officials have also emphasized that they consider the exact location of a mobile device to be personal information that would require parental permission to collect.
The FTC report noted that it was particularly concerned with the collection of a user’s device ID, which is a string of letters or numbers that identifies each mobile device. Nearly 60 percent of the mobile apps that the FTC reviewed transmitted the device ID. Some of those apps then shared that ID with an advertising network or other third party, including some apps that disclosed the phone number and location of the device. Additionally, more than half the apps also contained interactive features such as advertising or in-app purchases that were largely undisclosed to parents.
Only 20 percent of the apps reviewed in the report disclosed any information about the app’s privacy practices. FTC Chairman Jon Leibowitz said, “Our study shows that kids’ apps siphon an alarming amount of information from mobile devices without disclosing this fact to parents.”
This week’s report serves as further notice to all mobile app developers that the FTC is monitoring the mobile app market. App developers, particularly developers that are targeting children, need to review their compliance with FTC guidelines, as well as their overall truth-in-advertising and data privacy policies, to make sure their apps are complying. The FTC has made clear that it will take enforcement actions against industry participants and will continue to aggressively pursue action in the future.
Yesterday, California’s Attorney General Kamala Harris filed the state’s first suit under California’s Online Privacy Protection Act. The lawsuit, against Delta Air Lines, followed the Attorney General’s warning letters to Delta and many other companies in October to post privacy policies with their mobile apps to inform users of what personally identifiable information is being collected and how the information is used by the company (previously covered by FTC Beat here).
Delta’s app collects substantial amounts of personal information, including full names, telephone numbers, email addresses, photographs, and geo-locations. According to the complaint, “Users of the Fly Delta application do not know what personally identifiable information Delta collects about them, how Delta uses that information, or to whom that information is shared, disclosed, or sold.” The AG asserts that Delta’s conduct violates the Online Privacy Protection Act and California’s Unfair Competition Law.
The Federal Communications Commission recently ruled that companies may send a one-time text message confirming a consumer’s opt-out of future texts without violating the Telephone Consumer Protection Act (“TCPA”) or potentially facing large class action lawsuits.
This pro-business ruling represents a victory for SoundBite, the company that sought a declaratory ruling from the FCC, as well as for other businesses that use mobile texting to communicate with customers. Many businesses (including SoundBite) are facing class actions under the TCPA for sending this type of confirmatory message.
The TCPA prohibits, among other things, autodialed calls to mobile phones, unless the sender has received prior express consent from the recipient for such calls. The FCC has ruled that text “calls” are covered by this prohibition. Thus, under the TCPA, an autodialed call that sends a text to a mobile phone without prior express consent (irrespective of the type of message) is prohibited. The TCPA provides for FCC and state attorney general enforcement as well as private litigation. Plaintiffs’ lawyers have latched onto the TCPA for several years and have recovered substantial amounts in judgments and settlements.
SoundBite sends text messages on behalf of a number of companies that have obtained express consent to send texts to particular wireless subscribers, including banks, utilities, and retailers. SoundBite follows the Mobile Marketing Association’s best practices, which include the transmission of a text message to a subscriber confirming that subscriber’s request to opt out of receiving future messages. When a consumer opts out of receiving future text messages, a one-time reply is sent back (usually within minutes) via text confirming receipt.
While many of the FCC’s rulings on the TCPA have not been viewed as business-friendly, this latest ruling represents a victory for businesses. Several large associations and businesses filed in support of SoundBite’s petition, including the American Bankers Association and the Consumer Bankers Association. SoundBite also had the support of the National Association of Consumer Advocates. The parties argued that confirmation messages are, in fact, consumer-friendly as they provide important information to the consumer to let him or her know that the opt-out was received and the messages will stop.
The FCC concluded that, as long as prior express consent of the receiving party exists before sending any messages, a one-time text confirming an opt-out request does not violate the TCPA: “We conclude that a consumer’s prior express consent to receive text messages from an entity can be reasonably construed to include consent to receive a final, one-time text message confirming that such consent is being revoked at the request of the consumer.”
Importantly, the FCC stated that these opt-out texts may only confirm the opt-out request, may not include any marketing or promotional information (or an attempt to convince the consumer to reconsider his or her opt-out), and must be the only additional message sent to the consumer after receipt of the opt-out request. In addition, if the confirmation message is sent more than five minutes after the opt-out, the burden will fall on the sender to demonstrate that the delay was reasonable. The FCC also asserted that it will monitor consumer complaints and take action if senders are using confirmation texts as an additional marketing opportunity.
Businesses that receive threats of TCPA lawsuits over confirmatory texts will now be able to use this FCC ruling in their defense. Plaintiffs may challenge the FCC’s interpretation of the strict statutory language, however, as they have done in other instances. Organizations wishing to use confirmatory opt-out texts should review the FCC’s ruling and ensure that their confirmations comport with the FCC’s guidance, especially regarding timing and the ban on advertising and promotional messages.
On November 19, 2012, the Federal Trade Commission and the Consumer Financial Protection Bureau announced that they have launched a new coordinated effort to protect consumers, focusing on mortgage advertisements that they say are deceptive.
The CFPB and the FTC worked together to review roughly 800 mortgage ads. These ads were produced by entities involved in different aspects of the mortgage process, including mortgage brokers and lenders, lead generators, real estate agents, home builders, and others. The ads were featured on a wide range of media including newspaper, direct mail, email and social media.
The agencies stated that some of these ads had specifically targeted the elderly and veterans.
The agencies sent warning letters advising the recipients that they may be in violation of the Mortgage Acts and Practices Advertising Rule (MAP Rule), which took effect in August 2011 and prohibits misleading claims concerning government affiliation, fees, costs, interest rates, payments associated with the loan, and the amount of cash or credit available to the consumer. The MAP Rule does not apply to traditional banks, meaning these actions affect only non-banks.
The FTC and the CFPB both have enforcement authority over non-bank mortgage ads under the MAP Rule. The agencies stressed that as part of the initiative they are working together to ensure that consistent standards are applied across agencies. The agencies will conduct separate investigations focused on different targets to make better use of their resources and avoid duplicative actions against the same businesses.
“Working together and applying consistent standards to all types of clients in all types of ads is a very important means of making sure that mortgage advertisers are on notice that they have to comply with the law,” said Thomas Pahl, the assistant director of the FTC’s Division of Financial Practices.
The FTC and the CFPB issued more than 30 warning letters to mortgage advertisers, warning them that their advertisements may be deceptive. Both agencies stated that they have also opened formal investigations into other advertisers that may have committed more serious violations of the law. Violators of the MAP Rule can be subject to civil fines.
“Misrepresentation in mortgage products can deprive consumers of important information while making one of the biggest financial decisions of their lives,” CFPB Director Richard Cordray stated. “Baiting consumers with false ads to buy into mortgage products would be illegal.”
The review of the advertisements revealed several different types of claims that regulators could possibly find misleading, including ads that suggested that a company was affiliated with a government agency, ads that guaranteed approval and offered low monthly payments without discussing the conditions of the offers, and ads offering a low fixed mortgage rate without discussing significant loan terms.
The announcement shows that the FTC and the CFPB are taking an aggressive and proactive look at companies that offer products in the financial services sector. Companies that offer mortgage and other consumer lending products should know that the FTC and the CFPB are paying special attention to them and that their advertisements need to comply with federal regulations.
The chairmen of the Congressional Bipartisan Privacy Caucus just released the responses they received from nine major data brokers, which they queried in July about how each broker collects, assembles, and sells consumer information to third parties. In their responses, the nine companies – Acxiom, Epsilon, Equifax, Experian, Harte-Hanks, Intelius, Fair Isaac, Merkle and Meredith Corp. – generally asserted that they were not data brokers. Some companies claimed they analyze data rather than broker it. Copies of the brokers’ responses and the original letters can be found here.
Interestingly, several of the brokers acknowledged obtaining their data from social networks such as LinkedIn and Facebook, in addition to telephone directories, government agencies, and financial institutions.
The legislators issued a joint statement in which they noted shortcomings in the brokers’ answers, stating that “many questions about how these data brokers operate have been left unanswered, particularly how they analyze personal information to categorize and rate consumers.”
Members of Congress have indicated that they will continue to scrutinize the data brokerage industry. Issues of particular concern for the legislators include: the sale of personal information to third parties for targeted advertising, the gathering and selling of information relating to children and teenagers, and the lack of transparency in data brokers’ practices and available information. The Privacy Caucus has expressed concern that many Americans do not know how the industry operates and that controls may be lacking for individuals over their own information.
The FTC has already called on Congress to address data brokers’ practices through legislation. In March, the FTC advocated for legislation to “address the invisibility of, and consumers’ lack of control over, data brokers’ collection and use of consumer information.” We anticipate continued review of data brokers by Congress and federal agencies including the FTC. Companies in the data compilation business should continue to monitor ongoing proceedings.
It should be noted, however, that not all companies that gather personal information actually “broker” it in a manner that raises concern. Some companies compile information and remove identifying data before providing it to third parties; others gather information under contract for a business with which a consumer has an existing relationship – as a means to promote better customer service by tailoring offerings that will be of interest to consumers generally or to a particular consumer. Many consumers have indicated a willingness to receive these types of tailored offerings.
Progress in the world of biometrics should cause us all to shudder. Cameras in public locations can now employ facial recognition to direct advertising to us based upon an assessment of our age, sex, and other characteristics. Cameras can determine our reaction to and engagement in video games and movies. It sounds a bit like a world composed of two-way mirrors. But instead of shuddering, we sometimes knowingly, sometimes carelessly, support the technology – and other data collection practices – through our online and commercial activities.
How many of us constantly update and tag our Facebook pages with pictures of us and our loved ones and where we’ve been? How many take advantage of product/service discounts by scanning our smart phones and “liking” products on Facebook? How many of us are now buying into dating apps and social apps that are based on facial recognition technology? The fact is that much of our data can be, and is being, collected and we consumers (especially in the United States) seem to have no problem with it, even volunteering for it.
Perhaps fortunately, some regulators are keeping a watchful eye on these developments and looking for ways to curb the potentially nefarious use of consumer data. The FTC, through its Division of Privacy and Identity Protection, recently published a list of best practices for companies that use facial recognition technologies. The publication, “Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies,” underlines important concerns about the ability to identify anonymous individuals in public and about attendant security breaches such as hacking. The FTC’s proposed best practices include the following:
• Companies should maintain reasonable data security protections to prevent unauthorized information “scraping” of consumer images and biometric data.
• Companies should maintain appropriate retention and disposal practices.
• Companies should consider the sensitivity of information when developing facial recognition products and services, e.g., they should avoid placing facial recognition-enabled signs in sensitive areas, such as bathrooms, locker rooms, health care facilities, or places where children congregate.
• Companies using digital signs capable of demographic detection should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs.
• Social networks should provide consumers with (1) an easy-to-find, meaningful choice not to have their biometric data collected and used for facial recognition; and (2) the ability to turn off the feature at any time and delete any biometric data previously collected.
• Companies should obtain a consumer’s affirmative express consent before using a consumer’s image or any biometric data in a materially different manner than they represented when they collected the data.
• Companies should not use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, without obtaining the consumer’s affirmative express consent.
The guidelines come only a few months after the FTC’s March 2012 Privacy Report (“Protecting Consumer Privacy in an Era of Rapid Change: Recommendations For Businesses and Policymakers”) and are a logical follow-on to the report. They incorporate the Privacy Report’s core principles: privacy by design, simplified consumer choice, and transparency. These principles and guidelines are a step in the direction of responsible data collection and responsible technological advancements.
We should point out that neither the Privacy Report nor the facial recognition best practices is binding or enforceable; the FTC itself prominently notes that the guidelines are merely recommendations without the force of law. It is clear, however, that the FTC is preparing to assume enforcement authority should Congress pursue privacy legislation (something the FTC recommends in the Privacy Report) — a readiness evident from the agency’s establishment of the Division of Privacy and Identity Protection.
Companies that are developing or seeking to employ biometrics – or that engage in other data collection practices – would be well advised to pay attention to the FTC’s recommendations. The guidelines provide insight into how an enforcement authority is likely to approach biometrics and other data collection practices, and they provide a framework for the responsible use of consumer data. And even though consumers currently seem passive or dismissive about biometrics and data collection, it would take just one scandal or highly publicized incident for public opinion to change. Companies will benefit in the long run by building goodwill among consumers.