This week, the FTC released an update to its 2000 “Dot Com Disclosures,” a guide covering disclosures in online advertising. The online world has certainly changed in 13 years, and the new guidelines, available here, address advances in online advertising, including mobile advertising.
One central theme still prevails: existing consumer protection laws and rules apply no matter where you offer products and services: newspapers, magazines, TV and radio commercials, websites, direct marketing, and mobile marketing. Thus, the basic principle applies that companies must ensure that their advertisements are truthful and accurate, including providing disclosures necessary to ensure that an advertisement is not misleading. Further, the disclosures should be clear and conspicuous – irrespective of the medium of the message.
In determining whether a disclosure is “clear and conspicuous” as the FTC requires, advertisers should consider the disclosure’s placement in the ad. Importantly, the 2000 guidelines defined proximity of disclosures to ads as “near, and when possible, on the same screen.” The new guidelines state that disclosures should be “as close as possible” to the relevant claim. The closer the disclosure is to the claim, the better it is for FTC compliance purposes.
Advertisers should also consider: the prominence of the disclosure; whether it is unavoidable (e.g., consumers must scroll past the disclosure before they can make a purchase); whether other parts of the ad distract attention from the disclosure; whether the disclosure should be repeated at different places on the website; whether audio disclosures are delivered at sufficient volume and at an understandable cadence (i.e., not too fast); whether visual disclosures remain on screen long enough to be read; and whether the language of the disclosure is appropriate for the intended audience. The FTC suggests avoiding “legalese” or technical jargon.
Mobile marketers should take note that the FTC provided some additional guidance regarding disclosure issues particular to mobile marketing. In particular, the FTC stated that the various devices and platforms upon which an advertisement appears or a claim is made should be considered. For example, if the advertiser cannot make necessary disclosures because of space constraints (e.g., in a mobile app), then the claim should not be made on that platform.
The FTC does permit hyperlinks for disclosures in certain circumstances. However, hyperlinks must:
- be obvious
- be labeled appropriately to convey the importance, nature and relevance of the information they lead to (such as “Service plan required. Get service plan prices here”)
- be used consistently
- be placed as close as possible to the relevant information the hyperlink qualifies and made noticeable
- take consumers directly to the disclosure after clicking
Companies should assess the effectiveness of the hyperlink by monitoring click-through rates and make changes accordingly. The agency also suggests that advertisers design ads so that scrolling is not necessary to find a disclosure. The FTC discourages hyperlinks for disclosures involving product costs or certain health and safety issues (similar to its 2000 guidelines).
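The FTC does not prescribe any particular monitoring method, but the click-through check it suggests can be sketched in a few lines. In this illustrative Python sketch, the function names and the 1% review threshold are our own assumptions, not anything drawn from the guidelines:

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """Fraction of ad impressions on which the disclosure hyperlink was clicked."""
    if impressions <= 0:
        raise ValueError("impressions must be positive")
    return clicks / impressions


def disclosure_needs_review(clicks: int, impressions: int, threshold: float = 0.01) -> bool:
    """Flag a disclosure link whose click-through rate falls below the
    (assumed) review threshold."""
    return click_through_rate(clicks, impressions) < threshold


# 40 clicks on 10,000 impressions is a 0.4% rate, below a 1% review threshold
print(disclosure_needs_review(40, 10_000))  # True
```

If a disclosure link falls below whatever threshold a company settles on, the sensible response under the guidelines is to change the link's label or placement and measure again.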
Probably the most helpful part of the new guidelines is the set of 22 examples of proper and improper disclosures the FTC provides at the end. As companies move forward in promoting products and services online, particularly on mobile platforms, reviewing these examples along with the general principles of truthful and complete statements in advertising may save a company from an FTC enforcement action.
Organizations are increasingly marketing their products and services on mobile platforms. Advertisers should take note that special considerations apply in the mobile marketplace, especially the space and text size limitations. If a disclosure is necessary to prevent an advertisement from being deceptive, unfair, or otherwise violative of an FTC rule, it must be clear and placed next to the offer. If that can’t be done, the safest course would be to move the offer to another platform, such as a traditional website. The FTC and the states have demonstrated that they take a keen interest in mobile marketing and they will be watching claims and disclosures in the smartphone/tablet universe.
Maryland Attorney General Douglas Gansler (D) has announced that his office is launching a new Internet Privacy Unit designed to address issues related to online privacy and to ensure that companies are in compliance with state and federal consumer protection laws. The unit will also handle issues related to cyberbullying and cybersecurity.
Gansler, who also serves as the president of the National Association of Attorneys General (NAAG), has previously stated that online privacy was a priority. Gansler said in a statement that Internet privacy is “one of the most essential consumer protection issues of the 21st century.”
The Internet Privacy Unit will also work with major industry stakeholders and privacy advocates to provide outreach and education to businesses and consumers. The unit may also pursue enforcement actions “where appropriate” to ensure that consumers’ privacy is protected.
One area of online privacy that the unit will examine is whether companies are complying with the Children’s Online Privacy Protection Act (COPPA), a federal law that restricts site operators from knowingly collecting personal data from children younger than 13. The Federal Trade Commission (FTC) announced in December that it adopted new rules governing COPPA that will go into effect in July 2013 – the first significant revisions since the original rules took effect in 2000. The new rules significantly expand the categories of companies required to obtain parental permission before knowingly collecting personal details from children, as well as the types of information for which parental consent is required.
The unit will also “examine weaknesses” in online privacy policies. Not only will companies be required to have privacy policies in place, but these policies need to be thorough and comprehensive to ensure compliance with all relevant privacy laws. And, of course, companies need to be following in practice what they “preach” in their privacy policies.
The FTC and state attorney general offices will doubtless continue to be aggressive in their enforcement of privacy laws. Companies with an online presence should review their privacy policies and practices, particularly as affected by recent rule changes such as the COPPA revisions. Also, Maryland is signaling that it will be an active player in monitoring and enforcement of personal privacy and cybersecurity. While federal legislation continues to stall, the states are most definitely moving ahead.
Angered by the recent tragic suicide of Internet activist Aaron Swartz, a group of hackers claiming to be from the group Anonymous made threats over the weekend to release sensitive information about the United States Department of Justice. The group claimed to have a file on multiple servers that is ready to be released immediately.
Swartz’s suicide has served to mobilize the group Anonymous, a loosely defined collective of Internet “hacktivists” that oppose attempts to limit Internet freedoms. Anonymous is a staunch advocate of open access to information, as was Swartz. Anonymous said that Swartz “was killed” because he “faced an impossible choice.”
Swartz was facing federal computer fraud charges that carried a maximum sentence of 35 years in prison, although in reality he probably would not have been given a sentence anywhere near approaching the statutory maximum. Prosecutors told Swartz’s legal team they would recommend to the judge a sentence of six months in a low-security setting.
The charges arose from allegations that he made freely available an enormous archive of research articles and similar documents offered by JSTOR, an online academic database, through the computers at the Massachusetts Institute of Technology.
Swartz was a leading activist involved in the movement to make information more freely available on the Internet and is credited with helping to lead the protests that ultimately defeated the Stop Online Piracy Act (SOPA), a statute that would have significantly broadened law enforcement powers in policing Internet content that may violate U.S. copyright laws.
Earlier this month, Rep. Zoe Lofgren (D-Calif.) indicated that she is drafting a bill that she terms “Aaron’s Law,” which would limit the scope of the Computer Fraud and Abuse Act, a 1986 law that prosecutors used to help bring these charges against Swartz.
The hackers reportedly hijacked the website of the United States Sentencing Commission, the federal agency responsible for the federal sentencing guidelines for criminal offenses. They said that the Sentencing Commission’s website was chosen because of its influence in creating sentences that they deemed unfair. The hackers posted a message demanding reform of the criminal justice system and threatening that sensitive information would otherwise be leaked. Anonymous also posted an editable version of the website, inviting users to edit it as they pleased.
Today is Data Privacy Day. These recent incidents serve to show that no organization – not even the U.S. Department of Justice – is immune from security breaches. Data breaches and data losses will occur and it is crucial for an organization to be prepared and have policies in place to allow a quick response when something does happen.
The legal ramifications and bad publicity that follow such an incident can be very damaging to an organization. However, by making sure that you are prepared, you can minimize your damages. Preparedness involves consultation across a range of specialties, including information technology, legal advice, and public relations. The impact that a data breach or loss can have on the bottom line of any organization is enormous and preparation is the best method to combat it.
A data breach or data loss can also have far-reaching legal consequences under international, federal and various state laws. For example, companies may not realize that if they have even a few employees or customers in a state, it may trigger a number of different requirements under state privacy laws. In order to avoid problems with federal agencies or state attorney general offices, it is best for companies to have a plan in place in advance and make sure they are already compliant with all relevant laws.
The chairmen of the Congressional Bipartisan Privacy Caucus just released the responses they received from nine major data brokers whom they queried in July about how each broker collects, assembles and sells consumer information to third parties. In their responses, the nine companies – Acxiom, Epsilon, Equifax, Experian, Harte-Hanks, Intelius, Fair Isaac, Merkle and Meredith Corp. – generally asserted that they were not data brokers. Some companies claimed they analyze data rather than broker it. Copies of the brokers’ responses and the original letters can be found here.
Interestingly, several of the brokers acknowledged obtaining their data from social networks such as LinkedIn and Facebook, in addition to telephone directories, government agencies, and financial institutions.
The legislators issued a joint statement in which they noted shortcomings in the brokers’ answers, stating that “many questions about how these data brokers operate have been left unanswered, particularly how they analyze personal information to categorize and rate consumers.”
Members of Congress have indicated that they will continue to scrutinize the data brokerage industry. Issues of particular concern for the legislators include: the sale of personal information to third parties for targeted advertising, the gathering and selling of information relating to children and teenagers, and the lack of transparency in data brokers’ practices and available information. The Privacy Caucus has expressed concern that many Americans do not know how the industry operates and that controls may be lacking for individuals over their own information.
The FTC has already called on Congress to address data brokers’ practices through legislation. In March, the FTC advocated for legislation to “address the invisibility of, and consumers’ lack of control over, data brokers’ collection and use of consumer information.” We anticipate continued review of data brokers by Congress and federal agencies including the FTC. Companies in the data compilation business should continue to monitor ongoing proceedings.
It should be noted, however, that not all companies that gather personal information actually “broker” it in a manner that raises concern. Some companies compile information and remove identifying data before providing it to third parties; other companies gather information under contract for a business with which a consumer has an existing relationship – as a means to promote better customer service by tailoring offerings that will be of interest to consumers generally or to a particular consumer. Many consumers have indicated a willingness to receive these types of tailored offerings.
Progress in the world of biometrics should cause us all to shudder. Cameras in public locations can now employ facial recognition to direct advertising to us based upon an assessment of our age, sex, and other characteristics. Cameras can determine our reaction to and engagement in video games and movies. It sounds a bit like a world composed of two-way mirrors. But instead of shuddering, we sometimes knowingly, sometimes carelessly, support the technology – and other data collection practices – through our online and commercial activities.
How many of us constantly update and tag our Facebook pages with pictures of us and our loved ones and where we’ve been? How many take advantage of product/service discounts by scanning our smartphones and “liking” products on Facebook? How many of us are now buying into dating apps and social apps that are based on facial recognition technology? The fact is that much of our data can be, and is being, collected, and we consumers (especially in the United States) seem to have no problem with it, even volunteering for it.
Perhaps fortunately, some regulators are stepping in and keeping a watchful eye on these developments and looking for ways to curb the potentially nefarious use of consumer data. The FTC and its Division of Privacy and Identity Protection recently published its list of best practices for companies who use facial recognition technologies. The publication, “Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies,” underlines important concerns about being able to identify anonymous individuals in public and about attendant security breaches such as hacking. The FTC’s proposed best practices include the following:
• Companies should maintain reasonable data security protections to prevent unauthorized information “scraping” of consumer images and biometric data.
• Companies should maintain appropriate retention and disposal practices.
• Companies should consider the sensitivity of information when developing facial recognition products and services, e.g., they should avoid placing signs in sensitive areas, such as bathrooms, locker rooms, health care facilities, or places where children congregate.
• Companies using digital signs capable of demographic detection should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs.
• Social networks should provide consumers with (1) an easy-to-find, meaningful choice not to have their biometric data collected and used for facial recognition; and (2) the ability to turn off the feature at any time and delete any biometric data previously collected.
• Companies should obtain a consumer’s affirmative express consent before using a consumer’s image or any biometric data in a materially different manner than they represented when they collected the data.
• Companies should not use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, without obtaining the consumer’s affirmative express consent.
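The last few recommendations translate naturally into a single data-handling rule: no collection without affirmative opt-in, and opting out deletes what was collected. The following minimal Python sketch illustrates that rule; the class and method names are hypothetical and not drawn from any FTC text:

```python
class BiometricStore:
    """Toy model of the FTC's recommended choices: facial recognition is off
    by default, and turning it off later also deletes previously collected
    biometric data."""

    def __init__(self):
        self._templates = {}   # user_id -> stored face template
        self._enabled = set()  # user_ids who affirmatively opted in

    def opt_in(self, user_id):
        self._enabled.add(user_id)

    def store_template(self, user_id, template):
        if user_id not in self._enabled:
            raise PermissionError("no affirmative consent for facial recognition")
        self._templates[user_id] = template

    def opt_out(self, user_id):
        # Turning the feature off also deletes any data already collected
        self._enabled.discard(user_id)
        self._templates.pop(user_id, None)
```

The key design point matching the FTC's recommendation is that `opt_out` does two things at once: it disables future collection and purges what has already been stored.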
The guidelines come only a few months after the FTC’s March 2012 Privacy Report (“Protecting Consumer Privacy in an Era of Rapid Change: Recommendations For Businesses and Policymakers”) and are a logical follow-on to the report. They incorporate the Privacy Report’s core principles: privacy by design, simplified consumer choice, and transparency. These principles and guidelines are a step in the direction of responsible data collection and responsible technological advancements.
We should point out that neither the Privacy Report nor the Best Practices in Facial Recognition is binding or enforceable, as neither falls under the FTC’s legal authority. And the FTC prominently makes this disclaimer, noting that the guidelines are merely recommendations without the force of law. It is clear, however, that the FTC is appropriately preparing to assume enforcement authority, should Congress pursue privacy legislation (something the FTC recommends in the Privacy Report). That is obvious from the mere fact that the agency has established a Privacy and Identity Protection Division.
Companies that are developing or seeking to employ biometrics – or that employ other data collection practices – would be well advised to pay attention to the FTC’s recommendations. The guidelines provide insight into how an enforcement authority is likely to approach biometrics and other data collection practices. The guidelines also provide a framework for responsible use of consumer data. And even though consumers currently seem passive or dismissive about biometrics and data collection, it would take just one scandal or highly publicized incident for public opinion to change. Companies will benefit in the long run by building good will among consumers.
A recent decision by a federal judge in California has brought ICANN’s broad authority over the domain name system once again into question. Manwin Licensing International – perhaps the most lucrative provider of online adult-oriented content – brought an antitrust action against ICANN arising from the establishment of the .xxx top-level domain and the award of the registry contract for .xxx to ICM Registry. Manwin claimed, among other things, that because ICANN’s registry contract with ICM contains no restrictions on the price ICM may charge for its services (while providing for an enhanced fee to be paid by ICM to ICANN) and ICM is insulated from competition on renewal, the award of the contract violated the Sherman Antitrust Act.
In any antitrust case, the plaintiff must establish a “relevant market” that it can show is adversely affected by the anticompetitive actions. Here, Manwin sought to establish that the relevant markets affected by ICANN and ICM were the markets for affirmative registrations (i.e., the lack of an adequate economic substitute for .xxx domain names) and for defensive registrations (i.e., the need for trademark holders to protect their marks by registering .xxx names, for instance, playboy.xxx). The court made short work of Manwin’s claim with respect to the affirmative registration market, pointing out that domain names in other generic TLDs (gTLDs) are an adequate economic substitute for .xxx registrations. Indeed, the court pointed out that one of Manwin’s own websites – youporn.com – is the most popular free adult video website on the internet. Thus, the .com gTLD, among others, provides a perfectly adequate (if not superior) substitute to a .xxx registration.
However, the court was not so forgiving as to the defensive registration market. It held that Manwin adequately identified an adversely affected market in defensive registration because Manwin asserted that trademark owners and registrants of domain names in other gTLDs were compelled to register domain names in the .xxx TLD for defensive or blocking purposes, to protect their marks or other domain names from a loss of goodwill, prevent consumer confusion, or prevent association with adult entertainment. The court found no economic substitute for this market, as, it found the “only way to block a name in the .xxx TLD is to register a name in the .xxx TLD.” Therefore, the antitrust case will proceed with respect to the defensive registration market.
This decision has enormous potential consequences for the domain name registration market, particularly with the coming roll-out of new gTLDs. By way of example, one of the applied-for new gTLDs is .hotel. While Marriott has a very popular website located at marriott.com (as do Hyatt at hyatt.com, Hilton at hilton.com, etc.), these hoteliers may feel compelled to register their corresponding names and trademarks in the .hotel TLD to protect against cybersquatters.
Compounding the problem, particularly for those with famous marks, is the issue of “typosquatters” who may register common misspellings of the mark in the new gTLD (such as marriot.hotel). Thus, the defensive registration market identified by Manwin has implications that extend far beyond the .xxx TLD — although .xxx has its own unique challenges not found with more mundane gTLDs, as the .xxx TLD’s association with adult content and pornography has the very real potential to tarnish otherwise unrelated marks. Imagine, for instance, pepsi.xxx (probably bad) versus pepsi.hotel (probably innocuous). Whether the existence of this case will cause a delay in the launch of the new gTLDs remains to be seen. It would seem that ICANN would proceed cautiously, as an adverse ruling might lead to a requirement that the registry contracts for gTLDs found to violate antitrust laws be unwound. Time will of course tell.
However, in the end, while Manwin seems to have hit upon a soft spot in ICANN’s shield, its claims ultimately seem overblown and contrary to the rights enjoyed by trademark owners and domain name registrants with respect to .xxx registrations. Setting aside blocking/sunrise rights that were afforded to trademark owners in advance of the public rollout of the .xxx TLD, trademark owners have extraordinary rights with respect to infringing domain names registered in .xxx. A trademark owner has available to it three means of challenging an infringing domain name registered in the .xxx TLD. These are the Rapid Evaluation Service (RES), the Charter Eligibility Dispute Resolution Policy (CEDRP), and the Uniform Dispute Resolution Policy (UDRP).
The RES provides a quick take-down process for infringing registered word marks or personal names of individuals. If an RES claimant shows that the domain name is identical or confusingly similar to a registered word mark that the claimant owns and uses, that the registrant has no rights or legitimate interests in the disputed domain name, and that the domain name was registered and is either being used in bad faith or cannot possibly be used in good faith, the domain name is directed to a page which states that the domain name has been deactivated. Temporary take-downs pending a final decision may be effected within two business days.
Trademark owners may also initiate a CEDRP proceeding, which will be handled by NAF, to challenge .xxx domain names that are being used in violation of the Adult Entertainment Industry eligibility requirements for the .xxx TLD (for instance, the example of pepsi.xxx, above). If the trademark owner is successful in a CEDRP proceeding, the offending domain name registration will be cancelled.
In addition, a trademark owner may initiate a UDRP proceeding with respect to a .xxx domain name registration, just as it might for an infringing domain name in any other TLD. Such a proceeding might result in the cancellation or transfer of the offending domain name – though if the registrant is not engaged in the adult entertainment industry, the domain name will not resolve.
In the meantime, since the court’s ruling allowing Manwin’s case to proceed, ICM Registry has filed a counterclaim against Manwin, asserting antitrust and trade libel claims, among others. In the end, this battle promises to have consequences that extend far beyond the .xxx world in which it is clothed.
Companies that run websites must comply with laws and rules requiring the maintenance of personal privacy. While federal requirements such as those applicable to financial privacy and children’s privacy gain significant attention, website and app developers also should pay careful attention to state privacy requirements. State regulators are monitoring websites and apps for compliance with their privacy mandates.
Given the open nature of the Internet, companies and Web developers, as a practical matter, need to comply with the strictest state privacy requirements — since they can assume that their sites will be accessed from all the states.
So the recent letters sent by California Attorney General Kamala Harris to 100 companies and mobile app developers (including Delta, United Continental and Open Table), asking them to bring their privacy policies in line with California state law, are highly relevant to anyone whose website will be accessed in California.
In these letters, Harris gave companies and developers 30 days to come up with a plan to comply with the California privacy law, or tell her why it does not apply to a particular app. After the 30 days are up, Harris will apparently sue the firms or developers that aren’t complying, with a potential fine of up to $2,500 each time the app is downloaded.
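The arithmetic behind that threat is worth making explicit: because each download can count as a separate violation, exposure scales with an app's popularity. A back-of-the-envelope sketch (the download figure is purely illustrative, and the function name is our own):

```python
def max_exposure(downloads, fine_per_violation=2_500):
    """Worst-case penalty if every download of a non-compliant app is
    treated as a separate violation at up to $2,500 apiece."""
    return downloads * fine_per_violation


# An app downloaded 100,000 times faces up to $250 million in theoretical exposure
print(max_exposure(100_000))  # 250000000
```

Actual penalties would almost certainly be far lower, but even a small fraction of the theoretical maximum dwarfs the cost of fixing a privacy policy up front.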
“Protecting the privacy of online consumers is a serious law enforcement matter,” Harris said in a statement. “We have worked hard to ensure that app developers are aware of their legal obligations to respect the privacy of Californians, but it is critical that we take all necessary steps to enforce California’s privacy laws.”
We must emphasize that anyone who makes apps and websites available to consumers must comply with state as well as federal requirements. The California actions will only be the beginning.
Each October, the World Intellectual Property Organization (WIPO), a United Nations agency, hosts at its Geneva, Switzerland, headquarters about 50 participants from around the world for a two-day conclave to discuss recent developments and issues surrounding domain name trademark disputes. This conference brings together, in one place (as an added bonus, scenically overlooking Lake Geneva and the French Alps) representatives of domain name registrars and registries, and lawyers from every corner of the globe to discuss the Uniform Dispute Resolution Policy (UDRP), which governs disputes between domain name registrants and trademark owners in most generic top-level domains (gTLDs).
With ICANN’s roll-out of new gTLDs imminent, the UDRP is likely about to experience increased use and importance, as cybersquatters will doubtless target brand owners whenever and wherever possible in the new gTLDs.
The UDRP is not without its faults — but, in general, it provides brand owners with a fast, relatively inexpensive and effective means to shut down domain names that are registered to take advantage of the goodwill attached to their trademarks. At this year’s conference, as with those in years past, the most hotly contested issues involve domain names that resolve to “criticism” websites. It is with these issues that legal and cultural differences on the borderless Internet intersect and conflict. Many (but certainly not all) representatives from the United States see these issues through the lens of freedom of speech, while participants from elsewhere have no such point of reference. These cases come in two basic flavors.
First, there are the “trademark sucks” sorts of cases (the ubiquitous “suck sites”), and second, there are the more harmful “trademark.com” cases, in which a domain name consisting of the mark itself resolves to a site critical of the trademark owner.
Suck site cases are less harmful because there is less risk of initial interest confusion – the likelihood that an Internet user would type the name into his or her browser thinking that the site belonged to the trademark owner. Even so, in an era in which search engines drive a great deal of traffic, such sites can cause real harm. Nonetheless, a rough consensus seems to lean toward finding that such domain names are not infringing.
The more hotly contested issue continues to be the trademark.com (including typos, hyphenations and other close variants of the trademark) cases. First Amendment considerations make it very difficult for a trademark owner to retrieve these domain names, when used for noncommercial purposes, in U.S. courts; in other parts of the world, this is not the case. But one UDRP panelist from the United States pointed out that the UDRP process is not a governmental act, and therefore he believes (correctly, I think) that the UDRP should pay no heed to these considerations. At bottom, U.S. trademark owners facing such situations should consider pursuing UDRP cases, understanding that even if they prevail, if the case ultimately lands in court, the likelihood of a successful outcome is diminished.
Other topics discussed included the new proposals for Rights Protection Mechanisms (RPMs) associated with the new gTLDs. While many of these RPMs remain in the discussion stages, they promise to bring to the fore new opportunities for trademark owners to protect their trademarks against cybersquatters who begin infringing in the new gTLD space. A good summary of all of the RPMs can be found here. The most interesting among these is the proposed Uniform Rapid Suspension System, which may bring about a means to temporarily suspend a name (that is, redirect the domain name to a web page revealing the suspension) in an expedited fashion. The devil will be in the details, as the costs and specifics of the proposed program are still up in the air. ICANN, WIPO and the other stakeholders are working on the details, and if implemented, this has the potential to provide trademark owners with another tool to combat those who damage their brands.
WIPO’s conferences are always first class and informative, and the opportunity to hear from and confer with talented, knowledgeable (and opinionated) domain name lawyers from around the world is always a pleasure and a privilege. I was able to meet and work with people from all over the world –from a representative of the Tanzanian registry, to brand managers from Sweden, to IP lawyers from China and Taiwan. And from it, we all are better able to serve our clients who do business on the Internet, which knows no national boundaries.
All mobile app developers need to know that the federal government is stepping up its regulation of data privacy and truth-in-advertising for mobile apps. The Federal Trade Commission is now actively monitoring mobile applications’ compliance with data privacy and truth-in-advertising regulations, and the House Committee on Energy and Commerce is considering a new mobile device privacy bill.
This month, the FTC published Marketing Your Mobile App: Get It Right From the Start, a short guide that provides guidance to mobile app developers concerning deceptive claims and privacy requirements. More broadly, the FTC’s focus on mobile app developers sends a message that all such developers or distributors will be subject to investigation, irrespective of how small their company is. As far as the FTC is concerned, “once you start distributing your app, you have become an advertiser,” and you will be regulated as such. The guide stresses the importance of clear and conspicuous communication to users and instructs app developers to consider the legitimacy of their statements “from the perspective of average users, not just software engineers and app experts.” It goes on to caution against burying important information behind “dense blocks of legal mumbo jumbo” and “vague hyperlinks.”
This guide can be seen as part of the FTC’s current initiative to address concerns regarding the unique ability of mobile apps to access a user’s personal information (e.g., automatically capturing the precise geolocation, phone number, contact lists, call logs, and other unique identifiers stored on mobile devices). In February, the agency issued a report looking specifically at apps offered for children. The report, Mobile Apps for Kids: Current Privacy Disclosures are Disappointing, warned app stores, developers, and third-party service providers to be more transparent about the issues raised by such data collection, such as sharing with third parties, connections to social media, and targeted advertising.
Additionally, the FTC has already taken action to establish that data privacy and truth-in-advertising laws apply to mobile apps. Last August, an app developer was ordered to pay $50,000 to settle FTC charges that it violated the Children’s Online Privacy Protection Act (COPPA) by failing to provide parental notice and obtain parental consent before collecting and disclosing children’s personal information. The following month, the agency settled its first actions addressing health claims in the mobile application marketplace. The complaints were against AcneApp and Acne Pwner, both of which claimed to treat acne through lights emitted from the user’s smartphone. The cases ended in settlements for monetary damages and injunctive relief barring the companies from making health-related claims without the backing of “competent and reliable scientific evidence.”
In the new FTC Guide, the FTC recommends that developers:
• Tell the truth about what the app can do – both in marketing materials and within the app itself.
• Disclose key information clearly and conspicuously.
• Err on the side of caution, and implement meaningful privacy-protection policies from the start.
• Collect only the information the app really needs, and require affirmative consent before collecting sensitive information.
• Offer user-friendly choices. For example, use default settings that collect a limited amount of user information, and allow users to adjust settings for increased sharing and functionality.
• Protect kids’ privacy by requiring parental consent before their information is collected and shared.
• Incorporate security measures to protect user data, especially when collecting medical and financial information.
On September 12, a new bill – the Mobile Device Privacy Act – was referred to the House Committee on Energy and Commerce. The bill, introduced by Reps. Ed Markey (D-Mass.) and Diana DeGette (D-Colo.), requires merchants, mobile service providers, and manufacturers to disclose information about mobile tracking software to consumers and to obtain users’ express consent before the software is activated. Specifically, customers must be told that the software is installed, what type of data it is collecting, the identity of all persons to whom the data will be transmitted, how the data will be used, and how the user can limit collection and sharing. Disclosures must be clear and conspicuous, and consumers must be able to prohibit further collection and sharing at any time.
If passed, this law will also require all recipients of user information to establish and implement information security policies and procedures for data collection, retention, system monitoring, and destruction. Finally, the bill requires that companies file all agreements relating to the transmission of user information with the FTC and the Federal Communications Commission. In the bill’s current form, penalties will range from $1,000 to $3,000 per violation (i.e., per user affected). Hence, a single policy error could expose large vendors to liability well into the billions of dollars, and a similar misstep could put a startup out of business.
All mobile app developers – including large players and new entrants – should review their compliance with this new FTC guidance and their overall truth-in-advertising and data privacy policies. The FTC has made it clear that it will take enforcement actions against industry participants large and small. In particular, we believe those making health claims, targeting children, and transmitting user information to third parties will continue to face significant FTC scrutiny. In general, the more personal information that an app collects from individuals, the greater the need for significant privacy protections and disclosures.
In the past couple of years, a wide variety of computer viruses and other malware have allegedly been used by one nation against another. This secretive form of warfare even briefly plastered names like Stuxnet, Duqu, Flame, and Gauss across the front pages. In partial response to the threat posed to U.S. interests by hostile foreign countries and/or individuals, different cybersecurity bills are percolating through the halls of Congress, including the SECURE IT Act of 2012, the Cybersecurity Act of 2012, and others.
No one can dispute the very real danger posed by cybersecurity threats and the potentially disastrous results if they are unleashed upon a country or upon an industrial or financial system. In a recent Wall Street Journal op-ed, President Obama wrote that “the cyber threat to our nation is one of the most serious economic and national security challenges we face.” The president also stated that “foreign governments, criminal syndicates and lone individuals are probing our financial, energy and public safety systems every day.”
President Obama then pushed for the passage of the Cybersecurity Act of 2012, which would require information sharing between the private and public sectors, develop cybersecurity standards, and provide other protections. In support of that bill, President Obama wrote that “Congress must pass comprehensive cybersecurity legislation” and that “We all know what needs to happen.”
However, in early August the U.S. Senate rejected cybersecurity legislation, with Republican members concerned that the bill would impose burdensome obligations on businesses.
The president has indicated that he is considering imposing the same cybersecurity measures by executive order.
“In the wake of Congressional inaction and Republican stall tactics, unfortunately, we will continue to be hamstrung by outdated and inadequate statutory authorities that the legislation would have fixed,” White House press secretary Jay Carney said.
This possibility does concern us.
Although computer malware poses a real and credible danger to U.S. interests, we also need to discuss how cybersecurity is to be achieved. Using an executive order to bypass the legislative process is of questionable constitutionality because it may violate the separation of powers mandated by the Constitution.
A step that creates such an extensive public-private partnership and involves the government so much in private decisions to provide security at least deserves approval after full discussion by a majority of both houses of Congress. We hardly think that the threat has risen to the level of “war” that would permit the president to engage in unilateral emergency actions to protect national security.
As the tech editor of the Daily Caller wrote recently: “The failed cyber security bill, which could be revived by Sen. Majority Leader Harry Reid when the Senate comes back from recess in September, would have given federal agencies in charge of regulating critical infrastructure industries like power companies and utilities the ability to mandate cybersecurity recommendations … An executive order would be another action from the Obama administration to extend executive branch authority over a largely free and open Internet.”