The Federal Trade Commission recently filed another complaint against a company for alleged data security lapses. As readers of this blog know, the FTC has initiated numerous lawsuits against companies in various industries for data security and privacy violations, although it is facing a backlash from Wyndham and large industry organizations for allegedly lacking the appropriate authority to set data security standards in this way.
The FTC’s latest target is LabMD, an Atlanta-based cancer detection laboratory that performs tests on samples obtained from physicians around the country. According to an FTC press release, the FTC’s complaint (which is being withheld while the FTC and LabMD resolve confidentiality issues) alleges that LabMD failed to reasonably protect the security of the personal data (including medical information) of approximately 10,000 consumers, in two separate incidents.
Specifically, according to the FTC, LabMD billing information for over 9,000 consumers was found on a peer-to-peer (P2P) file-sharing network. The information included a spreadsheet containing insurance billing information with Social Security numbers, dates of birth, health insurance provider information, and standardized medical treatment codes.
In the second incident, the Sacramento, California Police Department found LabMD documents in the possession of identity thieves. The documents included names, Social Security numbers, and some bank account information. The FTC states that some of these Social Security numbers were being used by multiple individuals, indicating likely identity theft.
The FTC’s complaint alleges that LabMD did not implement or maintain a comprehensive data security program to protect individuals’ information, that it did not adequately train employees on basic security practices, and that it did not use readily available measures to prevent and detect unauthorized access to personal information, among other alleged failures.
The complaint includes a proposed order against LabMD that would require the company to implement a comprehensive information security program and to have that program evaluated every two years for 20 years by an independent certified security professional. LabMD would further be required to provide notice to any consumers whose information it has reason to believe was or could have been accessible to unauthorized persons, and to those consumers’ health insurance companies.
LabMD has issued a statement challenging the FTC’s authority to regulate data security, and stated that it was the victim of Internet “trolls” who presumably stole the information. This latest complaint is yet another sign that the FTC continues to monitor companies’ data security practices, particularly respecting health, financial, and children’s information. Interestingly, the LabMD data breaches were not huge – only about 10,000 consumers were affected. But the breach of, and potential unauthorized access to, sensitive health information and Social Security numbers tends to attract the FTC’s attention.
While industry awaits the district court’s decision on Wyndham’s motion to dismiss based on the FTC’s alleged lack of authority to set data security standards, companies should review and document their data security practices, particularly when it comes to sensitive personal information. Of course, in addition to the FTC, some states, such as Massachusetts, have their own data security standards, and most states require reporting of data breaches affecting personal information.
Following a public comment period, the Federal Trade Commission recently approved a final order settling charges against mobile device manufacturer HTC America, Inc. HTC develops and manufactures mobile devices based on the Android, Windows Mobile, and Windows Phone operating systems. This case, which focuses on device security, is the FTC’s first case against a device manufacturer.
The FTC alleged that HTC failed to take reasonable steps to secure the software it developed for its smartphones and tablet computers. According to the FTC, HTC’s failures introduced various security flaws that placed consumers’ sensitive information at risk. The FTC’s action against HTC signals the agency’s continued focus on data security and data privacy issues and use of its broad “Section 5” authority, which the FTC has repeatedly asserted against various organizations, including its ongoing litigation with Wyndham Hotels. The HTC case also reiterates the agency’s strong interest in securing mobile networks, now that mobile phones, which are full of sensitive contact, financial, and other personal information, have become so prevalent.
Companies may be asking what HTC actually did to warrant this FTC action. The FTC claims that HTC, when customizing the software on mobile devices, failed to provide its staff with sufficient security training, failed to review or test the software on its mobile devices for potential security vulnerabilities, failed to follow commonly accepted secure coding practices, and did not have a process for receiving and addressing vulnerability reports from third parties.
In particular, the FTC asserted that HTC devices potentially permitted malicious applications to send text messages, record audio, and install additional malware onto a consumer’s device, without the user’s consent or even knowledge. These malicious applications allegedly could access financial and medical information and other sensitive information such as a user’s geolocation and text message content.
For example, in the case of Android devices, the FTC claimed that HTC pre-installed a custom application that could download and install applications outside the normal Android installation process. However, HTC did not include appropriate permission check code to protect this pre-installed application from misuse. Consequently, a third-party application could command the pre-installed application to download and install any additional applications onto the device without the user’s knowledge or consent.
The FTC further charged that HTC’s actions actually undermined Android consent mechanisms that, but for HTC’s actions, would have prevented unauthorized access to and transmission of sensitive information. The FTC’s complaint alleged that the vulnerabilities were present on approximately 18.3 million HTC devices running Android. The complaint further alleged that HTC could have prevented these vulnerabilities through readily available, low-cost measures, such as adding a few lines of permission check code when programming its pre-installed applications.
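The kind of guard the FTC describes can indeed be written in a few lines. The sketch below is purely illustrative (in Python for readability, not HTC’s actual code; the permission name and function are hypothetical): an installer entry point that refuses to act for callers lacking an install permission.

```python
# Conceptual sketch only -- not HTC's actual code. It models the FTC's
# point: a pre-installed installer component should verify the calling
# application's granted permissions before acting on an install request.

INSTALL_PERMISSION = "INSTALL_PACKAGES"  # hypothetical permission label

def download_and_install(apk_url: str, caller_permissions: set) -> bool:
    """Install the package at apk_url only if the caller is authorized."""
    if INSTALL_PERMISSION not in caller_permissions:
        # This is the "few lines of permission check code" the FTC says
        # were missing: without this guard, any third-party app could
        # command the installer to fetch and install arbitrary software.
        return False
    # ... fetch apk_url and hand it to the platform installer ...
    return True
```

On a real Android device the equivalent check would be performed through the platform’s permission APIs, but the logic is the same: reject the request unless the caller holds the required permission.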
In a precedent-setting remedy, the FTC’s final order requires HTC to develop and release software patches within 30 days of service of the FTC’s final order on HTC. The patches must fix vulnerabilities in millions of HTC’s devices, including every covered device having an operating system version released on or after December 2010. HTC must also establish a comprehensive security program designed to address security risks during the development of HTC devices. The FTC requires the program to include consideration of employee training and management; product design, development and research; secure software design and testing; and review, assessment, and response to third party security vulnerability reports.
Further, HTC must undergo independent security assessments every other year for the next 20 years. Among other requirements, the independent, professional assessment must certify that HTC’s security program operates with sufficient effectiveness to provide reasonable assurance that the security of covered device functionality and the security, confidentiality, and integrity of covered information are protected, and that the program has so operated throughout the reporting period. HTC is also barred from making false or misleading statements about the security and privacy of consumers’ data on HTC devices.
The FTC’s action against HTC has broad application beyond the mobile device and software marketplace. The agency’s action further solidifies the FTC’s role as the leading enforcer of data security standards. Once again the FTC has demonstrated that it is setting data security standards and will continue to monitor and police the marketplace when it believes companies have not incorporated what it believes are commonly accepted security features or when organizations have failed to take steps to prevent vulnerabilities.
Over the past decade the Federal Trade Commission has brought cybersecurity enforcement actions against various private companies, imposing tens of millions of dollars in monetary penalties and requiring companies to maintain more stringent data-security practices. No company has ever challenged the FTC’s authority to regulate cybersecurity in this way in court – until now. On June 17, 2013, a federal court will finally get a chance to weigh in on whether the scope of the FTC’s regulatory jurisdiction is so broad as to include setting standards for cybersecurity.
In FTC v. Wyndham Worldwide Corporation, et al., the FTC launched a civil action against the parent company of the Wyndham hotels and three of its subsidiaries for data security failures that led to three major data breaches in less than two years. The Commission’s complaint charges that Wyndham’s security practices were unfair and deceptive in violation of the FTC Act.
Unlike many other data-security FTC enforcement actions, in which the defendant has chosen to cut its losses and settle out of court, Wyndham has decided to stand and fight with a motion to dismiss. Judge Esther Salas of the U.S. District Court for the District of New Jersey is expected to rule on Wyndham’s motion on June 17.
With respect to the FTC’s unfairness claim, Wyndham’s motion asserts that the FTC is attempting to circumvent the legislative process by acting as if “it has the statutory authority to do that which Congress has refused: establish data-security standards for the private sector and enforce those standards in federal court.”
According to Wyndham, “on multiple occasions in the 1990s and early 2000s the FTC publicly acknowledged that it lacked authority to prescribe substantive data-security standards under the [FTC Act]. For that very reason, the FTC has repeatedly asked Congress to enact legislation giving it such authority.” Further, Wyndham highlights the Senate’s failure to pass the Cybersecurity Act of 2012, which sought to address the need for specific data-security standards for the private sector, and President Obama’s February 2013 Executive Order on cybersecurity that was issued in response to the Congressional stalemate.
On its face, Wyndham’s motion to dismiss seems quite strong. However, the facts that the FTC is alleging do not cut in Wyndham’s favor. The Commission’s complaint alleges that Wyndham’s failure to “adequately limit access between and among the Wyndham-branded hotels’ property management systems, [Wyndham] Hotels and Resorts’ corporate network, and the Internet” allowed intruders to use weak access points (e.g., a single hotel’s local computer network) to hack into the entire Wyndham Hotels and Resorts’ corporate network. From there, the intruders were able to gain access to the payment management systems of scores of Wyndham-branded hotels.
According to the FTC, Wyndham failed to remedy known security vulnerabilities, employ reasonable measures to detect unauthorized access, and follow proper incident response procedures following the first breach in April 2008. Thus, the corporation remained vulnerable to attacks that took place the following year. All told, the intruders compromised over 600,000 consumer payment card accounts, exported hundreds of thousands of payment card account numbers to a domain registered in Russia, and used them to make over $10.6 million in fraudulent purchases.
Unfortunately – as Wyndham notes in its motion to dismiss – hacking has become an endemic problem. There has been no shortage of stories about major cyber-attacks on private companies and governmental entities alike: from Google and Microsoft to NASA and the FBI. And the FTC has not been shy about bringing enforcement actions against private companies with inadequate security measures.
If Wyndham prevails, the case could usher in a major reduction in FTC enforcement efforts. However, if the court sides with the FTC, the Commission will be further empowered to regulate data security practices. With such high stakes on both sides, any decision is likely to result in an appeal. In the meantime, companies in various industry sectors that maintain personal consumer information are awaiting next week’s decision.
Angered by the recent tragic suicide of Internet activist Aaron Swartz, a group of hackers claiming to be from the group Anonymous made threats over the weekend to release sensitive information about the United States Department of Justice. The group claimed to have a file on multiple servers that was ready to be released immediately.
Swartz’s suicide has served to mobilize the group Anonymous, a loosely defined collective of Internet “hacktivists” that oppose attempts to limit Internet freedoms. Anonymous is a staunch advocate of open access to information, as was Swartz. Anonymous said that Swartz “was killed” because he “faced an impossible choice.”
Swartz was facing federal computer fraud charges that carried a maximum sentence of 35 years in prison, although in reality he probably would not have received a sentence anywhere near the statutory maximum. Prosecutors told Swartz’s legal team they would recommend to the judge a sentence of six months in a low-security setting.
The charges arose from allegations that he made freely available an enormous archive of research articles and similar documents offered by JSTOR, an online academic database, through the computers at the Massachusetts Institute of Technology.
Swartz was a leading activist involved in the movement to make information more freely available on the Internet and is credited with helping to lead the protests that ultimately defeated the Stop Online Piracy Act (SOPA), a bill that would have significantly broadened law enforcement powers to police Internet content that may violate U.S. copyright laws.
Earlier this month, Rep. Zoe Lofgren (D-Calif.) indicated that she is drafting a bill that she terms “Aaron’s Law,” which would limit the scope of the Computer Fraud and Abuse Act, a 1986 law that prosecutors used to help bring these charges against Swartz.
The hackers reportedly hijacked the website of the United States Sentencing Commission, the federal agency responsible for the federal sentencing guidelines for criminal offenses. They said that the Sentencing Commission’s website was chosen because of its influence in creating sentences they deemed unfair. The hackers posted a message demanding reform of the criminal justice system and threatening that sensitive information would be leaked otherwise. Anonymous also posted an editable version of the website, inviting users to modify it as they pleased.
Today is Data Privacy Day. These recent incidents serve to show that no organization – not even the U.S. Department of Justice – is immune from security breaches. Data breaches and data losses will occur and it is crucial for an organization to be prepared and have policies in place to allow a quick response when something does happen.
The legal ramifications and bad publicity that follow such an incident can be very damaging to an organization. By making sure that you are prepared, however, you can minimize the damage. Preparedness involves consultation across a range of specialties, including information technology, legal counsel, and public relations. The impact that a data breach or loss can have on the bottom line of any organization is enormous, and preparation is the best way to combat it.
A data breach or data loss can also have far-reaching legal consequences under international, federal and various state laws. For example, companies may not realize that if they have even a few employees or customers in a state, it may trigger a number of different requirements under state privacy laws. In order to avoid problems with federal agencies or state attorney general offices, it is best for companies to have a plan in place in advance and make sure they are already compliant with all relevant laws.
In the past couple of years, a wide variety of computer viruses and other malware have allegedly been used by one nation against another. This secretive form of warfare even briefly plastered names like Stuxnet, Duqu, Flame, and Gauss across the front pages. In partial response to the threat posed to U.S. interests by hostile foreign countries and/or individuals, different cybersecurity bills are percolating through the halls of Congress, including the SECURE IT Act of 2012, the Cybersecurity Act of 2012, and others.
No one can dispute the very real danger posed by cybersecurity threats and the potentially disastrous results if they are unleashed upon a country or upon an industrial or financial system. In a recent Wall Street Journal op-ed, President Obama wrote that “the cyber threat to our nation is one of the most serious economic and national security challenges we face.” The president also stated that “foreign governments, criminal syndicates and lone individuals are probing our financial, energy and public safety systems every day.”
President Obama then pushed for the passage of the Cybersecurity Act of 2012, which would require information sharing between the private and public sectors, develop cybersecurity standards, and establish other protections. In support of that bill, President Obama wrote that “Congress must pass comprehensive cybersecurity legislation” and that “We all know what needs to happen.”
However, in early August the U.S. Senate rejected cybersecurity legislation, with Republican members concerned that the bill would impose burdensome obligations on businesses.
The president has indicated that he is considering imposing the same cybersecurity measures by executive order.
“In the wake of Congressional inaction and Republican stall tactics, unfortunately, we will continue to be hamstrung by outdated and inadequate statutory authorities that the legislation would have fixed,” Presidential press secretary Jay Carney said.
This possibility does concern us.
Although computer malware poses a real and credible danger to U.S. interests, we also need to discuss how cybersecurity is going to be achieved. The use of an executive order to bypass the legislative process is of questionable constitutionality because it may violate the separation of powers mandated by the Constitution.
A step that creates such an extensive public-private partnership and involves the government so much in private decisions to provide security at least deserves approval after full discussion by a majority of both houses of Congress. We hardly think that the threat has risen to the level of “war” that would permit the president to engage in unilateral emergency actions to protect national security.
As the tech editor of the Daily Caller wrote recently: “The failed cyber security bill, which could be revived by Sen. Majority Leader Harry Reid when the Senate comes back from recess in September, would have given federal agencies in charge of regulating critical infrastructure industries like power companies and utilities the ability to mandate cybersecurity recommendations … An executive order would be another action from the Obama administration to extend executive branch authority over a largely free and open Internet.”
SOPA and PIPA, as legislative efforts to deal with online piracy and other infringing activity, have gone the way of the Edsel. But their next of kin, a new bill known as CISPA, has made it through the House, passing 248 to 168. It too seems unlikely to become law, as the White House has threatened to veto it.
SOPA and PIPA hit the skids after major online companies and consumer activist groups mounted a host of protests across the Internet, including Wikipedia’s and Google’s blackouts in January. The concern with SOPA and PIPA was that the legislation could cripple Internet innovation. The public concern over CISPA, and the declared basis for the White House veto threat, is that the bill would significantly threaten civil liberties.
CISPA’s stated goal is to create new channels for communication between government intelligence entities and private firms regarding potential and emerging cybersecurity threats. It allows a company to intercept emails or text messages and to modify those messages or prevent them from reaching their destination if they qualify as a cybersecurity threat. It would allow the companies and the federal government to share information with each other in an attempt to foil hackers.
Like SOPA and PIPA, CISPA includes portions that protect intellectual property. If a person is potentially infringing on intellectual property and that infringing activity is considered a threat to cybersecurity, under CISPA his website or the place where his content was posted could be blocked. Critics argue that the proposed definition of “cybersecurity” is so broad that it allows for the possibility of the restriction of communications that are not in any way threatening.
CISPA would create a system of information sharing overseen by the Director of National Intelligence, who would appoint members of the intelligence community to work with employees of tech companies and grant security clearances. Any information categorized as cyberthreat intelligence could not be divulged beyond the two parties without approval.
Many tech companies that actively opposed SOPA are supporting CISPA. The bill is drawing support from firms such as Facebook, Microsoft, AT&T, IBM, Intel, Oracle, and Verizon, as well as business groups such as the Financial Services Roundtable and the U.S. Chamber of Commerce.
A key difference may be that under CISPA, companies like Facebook would not be required to share any information about their users with the authorities, and if they did, CISPA would protect them from liability. The bill currently states that any sharing that occurs under the legislation “supersedes any statute of a State or political subdivision of a State that restricts or otherwise expressly regulates” the exchanges between the government and other parties.
Online advocacy groups are gearing up to protest against CISPA. The Center for Democracy and Technology, as well as the American Civil Liberties Union and the Electronic Frontier Foundation are rallying against the bill, and the number of blogs and websites calling for CISPA to be defeated is increasing rapidly.
Although CISPA’s approach is different from that of SOPA and PIPA, this bill has many of the same potential problems that those bills had. The very broad language defining a cybersecurity threat could be prone to abuse. Several amendments were added to the bill in order to appease civil liberties concerns, such as limiting the government’s use of private data and which cyberthreat data can be shared. Even with these amendments, advocacy groups remain concerned about the legislation, and the veto threat persists. It remains to be seen what will happen with CISPA, but we hope it goes the way of SOPA and PIPA. We will keep you updated as things progress.
The Congressmen’s letter is in response to the recent Path address book fiasco in which Path acknowledged – and apologized for – its collection of consumer address book information without notifying users. News surrounding Path’s activities led to Congressional concerns over the extent to which consumer data, especially contact information, is being collected and stored for future harvesting, all without the consumer’s knowledge or permission. The Waxman-Butterfield letter quotes the Guardian: “there’s a quiet understanding among many iOS app developers that it is acceptable to send a user’s entire address book, without their permission, to remote servers and then store it for future reference. It’s common practice, and many companies likely have your address book stored in their database.”
The congressmen called for Apple to address how its app policies and practices protect consumer privacy. Apple was swift to respond, and within the day vowed to release a software update to prevent data collection that would violate the company’s privacy policies.
On the heels of the Waxman-Butterfield letter (but in the works well beforehand) comes a report by the FTC: “Mobile Apps for Kids: Current Privacy Disclosures Are Disappointing.” The report title pretty much says it all. The FTC surveyed some 960 kid-focused apps sold through Apple and Android to determine, from the apps’ promotion pages and websites, the extent to which the developers disclose what [child] consumer data is collected and how it is used. The FTC reported that it was disappointed with the results – disclosures were scant or nonexistent.
Tying its authority over mobile apps with its authority to enforce children’s privacy protections online through the Children’s Online Privacy Protection Act (COPPA), the FTC warned that it will be reviewing more mobile apps directed at children over the next six months, but this time, it will be enforcing – not just surveying – COPPA compliance. COPPA requires operators of online services directed to children under age 13 to provide notice and obtain parental consent before collecting items of “personal information” from children.
Several times in the FTC report the agency suggested the need for clear, concise, consistent and timely information on data collection and usage. That means disclosures of how the app (or third party advertisers) will/may use the consumer data should be upfront and precede download so that parents can determine whether or not to allow their children to use the app. Disclosures should include any connections to other social media.
The FTC report also identified (several times) the types of data that could be collected – from contact information, to location information, to call data, as well as in-app data. App developers and third party advertisers should take into account the importance of full disclosure.
Perhaps most importantly, the FTC report and the Waxman-Butterfield letter demonstrate that the government views Apple and Android (and other app stores) not just as the marketplace for app sales, but also as the gatekeepers. The FTC report pointed to Apple and Android as providing the architecture for disclosures and suggested that app stores could incorporate icons to make disclosures more easily identifiable. The Congressmen’s letter all but blames Apple for its apps’ failings.
We have been seeing increasing backdoor regulation by the government through major online presences in a couple of places, including here and here. Since government regulators acknowledge the difficulties in keeping up with developments in new technologies, it’s fair to assume they will look to major online presences to have a hand in keeping them up to speed and keeping advertisers and developers in check.
The new policy will consolidate and streamline some 60 disparate policies of Google products and services. In the overview it has provided to users, Google says that it has tried to keep the policy as simple as possible. And it is an easy-to-read, relatively brief statement that is much more user-friendly than the agreements that we regularly click through in haste to access some enticing new service.
As part of the new policy, Google will aggregate the data it collects on users across its products (with the exception of Google Wallet and Google Books) and develop a “mega-profile” on each user. That data includes a user’s Google searches, Gmail message content, YouTube favorites, and contacts. It also includes location tracking.
Google touts the benefits of its new policy as creating “a beautifully simple, intuitive user experience across Google.” For instance, if you search for pizza, the Google location tracker will look for a nearby pizza place. The Google Calendar integration will provide reminders, based on your location, if you’re going to be late for a meeting.
But lest we forget, the reality is that Google has acknowledged that it is collecting massive amounts of data on its users. Regardless of the usefulness and efficacy of some of its new features, users are beholden to Google (1) to securely store and (2) to defend their personal data.
Users cannot opt out of this aggregation, and that inability is one of the prime reasons that members of Congress have had questions about the new policy. Several members sent a letter to Google CEO Larry Page, asking for detail on what would be collected, how it would be used, and what could come of that data. Google representatives ended up in a closed-door briefing with Congressional members on February 2. From initial reports, it does not appear that the members’ concerns were satisfactorily addressed in the briefing. This gives reason to question what could become of individual users’ “mega-profiles.”
Google’s new policy, and all the accompanying noise, serves as a good reminder that, in the age of new technologies, we are constantly waiving our privacy rights. How often do we click through a user agreement in haste so we can have access to a cool app? How often do we reflect on whether the benefits of the new technology truly outweigh the costs?
Compare the controversy over Google’s new policy with the recent Supreme Court holding in United States v. Jones that warrantless GPS tracking of a criminal suspect violated the Fourth Amendment. Justice Samuel Alito’s concurring opinion in the case hinted at lowering privacy expectations with new technologies: “The availability and use of these and other new devices will continue to shape the average person’s expectations about the privacy of his or her daily movements.” As we press forward in an age in which it is ever easier to get the who, what, when and why of each of us, based upon our own preference for convenience and coolness, we must face the consequences: Privacy will suffer, unless Congress does something about it.
Speaking at a Dec. 15 Capitol Hill forum on children’s and teens’ online privacy, Federal Trade Commission Chairman Jon Leibowitz said that the agency is recommending that the Children’s Online Privacy Protection Act (COPPA) expand the definition of personally identifiable information.
Leibowitz explained that he supports expanding the definition of “personally identifiable information” to include geolocation information, photos, videos, IP addresses, and similar items found on computers or mobile devices.
COPPA applies to the online collection of personal information from children under 13 years old. The act applies to websites and online services that are operated for a commercial purpose and are directed at children under the age of 13 or whose operator has actual knowledge that children under 13 are providing information to the site online.
In September, the FTC announced proposed revisions to the COPPA rules, the first significant changes since the rules were issued in 2000. The FTC has been seeking public comments on the proposed revisions since September.
According to Leibowitz, the definition of personally identifiable information should be expanded beyond information provided by the consumer to also include information associated with the user’s computer or mobile device. This would include information held in cookies, processor numbers, IP addresses, geolocation information, photographs, videos, and audio files. Additionally, the new definition would cover information that website operators, advertising networks, and others use to track consumers as they use the Internet.
The proposed rule changes would also expand the definition of what it means to “collect” data from children. The new definition would make it clear that personal information is being collected not only when the operator requires the information but also when the operator prompts or encourages a child to provide it.
The methods for obtaining parental consent would also change, with several new options added, such as electronic scans of signed consent forms and government-issued identification checked against a database. The rules would also eliminate the popular “e-mail plus” mechanism.
The new rules would also add a data retention and deletion requirement, mandating that data obtained from children be kept only as long as necessary to fulfill the purpose for which it was collected. The rules would also require operators to ensure that any third parties to whom a child’s information is disclosed have reasonable procedures in place to protect that information.
These proposed changes to COPPA would have a significant effect on online operators, particularly the expansion of the definition of personally identifiable information. We note, in particular, that expanding the definition of “personally identifiable information” in the children’s privacy context could lead the FTC to a general expansion of the definition in all contexts. The FTC has cracked down on COPPA violations in the past, and these expanded rules would likely continue that trend.
Federal Trade Commission Chairman Jon Leibowitz delivered the keynote speech at a forum on Internet privacy on Oct. 11, 2011. He was part of a panel that discussed the protection of consumer data and the tracking of online consumer behavior. The Stanford Law School Center for Internet and Society also released a study the same day showing that data collection on the Internet is not anonymous and information about consumers is often leaked from websites.
Leibowitz emphasized that there are three key principles to protecting the privacy of consumers on the Internet. First, companies in the business of collecting and storing data need to build strong privacy policies. Data should be kept only for legitimate business needs, and the more sensitive the data, the more careful companies need to be.
Second, there needs to be transparency. If data is being collected then consumers need to be told what is going on in a manner that they can easily understand. Lastly, there needs to be choice for the consumer. Consumers should have streamlined choices about the collection and usage of data based on their online behavior.
Leibowitz said there is a clear need for the development of a do-not-track mechanism for web users, similar to the do-not-call list that has been successful in blocking telemarketing calls. This mechanism would provide web users the ability to opt out of online tracking, which is used to provide targeted advertising based on a person’s online behavior.
Leibowitz emphasized that it is about providing consumers with the choice not to be tracked online, noting that if given the choice himself he would probably choose not to opt out because he enjoys the targeted advertising.
Leibowitz made clear that he does not care who creates this mechanism, but he does not think it needs to be administered by the government, though some members of Congress have proposed legislation to create a do-not-track system. (Note that the Interactive Advertising Bureau, a trade group for online advertisers, established a code of conduct that states that members should give clear and prominent notice of any online behavioral advertising collection and use. The code went into effect at the end of August.)
Leibowitz applauded Mozilla for going out of its way to provide consumers with the information to decide if they want to opt out of online tracking and said he was hoping other online browsers would soon follow. (Microsoft’s IE9 and Apple’s Safari also have do-not-track options.) Leibowitz emphasized that the FTC did not want to interfere with the normal data flow that makes the Internet efficient and did not see the need for the Internet to be a privacy-free zone, but still wanted to have a mechanism that allows for consumer protection.
Jonathan Mayer, a graduate student fellow at the Center for Internet and Society at Stanford University who identified the “supercookie,” released a new study showing that information collection by many websites is not as anonymous as the sites claim or consumers believe. Identifying information was often leaked when consumers visited various websites, though Mayer said it was not clear whether the leakage was intentional, and the study did not attempt to gauge this.
Mayer looked at the top 250 websites and signed up as a member on 185 of those websites. Mayer found that 61 percent of the websites leaked a user name or a user ID. Mayer stated that once an identity is provided in a pseudonymous system then it can be associated with what that person has done in the past and will do in the future. Full results of the study are available here.
The talks were sponsored by the ACLU, Center for Digital Democracy, Consumer Action, Consumer Federation of America, Consumers Union, Consumer Watchdog, Electronic Privacy Information Center, Privacy Rights Clearinghouse, US PIRG, and World Privacy Forum.