Attorney General Holder Calls on Congress to Establish Strong National Data Breach Notification Standard
By Michelle Cohen, CIPP-US
Yesterday, in his weekly video address, Attorney General Eric Holder urged Congress to create a national data breach notification standard requiring companies to quickly notify consumers of a breach of their personal or financial information. In the wake of the high-profile holiday season data breaches at retailers Target and Neiman Marcus, Holder stated that the Department of Justice and the U.S. Secret Service continue to work to investigate hacking and cybercrimes. However, Holder believes that Congress should act to establish a federal notification requirement to protect consumers. Holder’s video address is available here.
Currently, at least forty-six states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have laws requiring private or government entities to notify individuals of security breaches of information involving personally identifiable information. As might be expected, the laws vary widely from state to state, particularly in the timing requirement for the breach notifications. Most laws allow delay to accommodate a law enforcement investigation.
Some states require notification as soon as reasonably practicable. Others require notification within 45 days. Yet organizations have faced lawsuits for failing to notify on a timely basis, even where there is no set standard. This presents a difficult situation for companies. Organizations need to investigate a data breach and determine the type of information affected, who was affected (and thus needs to be notified), and importantly, whether the breach is ongoing such that the company must immediately implement remedial measures.
Attorney General Holder believes Congress should set a national standard that will better protect consumers. Holder asserts that a federal requirement would enable law enforcement to investigate data breaches quickly and to hold organizations accountable when they fail to protect personal and financial information. Holder’s video message did note that any such requirement should include “reasonable exemptions” to avoid creating unnecessary burdens for companies.
The Target and Neiman Marcus data breaches have certainly raised the profile of cybersecurity issues on Capitol Hill, with several bills having been introduced in recent weeks addressing data breaches. While the states certainly took the lead in protecting consumers by enacting data breach laws over the past several years, a properly crafted national standard could provide more consistent guidance for industry and a uniform rule for consumers irrespective of their home states. Should Congress move forward on a data breach law, reasonable accommodations need to be made to give companies time to investigate data breaches and to determine their scope, the persons affected, and the type of information affected. A national standard setting forth a notification deadline would also presumably alleviate the plaintiffs’ bar’s “rush to the courthouse” with allegations of untimely breach notifications.
By Michelle Cohen, CIPP-US
On January 28th, in an effort to raise awareness of privacy and data protection, the United States, Canada and 27 countries of the European Union celebrate International Data Privacy Day. Many organizations use Data Privacy Day as an opportunity to educate their employees and stakeholders about privacy-related topics. With the recent, high-profile data breaches at Target, Neiman Marcus and, potentially, Michaels, the need for training and instruction on data security is more critical than ever before. In this vein, we’ve set forth our views on the year ahead in legal developments relating to data security and what companies can do to prepare.
Legislation Introduced but on the Move?
Data security and data breaches will continue to be the focus of regulators and Congress through 2014. In fact, Congress summoned Target’s Chief Financial Officer to appear before the Senate Judiciary Committee on February 4th and a House committee is seeking extensive documents from Target about its security program. Meanwhile, Senator Leahy re-introduced data breach legislation which would set a federal standard for data breach notifications (most states now require notifications, though the requirements differ state-to-state).
Senators Carper and Blunt introduced a separate bipartisan bill intended to establish national data security standards, set a federal breach notification requirement, and also require notification to federal agencies, police, and consumer reporting agencies when breaches affect more than 5,000 persons. Many companies have suffered data breaches and then faced civil lawsuits under various causes of action, including allegations that they did not notify customers promptly. As a result, there may be strong support for federal standards rather than facing a patchwork of state laws. While the Target breach has certainly renewed interest in data security, and we expect Congress will conduct numerous hearings, ultimate passage of data breach legislation this Congress is still probably a longshot.
Watching Wyndham Take on FTC
As covered in this blog, various Wyndham entities have struck back at the FTC, challenging the FTC’s authority to bring an action against Wyndham for alleged data security failures. The Wyndham entities claim that the FTC may not set data security standards absent specific authority from Congress. Yet, with Congress having not set data security standards thus far, the court in oral arguments seemed concerned about leaving a void in the data security area. Wyndham’s motion to dismiss remains pending in federal court in New Jersey. Most observers think the court will be hard pressed to limit the FTC’s authority under Section 5 of the FTC Act, which broadly prohibits “unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce” and provides the FTC with administrative and civil litigation enforcement authority. The agency has used this administrative authority with great success, bringing numerous data privacy actions that usually result in settlements by companies rather than risking further litigation expenses, penalties, and reputational damage. We think the FTC will remain vigilant in this space, including attention to the security of mobile apps.
Class Actions Jump on Breaches
Whether breaches affect Sony PlayStation, Adobe, Target, or some other company, the class action firms have been busy filing lawsuits based upon data breaches. For example, by year end, at least 40 suits had already been filed against Target, with seven filed the day Target disclosed the breach. The plaintiffs use various theories – including violations of consumer protection statutes, negligence, fraud, breach of contract, breach of fiduciary duty, invasion of privacy and conversion. But, if a consumer’s information was potentially breached, yet nothing happened to the consumer as a result, does that consumer have cognizable damages? That has been a huge sticking point for these lawsuits. Yet, the class action lawyers will continue to file these suits and some companies will settle to avoid further reputational damage and litigation expenses.
Don’t Count out the States
States have taken the lead in setting data breach notification standards, and in some cases data security requirements. For instance, in March 2010, Massachusetts enacted strict data security regulations. Organizations that own or license personal information of Massachusetts residents are required to develop and implement a comprehensive written information security program (“WISP”) to protect that information. Almost all of the states have standards setting forth what types of information are covered by their data breach laws, who gets notified, what content goes in the notifications, and the timing of the notifications. Multiple states are investigating the Target breach; certainly less well-known breaches get state regulators’ attention as well. We predict the states will continue to be active regulators and enforcers of data security and data breaches, and will likely continue to “rule the roost” while federal legislation lags behind.
Preparation and Training Still Key
We’ve said before that, unfortunately, no company is immune from data breaches. Companies cannot assume that they have the best anti-malware or security features and that these other newsworthy breaches resulted from lapses that would not apply to them. Whether it is a sophisticated hacker or, more commonly, a well-meaning but negligent employee, data loss and data breaches will occur. All organizations should have procedures in place NOW to prevent data loss and to prepare for a breach. This includes IT, human resources, legal, and communications resources. Companies should designate a “data security/data breach” team with representatives from these key departments (working with outside counsel and other privacy breach specialists when needed). The team should meet periodically to review procedures, recommend improvements, and engage in periodic training on data security.
We can’t stress the importance of employee training enough. An employee who, for instance, wants to finish a project at home after stopping by the gym might download information that contains sensitive personal information onto a flash drive. Let’s say the gym bag gets stolen, along with the flash drive. The employee’s unlucky company may now have a huge data breach situation on its hands, requiring notices to customers and state attorneys general, and potentially triggering litigation and other expenses (such as paying for credit monitoring, now an industry standard). Employees need training about securing sensitive information – from shredding documents instead of putting them in the dumpster, to encrypting information that is being taken offsite, to avoiding “phishing” scams, to having unique passwords they change periodically. According to recent reports, “password” and “123456” are still among the most popular passwords. While data breaches cannot be avoided completely, organizations can ameliorate some risks with better practices.
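As a small illustration of the password point above, here is a minimal, hypothetical policy check. The list of common passwords and the specific rules are illustrative only – a real deployment would use a much larger breached-password list and organization-specific requirements.

```python
# Minimal sketch of a password-policy check that rejects the kinds of
# passwords ("password", "123456") that reports say remain popular.
# COMMON_PASSWORDS and the rules below are illustrative, not a policy.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "111111"}

def is_acceptable(password: str) -> bool:
    """Return True if the password passes basic hygiene checks."""
    if password.lower() in COMMON_PASSWORDS:
        return False  # appears on a known common-password list
    if len(password) < 10:
        return False  # too short to resist guessing
    # Require some character diversity: letters plus digits or symbols.
    has_alpha = any(c.isalpha() for c in password)
    has_other = any(not c.isalpha() for c in password)
    return has_alpha and has_other

print(is_acceptable("123456"))          # False
print(is_acceptable("Blue-Harbor-42"))  # True
```

Even a check this simple would stop the most popular passwords in the reports cited above.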
As the Federal Trade Commission (“FTC”) continues to flex its consumer protection muscles by bringing numerous administrative lawsuits, industry and members of Congress are questioning whether there is a level playing field that allows companies to properly defend themselves against FTC charges. Or, as some say, does the FTC have the “home court advantage” in its role as investigator and prosecutor, armed with very broad authority under Section 5 of the FTC Act – leaving many companies to decide simply to settle rather than face the Goliath FTC? However, some companies have recently been bucking that trend and challenging the FTC’s authority (particularly in the area of regulating data security) and FTC officials’ impartiality.
As background, the FTC may begin an enforcement action if it has “reason to believe” that the FTC Act is being or has been violated. Section 5(a) of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce.” The FTC also enforces several other consumer protection statutes, including the Fair Credit Reporting Act, the Do-Not-Call Implementation Act of 2003, and the Children’s Online Privacy Protection Act.
Under Section 5(b) of the FTC Act, the FTC can challenge “unfair or deceptive acts or practices” or violations of certain other laws (such as those listed above) in an administrative adjudication. The way this works is the FTC issues a complaint putting forth its charges. Many companies faced with such complaints inevitably settle with the FTC, rather than endure an administrative trial. Those companies that contest the charges face a trial-type proceeding before an FTC administrative law judge. FTC staff counsel “prosecute” the complaint. The administrative law judge later issues an initial decision. Either party can appeal the initial decision to the full FTC for review.
Many observers, including the American Bar Association, have criticized this situation — where the FTC acts as both prosecutor and judge — as inherently unfair. After the FTC’s decision, the respondent organization (or individual) may appeal to a federal court of appeals. However, at this point, an extensive record has been made, and this assumes an organization or individual has the resources to devote to a federal appeal. (In addition, the FTC can bring consumer protection enforcement actions directly in court rather than through administrative litigation.)
The FTC’s winning record in these administrative proceedings has many observers questioning the process and the FTC’s potential impartiality. House antitrust chairman Spencer Bachus (R-Ala.) called out the FTC’s apparent lack of impartiality and fairness, stating “a company might wonder whether it is worth putting up a defense at all.”
Just a couple weeks ago, however, medical testing company LabMD went on the offense and sought the disqualification of an FTC Commissioner. Facing an administrative proceeding relating to its alleged failure to secure patient information data, LabMD moved to disqualify Commissioner Julie Brill from consideration of its case. LabMD claimed that the Commissioner made numerous statements at industry conferences prejudging its ongoing litigation. Specifically, LabMD claimed Brill stated that LabMD had violated the law, rather than indicating that LabMD was under investigation or in litigation. The FTC opposed the disqualification. However, Commissioner Brill voluntarily recused herself from the case on Christmas Eve to avoid “undue distraction” from the administrative litigation.
As the FTC litigates in several key areas – data privacy, financial services, credit repair, telemarketing – we expect administrative litigation will increase in 2014. While some companies will continue to settle to avoid continued litigation expenses and possible further detrimental outcomes, we think others will take the LabMD route and seek relief when they believe the processes are not transparent or the FTC is exceeding its authority.
FTC Vigilant on Children’s Privacy – Rejects Proposal for Collecting Verifiable Parental Consent Under COPPA
On November 12, 2013, the Federal Trade Commission (“FTC”), in a 4-0 vote, denied AssertID’s application for approval of a proposed verifiable parental consent (“VPC”) method under the Children’s Online Privacy Protection Rule (“COPPA Rule”). Under the rule, covered online websites and services must obtain VPC before collecting personal information from children under 13. The agency’s revised COPPA Rule became effective in July; among other changes, it expanded the categories of data that can constitute “personal information.” The rule sets forth several acceptable methods of obtaining parental consent. Notably, it also allows parties to seek FTC approval of other VPC methods.
The FTC’s approval process allows organizations to present innovative VPC methods, thereby permitting flexibility and taking into account new technologies, while still ensuring that parents provide consent on behalf of their children as required under COPPA. The FTC requires that applicants seeking approval of a new VPC method provide: (1) a detailed description of the proposed parental consent method; and (2) an analysis of how the method is reasonably calculated, in light of available technology, to ensure that the person providing consent is the child’s parent.
The FTC reviewed AssertID’s proposed VPC method following a public comment period. AssertID’s product, “ConsentID,” would ask a parent’s “friends” on a social network to verify the identity of the parent and the existence of the parent-child relationship (“social-graph verification”). The FTC concluded that “ConsentID” did not meet the criteria to ensure that the person providing consent is the child’s parent. The agency determined that it is premature to approve ConsentID, since AssertID did not present sufficient research or marketplace evidence demonstrating the efficacy of social-graph verification.
The FTC also questioned the efficacy of social-graph verification in the “real world.” The agency noted that relying upon social network users to confirm parental consent posed many problems, including the fact that many profiles are fabricated (noting that Facebook’s SEC Form 10-Q indicates it has approximately 83 million fake accounts). In conclusion, the agency found that “identity verification via social-graph is an emerging technology and further research, development, and implementation is necessary to demonstrate that it is sufficiently reliable to verify that individuals are parents authorized to consent to the collection of children’s personal information.”
The FTC has approved and denied other VPC methods. The agency’s denial of AssertID’s application signals that while the FTC encourages the use of new technologies to obtain VPC under COPPA, it will review new methods carefully, mandating research results and demonstrable success in a “real world” scenario rather than just a beta test. Website operators collecting personal information of children under 13 (and “personal information” now includes geolocation information, as well as photos, videos, and audio files that contain a child’s image or voice) should review their COPPA compliance, including their methods of VPC. The FTC continues to be especially vigilant in protecting certain categories of personal information, including children’s information, financial information, and health information.
A lawsuit filed in Massachusetts state court recently raised the issue of whether a former employee’s LinkedIn post announcing a new job could violate an anti-solicitation clause of a non-compete contract with the former employer.
In KNF&T Inc. v. Muller, staffing company KNF&T filed suit against its former vice president, Charlotte Muller, for violating a non-compete contract in a number of ways, one of which was a LinkedIn update which notified Ms. Muller’s 500+ contacts of her new job. Among those contacts were Ms. Muller’s former clients at KNF&T. KNF&T filed suit alleging that the update notification violated her one year non-compete contract by soliciting business from current KNF&T clients.
The court issued a narrow ruling stating that the posting did not violate the non-compete agreement because Ms. Muller’s new position in information technology recruiting did not directly compete with KNF&T’s work in recruiting administrative support specialists.
Since the court was able to resolve the case based on a differentiation in practice areas, it did not have to resolve the issue of whether a LinkedIn notification could violate the terms of a non-competition agreement. Such a determination will always depend on the particular facts of the case, such as whether the new position directly competes with the former employer, whether the individual is connected with former clients on LinkedIn, and the content of the notification.
Employees subject to a non-competition agreement should exercise caution when using social media to announce a new position. If they do make an announcement, they should consult the terms of their non-compete agreement to determine what could constitute a violation. For instance, if the non-compete only prohibits solicitation of the former employer’s current clients, the employee should be sure to exclude any such clients from the notification by selecting which groups receive the message. The time spent paring down the list of recipients is well worth avoiding a potential lawsuit.
A company that markets video cameras that are designed to allow consumers to monitor their homes remotely has agreed to settle charges with the FTC that it failed to properly protect consumers’ privacy. This marks the FTC’s first enforcement action against a marketer of a product with connectivity to the Internet and other mobile devices, commonly referred to as the “Internet of Things.”
The FTC’s complaint alleges that TRENDnet marketed its cameras for uses ranging from baby monitoring to home security and told customers that its products were “secure.” In fact, however, the devices were compromised by a hacker who posted links on the Internet to the live feeds of over 700 cameras. Additionally, TRENDnet stored and transmitted user credentials in clear, unencrypted text.
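To illustrate why clear-text credential storage draws regulators’ attention, here is a minimal sketch (in Python, and in no way TRENDnet’s actual code) of the standard alternative: storing only a salted, slow hash of the password, so that a stolen database does not reveal the credentials themselves.

```python
import hashlib
import hmac
import os

def hash_credential(password: str, salt: bytes) -> bytes:
    """Derive a slow, salted hash suitable for storage (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def verify_credential(password: str, salt: bytes, stored: bytes) -> bool:
    """Compare a login attempt against the stored hash in constant time."""
    return hmac.compare_digest(hash_credential(password, salt), stored)

salt = os.urandom(16)  # unique random salt per user
stored = hash_credential("s3cret-passphrase", salt)

print(verify_credential("s3cret-passphrase", salt, stored))  # True
print(verify_credential("wrong-guess", salt, stored))        # False
```

The key point: the server never needs to keep or transmit the password itself, only the salt and the derived hash.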
Under the terms of its settlement with the FTC, TRENDnet is prohibited from misrepresenting the security of its cameras or the security, privacy, confidentiality, or integrity of the information that its cameras or devices transmit. The company must also establish a comprehensive security program, notify customers about security issues with the cameras, and provide a software update to customers to address those issues.
“The Internet of Things holds great promise for innovative consumer products and services,” FTC Chairwoman Edith Ramirez said. “But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet.”
The FTC’s authority to regulate and penalize companies that the agency claims do not protect consumers with sufficient data security is being challenged in federal court in New Jersey by the Wyndham Hotel Group. Wyndham has argued, among other things, that the FTC has not published any formal rules on data security and therefore cannot penalize companies that it deems have not protected consumer information. That case is pending.
This is the first time the FTC has brought an enforcement action involving the “Internet of Things,” but the FTC has already signaled it will be carefully watching how the Internet of Things develops. In particular, the FTC will be hosting a workshop in November to explore these new technologies. The agency previously sought comment from interested stakeholders on the Internet of Things – including the privacy and data security implications of interconnected devices. We expect that the FTC will continue to explore these issues, with a particular emphasis on how these devices collect and share information, particularly sensitive and personal information, such as health information.
The Federal Trade Commission recently filed another complaint against a company for alleged data security lapses. As readers of this blog know, the FTC has initiated numerous lawsuits against companies in various industries for data security and privacy violations, although it is facing a backlash from Wyndham and large industry organizations for allegedly lacking the appropriate authority to set data security standards in this way.
The FTC’s latest target is LabMD, an Atlanta-based cancer detection laboratory that performs tests on samples obtained from physicians around the country. According to an FTC press release, the FTC’s complaint (which is being withheld while the FTC and LabMD resolve confidentiality issues) alleges that LabMD failed to reasonably protect the security of the personal data (including medical information) of approximately 10,000 consumers, in two separate incidents.
Specifically, according to the FTC, LabMD billing information for over 9,000 consumers was found on a peer-to-peer (P2P) file-sharing network. The information included a spreadsheet containing insurance billing information with Social Security numbers, dates of birth, health insurance provider information, and standardized medical treatment codes.
In the second incident, the Sacramento, California Police Department found LabMD documents in the possession of identity thieves. The documents included names, Social Security numbers, and some bank account information. The FTC states that some of these Social Security numbers were being used by multiple individuals, indicating likely identity theft.
The FTC’s complaint alleges that LabMD did not implement or maintain a comprehensive data security program to protect individuals’ information, that it did not adequately train employees on basic security practices, and that it did not use readily available measures to prevent and detect unauthorized access to personal information, among other alleged failures.
The complaint includes a proposed order against LabMD that would require the company to implement a comprehensive information security program. The program would also require an evaluation every two years for 20 years by an independent certified security professional. LabMD would further be required to provide notice to any consumers whose information it has reason to believe was or could have been accessible to unauthorized persons and to consumers’ health insurance companies.
LabMD has issued a statement challenging the FTC’s authority to regulate data security, and stated that it was the victim of Internet “trolls” who presumably stole the information. This latest complaint is yet another sign that the FTC continues to monitor companies’ data security practices, particularly respecting health, financial, and children’s information. Interestingly, the LabMD data breaches were not huge – only about 10,000 consumers were affected. But the breach of, and potential unauthorized access to, sensitive health information and Social Security numbers tends to attract the FTC’s attention.
While industry awaits the district court’s decision on Wyndham’s motion to dismiss based on the FTC’s alleged lack of authority to set data security standards, companies should review and document their data security practices, particularly when it comes to sensitive personal information. Of course, in addition to the FTC, some states, such as Massachusetts, have their own data security standards, and most states require reporting of data breaches affecting personal information.
Manufacturers and marketers know that the more consumer data they have, the more they can tailor and direct their advertising, their products, and their product placement. This helps them to maximize sales and minimize costs. Thanks to the combination of cheap data storage and ubiquitous data capturers (e.g., smart phones, credit cards, the Web), the amount of consumer data out there to mine is astounding. Hence the recently popularized term, “Big Data.”
But the misuse of data could result in government enforcement actions and, more importantly, serious privacy violations that can affect everyone.
Some of the practical challenges and concerns flowing from the use of big data were addressed recently by FTC Commissioner Julie Brill at the 23rd Computers, Freedom and Privacy conference on June 26. Issues raised include noncompliance with the Fair Credit Reporting Act and consumer privacy matters such as transparency, notice and choice, and deidentification (scrubbing consumer data of personal identifiers).
The FCRA: Those whose business includes data collection or dissemination should determine whether their practices fall within the boundaries of the FCRA. As Brill pointed out, “entities collecting information across multiple sources and providing it to those making employment, credit, insurance and housing decisions must do so in a manner that ensures the information is as accurate as possible and used for appropriate purposes.” If Brill’s comments are any indication of enforcement actions to come, businesses should be aware that the FTC is on the lookout for big data enterprises that don’t adhere to FCRA requirements.
Consumer Privacy: Brill gave some credit to big data giant Acxiom for its recent announcement that it plans to allow consumers to see what information the company holds about them, but she noted that this access is of limited use when consumers have no way of knowing who the data brokers are or how their information is being used. Brill also highlighted a questionable use of consumer data by national retailer Target: the somewhat funny yet disturbing news story about Target identifying a teen’s pregnancy from her purchase history. It is a classic example of why consumers ought to have notice of what data is being collected about them and how that information is being used.
Consumers also need to have, as Brill suggested, the opportunity to correct information about themselves. This makes sense. Data collection is imperfect: different individuals’ information may be inaccurately combined, someone’s information may have been hacked, someone could be the victim of cyber-bullying, and other mishaps and errors can occur. Consumers should be able to review and correct information for errors. Finally, Brill highlighted concerns that current efforts to scrub consumer data of personal identifiers may be ineffective, as companies are getting better at taking disparate data points and still accurately identifying the individual. “Scrubbed” data in the wrong hands could be as harmful as a direct security breach.
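The re-identification concern can be illustrated with a toy sketch. All of the data below is invented; the point is only that records with names removed can often be re-linked by joining so-called quasi-identifiers (ZIP code, birth date, sex) against a dataset that still carries names, such as a public voter roll.

```python
# Toy illustration (entirely hypothetical data) of re-identifying
# "scrubbed" records by joining on quasi-identifiers.
scrubbed = [  # names removed, sensitive attribute kept
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "flu"},
    {"zip": "02139", "dob": "1962-01-15", "sex": "M", "diagnosis": "asthma"},
]
public_roll = [  # an auxiliary dataset that still has names attached
    {"name": "Jane Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
]

def reidentify(scrubbed_rows, identified_rows):
    """Link scrubbed rows back to names via shared quasi-identifiers."""
    keys = ("zip", "dob", "sex")
    index = {tuple(r[k] for k in keys): r["name"] for r in identified_rows}
    return [
        (index[tuple(row[k] for k in keys)], row)
        for row in scrubbed_rows
        if tuple(row[k] for k in keys) in index
    ]

matches = reidentify(scrubbed, public_roll)
print(matches[0][0])  # Jane Doe
```

A three-column join is all it takes here, which is why Brill and others caution that deleting names alone is not deidentification.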
Brill encouraged companies to follow the “privacy by design” recommendations issued by the FTC in order to build more protections into their products and services. She further emphasized her initiative, “Reclaim Your Name,” which aims to promote consumers’ knowledge of, and access to, the data collected about them. Companies that are in the business of data collection, mining and analytics should take note of the FTC’s efforts to empower the consumer against the overuse or misuse of consumer data. If you want to stay on the good side of the FTC – and on the good side of the informed consumer – work with the consumer, and provide meaningful notice, choice and consent.
Following a public comment period, the Federal Trade Commission recently approved a final order settling charges against mobile device manufacturer HTC America, Inc. HTC develops and manufactures mobile devices based on the Android, Windows Mobile, and Windows Phone operating systems. This case, which focuses on device security, is the FTC’s first case against a device manufacturer.
The FTC alleged that HTC failed to take reasonable steps to secure the software it developed for its smartphones and tablet computers. According to the FTC, HTC’s failures introduced various security flaws that placed consumers’ sensitive information at risk. The FTC’s action against HTC signals the agency’s continued focus on data security and data privacy issues and use of its broad “Section 5” authority, which the FTC has repeatedly asserted against various organizations, including in its ongoing litigation with Wyndham Hotels. The HTC case also reiterates the agency’s strong interest in mobile security, now that mobile phones, which are full of sensitive contact, financial, and other personal information, have become so prevalent.
Companies may be asking what HTC actually did to warrant this FTC action. The FTC claims that HTC, when customizing the software on mobile devices, failed to provide its staff with sufficient security training, failed to review or test the software on its mobile devices for potential security vulnerabilities, failed to follow commonly accepted secure coding practices, and did not have a process for receiving and addressing vulnerability reports from third parties.
In particular, the FTC asserted that HTC devices potentially permitted malicious applications to send text messages, record audio, and install additional malware onto a consumer’s device, without the user’s consent or even knowledge. These malicious applications allegedly could access financial and medical information and other sensitive information such as a user’s geolocation and text message content.
In particular, in the case of Android devices, the FTC claimed that HTC pre-installed a custom application that could download and install applications outside the normal Android installation process. However, HTC did not include appropriate permission-check code to protect this pre-installed application from exploitation by third-party applications. Consequently, a third-party application could command the pre-installed application to download and install additional applications onto the device without a user’s knowledge or consent.
The FTC further charged that HTC’s actions actually undermined Android consent mechanisms that, but for HTC’s actions, would have prevented unauthorized access and transmission of sensitive information. The FTC’s complaint alleged that the vulnerabilities have been present on approximately 18.3 million HTC devices running Android. The complaint further alleged that HTC could have prevented these vulnerabilities through readily available, low-cost measures, such as adding a few lines of permission check code when programming its pre-installed applications.
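To illustrate the point, the “few lines of permission check code” the FTC described can be sketched in simplified form. This is a hypothetical model, not HTC’s or Android’s actual code: the `InstallerService` class, its constructor, and the simulated permission set are all invented for illustration. The idea is simply that a privileged installer component should verify that the calling application actually holds the install permission before acting on its request.

```java
import java.util.Set;

// Hypothetical sketch of a privileged pre-installed installer component.
// A real Android component would consult the system's permission machinery;
// here the caller's granted permissions are passed in directly to keep the
// example self-contained.
public class InstallerService {
    static final String INSTALL_PERMISSION = "android.permission.INSTALL_PACKAGES";

    // Simulates the set of permissions granted to the calling application.
    private final Set<String> callerPermissions;

    public InstallerService(Set<String> callerPermissions) {
        this.callerPermissions = callerPermissions;
    }

    // The "few lines of permission check code": reject callers that were
    // never granted the install permission instead of installing blindly.
    public boolean requestInstall(String packageName) {
        if (!callerPermissions.contains(INSTALL_PERMISSION)) {
            throw new SecurityException("Caller lacks " + INSTALL_PERMISSION);
        }
        System.out.println("Installing " + packageName);
        return true;
    }
}
```

In real Android code, the analogous guard is a call such as `Context.enforceCallingPermission(...)`, which throws a `SecurityException` when the calling application lacks the named permission; the FTC’s complaint alleged that this kind of check was simply absent.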
In a precedent-setting remedy, the FTC’s final order requires HTC to develop and release software patches within 30 days of service of the order. The patches must fix vulnerabilities in millions of HTC devices, including every covered device with an operating system version released on or after December 2010. HTC must also establish a comprehensive security program designed to address security risks during the development of HTC devices. The FTC requires the program to include consideration of employee training and management; product design, development, and research; secure software design and testing; and the review, assessment, and response to third party security vulnerability reports.
Further, HTC must undergo independent security assessments every other year for the next 20 years. Among other requirements, the independent, professional assessment must certify that HTC’s security program operates with sufficient effectiveness to provide reasonable assurance that the security of covered device functionality and the security, confidentiality, and integrity of covered information are protected, and that the program has so operated throughout the reporting period. HTC is also barred from making false or misleading statements about the security and privacy of consumers’ data on HTC devices.
The FTC’s action against HTC has broad application beyond the mobile device and software marketplace. The action further solidifies the FTC’s role as the leading enforcer of data security standards. Once again the FTC has demonstrated that it is setting those standards and will continue to monitor and police the marketplace when it believes companies have failed to incorporate commonly accepted security features or to take steps to prevent vulnerabilities.
Beta testing is underway for Google Glass, a new technology that provides the functionality of a smartphone in a headset worn like glasses. Much like a smartphone, the Glass headset is able to exchange messages with other mobile devices, take pictures, record videos, and access search engines to respond to user queries. But unlike a smartphone, the device’s optical head-mounted display, voice recognition, and front-facing camera give users hands-free access to its features, including the ability to capture photographs and video recordings of the world in front of them.
For now, Glass is only available to developers and a small group of test users known as Google Explorers. The device is expected to go on sale to the public in 2014. In the meantime, public speculation swells, and the blogosphere is full of conflicting reports about what the device can and cannot do. Some suggest that the device will utilize facial recognition and eye-tracking software to show icons and statistics above people whom the user recognizes. A more common concern is that the device will be able to photograph and record what the user sees and then share that data with third parties without permission from the user or those whose likenesses are being captured.
Because of this lack of clarity, lawmakers around the world are urging Google to affirmatively address the myriad privacy concerns raised by this new technology. Last month, an international group of privacy regulators – including representatives from Australia, Canada, Israel, Mexico, New Zealand, Switzerland, and a European Commission panel – signed off on a letter to Google’s CEO Larry Page asking for more information regarding the company’s plans to ensure compliance with their data protection laws.
Here in the United States, the House Bipartisan Privacy Caucus issued a similar letter of its own. In addition to a variety of questions regarding the device’s capabilities, the letters reference some of Google’s recent data privacy mishaps and ask whether Google intends to take proactive steps to ensure the protection of user and nonuser privacy.
Google’s Vice President of Public Policy and Governmental Relations (and former New York Congresswoman) Susan Molinari issued a formal response to the House Bipartisan Privacy Caucus. According to Molinari, Google “recognize[s] that new technology is going to bring up new types of questions, so [they] have been thinking carefully about how [they] design Glass from its inception.”
To address concerns about the picture and video capabilities, Molinari highlighted several features designed to “give users control” and “help other people understand what Glass users are doing.” For example, specific user commands are required to search the Internet or find directions, and the user must either press a button on the arm of the Glass or say “Take a photo” or “Record a video” in order to access those features.