A famous Homeland episode involved a terrorist gaining access to the Vice President’s pacemaker. The risk of attackers accessing medical devices to wreak havoc was one of the motivations behind recent exemptions to the Digital Millennium Copyright Act (the DMCA). The DMCA makes it “illegal to circumvent technological measures used to prevent unauthorized access to copyrighted works.” Section 1201 of the DMCA allows exemptions to be adopted every three years. Recently, a number of exemptions to the DMCA’s anti-circumvention provisions were adopted for numerous technologies, including personal medical devices. Although the exemptions were announced on October 28, 2015, stipulations delayed their implementation until very recently. A number of safeguards remain in place, but the case for protecting the healthcare sector against cybercrime remains compelling.
What does this mean for patients who are using portable medical devices?
The exemption removes a barrier for researchers seeking to set up controlled experiments aimed at identifying and fixing vulnerabilities in the security of these devices. The exemption relating to medical devices reads as follows: “Literary works consisting of compilations of data generated by medical devices that are wholly or partially implanted in the body or by their corresponding personal monitoring systems, where such circumvention is undertaken by a patient for the sole purpose of lawfully accessing the data generated by his or her own device or monitoring system.” To conduct research using this type of data, the research environment must meet certain criteria: (1) the computer program, or any device on which the program runs, must be “lawfully acquired”; (2) during the research, the device or computer program should operate “solely for the purpose of good-faith security research”; and (3) the research must not have begun before October 28, 2016.
How does this open up the field for more research opportunities?
The exemption rule allows for “good-faith research,” which is defined as “accessing a computer program solely for purposes of good-faith testing, investigation and/or correction of a security flaw or vulnerability, where such activity is carried out in a controlled environment designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices or machines on which the computer program operates, or those who use such devices or machines, and is not used or maintained in a manner that facilitates copyright infringement.” What this means in the real world is that security researchers can, in a controlled manner and environment, access medical devices to search for vulnerabilities so that vulnerable software can be quickly patched. The exemption allows researchers to publicly discuss and share details of their vulnerability research without facing legal repercussions.
Why do we need this type of research?
A cybercrime wave hit the healthcare sector in 2016. According to TrapX, attacks against the healthcare sector grew 63% year over year. Many of these cyber intrusions leveraged back doors in medical devices such as X-ray machines and blood gas analyzers. These devices are vulnerable to compromise because they lack the memory needed to run cybersecurity software and are rarely updated. The dramatic ransomware attack against MedStar, which crippled its hospitals’ networks, underscored the defenselessness of the sector. The healthcare sector’s culture has been to adopt technology with minimal regard for the cybersecurity of its networks. The cybercrime community took note in 2016, and the ransomware attacks against the healthcare sector served as a canary in the coal mine. The vulnerability of medical devices poses a systemic risk to the sector’s digital health.
Historically, medical device manufacturers have been resistant to allowing outside security experts to examine their code, fearing that flaws in their software will be revealed and expose them to regulatory scrutiny or lawsuits. More recently, some of the larger medical device manufacturers (e.g., Philips and Dräger) have published coordinated vulnerability disclosure policies, which essentially invite researchers to look for software flaws in their devices, along with public statements about how the companies will handle reported vulnerabilities. Device manufacturers should note that the FDA is encouraging this type of research to increase patient safety and reduce cybersecurity threats.
Suzanne Schwartz, director of emergency preparedness/operations and medical countermeasures for the Center for Devices and Radiological Health, a division of the FDA, stated that “The FDA is encouraging medical device manufacturers to take a proactive approach to cybersecurity management of their medical devices.” On December 29, 2016 the FDA issued the final guidance “Postmarket Management of Cybersecurity in Medical Devices.” What this means is that device manufacturers may need to report to the FDA post-market, cybersecurity-related modifications to devices already in the field (pursuant to 21 C.F.R. Part 806; for device manufacturers, this reporting relates to compliance with the quality system regulations). Device manufacturers need to take security considerations into account throughout a product’s entire lifecycle, starting with development, to ensure proper performance and functionality even if a hospital’s network is hacked. The FDA indicated that most routine updates or patches will not trigger a reporting responsibility, but the guidance leaves open the possibility that changes made to prevent or fix cybersecurity vulnerabilities will. As a result of this guidance, it is important for manufacturers to coordinate their cybersecurity efforts. This relatively new exemption can help foster that dialogue and introduce vulnerability research that reduces the threat of future cyber-attacks on critical medical devices used by patients. In 2017, an individual’s physical well-being is going to depend on the digital health of medical devices.
What Proactive Risk Management Steps Can Be Taken in 2017 to Increase Security?
Listed below are some proactive steps that medical device manufacturers can take to decrease the risk of cybersecurity vulnerabilities and attacks. With the advent of new security research, the hope is that additional technology improvements will follow, further strengthening the safety and security of medical devices.
Proactive Risk Management for 2017
- Require regular penetration tests of medical devices and the networks that develop and utilize them.
- Deploy a DeceptionGrid.
- Deploy user and entity behavior analytics (UEBA).
- Deploy two-factor authentication (e.g., biometrics) with contextual verification.
- Integrate intrusion prevention systems with breach detection systems.
Source: Strategic Cyber Ventures 2017
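One of the measures listed above, two-factor authentication, can be illustrated with a minimal sketch of time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most authenticator apps. This is an illustrative sketch only, not any specific vendor’s product; the parameter choices (HMAC-SHA1, 6 digits, 30-second steps, a ±1-step drift window) are the common defaults.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32, code, window=1):
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = int(time.time())
    return any(hmac.compare_digest(totp(secret_b32, now + i * 30), code)
               for i in range(-window, window + 1))
```

The second factor only helps if it is checked server-side with a constant-time comparison (`hmac.compare_digest`) and a tight drift window, and, as the list above notes, paired with contextual verification (device, location, behavior) rather than used in isolation.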
The Federal Trade Commission (“FTC”) recently released a data breach guide for businesses, along with a video and blog to help companies in the immediate aftermath of a data breach. The FTC also provides a model data breach letter to notify individuals of a breach. The agency – which views itself as the nation’s primary “privacy police” – has faced scrutiny from private parties and courts for allegedly enforcing privacy and data security standards without promulgating specific rules. The agency instead favors outreach efforts, such as its blogs, guides and roundtables, to educate industry and the public regarding what it views as best practices.
In this vein, the Guide and the model letter are not a “safe harbor” but offer suggestions on important steps that organizations can follow once they discover data breaches. The FTC emphasizes that the Guide does not pertain to the actual protection of personal information or prevention of breaches, because the agency has already issued separate guidance documents on those subjects. In fact, the FTC also recently updated its guide on protecting personal information.
Following a data breach, the Guide suggests key steps organizations can take, which include:
- Mobilizing the company’s breach response team to prevent further data loss – the team may include legal, information security, IT, human resources, communications, investor relations, and management; companies may consider hiring an independent forensics team;
- Securing physical areas – lock any physical areas affected by a breach; consider changing access codes;
- Taking affected equipment offline immediately – monitor all entry and exit points, and update authorized users’ credentials and passwords;
- Removing improperly posted information from the company’s website, for instance in a situation where personal information affected by the breach is posted on the company’s website. The FTC also advises companies to search the Internet to see if breached information has been posted on other websites and to contact the owners of those websites;
- Protecting evidence – the FTC reminds companies to retain forensic evidence (i.e., do not destroy it);
- Documenting the investigation, including interviewing people who discovered the breach and making sure employees (such as customer service representatives) know where to forward information that might assist the company in its investigation;
- Examining service provider relationships, to determine if providers have access to personal information and whether provider access privileges should be changed;
- Determining whether data was encrypted at the time of the breach (note: encryption may obviate the need for data breach reporting in many states);
- Implementing a communications plan that explains the data breach to employees, customers, investors, partners, and others such as the press. The FTC recommends “plain English” answers on a company’s website;
- Following legal requirements – such as state data breach notifications and notifying law enforcement;
- Offering at least a year of free credit monitoring – while not required, free monitoring has become standard and most regulators and consumers expect to see the offer in data breach notifications.
As to data breach notification letters, in addition to following the requirements of state laws, the FTC urges companies to advise people what steps they can take, based on the information exposed. When a breach compromises social security numbers, individuals should be directed to contact the credit bureaus to request fraud alerts or credit freezes. Since some scammers pounce on data breach victims, the FTC counsels organizations to tell consumers how they will be contacted going forward. For instance, if the company will never contact individuals by phone, the company should tell consumers that – so individuals can detect telephonic phishing schemes.
The FTC encourages businesses to use the Guide and its accompanying materials to educate employees and customers, such as through newsletters and websites. However, when facing an enforcement action or a lawsuit, will a company’s compliance with the Guide offer any relief from FTC or state Attorney General penalties or assist organizations in their defense in private data breach lawsuits? Ultimately, the crux of breach liability usually relates to how it occurred, but taking swift, corrective actions following a breach should aid an organization when dealing with regulators and third parties by showing good faith actions to prevent further damages. Conversely, a company that fails to take corrective actions can exacerbate a breach and further negatively impact affected individuals and the organization.
The FTC’s Guide and accompanying materials are helpful references, particularly for smaller businesses. As a practical matter, the advice I give companies facing a possible data breach is this: first, take the time to determine what happened, how it happened, whether the breach continues, and what you can do to prevent it in the future. While several states require reporting within a set number of days (e.g., 45), the laws allow organizations time to conduct factual inquiries, take corrective measures, and prepare to notify affected individuals. Organizations should not rush through these key steps. Second, communication is key. A company facing a breach should develop a clear, consistent statement regarding the breach, the steps being taken and a single contact point. The lack of a communication plan or a consistent message can cause a huge loss of customer and employee confidence and raise regulators’ interest. Third, when preparing data breach notifications, organizations should note that it is likely that the letter will become public due to some states’ open records laws. Numerous websites exist that track and publicize data breaches, based upon information in the notifications – often including copies of the actual letters. Companies should not assume that regulators and consumers simply file the letters away. While your organization cannot prevent the publicity, having a clear, concise data breach notification that meets each state’s requirements without providing excess data will help the company through the process and associated publicity.
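The multi-state timing problem described above can be sketched as a simple deadline tracker: given the discovery date and the affected states, compute each state’s notification due date and let the earliest one drive the response plan. The day counts below are hypothetical placeholders for illustration only; state statutes vary and change, so the actual numbers must come from current counsel review.

```python
from datetime import date, timedelta

# Hypothetical day counts for illustration -- NOT current law.
# Confirm the governing statute for each affected state before relying on any figure.
STATE_DEADLINE_DAYS = {"FL": 30, "OH": 45, "VT": 45, "TN": 45}

def notification_deadlines(discovery, states):
    """Map each affected state to its notification due date."""
    return {s: discovery + timedelta(days=STATE_DEADLINE_DAYS[s]) for s in states}

deadlines = notification_deadlines(date(2017, 3, 1), ["FL", "OH"])
first_due = min(deadlines.values())  # earliest deadline drives the timeline
```

Even this toy version makes the practical point: a company with employees or customers in several states is effectively working against the shortest clock, not its home state’s.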
In March 2015, I wrote about the ongoing dispute between the FTC and LabMD, an Atlanta-based cancer screening laboratory, and looked at whether the FTC has the authority to take enforcement action over data-security practices alleged to be insufficient and therefore “unfair” under section 5(n) of the Federal Trade Commission Act (“FTCA”). On November 13, 2015, an administrative law judge ruled that the FTC had failed to prove its case.
In 2013, the FTC filed an administrative complaint against LabMD, alleging it had failed to secure personal, patient-sensitive information on its computer networks. The FTC alleged that LabMD lacked a comprehensive information-security program, and had therefore failed to (i) implement measures to prevent or detect unauthorized access to the company’s computer networks, (ii) restrict employee access to patient data, and (iii) test for common security risks.
The FTC linked this absence of protocol to two security breaches. First, an insurance aging report containing personal information about thousands of LabMD customers was leaked from the billing manager’s computer onto the peer-to-peer file-sharing platform LimeWire, where it was available for download for at least eleven months. Second, Sacramento police reportedly discovered hard copies of LabMD records in the possession of unauthorized individuals, who were charged with identity theft in an unrelated fraudulent-billing case and pleaded no contest.
Incriminating as it all might seem, Administrative Law Judge D. Michael Chappell dismissed the FTC’s complaint entirely, citing a failure to show that LabMD’s practices had caused substantial consumer injury in either incident.
Section 5(n) of the FTCA requires the FTC to show that LabMD’s acts or practices caused, or were likely to cause, substantial injury to consumers. The ALJ held that “substantial injury” means financial harm or unwarranted risks to health and safety. It does not cover embarrassment, stigma, or emotional suffering. As for “likely to cause,” the ALJ held that the FTC was required to prove “probable” harm, not simply “possible” or speculative harm. The ALJ noted that the statute authorizes the FTC’s regulation of future harm (assuming all statutory criteria are met), but that unfairness liability, in practice, applies only to cases involving actual harm.
In the case of the insurance aging report, the evidence showed that the file had been downloaded just once—by a company named Tiversa, which did so to pitch its own data-security services to LabMD. As for the hard copy records, their discovery could not be traced to LabMD’s data-security measures, said the ALJ. Indeed, the FTC had not shown that the hard copy records were ever on LabMD’s computer network.
The FTC had not proved—either with respect to the insurance aging report or the hard copy documents—that LabMD’s alleged security practices caused or were likely to cause consumer harm.
The FTC has appealed the ALJ’s decision to a panel of FTC Commissioners who will render the agency’s final decision on the matter. The FTC’s attorneys argue that the ALJ took too narrow a view of harm, and that a substantial injury occurs when any act or practice poses a significant risk of concrete harm. According to the FTC’s complaint counsel, LabMD’s data-security measures posed a significant risk of concrete harm to consumers when the billing manager’s files were accessible via LimeWire, and that risk amounts to an actual, substantial consumer injury covered by section 5(n) of the FTCA.
The Commissioners heard oral arguments in early March and will probably issue a decision in the next several months. On March 20th, LabMD filed a related suit in district court seeking declaratory and injunctive relief against the Commission for its “unconstitutional abuse of government power and ultra vires actions.”
Every week, we learn about new data breaches affecting consumers across the country. Federal government workers and retirees recently received the unsettling news that a breach compromised their personal information, including social security numbers, job history, pay, race, and benefits. Amid a host of other public relations issues, the Trump Organization recently discovered a potential data breach at its hotel chain. If you visited the Detroit Zoo recently, you may want to check your credit card statements, as the zoo’s third party vendor detected “malware” which allowed access to customers’ credit and debit card numbers. And, certainly, none of us can forget the enormous data breach at Target, and the associated data breach notifications and subsequent lawsuits.
For years, members of Congress have stressed the need for national data breach standards and data security requirements. Aside from mandates in particular laws, such as HIPAA, movement on data breach requirements had stalled in Congress. Years ago, however, the states picked up the slack, establishing data breach notification laws requiring notifications to consumers and, in many instances, to attorneys general and consumer protection offices when certain defined “personal information” was breached. California led the pack, with its law taking effect in 2003. Today, 47 states have laws requiring organizations to notify consumers when a data breach has compromised consumers’ personal information. Several states’ laws also mandate particular data security practices, including Massachusetts, which took the lead on establishing “standards for protection of personal information.”
Many businesses and their lobbying organizations have urged Congress to preempt state laws and establish a national standard. Most companies have employees or customers in multiple states. Thus, under current laws, organizations have to address a multitude of state requirements, including triggering events, types of personal information covered, how quickly the notification must be made, who gets notified, what information should be included in the notification, among others. State Attorneys General, on the other hand, assert that, irrespective of these inconveniences, their oversight of data breaches through the supervision of notifications and enforcement has played a critical role in consumer protection.
This week, the Attorneys General from the 47 states wrote to Congressional leaders, urging Congress to maintain states’ authority in any federal law, by requiring data breach notifications, and preserving the states’ enforcement authority.
The AGs’ key points are:
- State AG offices have played critical roles in investigating and enforcing data security lapses for more than a decade.
- States have been able to respond to constant changes in data security by passing “significant, innovative laws related to data security, identity theft, and privacy.” This includes addressing new categories of information, such as biometric data and login credentials for online accounts.
- States are on the “front lines” of helping consumers deal with the fallout of data breaches and have the most experience in guiding consumers through the process of removing fraudulent charges and repairing their credit. By way of example, the Illinois AG helped nearly 40,000 Illinois residents remove more than $27 million in unauthorized charges from their accounts.
- Forty states participate in the “Privacy Working Group,” in which state AGs coordinate to investigate data breaches affecting consumers across multiple states.
- Consumers keep asking for more protection. Any preemption of state law “would make consumers less protected than they are right now.”
- States are better equipped to “quickly adjust to the challenges presented by a data-driven economy.”
- Adding enforcement and regulatory authority at the federal level could hamper the effectiveness of state laws. Some breaches will be too small to have priority at the federal level; however, these breaches may have a large impact at the state or regional level.
Interestingly, just this week, Rep. David Cicilline (D-RI) introduced a House bill mandating that companies inform consumers within 30 days of a data breach. The bill also requires minimum security standards. Representative Cicilline’s bill would not preempt stricter state-level data breach security laws. The bill also contains a broad definition of “personal information” to include data that could lead to “dignity harm” – such as personal photos and videos, in addition to the traditional categories of banking information and social security numbers. The proposed legislation would also impose civil penalties upon organizations that failed to meet the standards.
Without a doubt data breaches will continue – whether from bad actors, technical glitches, or common employee negligence. The states have certainly “picked up the slack” for over a decade while Congressional actions stalled. Understandably, the state AGs do not want Congress taking over the play in their large and established “privacy sandbox.” Preemption will continue to be a key issue for any federal data breach legislation before Congress. As someone who has guided companies through multi-state data breach notifications, I have seen firsthand that requiring businesses to deal with dozens of differing state requirements is costly and extremely burdensome. Small businesses, in particular, are faced with having to grapple with a data security incident while trying to understand and comply with a multitude of state requirements. Those businesses do not have the resources of a “Target” and complying with a patchwork of laws significantly and adversely impacts those businesses. While consumer protection is paramount, a federal standard for data breach notification would provide a common and clear-cut standard for all organizations and reduce regulatory burdens. While the federal standard could preempt state notification laws, states could continue to play critical roles as enforcement authorities.
In the interim, companies must ensure that they comply with the information security requirements and data breach notifications of applicable states. An important, and overlooked aspect is to remember that while an organization may think of itself as, say a “Vermont” or “Virginia” company, it is likely that the company has personal information on residents of various states – for instance, employees who telecommute from neighboring states, or employees who left the company and moved to a different state. Even a “local” or “regional” company can face a host of state requirements. As part of an organization’s data security planning, companies should periodically survey the personal information they hold and the affected states. In addition to data breach requirements in the event of a breach, organizations need to address applicable state data security standards.
FTC seems more confident than ever in its authority to go after companies with insufficient data security measures. As of January 2015, FTC had settled 53 data-security enforcement actions, and FTC Senior Attorney Lesley Fair expects that number to increase.
Not everyone is sanguine about FTC’s enforcement efforts. Companies targeted for administrative action complain that the Commission is acting beyond its delegated powers under the Federal Trade Commission Act (the “FTCA”). So far, courts have declined to intervene in any administrative action that is not yet resolved at the agency level.
One such case involves LabMD, Inc., an Atlanta-based cancer-screening laboratory. At least nine years ago, someone downloaded a peer-to-peer file-sharing application called LimeWire onto the billing department manager’s computer. Hundreds of files on the computer were designated for sharing on the network, including an insurance aging report that contained personal information for more than 9,000 LabMD customers. In 2008, a third party notified LabMD that the aging report was available on LimeWire. The application was promptly removed from the billing department manager’s computer, but the damage allegedly had been done. According to FTC, authorities discovered in October 2012 that data from the aging report and other LabMD files were being used to commit identity theft against LabMD’s customers.
Ten months later, FTC filed an administrative complaint against LabMD alleging that it had failed to employ reasonable and appropriate data security measures. FTC further alleged that LabMD could have corrected the problems at relatively low cost with readily available security measures. By contrast, LabMD’s customers had no way of knowing about the failures and could not reasonably avoid the potential harms, such as identity theft, medical identity theft, and disclosure of sensitive, private, medical information. On these facts, FTC alleged that LabMD had committed an unfair trade practice in violation of the FTCA.
LabMD tried to get the administrative action dismissed on several grounds, including that the FTCA does not give the Commission express authority to regulate data-security practices. The Commission denied LabMD’s motion, explaining that Congress gave FTC broad jurisdiction to regulate unfair and deceptive practices that meet a three-factor test: section 5(n) provides that, in enforcement actions or rulemaking proceedings, the Commission has authority to determine that an act or practice is “unfair” if (i) it causes or is likely to cause substantial injury to consumers which is (ii) not reasonably avoidable by consumers themselves and (iii) not outweighed by countervailing benefits to consumers or competition. The Commissioners noted that the FTCA as passed in 1914 granted FTC the authority to regulate unfair methods of competition. When courts took a narrow view of that authority, Congress responded by amending the FTCA to clarify that the Commission has authority to regulate unfair acts or practices that injure the public, regardless of whether they injure one’s competitors. According to the Commission, the statutory delegation is intentionally broad, giving FTC discretionary authority to define unfair practices on a flexible, incremental basis. For these and other reasons, the administrative action against LabMD would proceed.
Having failed to get the case dismissed, LabMD sought relief from the federal courts to no avail. On January 20, 2015, the U.S. Court of Appeals for the Eleventh Circuit dismissed LabMD’s suit for lack of subject-matter jurisdiction. The court explained that it lacked the power to decide LabMD’s claims in the absence of final agency action. FTC had filed a complaint and issued an order denying LabMD’s motion to dismiss. But neither was a reviewable agency action because neither represented a “consummation of the agency’s decision-making process.” Moreover, “no direct and appreciable legal consequences” flowed from the actions and “no rights or obligations had been determined” by them.
LabMD can challenge FTC’s data-security jurisdiction only after the Commission’s proceedings against it are final. That may well be too late. As a result of FTC’s enforcement action, the company was forced to wind down its operations more than a year ago.
LabMD is one of very few companies to test FTC’s data-security jurisdiction. In 2007, a federal court in Wyoming sided with FTC in holding that the defendant’s unauthorized disclosure of customer phone records was an unfair trade practice in violation of the FTCA. The Tenth Circuit affirmed that decision on appeal.
More recently, a district court in New Jersey gave FTC a preliminary victory against Wyndham Worldwide Corporation. In that case, the court held that FTC’s unfairness jurisdiction extends to data-security practices that meet the three-factor test under Section 5(n). That decision is currently on appeal before the Third Circuit. During oral argument on March 3rd, the three-judge panel signaled little doubt that FTC has authority to regulate unreasonable cybersecurity practices. Instead, the panel was concerned with how the Commission exercises that authority—specifically, whether and how it has given notice as to what data security measures are considered to be “unfair.”
The law of unintended consequences – a distant cousin of Murphy’s Law – states that the actions of human beings will always have effects that are unanticipated and unintended. The law could prove a perfect fit for recent efforts by class action counsel to rely upon the Federal Wiretap Act in lawsuits arising from adware installed on personal home computers.
Take, for example, the recently filed case of Bennett v. Lenovo (United States), Inc. In that case, the plaintiff seeks to represent a class of purchasers of Lenovo laptop computers complaining that “Superfish” software that was preloaded on the laptops directed them to preferred advertisements based on their internet browsing behavior. The most interesting claim included in the complaint is the assertion that Lenovo and Superfish violated the Federal Wiretap Act.
Wiretap? What wiretap?
The Federal Wiretap Act was originally passed as Title III of the Omnibus Crime Control and Safe Streets Act of 1968. These provisions were included, at least in part, as a result of concerns about investigative techniques used by the FBI and other law enforcement agencies that threatened the privacy rights of individuals. In passing the Wiretap Act, Congress was clearly focused on the need to protect communications between individuals by telephone, telegraph and the like. The Electronic Communications Privacy Act of 1986 (ECPA) broadened the application of the statute by expanding the kinds of communications to which the statute applied. But the focus was still on communications between individuals.
As is often the case, technology is testing the boundaries of this nearly 50-year-old law. The Bennett case is not the first case in which a plaintiff has argued that software on his or her computer that reads the user’s behavior violates the Wiretap Act. In some cases, the software in question has been so-called “keylogging” software that captures every one of a user’s keystrokes. Cases considering such claims (or similar claims under state statutes modeled after the federal Act) have been split – some based on the specifics of when and how the software actually captured the information, and others possibly based on differences in the law in different parts of the country.
One of the more interesting cases, Klumb v. Gloan, 2-09-CV 115 (E.D. Tenn. 2012), involved a husband who sued his estranged wife when he discovered that she had placed spyware on his computer. At trial, the husband demonstrated that during his marriage, his wife installed eBlaster, a program capable of not only recording keystrokes, but also intercepting emails and monitoring websites visited. The husband alleged that once intercepted, the wife altered the emails and other legal documents to make it appear as if the husband was having an affair. The motive? Money, of course. Adultery was a basis to void the pre-nuptial agreement that the parties had executed prior to their ill-fated marriage. The wife – who was a law school graduate – argued that the installation was consensual. Although consent is a recognized defense to a claim of violating the Federal Wiretap Act, for a variety of reasons, the court discredited the wife’s testimony regarding the purported consent and awarded damages and attorney’s fees to the husband plaintiff.
The Bennett plaintiffs may or may not succeed in showing the facts and arguing the law sufficiently to prevail, and we know too little about the case to predict the result. But we can state with confidence that the continued expansion of how the Wiretap Act is applied will, at some point, require Congress to step in and update the statute to make clear how it applies in the new internet-based world in which we now live.
It’s International Data Privacy Day! Every year on January 28, the United States, Canada and 27 countries of the European Union celebrate Data Privacy Day. This day is designed to raise awareness of and generate discussion about data privacy rights and practices. Indeed, each day new reports surface about serious data breaches, data practice concerns, and calls for legislation. How can businesses manage data privacy expectations and risk amid this swirl of activity?
Here, we share some tips from our firm’s practice and some recent FTC guidance. We don’t have a cake to celebrate International Data Privacy Day but we do have our “Top 10 Data Privacy Tips”:
3. Ensure Your U.S.-E.U. Safe Harbor Is Up-to-Date. Last year, the FTC took action against several companies, including the Atlanta Falcons and Level 3 Communications, for stating in their privacy policies that they were U.S.-E.U. Safe Harbor Certified by the U.S. Department of Commerce when, in fact, the companies had failed to keep their certification current by reaffirming their compliance annually. While your organization is not required to participate in Safe Harbor, don’t say you are Safe Harbor Certified if you haven’t filed with the U.S. Department of Commerce. And, remember that your company needs to reaffirm compliance annually, including payment of a fee. You can check your company’s status here.
4. Understand Your Internal Risks. We've said this before – while malicious outside breaches are certainly out there, a significant percentage of breaches (around 30 percent, according to one recent study) occurs due to accidents or malicious acts by employees. Common failings include missing firewalls, lack of encryption on devices (such as laptops and flash drives), and neglecting to change credentials when employees leave or are terminated. While you are at it, review who has access to confidential information and whether proper restrictions are in place.
5. Educate Your Workforce. While today is International Data Privacy Day, your organization should educate your workforce on privacy issues throughout the year. Depending on the size of the company and the type of information handled (for instance, highly sensitive health information versus standard personal contact details), education efforts may vary. You should review practices such as keeping passwords confidential, creating secure passwords and changing them frequently, and avoiding downloading personal or sensitive company information in unsecured formats. Just last week, a security firm reported that the most popular passwords for 2014 were "123456" and "password." At a minimum, these easily guessed passwords should not be allowed on your systems.
6. Understand Specific Requirements of Your Industry/Customers/Jurisdiction. Do you have information on Massachusetts residents? Massachusetts requires that your company have a Written Information Security Program. Does your company collect personal information from kids under 13? The organization must comply with the federal Children's Online Privacy Protection Act and the FTC's rules. The FTC has taken many actions against companies deemed to be collecting children's information without properly seeking prior express parental consent.
7. Maintain a Data Breach Response Plan. If there were a potential data breach, who would get called? Legal? IT? Human Resources? Public relations? Yes, likely all of these. The best defense is a good offense – plan ahead. Representatives from in-house and outside counsel, IT/IS, human resources, and your communications department should be part of this plan. State data breach notification laws require prompt reporting, and some companies have faced lawsuits over allegedly "slow" response times. If there is a potential breach, your company needs to gather resources, investigate, and if required, disclose the breach to governmental authorities, affected individuals, credit reporting agencies, etc.
8. Consider Contractual Obligations. Before your company commits to data security obligations in contracts, ensure that a knowledgeable party, such as in-house or outside counsel, reviews these commitments. If there is a breach of a contracting party’s information, assess the contractual requirements in addition to those under data breach notification laws. The laws generally require notice to be given promptly when a company’s data is compromised while under the “care” of another company. On the flip side, consider the service providers your company uses and what type of access the providers have to sensitive data. You should require service providers to adhere to reasonable security standards, with more stringent requirements if they handle sensitive data.
9. Review Insurance Coverage. While smaller businesses may think "we're not Target" and don't need cyber insurance, that's a false assumption. In fact, smaller businesses usually have less sophisticated protections and can be more vulnerable to hackers and employee negligence. Data breaches – requiring investigations, hiring of outside experts such as forensic investigators, paying for credit monitoring, and potential loss of goodwill – can be expensive. Carriers are offering policies that do not break the bank. Cyber insurance is definitely worth exploring. If you believe you have coverage for a data incident, your company should promptly notify the carrier. Notice should be part of the data breach response plan.
10. Remember the Basics! Many organizations have faced the wrath of the FTC, state attorneys general or private litigants because the companies or their employees failed to follow basic data security procedures. The FTC has settled 53 data security law enforcement actions. Many involve the failure to take common sense steps with data, such as transmitting sensitive data without encryption, or leaving documents with personal information in a dumpster. Every company must have plans to secure physical and electronic information. The FTC looks at whether a company's practices are "reasonable and appropriate in light of the sensitivity and amount of consumer information you have, the size and complexity of your business, and the availability and cost of tools to improve security and reduce vulnerabilities." If the FTC calls, you want to have a solid explanation of what you did right, not be searching for answers, or offering excuses. Additional information on the FTC's guidance can be found here.
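On the password point in tip 5, here is a minimal sketch of a server-side denylist check. The function name, the (tiny) denylist contents, and the length threshold are all illustrative assumptions, not a complete password policy:

```python
# A hypothetical denylist of commonly used passwords. Real deployments
# should screen against a much larger corpus of breached passwords.
COMMON_PASSWORDS = {"123456", "password", "12345678", "qwerty", "abc123"}

def is_password_acceptable(candidate: str, min_length: int = 10) -> bool:
    """Reject passwords that are too short or on the common-password list."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    return True
```

A check like this should be one layer among several: pair it with rate limiting on login attempts and, where feasible, multi-factor authentication.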
* * *
Remember, while it may be International Data Privacy Day, data privacy isn’t a one day event. Privacy practices must be reviewed and updated regularly to protect data as well as enable your company to act swiftly and responsively in the event of a data breach incident.
In August, the Federal Trade Commission ("FTC") released a staff report concerning mobile shopping applications ("apps"). FTC staff reviewed some of the most popular apps consumers use to comparison shop, collect and redeem deals and discounts, and pay in-store with their mobile devices. The August report is available here.
Popularity of Mobile Shopping Apps/FTC Interest
Shoppers can empower themselves in the retail environment by comparison shopping via their smartphones in real-time. According to a 2014 Report by the Board of Governors of the Federal Reserve System, 44% of smartphone owners report using their mobile phones to comparison shop while in a retail store, and 68% of those consumers changed where they made a purchase as a result. Consumers can also get instant coupons and deals to present at checkout. With a wave of a phone at the checkout counter, consumers can then make purchases.
While the shopping apps have surged in popularity, the FTC staff is concerned about consumer protection, data security and privacy issues associated with the apps. The FTC studied what types of disclosures and practices control in the event of unauthorized transactions, billing errors, or other payment-related disputes. The agency also examined the disclosures that apps provide to consumers concerning data privacy and security.
Apps Lack Important Information
FTC staff concluded that many of the apps they reviewed failed to provide consumers with important pre-download information. In particular, only a few of the in-store purchase apps gave consumers information describing how the app handled payment-related disputes and consumers’ liability for charges (including unauthorized charges).
FTC staff determined that fourteen out of thirty in-store purchase apps did not disclose whether they had any dispute resolution or liability limits policies prior to download. And, out of sixteen apps that provided pre-download information about dispute resolution procedures or liability limits, only nine of those apps provided written protections for users. Some apps disclaimed all liability for losses.
Data Security Information Vague
FTC staff focused particular attention on data privacy and security, because more than other technologies, mobile devices are personal to a user, always on, and frequently with the user. These features enable an app to collect a huge amount of information, such as location, interests, and affiliations, which could be shared broadly with third parties. Staff noted that, “while almost all of the apps stated that they share personal data, 29 percent of price comparison apps, 17 percent of deal apps, and 33 percent of in-store purchase apps reserved the right to share users’ personal data without restriction.”
Staff concluded that while privacy disclosures are improving, they tend to be overly broad and confusing. In addition, app developers may not be considering whether they even have a business need for all the information they are collecting. As to data security, staff noted it did not test the services to verify the security promises made. However, FTC staff reminded companies that it has taken enforcement actions against mobile apps it believed to have failed to secure personal data (such as Snapchat and Credit Karma). The report states, “Staff encourages vendors of shopping apps, and indeed vendors of all apps that collect consumer data, to secure the data they collect. Further those apps must honor any representations about security that they make to consumers.”
FTC Staff Recommends Better Disclosures and Data Security Practices
The report urges companies to disclose to consumers their rights and liability limits for unauthorized, fraudulent, or erroneous transactions. Organizations offering these shopping apps should also explain to consumers what protections they have based on their methods of payment and what options are available for resolving payment and billing disputes. Companies should provide clear, detailed explanations for how they collect, use and share consumer data. And, apps must put promises into practice by abiding by data security representations.
Consumer Responsibility Plays Role, Too
Importantly, the FTC staff report does not place the entire burden on companies offering the mobile apps. Rather, FTC staff urge consumers to be proactive when using these apps. The staff report recommends that consumers look for and consider the dispute resolution and liability limits of the apps they download. Consumers should also analyze what payment method to use when purchasing via these apps. If consumers cannot find sufficient information, they should consider an alternative app, or make only small purchases.
While a great "deal" could be available with a click on a smartphone, the FTC staff urges consumers to review available information on how their personal and financial data may be collected, used and shared while they get that deal. If consumers are not satisfied with the information provided regarding data privacy and security, then staff recommends that they choose a different app, or limit the personal and financial data they provide. (Though that last piece of advice may not be practical, considering most shopping apps require a certain level of personal and financial information simply to complete a transaction.)
Deal or No Deal? FTC Will be Watching New Shopping Apps
FTC Staff has concerns about mobile payments and will continue to focus on consumer protections. The agency has taken several enforcement actions against companies for failing to secure personal and payment information and it does not appear to be slowing down. While the FTC recognizes the benefits of these new shopping and payment technologies, it is also keenly aware of the enormous amount of data obtained by companies when consumers use these services. Thus, companies should anticipate that the FTC will continue to monitor shopping and deal apps with particular attention on disclosures and data practices.
Last week the Federal Trade Commission ("FTC") charged the operators of Jerk.com with harvesting personal information from Facebook to create profiles for an estimated 73 million people, on which consumers could be labeled a "Jerk" or "not a Jerk."
In the complaint, the FTC charged the defendants, Jerk, LLC and the operator of the website, John Fanning, with violating the FTC Act by allegedly misleading consumers into believing that the content on Jerk.com had been created by registered users of the site, when most of it had been harvested from Facebook. The FTC alleged that the operators of Jerk.com falsely claimed that consumers could revise their online profiles by paying a $30 membership fee. Additionally, the FTC asserted that the defendants misled consumers into believing that a paid membership would give them access to features allowing them to change their profiles on the site.
Facebook profile pictures and profile names generally are public, and Facebook's rules allow developers to upload the names and pictures in bulk. However, Jerk.com allegedly violated Facebook's policies in the way it mined data from people's profiles. At the time, Facebook's rules only allowed an app developer to keep a person's profile picture for 24 hours. The complaint stated that Fanning registered several websites with Facebook and used Facebook's application programming interface to download the data needed to create the fake profiles on Jerk.com. The FTC is also seeking an order barring the defendants from using the personal information that was obtained and requiring them to delete the information.
This action is another indication that the FTC is closely monitoring companies that the FTC believes are scraping data on consumers from other sites and deceiving customers in their business practices. The complaint notes how Jerk.com profiles often appear high in search engine results when a person’s name is searched. “In today’s interconnected world, people are especially concerned about their reputation online, and this deceptive scheme was a brazen attempt to exploit those concerns,” said Jessica Rich, Director of the FTC’s bureau of Consumer Protection in a statement.
Companies should monitor their practices for obtaining data from other websites to ensure that they are in compliance with the terms and conditions of websites where they obtain data. Organizations should be cautious about how they use this data, including being careful about making any representations and disclosures that could be viewed as deceptive by the FTC or a state attorney general.
By Michelle Cohen, CIPP-US
Just as we were recovering from the high-profile data breaches at Target and Neiman Marcus, signing up for free credit monitoring and analyzing our credit reports, a new Internet villain emerged: the "Heartbleed Bug." The Heartbleed Bug is a security flaw in OpenSSL, popular open source software that runs on most webservers and is widely used to encrypt web communications. The Heartbleed Bug affects approximately 500,000 websites, reportedly including Yahoo, OkCupid, and Tumblr. And, beyond websites, the Bug may impact networked devices such as video conferencing systems, smartphones, and work phones.
The danger of the Heartbleed Bug lies in its ability to reveal the content of a server’s memory. Then, the Bug can grab sensitive data stored in the memory, including passwords, user names, and credit card numbers. Adding insult to injury, the Bug has existed for at least two years, giving hackers a huge head start. News reports and some websites have urged users to change their passwords. Others have warned individuals not to change their passwords until a website has indicated it has installed the security patch that “cures” the Bug. Several sites offer tools to “test” whether an indicated website is vulnerable to the Heartbleed Bug, including one by McAfee. In terms of priorities, users should focus on sites where they bank, conduct e-commerce, e-mail and use file storage accounts.
Further intrigue comes from the fact that a recent Bloomberg report alleged that the National Security Agency (“NSA”) knew about the Bug for at least two years, but may have utilized the vulnerabilities to access information. The NSA has denied it had knowledge of the Bug.
While we have yet to see a "rush to the courthouse" following the announcement of the Heartbleed Bug, we anticipate lawsuits and enforcement could follow where organizations fail to respond by installing the necessary security patch. Companies (including our clients in the Internet marketing and I-gaming industries) should investigate whether their websites, apps, or other services (such as cloud services) use OpenSSL – then move quickly to install the security patch. Organizations should also advise users of the status of the Heartbleed Bug fix and encourage users to change their passwords, using different passwords across different services.
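For readers inventorying their systems, a rough sketch of the version triage described above follows. The function name is our own, and a caveat: some vendors backported the fix without changing the reported version letter, so a version string alone is only a first-pass heuristic, not proof of safety.

```python
import re

# Heartbleed (CVE-2014-0160) affects OpenSSL 1.0.1 through 1.0.1f;
# it was fixed in 1.0.1g. Releases outside the 1.0.1 line are unaffected.
def is_heartbleed_vulnerable(version_string: str) -> bool:
    match = re.match(r"OpenSSL 1\.0\.1([a-z]?)", version_string)
    if not match:
        return False  # not an OpenSSL 1.0.1 release
    patch_letter = match.group(1)  # "" for plain 1.0.1
    return patch_letter < "g"      # 1.0.1 through 1.0.1f are vulnerable
```

As a quick local check, Python's standard library exposes the version of the OpenSSL library it links against, which can be fed to the function above:

```python
import ssl
print(ssl.OPENSSL_VERSION, "->", is_heartbleed_vulnerable(ssl.OPENSSL_VERSION))
```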