Recent reporting by The New York Times and The Guardian revealed that personal data, collected from more than 50 million Facebook users under the auspices of academic research, was improperly obtained by the political consulting firm Cambridge Analytica, which helped write the playbook that propelled Donald Trump into office. While this mishandling of user data differs from the massive breaches that Facebook and other major companies have suffered in recent years, in that many users were aware they were sharing their data with a third party, it raises a number of questions about the lengths to which companies should go to protect their customers' personal data and who should be held responsible when these breaches occur.
Drexel University's Chief Information Security Officer, Pablo Molina, DLS, who serves on the board of the Electronic Privacy Information Center and is a thought leader in technology ethics, provides some context on Facebook's latest crisis, its ethical implications and how it could lead to a call for greater protection of personal data online:
Does Cambridge Analytica’s acquisition of user data from 50 million Facebook users constitute a data breach?
Yes, it does. Legally speaking, however, it depends on the state laws and federal regulations under which the case is examined. Conceptually, the case is clear: unauthorized access to information without the users' consent.
What is the difference, in terms of ethical implications for Facebook, between a situation like this and a hack that leads to user data being leaked?
In both cases, the culprits gain unauthorized access to information without the users' consent. In the case of unknown hackers, Facebook would try to prevent the leak. In the case of Cambridge Analytica, Facebook made the leak possible, both by action and by omission.
How does Facebook prevent exploitations of its data? At the time it allowed researchers to access user data, but how could it have prevented misuse like this?
Facebook could have, and should have, prevented the abuse of its data. In his first interview about the Cambridge Analytica incident, Facebook CEO Mark Zuckerberg explained that his company has "a basic responsibility to protect people's data." The company failed. In the same interview, Zuckerberg outlined the simple steps that it will now take to prevent new abuses.
Once Facebook uncovered the data breach (or in this case, the researchers’ breach of contract in how they handled the data) what are its responsibilities to users?
Facebook failed to behave with transparency and to inform its users and the regulators in a timely manner.
How far does Facebook now have to go to ensure the data is destroyed and has not been transferred or shared beyond Cambridge Analytica's malfeasance? How far should it have gone to ensure that the data was being used the way the researchers claimed it would be?
The lack of transparency by Facebook and Cambridge Analytica makes answering this question impossible at this time. We will have to wait until the Federal Trade Commission, and possibly some state attorneys general, launch their investigations.
Facebook is ultimately responsible for what its business partners, and those who pay for access to Facebook’s data, do with that information. Facebook can exercise control over the information through a combination of technical means, business processes and legal instruments.
Did Facebook have an obligation to let users know as soon as they found out about this?
Yes, it did. Depending on the federal regulations or state laws under which the case is examined, Facebook will pay fines or settlements for this delay.
Now that the breach is public, what is Facebook’s responsibility to users? Other than preventing researchers from accessing data (which Facebook has already done) what preventative measures or policies should be put in place?
First, Facebook should apologize honestly. Second, it should offer compensation to users. Third, it should rebalance its priorities.
Facebook can exercise control over its users’ data through a combination of technical means, business processes and legal instruments. Perhaps Facebook needs to recruit and train its executives and technologists on ethics, privacy and compliance.
Finally, Facebook must get its act together when it comes to incident response. When something bad happens to the data, Facebook must act promptly.
Now that this breach is public, what recourse do users have? How does this vary by state or international laws?
United States privacy laws do not grant the right of private action to consumers. Therefore, the only redress available is for some state attorneys general and the Federal Trade Commission to open investigations in this case.
Yet, in privacy breach cases it is difficult to prove economic harm to consumers. We know that what Facebook and Cambridge Analytica did is wrong; however, we cannot prove how it affected our rights, freedoms or financial situation. Some enterprising lawyers may pursue class-action lawsuits, with a small probability of winning them but with the prospect of reaching hefty settlements.
By comparison, the new European Union General Data Protection Regulation goes into effect on May 25, 2018. So a similar case, involving data from European Union citizens or residents, could translate into multimillion-dollar fines.
Facebook CISO Alex Stamos has come under fire for this and other recent problems at Facebook tied to Russian interference in the election and "fake news." While these situations are unprecedented, what is a chief information security officer's role in situations like this, and how is the position changing in light of these new and growing problems?
CISOs bring much needed expertise to the organizations where they work. However, they often lack the explicit power to question business practices. The contributions of the CISO come from information sharing and influence. With this limited arsenal, and in the face of unprecedented situations and competing needs to maximize profit and minimize expenses, I believe that the job of a CISO today is quite challenging.
For media inquiries, contact Britt Faulstick, firstname.lastname@example.org or 215.895.2617