Why Privacy by Design and Default are Critical for Emerging Technologies
I am a long-time student of the business book Crossing the Chasm, having read all three editions and helped a number of companies through their own journey across the Chasm. As I've learned more about the growing volume of data breaches and the security challenges of emerging technologies like IoT, biometrics, medical devices, and video collaboration, the concept of Privacy by Design has come up in conversation more and more. The question that comes to my mind is: in that journey from early adopters to late adopters, when should privacy and security be built in? It turns out that's not a simple decision. But the ramifications of mistiming it can be damaging, costly, or both. The continuing saga of Zoom Video Communications' (Zoom) security and privacy challenges brings this home.
What is Privacy by Design (PbD)?
Privacy by Design is an approach to systems engineering that seeks to ensure privacy protection for the individual by designing privacy in from the very beginning of the development of a product or service. Under the EU's data protection law, the GDPR, which came into effect in May 2018, companies are encouraged to implement technical and organizational measures (TOMs) at the earliest stages of designing their processing operations, in a way that safeguards privacy and data protection principles right from the start ("data protection by design"). By default, companies should ensure that personal data is processed with the highest privacy protection (for example, processing only the data necessary, keeping storage periods short, and limiting accessibility) so that personal data is not made accessible to an indefinite number of persons ("data protection by default"). Privacy by Design has been around much longer than this, however.
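To make the "by default" idea concrete, here is a minimal sketch in Python of what default-protective settings and data minimization can look like in code. The field names are hypothetical and purely illustrative, not drawn from any specific product:

```python
from dataclasses import dataclass
from datetime import timedelta

# Data protection by default: every privacy-relevant setting starts at its
# most protective value, and the caller must opt in to anything broader.
# Field names are hypothetical, for illustration only.
@dataclass
class UserDataPolicy:
    profile_visibility: str = "private"               # not public by default
    analytics_opt_in: bool = False                    # no tracking unless the user consents
    retention_period: timedelta = timedelta(days=30)  # short storage by default
    third_party_sharing: bool = False                 # never shared unless explicitly enabled

def minimize(record: dict, purpose_fields: set) -> dict:
    """Data minimization: keep only the fields needed for a stated purpose."""
    return {k: v for k, v in record.items() if k in purpose_fields}

# Example: billing only needs name and email, so everything else is dropped.
user = {"name": "Ada", "email": "ada@example.com", "location": "Boston", "device_id": "abc123"}
billing_view = minimize(user, {"name", "email"})  # {'name': 'Ada', 'email': 'ada@example.com'}
```

The point of the sketch is that the protective posture is the default state of the system, rather than something a user, or a later release, has to switch on.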
The first thoughts on "Privacy by Design" were expressed in the 1970s and were incorporated in the 1990s into the EU data protection directive, Directive 95/46/EC. According to Recital 46 of that directive, technical and organizational measures (TOMs) must already be taken at the time of planning a processing system in order to protect data. The Privacy by Design framework itself was published in 2009 and adopted by the International Assembly of Privacy Commissioners and Data Protection Authorities in 2010.
Do US Companies Prioritize Privacy and Security?
The concept seems simple on the surface, yet historically, privacy and security are seldom prioritized in product development among tech innovators and entrepreneurs, particularly in the mad rush to MVP and market launch. I spoke on this topic recently with a panel of privacy and security experts at the MIT Enterprise Forum. As entrepreneurs, we tend to think in terms of the Minimum Viable Product, or MVP, when we first launch a product or app, and building in privacy or security at that point is typically seen as taking too much time and investment, risking "holding us back" in a competitive space. Yet that leaves a window of vulnerability. If product developers and tech evangelists wait too long, they risk a data breach, or they run into trouble trying to add the layers of privacy and security that later-stage adopters demand.
The high-profile example of Zoom provides a useful case study for analysis. I wanted to walk back through Zoom's history and see if I could pinpoint where they might have made certain decisions. Not having interviewed them, I can't know exactly what their internal conversations were, but there is plenty of digital evidence for some high-level analysis.
A Case Study: What did Zoom miss about Privacy by Design?
While Zoom had said publicly that "privacy and security are top priorities," the evidence belies that idea. Further, they had claimed their issues were only the result of 60 days of rapid adoption catalyzed by the pandemic of 2020. Yet by 2020 they were a fully funded, public company with a mature released product, and many of the issues (the lack of a clear privacy policy, for one) should have been addressed well before they filed for their IPO. They launched their telehealth product back in 2017, which would have required both privacy and security measures. The GDPR would have applied to them when it came into force in May 2018. The California Consumer Privacy Act (CCPA), effective January 2020, would have applied as well, since they clearly had more than 50,000 customers. And a flaw that let attackers hijack conference meetings was revealed in Threatpost as far back as 2018.
When I developed a timeline, I saw they hired their first CISO in 2018, around the time they hired their first CIO. Since they had an IPO planned in less than 12 months, it seems they made these hires a little later than is typical for that stage of market maturity.
Could the newly hired CISO have found all the flaws and fixed them in the short window before the IPO? Probably not. Conducting a full assessment, building a plan, and implementing it is typically a 6–18 month endeavor. Here is the timeline I was able to assemble from publicly available information:
- Apr 2011: Zoom founded
- Sep 2012: Launched beta version
- Jan 2013: Raised $6M Series A
- Sep 2013: Raised $6.5M Series B
- Feb 2015: Raised $30M Series C
- Oct 2015: Released version 2.5
- Jan 2017: Raised $100M Series D at a "unicorn" valuation
- Apr 2017: Launched telehealth solution for doctors
- Mar 2018: Hired first CIO, Harry Moseley
- May 2018: GDPR goes into effect
- Mid-2018: Hired first CISO (no media announcement)
- Nov 2018: Threatpost headline: "Flaw Lets Hackers Hijack Meetings"
- Apr 2019: IPO valued at $16B on first day of trading
- Oct 2019: CIO states "privacy and security top priorities"
- Jan 2020: CCPA goes into effect
- Mar 2020: Onboarded as many new users as in all of 2019
- Mar 31, 2020: FBI issues warning about Zoom video security
- Apr 8, 2020: DOE bans schools from using Zoom
- Apr 15, 2020: Multiple shareholder lawsuits filed over privacy issues
- Apr 16, 2020: Hired external security advisory team; instituted feature freeze
Retrofitting Privacy by Design: Costly Damage Control
For Zoom, this became a costly hit to brand reputation and customer trust, and it forced rapid remediation. The good news is that the company did a good job of jumping on the issue and putting immediate plans into action. They quickly hired a "SWAT team" of outsourced security experts (some of the best in the world) and put in place a Security Advisory Council. CEO Eric Yuan said the company would undergo a "comprehensive review" with external experts and users to "understand and ensure the security of all of our new consumer use cases." A feature freeze was put into effect, and Zoom shifted its engineering resources to "focus on our biggest trust, safety, and privacy issues."
This was a necessary but very costly effort. Not only was the team of outsourced security experts expensive, but the reputational damage will have long-term, costly implications: Zoom now has to regain the trust not only of prospects and current customers, but also of the FBI, the DOE, other US agencies, and entire countries that have banned the product. They also have to contend with the costly, drawn-out legal battles of multiple shareholder lawsuits.
The point I want to make here is that this is not just about Zoom; they just happen to be a current and very public case study. In earlier posts I've talked about Ring, the doorbell maker, which experienced privacy and security challenges after it was acquired and gained wider consumer adoption late last year. And there have been more recent headlines from a wide variety of industries.
How can we do better by implementing PbD up front?
A number of experts have weighed in on the answer. Bottom line: don't wait until the public catches up with you. Build your roadmap to incorporate privacy and security measures early, before you hit mass adoption and well before any IPO aspirations. Waiting longer will be much more costly in the long run. A hypothetical sketch of what this can look like in practice follows below.
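As a thought experiment, here is what privacy by default might look like for a video collaboration product like the ones discussed above. This is a hypothetical Python sketch; the setting names are illustrative and do not correspond to any vendor's actual API:

```python
from dataclasses import dataclass, replace

# A hypothetical sketch of privacy by default for a video collaboration
# product: every meeting starts with the most protective settings, and
# loosening any of them is an explicit, auditable decision.
# All names here are illustrative, not any vendor's actual API.
@dataclass(frozen=True)
class MeetingDefaults:
    require_password: bool = True       # uninvited guests can't join by guessing meeting IDs
    waiting_room_enabled: bool = True   # host admits each participant individually
    encrypt_media: bool = True          # media is encrypted in transit
    allow_recording: bool = False       # recording is opt-in, not opt-out
    attention_tracking: bool = False    # no silent participant monitoring

def loosen(defaults: MeetingDefaults, reason: str, **overrides) -> MeetingDefaults:
    """Departures from the protective defaults carry a documented, logged reason."""
    print(f"AUDIT: settings loosened ({reason}): {overrides}")
    return replace(defaults, **overrides)

# Example: a public webinar may legitimately drop the password requirement,
# but the change is deliberate and leaves an audit trail.
webinar = loosen(MeetingDefaults(), reason="public webinar", require_password=False)
```

The design choice worth noting is the direction of the defaults: the product ships locked down, and every relaxation is a conscious, recorded exception rather than a silent setting a user must discover and flip.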
Emad Georgy, CTO consultant and advisor, advises this: “While the cycle of audit-remediate-move-on still exists and has opportunities for improvement, the age of remediation is declining. Companies that want to succeed now must build privacy further upstream into the design. In addition, they must also explore, when security and privacy issues arise, how did we get here? What is the root cause and why am I only remediating now?”
Sri Srinivasan, SVP/GM of the Team Collaboration Group at Cisco, a Zoom partner, wrote in UCToday:
“Interoperability and openness should never be a trade-off with security, and our users shouldn’t believe they need to sacrifice one over the other. Interoperability and security can and should work in unison, and this requires today’s software companies to work with some basic norms on how we collectively secure our mutual customers.”
Zoom hired Alex Stamos, adjunct professor at Stanford University's Center for International Security and Cooperation and former chief security officer at Facebook, as an outside consultant to build up its security, privacy, and safety capabilities. Stamos wrote this on his blog:
"Zoom has some important work to do in core application security, cryptographic design and infrastructure security… I encourage the entire industry to use this moment to reflect on their own security practices and have honest conversations about things we could all be doing better." (Medium, April 8, 2020)
About the author: Kathleen Glass is the Global VP of Marketing for 2B Advice, data protection and privacy compliance experts.