Amir Efrati is Co-Founder and Executive Editor of The Information, a subscription-based publication for tech executives. Previously he was a reporter with the Wall Street Journal for nine years. Amir’s reporting on the dangers of self-driving cars has earned him numerous journalism awards, and he is a frequent contributor on CNBC, PBS, BBC, Fox and NPR.
As the Co-founder and CEO of Alation, Satyen lives his passion of empowering a curious and rational world by fundamentally improving the way data consumers, creators, and stewards find, understand, and trust data. Industry insiders call him a visionary entrepreneur. Those who meet him call him warm and down-to-earth. His kids call him “Dad.”
Satyen Sangani (00:02): Innovation rarely happens out in the open. When you’re developing the next big thing, you have to protect your trade secrets. But often, innovations have both positive and negative consequences. Let’s take aviation as an example. When a plane crashes, there is almost always a devastating loss of life. That’s why the aviation industry has strict shared safety protocols. It’s why we have black boxes that give investigators insights into what happens in the moments leading up to an accident. With these protocols in place, everyone learns from tragedy, not just the party responsible.
(00:36): This tension between corporate responsibility and the public good is everywhere. When does a secret become something the public has a right to know? Is it Facebook’s responsibility to warn you that your teenage daughter’s Instagram account puts her at risk for depression and anxiety? Should you know when there’s a self-driving vehicle in the lane in front of you? Where does our responsibility as corporations collide with our rights as citizens? And what responsibilities do companies have when there’s no clear statute or law that identifies the losses related to your product?
(01:07): Today, we’re going to look at these issues and more with journalist Amir Efrati. Amir is the Executive Editor and co-founder of The Information, a subscription-based technology publication for tech executives. His reporting on Uber’s corporate missteps and on the dangers and limits of self-driving vehicle development won consecutive Best in Business awards from the Society for Advancing Business Editing and Writing.
(01:31): Previously, Amir spent nine years at the Wall Street Journal reporting on technology, health, and criminal justice. He was the first to report on Bernie Madoff’s arrest and broke major news about the epic fraud. And federal judges cited his reporting on egregious criminal sentencing. His TV and radio appearances include CNBC, PBS, BBC, Fox, and NPR. Amir is one of the most knowledgeable journalists in tech today, and I can’t think of anyone better to have this conversation with. So let’s dig in.
Producer Read (02:11): Welcome to Data Radicals, a show about the people who use data to see things that nobody else can. This episode features an interview with Amir Efrati, Executive Editor and co-founder of The Information. In this episode, he and Satyen discuss the Facebook Papers, controversies surrounding self-driving cars, and the relationship between data-driven decisions and transparency. Data Radicals is brought to you by the generous support of Alation.
(02:37): Alation empowers people in large organizations to make data-driven decisions. It’s like Google for enterprise data — but smarter. Hard to believe, I know, but Alation makes it easy to find, understand, use, and trust the right data for the job. Learn more about Alation at alation.com. That’s alation.com.
Satyen Sangani (03:00): So Amir, a lot of folks listening to this may not necessarily be in the tech industry and therefore may not be as acquainted with The Information as I am. Can you tell us a little bit about the publication and how you decided to start it?
Amir Efrati (03:18): Yeah, we’re a technology and media news publication that publishes articles that you can’t read elsewhere, which means focusing on essentially proprietary information, proprietary stories or analysis that help people make better decisions, be more informed about a variety of industries. We think about it as “the Economist for tech,” something like that. But with a lot of original reporting, really a focus on original reporting above all else.
(03:55): So we started in 2013, so I guess this is eight years now that we’ve been doing it and have grown considerably. The whole company is probably getting close to 50 people, a majority of whom are on the editorial side. We’ve got reporters primarily based in the Bay Area and New York, with some others scattered in places like Seattle and London, as well as a Hong Kong bureau that’s very, very important to us, where we have people covering the Chinese technology scene, as well as how American companies, including Apple and Amazon, do business in China.
Satyen Sangani (04:38): Steve Jobs talked about that reality distortion field that every technology founder or visionary would have to articulate in order to be able to get people to follow them. And of course, in the journalism profession, you have to, on some level, believe in that, but also have a skeptical eye in understanding those stories. How do you think about that balance? Because that’s got to be an interesting balance to play where you’re both trying to deconstruct things, but not tear them down.
Amir Efrati (05:12): I come from a very old school reporting background. I was at the Wall Street Journal in the pre-Rupert Murdoch era. It’s still a great publication, but I really, really enjoyed my time there and the kind of upbringing that I had. And it was very traditional in the sense of if you’re a reporter or journalist, you’re not an activist. You’re not actively rooting for one side or another. You’re just trying to understand what’s going on. In the course of understanding what’s going on, you will inevitably reveal hypocrisies that are just human nature and inevitably reveal fraud, which is also part of human nature and you see in every industry.
(05:51): I mean, as long as there are humans running things, there will always be jobs for journalists, because there’s constantly a dissonance between what is publicly known and what is privately known. And so, yeah, hopefully that explains it.
Satyen Sangani (06:05): Which leads us actually to one of the interesting stories that you’ve talked about recently within The Information around the Facebook Papers. That’s the story where what’s publicly known is different from what’s privately known. And it’s a story where that company had a lot of specialized information. Can you maybe just start by telling the audience a little bit about what the Facebook Papers are if they’re not acquainted with it and you know how that story came to be?
Amir Efrati (06:31): By now, I’m sure many of your listeners would have heard of Frances Haugen, who was a Facebook employee working on the civic integrity unit, meaning a unit that was trying to improve the way Facebook managed discussions around elections. Things like misinformation about elections or politicians and how they were using the platform in a variety of ways, some of which are very unsavory.

(07:05): And so we actually have been covering that unit and some of the research that it did, including determining that blatant misinformation — I’m not talking about opinion, but blatant misinformation by a politician — is much more likely to be believed, or at least somewhat more likely to be believed, by a layperson versus other users who are spreading blatant misinformation. And there were a lot of questions about what should they do about that and how do they deal with politicians who up until that point were essentially given a blank check to say whatever they wanted, because obviously Facebook is a platform that does prize and has prioritized freedom of speech.
(07:48): So we were covering that unit. We did not know about Frances. She became very disenchanted with what was happening across the company, as it was responding to and reacting to a variety of crises that had intersected with the platform. She eventually linked up with a Wall Street Journal reporter named Jeff Horwitz who worked with her, and she took a whole slew of photographs of all kinds of internal research and internal discussions about every topic under the sun and gave that to him. They published a series based on it, which touched on everything from this kind of blank check that I talked about, which is essentially giving a free pass to some users, including politicians, to say whatever they wanted, even if it violated Facebook’s rules around violence or hate speech and things like that. It gave them a free pass to say those things versus other users who were being blocked, and nobody knew about that. That was one element which kind of followed our reporting, but they got into other areas. I think the most publicized arena was the Instagram research; there’s still plenty of debate about that and what came out of the Wall Street Journal’s reporting on the Instagram research, because it did show that for a sub-segment of Instagram users who were teenage girls who already felt bad about themselves, Instagram made them feel worse.
(09:16): But I think the series did make quite clear that there was not a lot of transparency around the decisions that Facebook was making about who can say what on the platform. Then the second really big area was pounding home the point that content moderation or efforts to reduce hate speech or violence or blatant misinformation … there was just not a lot of effort happening outside of the West and outside of the United States to try to control this kind of violent or hate speech. There was just a big difference in how much effort Facebook was putting into it, whether it was in Arab countries or other Asian countries. Countries where Facebook was relied on a lot more for societal discourse and certainly relied on more by politicians to get the word out, there was just a lot less moderation and rules and enforcement of rules. I’m really glad that there’s more transparency around that.
Satyen Sangani (10:26): When events like the Capitol insurrection occur, transparency seems like it would become a necessity.
Amir Efrati (10:31): Frances Haugen, this whistleblower, felt like the Journal’s series wasn’t enough, so she wanted to create a consortium of publications that would get access to all the same documents and try to make sense of them or come up with stories about them. She did that. We were not part of the initial consortium. Actually, we wrote an article about the consortium before it was even a known thing. After it came out, the first set of stories was around the January 6 insurrection and to what extent Facebook could have prevented some of the groups around that insurrection from forming or planning around it.
(11:11): We ended up sifting through a lot of these documents to really understand how Mark Zuckerberg runs the company and how he’s really the central decision-maker for just about everything, certainly when it comes to speech issues and how they try to reduce harmful speech, harmful content as Facebook defines it, as well as some of the newer frontiers, like virtual reality (VR), and the difficulties that Facebook — or Meta, I should say — will inevitably have in controlling harmful content and abuse on something like VR, where people are interacting with each other in real time, verbally … really difficult to deal with that. People are going to be people.
Satyen Sangani (11:53): I think it’s super interesting, because our audience is a set of people who would like to transform their companies to be more data-driven, to understand more about their consumers and their businesses so that, ostensibly, they can influence behavior, in some ways in the exact same way that Facebook is currently doing. You’ve got kind of this interesting tension where on one hand, somebody could say, “Well, look, what Facebook is doing is obviously bad. They’re horrible people. They know they’re making one-fifth of girls on their platform clinically depressed.”
On the other hand, you’ve got another potential argument, which is, “Well, hell, this is capitalism — laissez faire. Let’s let them do whatever the law permits them to do.” And there’s this massive gray area in the middle. But then there’s a whole bunch of people who are in a position inside of Facebook, inside of these companies, where they’ve got to make decisions between these two realities. What would you say to them? How do you think about advising them on doing their jobs when they have this great power, but this great responsibility that comes along with it?
Amir Efrati (13:01): You talk about the gray area. I think you can make an argument that Facebook is a net positive for most people and most users. They really enjoy it. They form groups around gems and rocks. They might even meet their significant other through that. I’m actually thinking of a very specific example in my own personal circle. That’s not being talked about. There’s a lot of evidence to suggest that people do like Facebook or Meta and its products, but because Meta has fumbled so often its public relations around the objectively harmful things that do sometimes occur, I am actually really glad that some of this material came out, as uncomfortable as it is for that company. I think more transparency is better, especially for a platform that’s used by billions of people.
(13:54): But to your other point about analytics and research, it really, really brought home the idea that Facebook has absolutely invested a lot in trying to understand its own products and the impact they have on the world. I think the conflict is whether they act on it enough or prioritize it enough. This is the gray area that we’re talking about. If you look at the Facebook Papers (and unfortunately, I don’t think we’re yet able to share them directly, broadly, but I’m sure at some point they’ll be publicly released in full because they’ve gone to Congress), it really is amazing how much detailed research the researchers did, and continue to do, on all the ways their products work. It is really, really astounding, and I think that’s something that even Frances herself was talking about as something that’s very encouraging, because at least they were looking into these things.
Satyen Sangani (14:51): Data gives us the power to influence human behavior in an incredibly targeted way. It’s a concept historian and public intellectual Yuval Harari has dubbed “hacking humans.” But how much influence is too much, and what is the responsibility of organizations to recognize that influence?
Amir Efrati (15:10): This is a never-ending conversation, and I think at the end of the day, it starts with: how are we educated, and how do our brains develop today, whether we are teaching enough people to be able to think critically or to be able to listen to a variety of viewpoints? Then you can certainly get into discussions around how we are arbitrarily and unnaturally being divided when just in the United States, the difference in the wants and desires of somebody who defines themselves as liberal and the wants and desires of somebody who defines themselves as conservative or Republican — not that different, but we are totally divided and being taken advantage of. That is certainly a depressing aspect of our society. It really doesn’t need to be that way. People do not realize that they are not actually that different. But certainly when it comes to these online platforms, the worst of ourselves comes out. People are very quick to jump at each other, to view each other as two-dimensional. And that’s not Facebook’s fault: that is human nature, to your point.
Satyen Sangani (16:12): And then on top of that you’ve got this other issue, which is influencing people’s thoughts and behavior. And in many cases, for the people that this podcast is for, that’s their jobs, they’re marketers. And so, they want to try to influence people’s behaviors. And so, one, there is this aspect of an unnatural influence and a not-quite-human interaction. And two, there is this aspect of, well, how much influence is too much influence, and what is the limit, and how do I even determine what the line is and where the line is, and who tells me where the line is? Because there’s no law against this stuff. Those are questions that I think all of us have to struggle with in our jobs because, on some level, we’re all trying to change people’s minds.
Amir Efrati (16:55): This is one reason why I like the office. Human interaction, face-to-face interaction, it’s everything. You’re never going to convince me that not being together doesn’t cause problems that otherwise wouldn’t happen.
Satyen Sangani (17:12): Social media has changed our lives. The makers of self-driving cars claim they will do the same, but how much do we really know about them? Amir has been covering this section of the automotive industry since its early days. And like Facebook, it’s something we don’t know nearly enough about.
Amir Efrati (17:27): One of the reasons I got interested in self-driving car development is because Google was doing it, so I actually got a ride in one of their prototypes with Chris Urmson, who was there at the time, back in the day when they were saying this is going to be a solved problem by the time my kid is 16 and has a license to drive. They won’t need to drive. Because it did look tantalizingly close, and it was a very logical assumption to make because you’re like, “Okay, you put a bunch of sensors on a car. Those sensors can see way better than humans can, obviously, so it stands to reason that the cars can be safer.” Unfortunately, that did not turn out to be the case. As I started to follow it very closely, it became very clear, very early on, even in 2017, that the claims developers were making just didn’t comport with reality.
(18:18): They were saying we don’t have to take over very often, and our cars are nearly perfect. And that was just BS. A lot of people built very compelling demos. I don’t want to call it demoware. Although, I think in many companies there was demoware, which was a system built for a very specific track and a very specific demonstration, but not something that could be used in a general sense to automate a vehicle on the road. Part of the reason why I was writing, I don’t want to say critically, but just very honestly about what was happening there, including publishing data on how these companies were rating themselves and their own technology, is because there was this dissonance between what was privately being shared and what was publicly being announced. So that’s an easy thing to do right there: let’s just narrow that gap because that doesn’t make any sense.
(19:13): People are making life decisions, going to work for companies based on completely false assumptions. That’s not good for anyone. Investors are investing pension money and other people’s money in companies based on what’s publicly announced and not what’s privately known. That’s not good, either. There was just a notion of being realistic about it.
Satyen Sangani (19:33): Amir believes that companies in the self-driving car business must be more transparent with each other.
Amir Efrati (19:38): And I think there was a lot of frustration, and still is a lot of frustration, in the field of self-driving cars, where there’s not a lot of data sharing happening between companies about best practices, and what they could do better, and how they can avoid having a weak link that ruins it for everyone else. And by that I mean, if you look at a parallel, like the aviation industry, one of the things the aviation industry did was come up with a set of safety standards, which is very difficult to do with self-driving cars, because we’re in such a nascent stage, but they at least came up with a framework for discussing really serious problems that had come up. I think there’s a lot of frustration, because there’s a lot of low-hanging fruit that could be picked off to solve some basic problems so that nobody is ever put in harm’s way the way they were.
(20:25): One really interesting thing is all these data silos that have been created because of competition. But because it’s something that touches public roads, public infrastructure, I think it really brings it into the realm of: there’s a duty of transparency here. Because this is stuff that’s on my street right now. I’m in downtown San Francisco talking to you. Right now, at this moment, there are probably at least two dozen self-driving car prototypes running around. The rain has stopped. The atmospheric river has ended. So there are prototypes running around all over the place. I think it behooves these companies to explain to the public what is actually going on. Thankfully, for the most part, they are operating these things safely. But anyway, it was a really fascinating realm to look into. And I am deeply supportive and really hope that these companies solve the problem, but the problem still remains. And it’s very unclear that experimenting in this way on the public roads is a good idea. There’s certainly a question mark as to whether it’s a good idea to put it into regular drivers’ hands.
Satyen Sangani (21:26): It’s an interesting case study though, because unlike the Facebook Papers case where you’ve got lots of understanding and analysis and knowledge, in this case, you’ve almost got the exact opposite problem, where you’re really still trying to understand the problem set of what the real world looks like and what the interaction is between this technology and the real world. And it feels like here you’ve got almost an analytical process and data-transparency problem where, because of the profit motive, people are not willing to even have that level of transparency. It feels like, in some sense, the problems are a little bit farther upstream in the analytical process, but still hard, because people don’t quite know where to share, where not to share, what their enterprise would not tolerate.
Amir Efrati (22:21): They won’t even share what they measure.
Satyen Sangani (22:25): Right.
Amir Efrati (22:25): I think there is a parallel with Facebook there. Again, we’re learning about metrics. We’re learning about ways to measure the impact of a product and how to form data around it. What’s interesting about the Facebook Papers is there’s an area of subjectivity around speech, and freedom of speech, and what should people be allowed to say. And it’s not lost on me, and it shouldn’t be lost on anyone else, that blocking the theories around a COVID-19 lab leak early on in the pandemic now looks like a horrific decision. And I’m not saying I was supportive of it at the time. I had no idea. I didn’t know what to think about it. I think over time, the notion that this could have come out of a lab became a lot more sensible. But now looking back at it, it makes you go, “Maybe restricting that speech is just bad form.”
(23:20): And we should be worried about that. What else do we not know? There are plenty of similarities with self-driving cars in terms of what we don’t yet know. Self-driving cars, we at least know that we shouldn’t kill people with these vehicles. We know that we shouldn’t get into a situation where we might kill people or might hurt people. You can certainly start to establish a measurement around that. That’s a lot more tangible and physical in nature. But both are fascinating realms, where there was really not a lot of transparency. And when there is more transparency, I think everyone’s better off.
Satyen Sangani (23:57): Amir and his team have also begun to pay close attention to the cloud for some very important reasons.
Amir Efrati (24:03): We care a lot about cloud software and cloud infrastructure. It is one of the most under-covered, underappreciated areas, and I’m trying to hire more people to cover it, because I’m quite interested in it. Trillions of dollars of decisions are being made, and it’s a great example: anytime we give people a little bit of transparency into the very, very difficult data problems or cloud infrastructure problems that companies have, our readers go crazy for it, because they would not know otherwise.
(24:37): They would not know that Uber’s internal systems went down for a whole day, why that happened, or about the engineering culture that led to it (although it has improved remarkably since we wrote that story in 2015). The same goes for more recent stories around the costs of using cloud providers like AWS, costs that are sometimes not well controlled by the companies, and they get hit by a lot of surprise bills. So how do they keep a lid on that?
(25:04): When we write about how a company signs a really, really big cloud computing deal, let’s say with Google Cloud or with AWS, but then proceeds to not spend any money after that, because it’s actually very difficult to get their IT department to agree with some of their developers, and to be able to use some of the tools of the cloud providers, because maybe the company isn’t ready for microservices or something like that, that is massively instructive. So I think that there’s a lot more transparency that I think we could all benefit from. I think a lot of CTOs and CIOs are very reluctant to share their heartaches, which is, I guess, an opportunity for a publication like ours.
Satyen Sangani (25:45): And I think an opportunity internally within these companies, too, because what’s so fascinating now is that there’s so much complexity, and hell, that’s even true inside of Alation. I mean, we have software that depends on other third-party software. And if there is a bug in that third-party software that may cause a security breach (we’ve all heard about this thing called Log4j), it has ripple effects that you may not necessarily anticipate when you’re building the software. And so then we all hire analysts internally to better understand our businesses, even though in theory, we should know all about them.
(26:18): And you’re kind of doing the same thing, but from the outside, so how do you think about this world of specialization? I mean, Jessica wrote about it and talked about what could be. I think there was this notion of the end of journalism, but how do you think about this problem of hyper-specialization? What are the implications, how do companies need to respond? How would you want listeners to think about this problem? Because understanding this very complicated world seems like it isn’t getting any easier, and yet that’s kind of the task that’s at hand for all of us.
Amir Efrati (26:55): It’s kind of amazing how few CEOs outside of, let’s say Silicon Valley, really understand how the company’s infrastructure works, and what does it mean to have a hybrid cloud, or use both internal data centers and data center providers like AWS or things of that nature? They just don’t know it. So it can create a lot of challenges if they’re not adequately outsourcing or allowing key decisions to be made by the people who really understand it.
(27:29): So I guess in the journalism context, the story that you’re talking about was a column that was very well read, and it was making a very simple point, which is that whether you’re talking about what we’ve just discussed, trying to make sense of the impact of the Facebook app, or trying to understand the development of autonomous vehicles, or trying to understand what is cryptocurrency and Web3 applications, these things are at this point, very complex, very specific. And if you don’t know how to go down that rabbit hole and be able to ask the right questions — and be able to have an appreciation for the problems that people are trying to solve — then you’re in for a lot of pain because you can’t apply super-general lenses to everything; you really do have to start to specialize, as you pointed out, to be able to write and think properly about all these things.
Satyen Sangani (28:32): “With great power comes great responsibility.” It’s an iconic line from the original Spider-Man movie. They’re the dying words of Peter Parker’s Uncle Ben. Those same words become a superhero’s guiding principle.
(28:45): And as I reflect on my conversation with Amir, maybe this phrase should be Data Radicals’ guiding principle as well. Technology has given us the power to aggregate and quantify the world around us. We can create innovations that change our world for the better, but we also need to remember that our innovations can have consequences that we don’t originally intend. And even if it never comes to pass, it’s helpful for teams to think through both the potential successes and the potential failures from introducing a new technology.
(29:13): Thank you to Amir for joining us for this episode of Data Radicals. This is Satyen Sangani, co-founder and CEO of Alation, signing off. Thank you for listening.
Producer (29:22): This podcast is brought to you by Alation. Chief data officers face an uphill battle. How can they succeed in making data-driven decision-making the new normal? The State of Data Culture Report has the answer. Download it to learn why successful CDOs partner with their chief financial officer to drive meaningful change. Check it out at alation.com/dcr3.