Jeremy Kahn is the AI editor at Fortune and the author of Mastering AI: A Survival Guide to Our Superpowered Future. Previously, he spent eight years at Bloomberg as a technology reporter. Jeremy's work has been featured in the New York Times, the International Herald Tribune, Newsweek, The Atlantic, Smithsonian, the Boston Globe, and Portfolio.
As the Co-founder and CEO of Alation, Satyen lives his passion of empowering a curious and rational world by fundamentally improving the way data consumers, creators, and stewards find, understand, and trust data. Industry insiders call him a visionary entrepreneur. Those who meet him call him warm and down-to-earth. His kids call him “Dad.”
0:00:03.4 Producer 1: Welcome back to Data Radicals. On the show today, Satyen interviews Jeremy Kahn, AI Editor at Fortune. In this conversation, they discuss his latest book, Mastering AI, and Jeremy shares insights on AI's impact on the economy and how we work. They also explore the transformative potential of AI across industries. How should it be regulated? How will that change society? And why are AI creators asking for regulation even while the companies they lead lobby against it? From war to taxes, we're diving deep into the role that AI will play now and in the future. Stay tuned.
0:00:37.7 Producer 2: This podcast is brought to you by Alation, a platform that delivers trusted data. AI creators know you can't have trusted AI without trusted data. Today, our customers use Alation to build game-changing AI solutions that streamline productivity and improve the customer experience. Learn more about Alation at Alation.com.
0:01:01.9 Satyen Sangani: Today on Data Radicals, I'm joined by Jeremy Kahn, AI Editor at Fortune, where he is spearheading the publication's coverage of artificial intelligence. Prior to Fortune, he spent eight years at Bloomberg as a technology reporter and a senior writer for Bloomberg Markets magazine. Jeremy's work has been featured in the New York Times, the International Herald Tribune, Newsweek, the Atlantic, Smithsonian, the Boston Globe, and Portfolio. He's also the author of Mastering AI: A Survival Guide to Our Superpowered Future, and he is the lead author of Fortune's Eye on AI newsletter. Jeremy, welcome to the show.
0:01:35.6 Jeremy Kahn: Oh, it's great to be here.
0:01:36.4 Satyen Sangani: So why don't we jump right into the book. You obviously wrote it just recently. What's the book about? Why'd you write it?
0:01:43.8 Jeremy Kahn: Yeah, so the book, which is called Mastering AI, I wrote it to be a primer sort of for a non-technical audience to try to explain how we arrived at this moment with AI and where this technology is likely taking us. I wanted to explain yes, how we got here and to give readers a sense of what the future might hold and how AI, I think, is going to have all kinds of impacts, both positive and potentially negative on our lives and our thinking, on the way we work, the way industries are organized, and society sort of writ large.
0:02:15.9 Satyen Sangani: And as you've written the book and as we reviewed it, you seem like you are leaning into the idea that this can be a massively transformational technology, where you talk about the potential to unravel the social fabric and jeopardize democracy. Can you sort of illustrate for us how some of those scenarios would play out, maybe both positive and negative, and give us an image of what that future looks like?
0:02:37.2 Jeremy Kahn: I think this is gonna be a tremendously transformative technology, and I think there's some really big positive effects, particularly I think we are gonna see a huge uplift in labor productivity, and I don't think we're gonna see sort of mass joblessness from this technology. I think actually this is a technology that could enable people to sort of be lifted back up into the middle class. And one of the ways that would work is if you're able to create a whole series of AI copilots for knowledge work that are designed to help professionals complete certain professional tasks, things that they would do in their job.
0:03:08.0 Jeremy Kahn: A copilot for lawyers, a copilot for accountants, a copilot for people who work in banking or as financial advisors. I think what you're going to see is that people who have less training and less experience, and may not have the academic credentialing that's currently necessary to work in these professions, will be able to enter these professions with the help of an AI copilot.
0:03:25.6 Jeremy Kahn: And the example I like to use here is accounting, where currently we do not have enough trained accountants in the US. There's a huge crisis, particularly for public company accountants, and some states are already considering lowering certification standards to deal with it. There's just a huge lack of people going into the profession. But I think what you can do is potentially take people who have a two-year associate's degree in bookkeeping and, with an AI copilot, potentially upskill them to the point where they could take on some of the public company accounting work that needs to be done.
0:03:58.0 Jeremy Kahn: And I think it's true across most of the professions and a lot of knowledge work that the issue is not that we have too many people, the issue is that we simply do not have enough people actually in these professions. And at the same time, we have a lot of people who were squeezed out of the middle class in past decades who used to have good jobs sort of in manufacturing or sort of lower-level service sector jobs who've been pushed out and often have ended up in retail and hospitality work, which is less secure and tends to pay less well.
0:04:24.3 Jeremy Kahn: But I think some of those people, with the help of AI copilots, can upskill and climb their way back into the middle class, which I think would be a hugely positive effect. So that's one of the hugely positive impacts. You mentioned the threat to democracy. I do think there's a risk from this technology to democracy, and that comes largely through the ability of the technology to supercharge misinformation and disinformation campaigns. It's just easier to create, at great scale, misinformation content that is potentially more convincing.
0:04:52.6 Jeremy Kahn: I'm also quite concerned about the use of chatbots potentially in the political sphere, because we're finding that chatbots are incredibly persuasive, and that could be a tool for good or, in the wrong hands, for really, really bad purposes. And I worry about how politicians may start to use these AI chatbots, particularly if we allow, in the future, any advertiser to potentially shape what people are receiving when they ask questions of a chatbot.
0:05:16.5 Jeremy Kahn: I think we need to be very careful of any business models that would allow that to happen, or at least, if we do allow advertising to influence the responses that chatbots give, it should be completely transparent that there's been that advertising relationship. And I certainly worry about that in the political sphere. If the Democratic party or the Republican party can pay money to OpenAI or to Google in order to get chatbot responses to political questions shaped a certain way, I think that's potentially very dangerous and could help undermine democracy to some degree.
0:05:44.4 Jeremy Kahn: Obviously the use of deepfake technology, increasingly convincing synthetic media of various kinds, and voice deepfakes as well is really problematic. I think it's gonna be very hard for people to tell what is authentic and what is inauthentic. And it's gonna be very difficult, I think, for news media and fact-checking organizations to keep pace with the potential volume of false content which you might see.
0:06:06.9 Satyen Sangani: Yeah, I mean in some ways there's a little bit of a contradiction, right? Because on one hand, you've got sort of this upskilling of these folks who, in the auditing example or maybe in a medical example, have the ability to leverage these bots and these agents in order to do their work. But a lot of what comes in the long tail of training, from year two to year seven if you're a physician, is discernment: your ability to look at a body of information and say, ah, that particular thing is off, it may not be right. And yet in many of these other examples, deepfakes and the like, you're also sort of almost playing into people's inability to discern. And it strikes me that the big question is how will people know what is true?
0:06:45.9 Satyen Sangani: And that can work in both directions. It can work in the direction of maybe being useful because people are empowered, but also maybe being horrible because people won't have the discernment if you're an auditor that hasn't had the appropriate training in knowing when to dive in deeper into some financials that otherwise might look fishy.
0:07:02.8 Jeremy Kahn: Yeah, and I think again, we're going to have to be careful in how we train and create such systems. And by the way, I very much think that the way this is going to go, and I think it would be better for us if it does go this way, is that we have copilots that are not necessarily generalist systems. They might have a general LLM at the heart of them, but with quite a lot of other models appended to them and quite a lot of front-end architecture, so that they are really designed to answer questions specifically for people in that profession and to really help those particular professionals.
0:07:38.0 Jeremy Kahn: So I don't think the copilot we have for medicine, to help a doctor, should be the same copilot that's going to help an accountant. I think those should actually be different systems, even if at the heart of them they use the same model. I think there should be quite a lot of different architecture around that to try to shape the responses. And one of the things that could do is actually capture a lot of what's often called tacit knowledge that experts have. I think through the fine-tuning of these systems, there will be ways to capture a lot of that tacit knowledge and actually have copilot systems that are able to relay it to a person who's less of an expert and coach them as they would be coached by someone who is much more experienced in that field.
0:08:20.8 Jeremy Kahn: Yes, it's true that a lot of professional discernment is tacit knowledge, and often the practitioners themselves can't quite articulate what it is that led them to think that something doesn't quite look right, or that something's off here, or that in this particular circumstance it's better not to use this particular argument with this judge if you're a lawyer. But I really think the systems will eventually be able to capture that knowledge and will be able to present that discernment to folks. On the other hand, I do think we need to continue to teach critical thinking. I think schools, from primary school onwards, should be teaching people that they have to think critically about the information they're receiving: what is the source of that information and can it be trusted?
0:09:00.7 Jeremy Kahn: And certainly in the medical context, perhaps in other fields too, I really think we need realistic validation of copilot systems. As I mention in the book, when it comes to systems that are designed to help doctors, I'm concerned about systems that are already being approved. The FDA's approval process does not necessarily force these vendors to show that there's actually clinical effectiveness. Often they can get a system approved based on some testing on historical data, but they don't have to prove that the system in the clinic is going to improve patient outcomes. And I think it'd be much better if the standard was: does it improve patient outcomes? And with these other copilots, a similar kind of standard: does it actually produce better work across the board for people, not just can it pass the CPA exam, for instance?
0:09:46.2 Jeremy Kahn: I wanna see studies that actually show people working alongside these systems: are they actually more productive, but also, in the case of an accountant, are the critical standards, like detecting fraud, being met? And of course, if they're not, I think that's a problem and we should hold the vendors to those kinds of standards.
0:10:05.4 Satyen Sangani: Yeah, I guess the question then becomes who does that work and who determines what the outcomes are? In the medical case, when I go to the doctor now, what I often find is that the doctor is super incentivized to move on from my visit very quickly because they have 15 more visits behind me. They often will even claim things in notes that didn't actually occur. So they'll say, oh, I did this and this and this with the patient and I advised them of this.
0:10:31.3 Satyen Sangani: And I'm like, that really never happened. And what you see in that, I think, is the invisible insurance company who's basically saying, look, we will pay you X per visit. And so in that case, the question is, what's the outcome? Is the outcome a smaller payout for the insurance company? Is the outcome the long-term health of the patient? Is the outcome that the person doesn't come back and report a similar finding? That all is sort of messy work, and you could see a world in which the complexity defaults to whoever has the money and whoever has the power. How do you think about that? Is there a role for regulation?
0:11:05.4 Jeremy Kahn: I think there's definitely a role for regulation, and I think we need some policing of business models that might create incentives for systems that would not function in the public interest. And I think ultimately government's gonna have to play some role. But I do think there are also industry standards that might work for specific copilots in specific industries. I think the industry should set a standard, and I don't think any of these professions has an interest in a degradation of standards, even if there are certain financial incentives right now such that doctors may in practice be doing certain things, as you suggest, in order to bill more from the insurance companies. But if you went to any medical standard-setting body and asked, is that appropriate practice? They would say no. So I think we can rely on some of these professional standard-setting bodies to potentially do some of this work and set standards for what we want out of these copilot systems.
0:11:57.2 Satyen Sangani: Yeah. And how do you see that regulation evolving in practice? There's the EU AI Act, and they're obviously very quick to legislate. It seems like the United States is the literal exact opposite, where we either have no will or no desire, or both. What do you see as the evolution of this? Will it come from Europe? Will it come from some other jurisdiction? Will it come from the international community?
0:12:18.3 Jeremy Kahn: Yeah, I think it's gonna come from a couple of places. Again, when you talk about knowledge workers, to the extent there are professional bodies, they themselves may play a role in this. But yeah, the EU AI Act is a great example. I don't know that the EU got it exactly right. I think people who say that the EU AI Act is going to be too onerous on small businesses may have a point, but in general, if you look at the Act and some of its high-level principles, I think they were very sensible ones, which is that you should take a risk-based approach.
0:12:46.8 Jeremy Kahn: It should be a lot about deployment use cases, except when the model is so general that you have to do something about the model itself. But a lot of that act is designed to look at use cases and to affect the company that's deploying the technology, not necessarily the technology vendor. It puts the onus on the company deploying the technology to have done certain things: however they're using that system in practice, what is the risk of that use, what steps have they taken to assess those risks, and if those are deemed to be high-risk categories, what have they done to mitigate those risks? I think that's all very sensible. And because Europe is a relatively large market, you may see companies adopt this as a kind of de facto standard, as they have with Europe's GDPR privacy standard, which has kind of become a de facto global standard.
0:13:36.3 Jeremy Kahn: But I don't know, we'll see. Because you already see the example of some of the AI vendors saying that they're not gonna roll out in Europe some of the product features that they've already announced they're rolling out in the US, because they're concerned about how they're gonna comply with this act. So it's possible that some companies will say, actually, yeah, we can do without Europe. And then it'll be kind of an interesting thing to see what happens. You might get quite a fractured landscape or marketplace for these systems.
In the US it's a big problem because politicians are very interested in doing something. The question is what, and can they agree on something enough to actually pass legislation. And I think it should happen at the federal level. There's just been an effort in California to pass laws independently that would affect a lot of the companies based there.
0:14:18.3 Jeremy Kahn: But I think in the US we have this issue where the states are starting to take action because of the lack of action by the federal government, and I think that's problematic. I don't think you want a system where every state has its own AI act and different laws to comply with in every state. So I do think we need to have some action at the federal level. When we're gonna see that happen, I don't know, because there has been a lack of will. There were some bipartisan efforts in Congress that looked like they were maybe gonna pay off last year, and I think there's some agreement on both sides of the aisle that there should be some rules and regulation passed around AI, but I think there's gonna be quite a lot of debate about what the specifics should be, and so far nothing's actually come forward. But we'll see what happens.
0:14:57.1 Jeremy Kahn: I'm pretty convinced that we will see at some point in the next few years government action. It may be that the US is behind other countries though.
0:15:04.1 Satyen Sangani: Yeah, and the shape of that I think could be pretty variable and could have a lot of different impacts. Newsom just vetoed SB 1047. Certainly in speaking to other entrepreneurs in the AI landscape, there was a lot of negativity from the entrepreneurial community around that, because there was, I think, some form of personal liability that would've been enforced. How do you feel about that? What's the general reaction, based upon others you've spoken to, about that regulation, and do you think it should have passed?
0:15:33.7 Jeremy Kahn: Yeah, I think SB 1047 was not a bad piece of legislation, actually. I think it made a lot of sense. It was trying to head off the most catastrophic risks. A lot of the provisions of the law would only take effect after an incident that caused more than $500 million worth of damages, so you're talking about pretty catastrophic effects. And there was also a model size threshold in that proposed law, which would've meant that a lot of smaller models would've been below the threshold and it would've had no effect on them.
0:16:03.9 Jeremy Kahn: I think the concern people had with it, though, was twofold. Yes, you mentioned the personal liability. So a lot of people were upset that it wasn't just the companies that would be on the hook, but actually there would be personal liability, including criminal liability, for individuals at the companies if they were shown not to have taken appropriate steps to try to assess and mitigate risks, and there was then an incident that caused that kind of damage.
0:16:26.0 Jeremy Kahn: I think the particular concern was over open source AI systems, because the open source community said, hey, we can take reasonable steps to do safety assessments and to put in some guardrails, but once we put this out there in the world, we have no idea what a user is gonna do with it. And what if they modify the system and something goes wrong? Am I gonna be criminally liable for that?
The other problem I think people had was that it was not clear what happened with fine-tuning. So if you took a large language model from even one of the proprietary providers, but you fine-tuned that model and then deployed it in your business and something went wrong, the question was, well, who's liable then? Is the company that did the fine-tuning going to be subject to these strict liability rules?
0:17:06.0 Jeremy Kahn: And there was a reasonable interpretation from people that yes, they would be, and I'm not sure that was really the intent of the people who drafted the law. And I'm not sure that's really fair, particularly if the problem turns out to actually have to do with the initial training of the model and the initial architecting of the model. I'm not sure it's fair to hold a company that essentially buys that model and then fine-tunes it liable for what happens.
So I don't know, it wasn't a perfect piece of legislation, but I think it was a step in the right direction. And again, on a principle-based level, it got a lot of things right. It was a reasonable step, I would've thought. But yes, it's been vetoed now, and the one thing I think was really unreasonable about it is that it was California acting alone.
0:17:47.0 Jeremy Kahn: Again I think it doesn't really make sense to have 50 states each with their own AI act. I think we really want this to be done at the federal level. That makes a lot more sense to me.
0:17:55.8 Satyen Sangani: Yeah, but it seems consistent. I mean, the privacy example that you used with GDPR seemed to flow through first to California, and now there are some states popping up following California as an example. And it seems like many of the red states are far more interested in lagging on the regulatory apparatus, which by default means that the blue states like California tend to be the ones that lead.
One of the problems with these bits of legislation, though, is that the complexity in these corner cases is often what kills them. What if 85% or 87% of the bill is great, but you have this
0:18:32.0 Satyen Sangani: residual 15% where all the details are being worked out? But really thinking through personal liability, causality, complexity, who's liable, the fine-tuning example, how do we know whether the particular form of fine-tuning influenced the model inordinately? These are hard questions, and you could see a world in which, even as the technology evolves, it's gonna take us a ton of time to work through because we don't really understand causality with these models. How do you think about that? Do you feel like we will get to an answer, or do you feel like this could just, I don't know... Obviously you don't have a crystal ball, but how do you think about that?
0:19:09.6 Jeremy Kahn: Yeah, look, I think we need a kind of flexible approach to legislation. It is a nascent technology, and I think we have to be prepared to act now but iterate later. And generally that's not the way a lot of lawmaking works, particularly at the federal level, because it's often very hard to change things, although it's slightly better at the federal rulemaking level. So the best law we could pass at the federal level might be one that defers a lot of the details to an agency, which can then take on expert advice.
0:19:40.1 Jeremy Kahn: And there can be a process by which the implementing regulations are adjusted over time as we figure out what works and what doesn't and where we got it right or wrong, without having to go back to Congress to pass a new law. 'Cause that's a really lengthy process and a really fraught process, and we probably don't wanna have that. We probably want some sort of law that establishes some general principles and then hands it off to a federal agency to work out the details through a rulemaking process that's more flexible.
0:20:11.6 Satyen Sangani: How much do you think the election will impact this regulatory apparatus?
0:20:14.5 Jeremy Kahn: Well, basically I think it's gonna have a huge impact. You've already had the Republican camp say that if Trump is elected, they're going to cancel the Biden administration's executive order on AI, which is the only little bit of federal regulation that exists on AI. They've basically fallen into this camp among the group that calls themselves effective accelerationists, who are big fans of pushing this technology as far as possible, as fast as possible, and they really don't want any regulation. So I think you wouldn't see very much regulation if you get a Trump administration.
0:20:49.3 Jeremy Kahn: And I think they would be very opposed to regulation. The only thing you might get, which is kind of interesting, is that at the same time you might see a federal effort to actually push the technology forward, where the government gets much more involved in funding a kind of Manhattan Project towards AGI. I only say this because, I dunno if you saw it, Ivanka Trump went on X and posted that she was a big fan of this monograph that an ex-OpenAI guy named Leopold Aschenbrenner has published, which talks about the race for superintelligence.
0:21:19.1 Jeremy Kahn: And he really posits this as a US-versus-China contest. He talks about the need for the US to get behind a Manhattan Project in order to beat China to AGI. And it sounds like there are people around Trump who view China in zero-sum terms, who see this as a new Cold War, and I think they wanna win that new Cold War. And I think they're attracted to this idea of some sort of Manhattan Project to beat China to AGI. So that's interesting. I think that might be a difference.
Whereas I think if you see Harris get elected, it's gonna be a lot of continuation of what we've seen with Biden, which I think will be some efforts at regulation. I think there might be an extension of the executive order to cover more companies, or perhaps the FTC taking more action.
0:22:00.7 Jeremy Kahn: I think there also might be an effort, with the support of the administration, to introduce a sort of comprehensive bill into Congress. Whether that would pass would depend on the shape of Congress after this election. So yeah, I think they're less attracted to the idea of this kind of grand race for AGI. I think they also are concerned about what China's doing and about the geopolitics of this, but I don't really see them getting behind a federally sponsored effort to push the technology forward. I think they think the private sector is doing a pretty good job already and that there's no real need to have a taxpayer-funded effort to build giant data centers or something. So I think there will be a big difference between a potential Harris administration and a future Trump administration. It'd be very interesting to see what happens. But yeah, I think the election's gonna be pretty determinative.
0:22:48.9 Satyen Sangani: Yeah, I can imagine that it would be. And you speak obviously to some of the major actors in the world of AI, and did so as part of the book, but also separately in your reporting. What are the attitudes? How are the Altmans of the world and the like feeling about all of this? There's obviously been a lot of reporting around OpenAI specifically, in terms of how cavalier it has or has not been, and its becoming a commercial company as opposed to a nonprofit. You can talk about OpenAI specifically or not, but the question is, where is the general tech zeitgeist? Is this one of the reasons why it's more in the Republican Trump camp? How do you see people's feelings around this?
0:23:25.2 Jeremy Kahn: Yeah, it's interesting, 'cause I think if you talk to a lot of people at the leading AI labs, they generally will tell you they're in favor of regulation. They actually think that there should be some kind of federal rulemaking around this. A lot of them are in favor of an AI agency specifically to look into this technology. They're in this weird position where they know they're in this competitive race, and they're actually looking for some outside force to come in and intervene in those dynamics and set some ground rules. And I think they don't feel they can do it themselves. So Altman's interesting, 'cause he's gone before Congress and said, basically, please regulate us, I think this needs to be regulated. And then if you look at their lobbying as a company, they've lobbied against regulation, which is kind of interesting.
0:24:09.9 Jeremy Kahn: A lot of people think, oh, you're such a hypocrite. But actually I think it's more that they feel like, as a company we have a fiduciary responsibility in some ways to keep pushing forward, but as individuals we really wish that the government would step in. They want a set of rules for all of them that will, I think, temper some of the extreme competitive dynamics that are pushing them constantly forward on advancing models. It really is one of these things where they're sort of begging the regulators to stop them from doing the thing that they're committed to doing. It's a really weird dynamic.
0:24:45.3 Satyen Sangani: Yeah, well, it's like the state of nature, where life is nasty, brutish, and short. Everybody's sort of just hacking away at each other. But Altman in particular, I mean, OpenAI is obviously the primary player in this game. And the strength of the primary player is that the lack of rules accrues to that player probably more than any other, you would think.
0:25:05.9 Jeremy Kahn: Yeah, I think that's true. But you see the same dynamic elsewhere. I talked to Demis Hassabis at DeepMind and it's kind of similar. I think he feels like there's a lot of pressure competitively to match what OpenAI is doing, but he kind of wishes that pressure weren't there, and if there were some rules around it, that wouldn't be a bad thing. And I feel like Dario Amodei at Anthropic is kind of the same way. They're continuing to push forward all these models as fast as they can, and yet I think he kind of wishes that the government would do something.
0:25:34.8 Satyen Sangani: Yeah. Although they've probably been amongst the most thoughtful about whatever ethical AI is and what it might look like, and about investing behind explainability.
0:25:45.6 Jeremy Kahn: No, that's true. And constitutional AI, which they helped develop, is a really interesting idea about how you would give a system guardrails. Yeah, they seem more genuinely committed to AI safety than some of the other labs.
0:26:00.0 Satyen Sangani: Talk about constitutional AI, explain the concept and talk about where it's been applied and how it's been used. 'cause I think it's a really interesting concept.
0:26:05.7 Jeremy Kahn: Yeah. So the idea is that you would give an AI system an actual written constitution, and before providing its outputs it would have to assess them against that written constitution and try to determine whether it had complied or not. And it turns out that this works relatively well; having the model check its own output against a set of principles for compliance with those principles actually works, which is kind of weird. It becomes even more important if you start thinking about AI agents that can take action in the world. How are you going to make sure that they follow human intentions writ large and don't take certain actions that we would not ever want them to take? And one of the ideas was just to give the system a written constitution that prohibits certain things and see if it can check its actions against that.
0:26:49.3 Jeremy Kahn: So Anthropic developed this. They wrote a constitution, and they have used it as part of the training of Claude, their LLM. And it seems to work: Claude does seem less prone to jumping guardrails and less susceptible to prompt injection attacks than other systems, because it seems to always check itself against this constitution. It doesn't work 100 percent of the time. It turns out you can get Claude to do things it's not supposed to, but it's much harder than with some of the other systems, and it seems that this is a somewhat robust way of doing this. But then it raises the question of what is in the constitution and who gets to write it. When they first did this, Anthropic wrote the constitution themselves, but they realized that was kind of problematic. If this is gonna be the constitution of some future super-powerful AI, is it right that a single company gets to construct it?
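For listeners who want a concrete picture of the self-check loop Jeremy is describing, here is a minimal sketch in Python. The constitution text, the `generate` helper, and the revision loop are hypothetical placeholders for illustration only; Anthropic's actual approach applies the principles during training rather than as a simple inference-time loop like this.

```python
# Minimal sketch of a constitutional self-check loop (illustrative, not Anthropic's code).
# `generate(prompt)` stands in for whatever LLM client you already use.

CONSTITUTION = """\
1. Do not provide instructions that could cause physical harm.
2. Do not produce deceptive or defamatory content.
3. Decline politely when a request conflicts with these principles."""

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., an API client of your choice)."""
    raise NotImplementedError

def constitutional_answer(user_prompt: str, max_revisions: int = 2) -> str:
    draft = generate(user_prompt)
    for _ in range(max_revisions):
        # Ask the model to critique its own draft against the written constitution.
        critique = generate(
            f"Constitution:\n{CONSTITUTION}\n\n"
            f"Draft answer:\n{draft}\n\n"
            "Does the draft violate any principle? Reply OK or VIOLATION, then explain."
        )
        if critique.strip().upper().startswith("OK"):
            return draft
        # Revise the draft in light of the critique, then check again.
        draft = generate(
            f"Constitution:\n{CONSTITUTION}\n\nCritique:\n{critique}\n\n"
            f"Rewrite this answer so it complies:\n{draft}"
        )
    return draft
```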
0:27:37.8 Jeremy Kahn: So they actually did some polling of Americans to see what principles people would want in an AI constitution. And it was interesting, 'cause unfortunately people were pretty divided, as you might imagine, in America. It turned out that about two thirds of people agreed on a set of general principles for what they wanted to see in an AI constitution and how they generally wanted AI outputs to be. But the problem was that the other third vehemently disagreed with the two thirds about those principles. That one third was much more libertarian. They really felt that they wanted an AI system that would present information truthfully, regardless of whether that was offensive to anyone, and that would not go out of its way to tailor its answers to any particular group.
0:28:21.1 Jeremy Kahn: They wanted essentially a system that would weight freedom of opportunity and individual freedom over any particular group dynamics, whereas the other two thirds were interested in outputs that, for instance, wouldn't necessarily offend minority groups, or that would take the collective more into account in providing answers. Which is kind of interesting. And it does show that if you have a very divided country, then it's an issue how you're gonna write these constitutions. Also, if you have a product that you're gonna roll out to lots of different parts of the world, that becomes an issue, because of course certain cultures are more communitarian and some are more individualistic, and who gets to decide, I think, is a really interesting question.
0:29:02.0 Jeremy Kahn: Or could you have a system where each user could individually provide their own constitution to the LLM? I don't know. Right now that's not how it works; when the model's trained, it's given a particular constitution, a document to check things against. But potentially there'd be these other approaches, I suppose.
0:29:23.0 Satyen Sangani: Yeah, I mean the libertarian approach is obviously super consistent with the crypto, decentralized, the-market's-always-efficient, information's-always-right view, which is obviously an interesting perspective. This idea of a constitution, though, we'll try to link to in the show notes, by the way, along with SB 1047, for those folks who would love to do a little bit more research. As you think about regulation, though, you talked a little bit about this idea of copilots, but there is this world in which job displacement does exist. In fact, I was talking to the CEO of an AI company that I know, and he was talking to a potential client, a very powerful and rich billionaire. And this guy says to him, produce for me AI such that, literally, I want fewer people working here; I simply don't want to employ people. So produce for me AI that actually displaces jobs. There are people who are thinking this. And so in this world, where there are some less benevolent folks thinking these things, you have this idea of a kind of Pigouvian tax where you would actually tax the AI-using businesses that are displacing jobs. Tell us a little bit about that concept and how it works.
0:30:29.9 Jeremy Kahn: It's the idea of a robot tax, and it was an idea of, how do you encourage businesses to think of this as a complementary technology to human labor and not just a substitute for human workers? And I think that's really essential, because a lot of the risks from this technology really come from the idea that it's going to directly replace humans in the workforce on a one-to-one basis. A lot of the things that could go wrong with AI technology kind of assume that there's no longer a human in the loop on a lot of these systems, that it's not acting as a copilot but as an autopilot. So I think we wanna do whatever we can to encourage businesses to think about this as a copilot that's complementary to their workers, that can actually extend their abilities and potentially offer them a chance to do new things and move into new areas, not as something to just fire workers.
0:31:13.8 Jeremy Kahn: So I was thinking about how you encourage that. Some economists, of course, had come up with the idea of a robot tax, and what I was proposing was a kind of robot tax, but you want it to essentially hit firms that are clearly using automation to replace workers while also growing. So my idea was that it would only affect firms that had their profits growing and which were deploying AI systems while at the same time firing workers. That way, you wouldn't penalize a failing business that felt it had to lay off workers anyway to save the business. They would not be subjected to this tax.
0:31:46.1 Jeremy Kahn: But a company that was profitable and growing its profits, yet laying off its workers while deploying this technology, would be affected. It was just a simpler formulation. Some of the economists I talked to had very complicated formulations for a robot tax, where you would actually have to figure out the competitive advantage of the automation system versus people, and it just seemed like it would be very difficult for a tax agency to assess whether that had happened or not. This is a slightly cruder measure, but an easier one to use.
0:32:14.9 Satyen Sangani: Yeah, but it sounds really hard because I mean, if you displace people it's hard to attribute directly back to the AI as being the sole reason for doing it.
0:32:24.3 Jeremy Kahn: No, well, that's why I was saying it wouldn't have to be that. It's just: if you're deploying AI technology and you're firing people at the same time your profits are growing. And I'd also say it would have to be mass layoffs; I'm not talking about firing an individual here or there. But no, in my formulation the tax authority would not somehow have to figure out causation. There'd be this assumption that people were losing their jobs due to automation if you were making a big effort at automation and continuing to grow the business and yet shedding lots of workers.
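To make the trigger Jeremy sketches concrete, the rule as described here is just a conjunction of three conditions. This is purely an illustration of the conversation; the numeric thresholds below are invented for the example and are not figures from the book.

```python
# Illustrative only: the robot-tax trigger as described above, expressed as a simple rule.
# The thresholds are made-up assumptions, not proposed policy values.

def robot_tax_applies(
    deployed_ai_automation: bool,
    profit_growth_rate: float,            # year-over-year, e.g. 0.08 for 8%
    workers_laid_off: int,
    total_workforce: int,
    mass_layoff_fraction: float = 0.05,   # assumed threshold for "mass layoffs"
) -> bool:
    growing_profits = profit_growth_rate > 0
    mass_layoffs = workers_laid_off >= mass_layoff_fraction * total_workforce
    # All three conditions must hold: automation, growing profits, and mass layoffs.
    return deployed_ai_automation and growing_profits and mass_layoffs

# A profitable, automating firm shedding 8% of its staff would be in scope;
# a failing business laying people off without profit growth would not.
print(robot_tax_applies(True, 0.12, 800, 10_000))   # True
print(robot_tax_applies(True, -0.05, 800, 10_000))  # False
```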
0:32:55.3 Satyen Sangani: Yeah, it's still hard though. It's funny, my son has a sesame allergy, and one of the things that the FASTER Act advocated for was the premise that you would label every food that contains sesame. And what interestingly happened was that many of the manufacturers basically said, okay, we're just gonna throw sesame flour into every single thing, 'cause we just don't wanna have to deal with cleaning our lines. And so there were these sort of negative externalities from the regulation, and you could see something similar here.
0:33:23.9 Satyen Sangani: But I certainly understand the intent, and the premise that putting people out of work is a bad thing seems conceptually right to me, and like a bad motivator in general. So I guess, just right now, today, there are these models; a recent Strawberry model came out from OpenAI. Where are we right now with these models? What are the issues we're contending with? Give us kind of the today blow-by-blow. And do you feel like there's a leveling off of the innovation curve, or do you feel like we're still in early innings and it's gonna continue to accelerate pretty quickly?
0:33:55.4 Jeremy Kahn: Well, I definitely feel like we're still very early in this and there are still lots of gains to be made. And I think o1 just kind of shows you that. That model, which OpenAI debuted, is much better at reasoning and kind of logic tasks and mathematics. But this has some drawbacks: it takes longer to provide an answer, it uses more computing power, and as a result OpenAI is charging quite a lot more for tokens for that model than for other models. It's also not fully integrated right now with the rest of ChatGPT, which actually results in some stupid stuff. Like, if you ask it a hard question and it provides a good response, and then you happen to absentmindedly type "thanks" or tell the model thanks, it does the whole thing again, going through an entire reasoning process about how it should respond to your thank you, which is completely wasteful and dumb. What we really want is a model where your question goes to a small model that assesses which of the models is most likely to answer that question best and then just feeds it to the most appropriate model. And I'm sure that's coming very soon.
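The routing idea Jeremy mentions, where a small, cheap model decides which larger model should handle a request, might look roughly like the sketch below. The model names and the `ask` helper are placeholder assumptions, not any vendor's actual API or implementation.

```python
# Sketch of the "router" pattern: a small model picks which model should answer.
# Model names and the `ask` helper are illustrative placeholders.

FAST_MODEL = "small-general-model"        # cheap, quick answers
REASONING_MODEL = "slow-reasoning-model"  # expensive, step-by-step reasoning

def ask(model: str, prompt: str) -> str:
    """Placeholder for whatever LLM client you use."""
    raise NotImplementedError

def route_and_answer(question: str) -> str:
    # Ask the cheap model to classify the request first.
    verdict = ask(
        FAST_MODEL,
        "Does answering this require multi-step reasoning or math? "
        f"Reply REASONING or SIMPLE.\n\nQuestion: {question}"
    )
    model = REASONING_MODEL if "REASONING" in verdict.upper() else FAST_MODEL
    return ask(model, question)

# A polite "thanks" would be classified SIMPLE and never hit the expensive model.
```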
0:34:55.1 Jeremy Kahn: Yeah, I think the problem with this technology right now is that it's still a bit fussy to use. It can be very hard to figure out what prompts are gonna get you the best answer; prompt engineering is still more art than science right now, I'd say. And hallucination rates, while they've come down thanks to techniques such as retrieval-augmented generation, RAG, and Google has this thing called retrieval interleaved generation, where they actually have several different steps of retrieving documentation, reasoning about what's been retrieved, and then generating off of that. These techniques have brought hallucination rates down considerably, but they're still not at zero. There are still hallucination problems with the technology, and I think that's frustrated a lot of businesses; they're not quite sure where they can deploy this safely and how to deploy it at scale.
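As a rough illustration of retrieval-augmented generation as described here, the pattern is: retrieve relevant documents first, then ground the model's answer in them. The keyword retriever and `generate` call below are simplifying assumptions; production systems typically use embeddings and a vector database rather than keyword overlap.

```python
# Minimal retrieval-augmented generation (RAG) sketch, for illustration only.
# The naive keyword retriever stands in for an embedding-based vector search.

DOCUMENTS = [
    "Retrieval-augmented generation grounds model answers in retrieved text.",
    "Hallucination rates drop when the model is told to rely only on provided context.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Score documents by word overlap with the query (purely illustrative).
    scored = sorted(
        DOCUMENTS,
        key=lambda doc: len(set(query.lower().split()) & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call."""
    raise NotImplementedError

def answer_with_rag(question: str) -> str:
    context = "\n".join(retrieve(question))
    return generate(
        "Answer using only the context below. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```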
0:35:42.2 Jeremy Kahn: And I think ultimately we will solve a lot of these problems. I do think all the AI startups that are working on these specific industry copilots, for law or for accounting or for medicine or for other professions like architecture and design, which I've heard of, will solve some of these problems, 'cause they actually will figure out the right meta prompts to get the performance out of the model and architect away some of this. It just becomes a much smoother process where you can say, look, I'm trying to design a building, here are the parameters, here's the square footage, here's the plot of land; can you give me some ideas for what I could put on this space? And have it just do it, without having to give it a very long prompt yourself saying, you are an architect and you are doing the following, and please provide your output in the form of a blueprint. You won't have to do all that prompting. I think we're gonna have software firms that will have figured out, through the design of specific user interfaces, how best to do this, as in the sketch below.
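A trivial version of that "architect away the prompting" idea: the copilot vendor bakes the role, instructions, and output format into a template, so the user only supplies the domain inputs. The template wording and the `generate` helper are hypothetical, not any real product's design.

```python
# Sketch of a domain-specific copilot that hides the meta prompt from the user.
# Template text and the `generate` helper are illustrative assumptions.

ARCHITECT_TEMPLATE = """\
You are an experienced architect. Given a plot of land and a target square footage,
propose three building concepts. Present each as: name, massing, and key trade-offs.

Plot description: {plot}
Target square footage: {square_footage}"""

def generate(prompt: str) -> str:
    """Placeholder for an LLM call."""
    raise NotImplementedError

def design_ideas(plot: str, square_footage: int) -> str:
    # The user never sees or writes the meta prompt; they just fill in the fields.
    return generate(ARCHITECT_TEMPLATE.format(plot=plot, square_footage=square_footage))

# Example usage: design_ideas("0.5 acre corner lot, sloping north", 12_000)
```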
0:36:39.7 Satyen Sangani: Yeah, for sure. Switching gears a little bit, you've talked a lot about AI's impact on war. War is always an interesting use case; it's one of the things that pushes technology forward and shows the limits of what it can do. In your book, what you stated was that there's a theory that AI will push us faster towards total war. Can you tell us what total war is, first of all? And then talk about why that would be the case.
0:37:05.0 Jeremy Kahn: Yeah, so the idea of total war is a war that involves all of society, where there is essentially not just one front line, but the home front is involved as well, and you have the entire society kind of mobilized towards the war effort. That's what total war is. So people talk about World War II, it's probably the last total war that the US experienced. And what Ukraine is going through right now, for instance, is for Ukraine sort of total war. But it's very different than what we saw in Iraq or Afghanistan, which were very limited kind of regional conflicts where the whole society was not really involved in that conflict and our industrial base was not completely yoked to producing weapons for that war effort. So why does AI push us more towards total war is interesting.
0:37:45.3 Jeremy Kahn: It has to do with the idea that increasingly what is going to be at the front line is going to be weapon systems that are run by AI software, and that there'll be essentially just fewer boots on the ground at the front line, and therefore fewer targets to kind of effectively hit there. And then the question becomes, well, if you have fewer of these targets to effectively hit on the front line, what can you do? And the answer is probably to reach further back to the sort of headquarters and then further back to the homeland of your attacker or your enemy.
0:38:17.4 Jeremy Kahn: And I think that dynamic pushes you towards total war faster. And these systems might have autonomy. Today, you can have drones that are flown from thousands of miles away, and if you're fighting an enemy that's flying drones from thousands of miles away, one thing you might think about is trying to hit the center where those operators are.
0:38:40.3 Jeremy Kahn: If you have weapons that have autonomy, there's not even like these operators to hit. Instead, you have to start thinking very quickly about, well, how do I attack the political command and control of my enemy? How do I try to attack my enemy's industrial base so they can't keep building these systems and deploying them to the front line? How do I disrupt the training of future AI models? How do I disrupt the, you know, the production of chips that they need? So it just pushes you down this direction of hitting targets that are much more remote from the battlefield and towards this kind of total war conception.
0:39:08.4 Satyen Sangani: I mean, it almost feels from the outside like that's a bit of what's happening in Israel. Obviously they're deploying robots and drones at the front lines, but at the same time, they're taking out leaders of Hezbollah in different territories and pushing in very aggressively. It feels like there's a lot of game theory going on in the background, where they're playing out multi-part games and fighting on multiple fronts. Does that seem like a reasonable interpretation in that context, or not quite what you're talking about?
0:39:37.4 Jeremy Kahn: I think that's not quite what I'm talking about, although it's interesting to the extent that Israel is thinking about, if it's Hezbollah or Hamas, what are their supply chains and how can we degrade those as much as possible? And it also is true that, certainly with Hamas, I think they ran out of kind of military targets to some extent quickly, and then they also wanted to take out Hamas's political leadership. But I think that also had to do with the unique circumstances of that conflict, in the sense that the Israelis felt they were facing this very existential threat and therefore really wanted to completely demolish the organization. I think they even said that was their stated goal with regard to Hamas: the complete elimination of the organization.
0:40:20.3 Jeremy Kahn: So that of course pushes you towards attacking the political leaders. What I'm talking about would be different. It would be due not to the political aim of totally dismantling the enemy at the outset; it would just be the consequence of not having a human enemy to strike at the front line. Facing all this automated weaponry would push you further and further down the path of going after supply chains and going after political headquarters, because that would be the only way you could have a human impact that might end the war. Otherwise you're just facing these automated systems on the battlefield, and the enemy might not care very much if you destroy a few of them, as long as they have more that they can bring to bear.
0:40:58.6 Satyen Sangani: Yeah. So you have to go to the source in order to be able to address the conflict from the outset.
Two future-facing questions, one of which is totally within domain, the other of which is maybe more subjective. You talk to a lot of people; everybody talks about AGI, and you mentioned ASI, the singularity. When do we see those things? What's the range? Is it like three years, or is it more like self-driving cars, which we thought we were going to get a while ago and now always seem 10 years away?
0:41:25.5 Jeremy Kahn: Yeah. Well, I think it's probably more like self-driving cars, to be honest. I don't think we're that close to AGI. In the next five years, which is really the timeline I was looking at in the book, I don't think we're going to achieve AGI. But there are people I talked to at the leading labs who are convinced it's certainly possible within five years, and some of them are absolutely convinced it will happen within five years.
0:41:46.9 Jeremy Kahn: Some even think within two to three years, it's quite possible. And part of the problem is we haven't defined AGI that well. And so it becomes a bit of a moving target and people have different ideas about what's required. And I would think that an AGI, you know, something that actually equals human level intelligence, even for the average person would have to have a certain efficiency of learning.
0:42:08.5 Jeremy Kahn: And that's something that these systems don't have, and it doesn't seem to really be on the horizon. The systems still need to be fed tons and tons of data in pre-training. Yes, once they've been trained, they can do a kind of zero-shot example, but the actual training involves exposing the system to many, many more examples than a person would need to learn a subject. And I think that lack of learning efficiency is one of the problems with the current technology and something we would have to solve to get to AGI. And then ASI, artificial superintelligence, is based on the idea that once you get to AGI, you could have this intelligence explosion where the system starts to self-improve at a dramatic rate and you get this kind of exponential takeoff. Again, I don't really see that happening anytime soon. I actually think it will be much more like self-driving cars.
0:42:57.4 Jeremy Kahn: But the interesting thing about self-driving cars is not that we don't have widely deployed self-driving cars yet. It's more that what we do have though is increasing levels of driver assistance that are getting us, you know, by baby steps ever closer to self-driving. And I'm pretty convinced since we have self-driving cars already in a few places, we probably will have, you know, self-driving cars eventually in a lot more places. And we're also going to have much more powerful driver assistance so that maybe in good weather, most of us don't have to worry too much about driving. And then, yeah, if it rains, we still have to know how to drive because if it rains, we're going to have to drive or something. But I think it's going to be a lot like that with these systems, you know, where they will be a kind of copilot technology that will provide a kind of gradually creeping level of autonomy.
0:43:38.4 Jeremy Kahn: And, you know, we're going to get more and more powerful models, but because of things like hallucination rates, I think they're not going to be in a position to totally replace us anytime soon. And we'll just get sort of more and more assistance with things and more sophisticated assistance and more accurate assistance and kind of gradually over time. And eventually we may achieve AGI. I think AGI is probably possible. I just think it's not imminent.
0:44:01.5 Satyen Sangani: So this has been a really fun conversation and I've super enjoyed it. We haven't talked a lot about your day job, which is at Fortune. Tell us a little bit about your editorial strategy there as far as AI is concerned. What are you excited about covering? Where do you expect to spend more of your time, energy, and focus? Because I think that's going to be super instructive for our listeners who are interested in where the puck is going.
0:44:23.3 Jeremy Kahn: Yeah, well, I'm very interested to see what happens with the rollout of more and more reasoning capabilities. I'm interested to see who matches OpenAI's o1 model and what that looks like. There are all these agentic AI systems that are going to be rolled out in the next two years, and I'm very interested in what happens with those and some of the issues around how those systems are actually going to discern human intent. How is that going to work, that interaction when you send a system out to do something for you? I'm pretty sure within the next two years we'll have something where, if I want to go on vacation, I can just say to my AI assistant, I want to take a vacation to Rome, go out and research some options and book my holiday for me. The question will be, it might come back and say, oh great, I booked you this holiday. You've got first class tickets to Rome and I've got you in this five star hotel and it's $1000 a night. I hope that's okay 'cause I already paid for it.
0:45:12.4 Jeremy Kahn: And then the user is going to be like, wait a second, I didn't want you to do that.
0:45:15.3 Satyen Sangani: Unless you're Eric Adams.
0:45:16.4 Jeremy Kahn: Unless you're Eric Adams, exactly. Then it's fine. And somebody else is paying for it. So it's all okay. But no, it's going to be this issue of how do we prevent systems from doing that? What are the steps that the vendors that are creating these agents have architected in so that we can be sure that they follow human intent? But at the same time, you don't want it to have to constantly come back to you and say, is this okay? Is this okay? Is this okay? 'Cause that'll get annoying. So there's this kind of interesting trade off between usability and risk that I think we're going to have to negotiate with these agents.
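One simple way to negotiate the usability-versus-risk trade-off Jeremy describes is a spending threshold: the agent acts autonomously below it and must ask above it. The threshold, function names, and approval flow below are assumptions for illustration, not any agent framework's real API.

```python
# Sketch of an approval gate for an agent that can spend money (illustrative only).

APPROVAL_THRESHOLD_USD = 300.0  # assumed per-transaction limit

def request_user_approval(description: str, amount: float) -> bool:
    """Placeholder: a real agent would surface a confirmation UI to the user."""
    reply = input(f"Approve '{description}' for ${amount:.2f}? [y/N] ")
    return reply.strip().lower() == "y"

def execute_purchase(description: str, amount: float) -> None:
    """Placeholder for the actual booking or payment call."""
    print(f"Booked: {description} (${amount:.2f})")

def agent_purchase(description: str, amount: float) -> None:
    if amount <= APPROVAL_THRESHOLD_USD:
        execute_purchase(description, amount)           # small spend: act autonomously
    elif request_user_approval(description, amount):    # large spend: ask first
        execute_purchase(description, amount)
    else:
        print(f"Skipped: {description} (user declined)")

# agent_purchase("Hotel in Rome, 5 nights", 5000.0) would pause for user approval.
```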
0:45:50.0 Jeremy Kahn: So that's something I'm definitely watching. I'm also watching whether there's some dark horse, a completely different type of deep learning or even a different fundamental architecture, that's going to come out of nowhere and potentially displace the current models. I don't think that's necessarily going to happen, but there are some interesting inklings of other things at the margins, and it'll be interesting to see if one of those emerges.
0:46:09.3 Jeremy Kahn: So that's something I'm watching. Regulation is another area that's super fascinating and that we're keeping a very close eye on, unfortunately.
0:46:16.5 Satyen Sangani: Really cool. Well, Jeremy, this has been a whirlwind. I was super excited coming in and I'm just as excited going out. It's great to get your knowledge to all of our listeners, and also just to get your perspective. It's really helpful, and it's fun to watch you do what you do. Thank you for taking the time to speak with us.
0:46:31.5 Jeremy Kahn: Thank you so much. This has been great. Thanks Satyen.
0:46:35.4 Producer 1: What a fascinating conversation. As a journalist, Jeremy has a unique perspective on the opportunities and challenges presented by the rise of AI. He's bullish on the possibility of productivity gains and as the public's need for key jobs like accountants grows, one could see the value of AI copilots filling that gap. But he's also a realist. Risks like job displacement and democratic disruption should not be taken lightly. And total war in the age of AI is a sobering, scary possibility. Our takeaway? Whether you're an AI newbie or trailblazer, watch the news. There's a complex interplay between innovation, ethics, politics, and societal impact, and it's going to affect us all. Thanks for listening, Data Radicals. Keep learning and sharing. Until next time.
0:47:17.4 Producer 2: This podcast is brought to you by Alation. Your boss may be AI ready, but is your data? Learn how to prepare your data for a range of AI use cases. This white paper will show you how to build an AI success strategy and avoid common pitfalls. Visit alation.com/AI-ready. That's alation.com/AI-ready.