Welcome back to Generally Intelligent! We’re excited to relaunch this podcast on Substack, now with video. Our episodes still feature thoughtful conversations on building AI, but with an expanded lens on its economic, societal, political, and human impacts.
Matt Boulos leads policy and safety at Imbue, where he shapes the responsible development of AI coding tools that make software creation broadly accessible. His work centers on understanding what technological power means for individual liberty, and he advocates for the legal and institutional frameworks we need to protect our freedom. Matt is a lawyer, computer scientist, and founder.
In this conversation, Matt and Kanjun discuss:
AI’s four core challenges
Empowering bad actors
Transferring power from labor to capital
Reducing resistibility
Psychic damage of disempowerment
Governing lawless digital spaces
Why abundance is not enough without liberty
Freedom as deep enablement and deep protection
The role of technologists in shaping society
Timestamps
03:13 The complex landscape of AI conversations
06:11 Understanding AI's core challenges
08:57 The transfer of power from labor to capital
11:51 Resistibility and human agency
15:00 The dual nature of technology
18:01 The invisible dynamics of digital spaces
23:51 Lawless spaces
27:01 The future of work and economic stability
40:05 Privacy laws and digital rights
44:07 Code as regulator
54:12 Interoperability and user control
01:07:05 Aggregates vs. individuals
01:14:43 Bottom-up vs. top-down automation
01:20:49 Optimizing for increased ability rather than increased productivity
01:23:11 Economic implications of AI
01:26:54 Building systems for empowerment
01:29:22 Freedom as deep enablement and deep protection
Transcript
Kanjun Qiu (00:21)
Welcome back to Generally Intelligent. My name is Kanjun Qiu. I'm the CEO of Imbue. And we have with us Matt Boulos, our Head of Policy.
We started this podcast back in 2020 when we were trying to understand from researchers how far this generation of LLMs would go. The podcast has succeeded far beyond what we expected. Many of our early guests went on to have huge impacts on the field, and AI has gone from a niche thing to a household name everyone talks about.
But since it's become so ubiquitous, we've started to realize something strange. The conversations in public are really weird. We have one AI CEO saying they're going to replace all of our jobs, but they're distributing intelligence, so that's good. And another AI CEO who's worried that it's going to kill us all, but it'll also give us tutors in India and new medicines, and so that's okay.
But where is the serious conversation about the real costs and benefits of this technology, the real economic, societal, political, and very human impacts that AI is going to have on our lives?
Generally Intelligent—this podcast and this conversation—is the start of that. We want this to be a space for us to have serious cross-disciplinary conversations about AI so that we can make changes. We can talk about different economic mechanisms, different ways to build technology, so that we can create the future that we want.
Because today, it's not too late. We can still change how this technology shapes society. And if we wait too many years, that's not going to be the case anymore.
So let's dive in.
Matt Boulos (02:45)
You've put a lot of thought into the core challenges that AI brings. Why don't you walk us through what you see as the main areas that we need to take seriously if we're going to address AI's impacts?
Kanjun Qiu (03:44)
Sometimes it's so overwhelming because people talk about all of these different problems as a whole smorgasbord, from sycophancy, to how AI might take all of our jobs, to how it might take over the world. So, the way I think about the problems is to bucket them into four categories based on the mechanism of action.
One, empowerment of bad actors. The core mechanism is, the power of actors who might do damage goes up. It's a technology that gives a lot more capability, and now various people who couldn't wield this capability before can.
And I actually lump both AI takeover, meaning AI systems taking over and dominating humans, and terrorism in that category, because the mechanism of action is the same. If AI is taking over, that just means that AI is taking a lot of this power and then doing negative things to humans. And the same goes for terrorists or authoritarian governments.
The reason why it's helpful to think about that mechanism of action is that it's very generative for solutions. When I think about anti-social actors, there are a couple of things I can do in the solution space. One, I can prevent anti-social actors from getting that power. Let's look at which actors exist—governments, individuals, the AI systems themselves—and then look at how we can prevent them from getting power. That might be all forms of know-your-customer laws, or safety research, or things like that.
On the flip side, another way to make things more resilient is to make the world safer against bad actions like these. Maybe in that camp is better surveillance of the creation of biological artifacts so that we can prevent viruses, or inventing a universal antiviral that would remove a whole class of dangerous problems. This category actually gets talked about fairly often; the important thing is that it is just one category, and that many solutions address several of these actors and the problems they pose.
Kanjun Qiu (06:47)
The second category I think of as transferring power from labor to capital: capital-L Labor to capital-C Capital, in the Marxian sense. As labor becomes less powerful because we are less valuable, and capital gains power, what happens?
Most of us are in the labor class. We do not own the factors of production. We work for wages. And this is a technology that’s starting to do things that we currently do wage work for. So what happens to all of us who work for wages?
There's the immediate, somewhat alarming effect of that: losing jobs. But there's, to me, the long-term, somewhat alarming effect, which is that you have this constant power transfer from labor to capital that never stops.
Matt Boulos (07:51)
There is something really quite striking about a world where the ability to be productive depends on capital. This is a really abstract way of saying: I show up to work, and the capital I'm bringing is my laptop, but for the most part, I'm bringing the labor. Imagining this world, maybe the day's gonna come when the laptop matters way more than I do, and it's a question of who owns it.
Kanjun Qiu (08:41)
That's a really good way of putting it: the transfer of power from labor to capital is equivalent to the transfer of usefulness from me to my laptop. So what happens in a world where the laptop's way more useful than I am?
Matt Boulos (08:54)
I've never looked suspiciously at this thing before.
Kanjun Qiu (08:57)
What happens in that world is not just economic. It's not just that I get paid less. Maybe I wield my laptop, so I still get paid something. But in theory, the company owns my laptop, so I may not get paid at all.
But the second effect of it is political. Part of the reason why we have political power is because our government depends on us to fund it. And there are a lot of countries that don't depend on humans. They depend on natural resources like natural gas or oil: the UAE, Russia. And they have a lot less incentive to treat their people well than we, perhaps, do in America. So I'm somewhat concerned about the loss of political power that we'll have because of our loss of economic power.
Capital can now just use capital—use AI—to produce more capital, and there's this reinforcing loop.
Kanjun Qiu (09:59)
The third category, which I haven't heard that many people talk about, is your idea of resistibility. In political philosophy, there's this idea of resistibility: how well can you resist laws that don't serve you?
In America, we have fairly high resistibility. The civil rights movement was a good example of that, where you could actually disobey, have civil disobedience, and then change the laws. There are countries that have very low resistibility, like China as a surveillance state. And one thing that we're concerned about is going into a future where the resistibility of humans against automated systems—either controlled by themselves or controlled by other people—is much lower. We lose our power. So, the core mechanism here is a transfer of power from people to automated systems and the people who control them.
There are a lot of examples of low resistibility today. For example, we have very little ability to resist our social media notifications. We can turn them off, but we also have very little ability to resist our social media algorithms or news algorithms, or to control the news that we see. There are ways of opting out, but I would consider it a fairly low-resistibility environment.
And as we go into a future that has a lot more automated systems—agents that are doing things automatically—that's something that's really important to consider. Now, other people are going to have agents that do things like spam call you constantly, or try to convince you on a website to buy something you don't need, or try to convince you to give them data that they can resell. Especially given what we see about current capabilities, it’s not clear that we have anything in place that addresses that.
The fourth category is how it affects us as people to live in a society where we don't have very much power—we don't have power economically; we don't have power to resist things. We end up disempowered and, in the best case, infantilized.
That is scary because there is a deep sense of learned helplessness that happens as we lose power. There's a great study by Lisa Kahn about how college grads who graduate in a recession have lower wages for the rest of their lives relative to college grads who graduate just a year or two later, which is super crazy. You term this “psychic damage,” which I really like: damaging our own perception of how capable we are and can be in the world. And I think this is really sad. There's this spiritual damage that we don't talk about, which is about human potential, about what people can be. What we want is for AI to expand our potential and expand what humans and humanity can be, but there are all of these effects that, on this default path we're on, seem like they're going to go against that.
Matt Boulos (13:21)
I want to run with the last thing you said because... I guess I have to come out with this. I'm old enough that I have a memory of my life that's pre-internet.
Kanjun Qiu
That is old.
Matt Boulos
I remember being in grade school and Mr. Gen, the computer teacher, knew that I was really bored, so he would pull me out of class and then we would pretend that I was learning, but we were just trying out new software. One day he's like, “Let me tell you about this thing called bulletin board systems. There's somebody else on this computer.” I'm like, “Where are they?” And he's like, “They're in another country!” It was wild and so hopeful.
I think often these days about my parents calling their parents after they immigrated to Canada. They'd get these crap calling cards that would cut out and were choppy and super expensive. Now, I FaceTime my mom and she's like, “I have to go now.” I'm like, “how about we hang on for a little while longer?” We're taking for granted the ability to see each other, hear each other.
So you have all this incredible potential, and it's beautiful, and it's real. Our machines can augment us, and we love tools. I don't understand how you could say, “I love my pen that I write with, but I don't like my laptop.”
But at the same time, we know that technology has been this very mixed force in our lives, from the capacity to surveil people to predatory mechanisms around how we communicate. One of the things that I have felt is really important to bring to the conversation is that talking about AI as good or bad is almost silly. It's like saying trees are good or bad. You plant it right next to your foundation, that was a bad move; you step out into a clean city, and it's the most wonderful thing. There are, of course, limits to that analogy, but there is something really profound about taking the complexity seriously.
Kanjun Qiu (15:47)
I was really struck by a quote that you said many years ago where you had just read something by Marshall McLuhan, and you rephrased what he said: that we adopt technology for its benefits and then we suffer its consequences. So, the important thing is to think about those consequences that we might suffer and see if we can get more of the benefits and less of the suffering.
Matt Boulos (16:17)
What was brilliant about McLuhan was that he'd clued into this dynamic where we adopt a technology and it changes how we work and interact, and our own capabilities, so that it is no longer possible to detach and disentangle.
Kanjun Qiu (16:35)
We’re part of it; it's part of us.
Matt Boulos
Take something like social media. The public narrative on it is actually condescending and not correct. We're not just a bunch of dumb-dumbs who are sitting around swiping things because we don't have anything better to do—or at least not completely. The thing that's really happening is that our lives, our social lives, are on here. I get to see my friends' kids. I'm able to get a diet of things that matter to me or that entertain me. There's nothing at all wrong with that. We don't bear responsibility for the fact that these things are wildly addictive. But once our lives have moved onto these platforms, we're in this really stuck position, because if the platforms don't behave, then we are subject to that.
When I started law school, I decided to run an experiment. I had no phone, no computer; I had nothing. I'm like, “I'll be a nerdy monk and see how this one goes.” And one of the interesting things that happened was that I realized that nobody wants to call your landline to invite you to parties. Everyone wanted me to go to the parties, but they'd tell me about it the day after. They were like, “you weren't there!” “Well, you didn't invite me!” They're like, “oh, yeah.” Everyone was texting each other, and it was a simple thing. You can't just exit. That was significant. And then I got a phone.
Kanjun Qiu (18:01)
This is really important. You can't exit technology. We can't just be like the Amish because this technology is now so prevalent and so entwined in our social interactions and our lives.
Matt Boulos (18:19)
Let me give you the silliest example. My son is in a preschool that has a bit of Mandarin immersion, and I know no Mandarin at all. He's now singing “bá luóbo,” and it turns out there's this children's song about pulling radishes out of the ground. My son's just marching around the house singing this song, and so we're like, “what's going on? Why did we enroll him in a language school that we don't understand?”
What do we do? We just hopped on an LLM and said, “okay, my son is singing this, can you tell me what this is?” All of a sudden, that world opens up, and it's beautiful. I want to challenge the idea that that is us conceding something. Why should I be trying not to do that?
Kanjun Qiu (18:56)
This speaks to something you said earlier: when we get on social media platforms, there's a lot that's very positive. But as I was saying earlier about resistibility, it's very hard to resist these systems. What's going on here? It is something about the power inherent in technology.
Fundamentally, what is AI? It is doing computation. Computation is the same computation that our brains are doing. It's taking inputs, perceiving them, running them through some model of the world, outputting some things, and those outputs can be turned into actions.
There's something about social media where it is an AI agent. It's making decisions about what actions are being outputted, like what I see on my newsfeed. And as a result, I get different inputs.
So, I'm getting this other input or information that changes my model of the world. To your point about this symbiosis, the technology is making some decisions that cause me to get different inputs into my world model, and now my world model is getting morphed or transformed in a different direction. And that can be very positive. For example, you learn about your son and the song that he's singing, and that expands your world model and helps you see reality more clearly. Perhaps the areas that I feel concerned about are the places where it changes the way you see reality in a way that is more twisted.
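To make that loop concrete, here is a minimal sketch of a toy feed, assuming nothing about any real platform: the model picks an output, and that output shifts the very profile the next prediction is made against. Every name and number is invented for illustration.

```python
# A toy version of the feedback loop: the platform's model chooses what you
# see, what you see nudges your interests, and the next prediction is made
# against that already-shifted profile. Purely illustrative.

def engagement_model(user_profile: dict, post: dict) -> float:
    """A stand-in for the platform's world model: predicted engagement."""
    return sum(user_profile.get(topic, 0.0) * weight
               for topic, weight in post["topics"].items())

def feed_loop(user_profile: dict, posts: list, steps: int = 3) -> None:
    for _ in range(steps):
        # Output: show the post the model predicts you'll engage with most.
        chosen = max(posts, key=lambda p: engagement_model(user_profile, p))
        print("shown:", chosen["id"])
        # New input: seeing the post shifts the user's interests toward it.
        for topic, weight in chosen["topics"].items():
            user_profile[topic] = user_profile.get(topic, 0.0) + 0.1 * weight

posts = [
    {"id": "calm-essay", "topics": {"ideas": 1.0}},
    {"id": "outrage-bait", "topics": {"outrage": 1.2, "ideas": 0.1}},
]
feed_loop({"ideas": 0.5, "outrage": 0.6}, posts)  # the bias compounds each step
```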
Matt Boulos (21:00)
It feels manipulative. But I want to take a step back, because I've been like, foam finger, go technology, go. This point about technology and power, thinking about its mechanisms, is really important. Going back to my son singing his song: I went to this machine and asked it a question, and it came back with an answer that I couldn't have gotten five years ago, or at least not so easily, not so smoothly.
But we're only talking about one part of that transaction. We're talking about me asking that and getting an answer. We're not talking about: Is this being logged? Does this system now know that my son is in a Mandarin immersion class? Are we gonna get like Mandarin worksheets offered up to us at the next interaction?
There's something about this digitally mediated world that is foreign and dangerous. And I think that's worth probing into.
Kanjun Qiu (22:11)
What do you think it is?
Matt Boulos (22:13)
I have a couple of different mental models. Let me play with two.
One is something that I call lawless spaces. Imagine a part of town where the police don't go. The rules, the norms—people are working them out, but they're not governed like the rest. And you cross that threshold into those places. Suddenly, things become possible. Maybe there's just a free spirit to the place. It feels like the chaotic early days of the internet: people who reveled in anonymity, not because they were up to trouble, but because there was something liberating in it. You can imagine that creating a heady atmosphere.
Kanjun Qiu (22:59)
Like the Wild West.
Matt Boulos (23:10)
Exactly. Your bank is like, “why shouldn't I be there too?” So your bank sets up shop, except when you step out of the bank holding a bag of coins, someone whacks you over the head and takes your coins. That space doesn't have the same rules, the same governance. It's not a perfect analogy, but there's a lot to be said for that.
I was talking to somebody about privacy and they're like, “That ship sailed.” And I said, “Well, why?” If we were having dinner and some dude comes up to you and just stands right next to you while you're talking, and he's just writing down what you're saying, you give him a slap, send him out of the restaurant, right? We have that instinct, but we don't see the equivalent in the digital context, so we haven't learned to govern it.
Kanjun Qiu (23:47)
Do you think digital spaces are lawless because they're not visible?
Matt Boulos (23:51)
I think that's a huge part of it. The next thing I want to talk about is what makes the digital space really particular. One thing is that most of what happens in it is actually invisible to us. If I go to the neighborhood oracle and give them 10 bucks and say, “My son is singing this song, can you tell me what this is?”
He's gonna sit there, like, “Oh man, I know what it is!” He's gonna grunt and groan, write something down, and send me out. I go to an LLM and it's a magic box. You and I may know how it should work in theory, but we don't actually know how it's implemented. We don't know what's gonna happen a year from now, five years from now.
Kanjun Qiu (24:44)
And you don't know what's being logged; you don't know what the company is doing with the data. There's a lot you can't see.
Matt Boulos (24:52)
There are really particular characteristics to just the digital world. It is easier to log than to not log.
Kanjun Qiu (25:00)
And it's safer in a lot of ways.
Matt Boulos
And there's an expectation. If something doesn't work, the customer is like, “I did X and it didn't work.” And you're like, “I have no idea what you did, I have no logs.” And they're like, “What are you doing? Are you junior grade developers?”
It is easy to log data. It is cheap to collect data. It is lucrative to collect data. Even before advanced models like LLMs, we could crunch through stupidly large amounts of data. So you have these reinforcing mechanisms that take us to really perverse outcomes.
We talk about surveillance. Surveillance enables an astonishing amount of bad stuff. Resistibility is something that you reach for when you're in conflict: something has gone wrong, and you have to resist it. But preceding that is legibility: do you even know who I am?
Kanjun Qiu (26:01)
This prompts a thought for me: let's imagine a world in which we give everything digital a physical manifestation. What I'm hearing you say is, we have ended up in this really weird default digital world, especially going into this AI future. It's weird because the defaults are weird. One default is that we log all data. A second default is that companies—I, running this company—can process that data however I want. Another weird thing is that now we have these AI systems, and they can do lots of new magical things with that data, for example, take a photo and know where I am. As the person at the other end, I can't see any of it. And as a result, I don't have a mental model of it being a problem at all.
Matt Boulos (27:01)
We also don't have an emotional response. We're wired as human beings to recognize these things in the physical world, but here we can't react.
Kanjun Qiu (27:08)
We talk about something like surveillance and it's such an abstract concept. But if you were to turn surveillance into a physical manifestation, like that guy writing down everything in your conversation at the next table, it would be like having five people following us around everywhere, all logging different things about our lives and changing other things in our lives based on what they're logging.
Matt Boulos (27:29)
This is where it starts to get wild, because on one track, when people talk about the productivity benefits of AI and the labor impact, we're talking about labor substitution. But there's another way of thinking about the impact of AI within the labor context, which is that new work is being created. Let's take something like credit scores: largely opaque systems that the financial services industry benefits from, where the good-faith argument is that we all benefit. If I'm an untrustworthy borrower, you shouldn't have to be paying rates to subsidize me, so we stratify on the basis of reliability, or whatever terms they use to describe it, like creditworthiness.
But then you could start to shift the granularity of that. We could just collect all sorts of stuff. We could also experiment. We could collect data even if we're not sure it's relevant. Deny me a loan—who cares? I'm one data point. My life gets crushed, but they don't know about it because their system did it; they just move on and experiment. That dynamism that becomes possible is going to be potentially quite pernicious.
Kanjun Qiu (28:55)
When you say dynamism, what do you mean?
Matt Boulos (28:58)
You could have systems that are not stable anymore. There isn't a credit score. There's an algorithm that's constantly rewriting the rules. Why not? As long as it's goal-seeking against minimizing defaults, it doesn't matter how unfair it is. When we talk about unfairness—putting on my lawyer hat—we often talk about things like disparate impact, protected categories, that sort of thing. But what happens when it's arbitrary, what happens when it's large categories of society, what happens when it's not easily pinpointed? Again, the bad stuff is happening behind the veil, so we don't know.
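A toy sketch of that dynamism, with everything hypothetical, including the features: the lender re-fits its weights on whatever data is at hand, solely to minimize defaults, so the rules an applicant faces are never stable and no feature is off-limits.

```python
# Hypothetical illustration: a lending rule re-derived from each batch of
# outcomes. There is no fixed "score", just whatever separates repayers
# from defaulters in the latest data, relevant or not.

import random

FEATURES = ["income", "zip_code_risk", "phone_battery_level"]  # arbitrary mix

def refit(history: list) -> dict:
    """Crude re-weighting: any feature that separates the groups gets weight."""
    weights = {}
    for f in FEATURES:
        repaid = [app[f] for app, ok in history if ok]
        defaulted = [app[f] for app, ok in history if not ok]
        weights[f] = (sum(repaid) / max(len(repaid), 1)
                      - sum(defaulted) / max(len(defaulted), 1))
    return weights

def decide(applicant: dict, weights: dict) -> bool:
    return sum(applicant[f] * weights[f] for f in FEATURES) > 0

# Each batch of outcomes rewrites the rules the next applicant faces.
history = [({f: random.random() for f in FEATURES}, random.random() > 0.3)
           for _ in range(100)]
applicant = {f: random.random() for f in FEATURES}
print("approved:", decide(applicant, refit(history)))
```

Nothing in that loop encodes a fairness constraint, and nothing in it even records why a given applicant was denied.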
I want to connect that to something you were talking about earlier, the economic impacts, when you said that destabilizes society. But also, when you live in a world where you are subject to all of these forces and you're helpless against them, it's not good for a person to feel that way. Think about the worst parts of childhood, where adults are not taking you seriously, not letting you do something that you ought to be able to do, and then that becomes the dominant mode of adult life.
Kanjun Qiu (30:18)
It’s very disempowering.
Matt Boulos (30:22)
And this precedes oppression. That by itself is destructive. And then you add to that malicious intent or malicious oversight, and it isn't a surprise that we live in an angry moment in our society. I don't have a lot of patience with the tech community sitting around saying, “How could this be?” Well, you've been bloody architecting it for the last two decades. There is a reason people feel disempowered: they are disempowered.
Kanjun Qiu (30:53)
They have no power to change a lot of things.
Matt Boulos (30:56)
How could you change any of these things? The thing with AI—and I think it's really important that we ground it—is that we have to recognize that all of these dynamics are in play. Then, we can ask how do you design, and how do you get to empowerment? Because we could also just sit here and be angry and walk away, but that's not going to help.
Kanjun Qiu (31:12)
Two things came up as you were talking. One is the narratives today about what kind of future is okay for humans. I think a lot of the futures that the tech industry talks about today are actually very disempowered. One type of future is, we're going to live in a stable utopia where everyone's going to have anything at their fingertips, and it's going to be okay. But that does not seriously consider these dynamics where people are being controlled by technology and by the people who control technology.
A second thing that you pointed at was this notion of utopia being like “permanent undergrad,” where you can be free and intellectually curious and it's really fun. But an undergrad is not an adult with the ability to fully manage their own life.
The kind of freedom that you're going for is for humans to be truly able to be fully adult and in the world themselves without being pushed upon by other forces, with the ability to push against those forces.
Matt Boulos (32:42)
Absolutely. What do we really want from our lives? It’s to be able to realize our capacities.
Kanjun Qiu (32:56)
And that involves growth and change and creation and being pushed down.
Matt Boulos (33:01)
Absolutely, and having a chance in all of it. One of the things that I've noticed within the tech community—again, not to take a piss on it—is that we'll talk about what an ideal future is, or what an ideal life for someone to have is, and it's just somebody projecting what they find interesting.
I have so many people in my life for whom the specifics of their job don't actually matter that much, as long as they can take care of their family and support their community in ways that are really meaningful to them. Those are rich, beautiful lives. And when the structures around a person erode, that is when we start to see this real frustration emerge.
Kanjun Qiu (33:53)
People are frustrated because they feel like they don't have any levers to change the situation of their lives, and they don't like the situation they're in even though the world is abundant and they're fed. There's something missing about their sense of autonomy or freedom or their ability to make change. What I heard from lawless spaces is, it's partially a lack of legibility and partially a lack of levers of action. And if you had legibility for everyone and levers of action for everyone to be able to change their life circumstances, the institutions that aren't serving them, then maybe those two things would allow us to be able to have a little bit more autonomy and self-determination in our lives.
Matt Boulos (34:48)
It's kind of hard in our present moment to think about what a stable political or legal regime looks like in general, but there is the simple fact that we have, for centuries now, figured out that it's not cool to steal somebody's money. It's not just that theft is wrong, but that the state can't do it, even if it's useful to the state. We say that's not right.
In conservative circles, people talk a lot about debanking, where banks just turn you off digitally. It's not a frequent occurrence, but it happens, and it has happened in response to political events. What's wild to me about it is that it just could not have been a thing a few decades ago; the bank would have had to literally steal your money.
Kanjun Qiu (35:54)
Like a bank run.
Matt Boulos (35:56)
Or simply, you'd go to your bank branch and they'd be like, “we're not going to give you your money,” which is what debanking looks like. And you would say, “you stole my money!” Whereas debanking now is either hitting a switch so you can't access your money, or saying, “here's your money, you're out of the financial system,” in a way that is only possible in a digital world.
One belief I have is that our laws and rules haven't caught up to digital reality, and AI then accelerates digital reality to all of its conclusions.
Kanjun Qiu (36:32)
What I hear from what you're saying is, the digital world is enabling all these mechanisms like being able to turn off my access to my funds and the laws haven't caught up. The last 2,000 years of development in the legal system have been about physical reality.
The same physical reality is now playing out in the digital world. You're giving all these physical analogs that are really interesting because they let us see the physical equivalent of what's happening, but somehow we haven't mapped that onto the digital world. What would be required to make lawless spaces more lawful? Why have we not caught up? Is it a lack of knowledge? Is it the lack of a visceral sense of what's going on?
Matt Boulos (37:37)
Each of the things you said feels to me like it's playing a part. We both understand computers really well, but when I hop on a website, it doesn't occur to me that they are tracking the things that I'm doing.
Kanjun Qiu (38:05)
True! The other day I hit ‘accept cookies’ and then I was like, “what happens when I accept cookies? Oh shit, it can track me across multiple websites — that's crazy!”
Matt Boulos (38:13)
I drive people nuts when they look over my shoulder, because I always not only reject cookies, but I open the settings to make a point of deselecting everything. And the hilarity is, often these are just pop-ups that don't do anything, and the site collects your data anyway.
Kanjun Qiu
That's kind of depressing. Thanks.
Matt Boulos
Yeah, it really is. You're welcome.
There's one sense in which it's not tangible, no matter how sophisticated you are.
The other thing is, it is new. In world historical terms, we're talking about living in this regime for 10 years. It is not that long, right? Google trying to figure out how to monetize was something that happened basically in our adulthood. That's nuts. And then going from web to mobile, the introduction of apps — all this has happened really, really fast. Part of it is we haven't caught up.
The other somewhat more cynical thing is that, it turns out lawless spaces are awesome because they're so lucrative. If you can do stuff like surveil people and track them and price fix and all of the rest, you can do all sorts of astonishing things.
If you deal with it now, it's a lot easier than down the line. One easy answer is a good privacy law. Had we put good privacy rules in place 10 to 15 years ago, it wouldn't be so painful now for the large tech platforms to unwind these privacy practices.
Kanjun Qiu (39:49)
But now it's entrenched. You have to change your entire infrastructure.
Matt Boulos (40:08)
We're talking infrastructure, business models, identity as an entity, and the market cap of these things. I don't want to grant sympathy to the surveillance practices, but this is a huge thing that we're going to have to ask of them. But we do have to ask it.
But there is an interesting question of what rights do we have that we have failed to translate, just as a practical matter? We already have these legal rights and we haven't brought them to these spaces. And then what are the new things that we have to figure out?
Larry Lessig's notion of code as regulator is really fun. What he does in this setup is point out that in every period of time, there's some regulating force that you have to contain if you want to protect liberty. In his construction, one that I share, we're progressively trying to increase liberty as a society. He points out that in the time of John Stuart Mill, you were worried about majority opinion—democratic opinion—because it can trounce minorities. So then we start to establish the notion of rights, and constitutions become vital to that, because if you just leave it to the majority, then that's sometimes not great. Then you have the Civil Rights Act, and suffrage movements, and so on.
What he was pointing out that I thought is really interesting is that the new thing is gonna be code. Code is going to operate — this was in 2000 that he wrote this piece — as regulator. And the argument there is that…
Kanjun Qiu (41:49)
Code is encoding laws.
Matt Boulos (41:50)
Yeah, code is going to determine how a sphere of life plays out. So then the question we need to ask in response is: what in that space needs to be addressed?
Kanjun Qiu (41:57)
I have this hypothesis that technology shapes our governance system — the way that technology is built and what makes it powerful. There's this theory that the reason democracy happened — I'm sure this is just one of many reasons — was because we went from a world where knights were the most powerful thing to a world where muskets were the most powerful thing. When you have knights, you have a lot of upfront investment in armor, you have to have horses and stables and all these well-trained people. That's a very centralized form of power. Technologies at that time resulted in this centralization because of the nature of those war technologies.
Then the musket was invented and now, knights and armor are not that useful. In fact, you actually want a lot of people who have muskets. So now people matter because of this new war technology that gives power to people.
We talk a lot about how AI, and the core four problems that I talked about, are fundamentally about power and transfers of power from one entity to another entity. We call it problematic when it gives power to entities that are not what we've determined to be morally right. In that lens, thinking about lawless spaces and what this upcoming technology is starting to enable, is there a nature to AI that shifts things one way or another?
Matt Boulos (44:07)
I have two responses. One is, there's also just law as law. What is it about this moment that we leave ungoverned? I find a lot of these free-market arguments, the accelerationist camp, essentially bullshit. All you're saying is, we don't want regulation. So let's just say that. There's nothing else there; it isn't a richer argument.
Kanjun Qiu (44:12)
Because lawless spaces are great.
Matt Boulos (44:39)
Lawless spaces are lucrative. They do yield huge amounts of opportunity. I'm not saying let's clamp down — that's how you shut everything down. It often does not make sense to intervene. It also does not make sense to intervene before you understand a space, because then you will have spent your political capital.
Think of even American politics, with all this craziness right now. There is political capital that can move you towards some privacy bill or things like that. And if you do the wrong thing, that capital's not waiting for you to go do it again. So you have to be disciplined about that.
But at the same time, you can't just say no rules. Or if you do, then that's ideologically encoded and you ought to own the rest of your argument.
Kanjun Qiu (45:20)
If there are no rules, we're buying into a particular society.
Matt Boulos (45:23)
And do we want that? Is that a fair thing to ask of others? If you want to impose that, then you should also expect resistance to it.
Kanjun Qiu (45:30)
‘Law as law’ is interesting because it actually argues against my argument that technology shapes society in this fundamental way. Maybe what you're saying is you could make laws that change that distribution of power.
Matt Boulos (45:44)
Something could be wrong, and whether or not the temptation to that wrong thing is great, it's still wrong. But then, if you don't want to eat the muffin, don't put it in front of you. And we have both. Law needs to set the boundaries of what's acceptable or unacceptable, regardless of what the temptations are. But the nature of the technology is going to shape those temptations. Back to the point about how surveillance is the easier default model.
So when it comes to what we do with these technologies…
Kanjun Qiu (46:15)
It makes some things easier than others.
Matt Boulos (46:17)
Yes, absolutely. A perfect example is going to be something around labor. And I want to bait you into this conversation. Labor impacts are going to be real. We don't even know what those are going to look like. There will be things that employers and companies can and can't do, and shouldn't do. Right now, we know the power a company has over, for instance, a warehouse worker whose work is determined by an algorithm. It's also worth pointing out that they don't have a capricious boss who can be an asshole and make their life hell. The algorithm is governing things both good and bad. But do we then say “this is the shape of the technology” and back away? Or do we recognize that this starts to introduce things that weren't possible before, and we need different rights and rules?
Most of our labor laws are predicated on humans interacting with other humans—more powerful humans, but they're human interactions. Whereas a machine can surveil your every motion and then dock your pay for scratching your nose at the 15-minute mark. And we don't really have mechanisms for that, because we couldn't have conceived of it as an active problem. It would have been nonsensical to have rules for it.
Kanjun Qiu (47:37)
This is very interesting because it speaks to actors in the world and the power that they have, and this new actor which is an algorithm or an AI agent. Like what you're saying is, right now we have laws and they govern your capricious boss, they govern you, they govern your corporation which is considered an actor, legally. So the only actors we have are humans and human institutions in the world before AI.
We have laws that limit the power of humans to harm each other and we have laws that limit the power of corporations to harm humans and vice versa. But now there's this rise of this new power, which is AI systems. AI systems have power because they can process information and turn information into action and action is power. Effective action is power.
To the extent to which an algorithm can govern what I am allowed to do as a warehouse worker, that is power that the algorithm has. Now you're saying, okay, we have this new power. What do we do with it? We're not doing anything with it.
Matt Boulos (48:54)
Societal norms will change it, our behaviors will change it, the technology itself will change, and therefore that power will morph. It's just so odd to me that you say, okay, then we're done. We've never done that in human history.
Kanjun Qiu (49:12)
We need to figure out what to do with this power. It might be partly because this is the first time a technology is its own power in a way. We've never had technologies in the past that make decisions.
Matt Boulos (49:26)
Not to dunk on people who are trying to do good work, but a great disservice was done by the AI safety community on this point. By talking about runaway systems as much as they did, they created this special category of worry around an incredibly low-probability event whose dynamics we don't actually know. Whereas the reality is that systems can make their own decisions, but they're doing it for someone. You don't go spend millions of dollars to develop a system and then say, I will let it go. You're doing it to manipulate the stuffing out of your viewers so you can sell more ads to people to buy flip-flops, so you get your cut on the ads, and so on. And across the board, in every domain where these autonomous systems are going to function, they're going to do so for a purpose, for an owner, a controller. When we talk about them being autonomous, it is really about the ability to delegate to systems.
Kanjun Qiu (50:41)
It's the ability to delegate human power to systems to encode that power. I as a manager can now encode my power in a system.
Matt Boulos (50:51)
That's right. And that is an astonishing amount of power, and it's multiplicative: you can do it at massive scale, you can do it quickly, and it can adapt. One of the things I argue in this new-power thesis is that when that happens, it is very hard, as the human on the other side, to know how the decision was made, and so there is a default to accept it.
Kanjun Qiu (51:22)
And they have no levers over the decision at all. No legibility, no levers.
Matt Boulos (51:26)
Exactly. You have exactly no window into what is going on, no means of recourse. And as more and more of these sorts of things happen, we'll feel very powerless. It's an incredibly sad example, but in the context of war, this is what we are seeing. We are seeing, particularly in the Middle East, an example of AI systems doing the targeting.
People have not classified this as autonomous systems gone amok because humans built a system for that purpose. Yet, when we talk about, we're worried that AI systems will kill people — they are killing people. Explicitly, they are killing people, and they're being designed to do that. And let's be honest, when people are talking about national security implications for AI, yes, you're talking about economic competitiveness, but also you're talking about the fact that you want to have AI systems that can do that.
The ultimate act of power is to take someone's life. We already have that extreme happening right now and being realized. But it's the same dynamic, where a human, or sets of humans, delegate the thing that they want done to a system, and the system can carry that out. Because it is a system carrying it out, the context and the entire execution of it look completely different. Where is the appeal, where is the chance to challenge it, where is saying that's wrong, where is the record? Where is even the idea of knowing how that decision was made?
In my day to day, when I'm using AI systems, it's fun or productive. I don't care how it came to the decision. I'm just like, is this right, can I work with this?
My primary LLM use right now is trying to count calories. So I take a photo of what I ate and then I try to negotiate with it to lower the calories so I could eat more food.
Kanjun Qiu (53:19)
There's actually a huge difference here. This calorie counter is an AI system that is under your control that you're using to serve you. The war system that you're talking about is a system under one person's control that's being used to control someone else or harm someone else. Those two things are two different types of systems. You might argue that actually what we want is more systems under our control that affect us, and ideally don't affect other people too much.
Matt Boulos (53:51)
Imagine if my calorie counter determined what I could eat.
Kanjun Qiu (53:53)
Then it would be controlling you.
Matt Boulos
It would be awful. It's not perfect, and sometimes it goes completely off the rails in either direction, and that's fine-ish because it's within my domain. It's an irritation; it's not a risk.
Kanjun Qiu
This is something I've been thinking about with our product. We try to make systems that allow people to make software. I often talk about open software or an open software commons or malleable software — the fact that software should be built to be modified by the end user. A lot of people are like, “Who cares? I don't want to modify my software. I'm perfectly well served by my software. There's no problem, except sometimes.” And I realized the core idea is not that the software should be built to be modified. That's an instrumental thing. Instead, it's that software should not control me, ever.
Matt Boulos (54:53)
People might say that they don't want to change things, but often that's because the decision space has been so narrowed for them. One of the things that's really interesting to me as we work on interoperability, and as we're rallying a community around this, is how many startups just never got to a place where they could fight for interoperability because their mere existence would not be feasible in the current regime.
Kanjun Qiu (55:22)
Talk more about interoperability.
Matt Boulos (55:24)
One of the main things that we're championing and pushing for is interoperability legislation. The idea, at its simplest, is that a platform should not be able to discriminate on the basis of how you access your own data and services that you use.
Kanjun Qiu (55:45)
You should be able to get your data and have it be yours.
Matt Boulos (55:47)
Yes, and you should be able to use a tool of your choosing to interact with another system. Just as you could go buy bananas, or you say, “hey Matt, can you go get me bananas from the supermarket?” You couldn't have a supermarket saying, “no, only Kanjun,” right? And yet, that's our online world.
Kanjun Qiu (56:08)
Let's make it concrete. LinkedIn says I'm not allowed to use someone else's account to use LinkedIn. I can't use a bot; I can't use something like TweetDeck. It's monopolistic.
Matt Boulos (56:22)
Exactly. And the platforms do this for good reason. It consolidates their control around the points of input and access, but the consequence of that is pretty severe. Two things are happening. One, we are moving toward a world in which these AI systems are going to be more and more useful, so we are going to share more and more data with them. We don't have any real indication of whether these things are handling our data soundly, and yet we're going to talk to them. I'm going to say, I'm injured or I'm sick, can you please go make an appointment for me? And we don't know whether that data is going to be held with any sort of responsibility.
The other is that there are all of these wonderful things that could be built if I could just access my digital life. Interoperability kills two really critical birds with one stone. One, if I can access my own data, then I can decide where it goes; I can control that, I can check up on it. But the second and more critical is that if it's possible to build software that interacts with my richer digital life, then I'm not attached to these parasitic platforms and agents, and we can build alternatives. You can seed a whole other tech ecosystem around the idea that we're in charge, that it's our data.
Kanjun Qiu (57:44)
It's our software, it's our data. We make it. And we can sometimes interact with these platforms, but we can use our own interfaces.
It is becoming possible that we can make our own software, and make our own wrappers or systems that access Twitter data and download it. Then I can make my own algorithm and process it in a different way, so I can get just my friends and I can derank inflammatory stuff. That's just starting to become possible.
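As a small illustration of what that could look like, here's a sketch of a personal reranker, assuming you can get your posts out as structured data; the field names and keyword list are made up.

```python
# A minimal personal feed: keep friends prominent, derank inflammatory
# posts. The posts are inlined here; in practice they'd come from a data
# export or an interoperable API. Everything is illustrative.

FRIENDS = {"alice", "bob"}
INFLAMMATORY = ("outrage", "destroyed", "you won't believe")

posts = [
    {"author": "alice", "text": "Pictures from our hike this weekend"},
    {"author": "brand_x", "text": "You won't believe this one trick"},
    {"author": "bob", "text": "Thoughts on the book club pick"},
]

def my_rank(post: dict) -> float:
    score = 1.0 if post["author"] in FRIENDS else 0.2
    if any(phrase in post["text"].lower() for phrase in INFLAMMATORY):
        score *= 0.1  # derank, don't delete: the post stays in the feed
    return score

for post in sorted(posts, key=my_rank, reverse=True):
    print(f"{my_rank(post):.2f}  {post['author']}: {post['text']}")
```

The point isn't this particular ranking; it's that the ranking is yours to change.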
The software that exists today, because it's so expensive to produce, is incentivized to make that money back. Not because the creators are bad, but that's the incentive structure. As a result, it's either selling to us, or it's selling us to something. Those are the two options. Then occasionally, you have someone who's incredibly generous who makes software for free. It really feels like it should be flipped on its head, that most software that exists — AI systems that exist, we lump it all in the same category — should be software that is serving us, not selling us things or selling us to things. And it is ours. And that it doesn't require enormous acts of generosity to create software that doesn't do that, that is just for us or for other people. It should be easy. It should be what the default world is.
Matt Boulos (59:21)
If you think of deep, rich, sustaining communities, they're very generative, they're very productive. Think of the art that emerged from religious communities, the invention of different social structures, different aid structures. Against that backdrop, the way software gets made today is a peculiarity. And I wonder if it is a peculiarity of just how young software is in world-historical terms. We're talking about just a couple of decades in which software has been prevalent. But the point you're making is that the cost to make software will go down, and the stuff that we make will start to look different.
Kanjun Qiu (1:00:02)
It could, if you get things like being able to access all of your data. Network effects are real. Right now, these big platforms have network effects. I can't just move to another social media platform and be able to interact with all of my friends. That sucks. I can't move off of Uber or Airbnb marketplaces and social media platforms. And no matter how cheap software gets to make, network effects are still there.
Matt Boulos (1:00:08)
To your point about the cost to make software, one analogy, and I know it's not perfect, is like in manufacturing, you spend a lot of money to make a mold. So if you're making plastic chairs, you spend maybe a couple million dollars to make that mold, and then you make as many $5 chairs as you possibly can off the mold for it to pay back. There are different analogies we can use to describe what's happening.
Kanjun Qiu (1:00:52)
But now you can manufacture software in a way.
Matt Boulos (1:00:57)
And there's something like, I could just make the chair, and then that starts to change how we think about it.
Kanjun Qiu (1:01:01)
It's almost like the opposite of that analogy because now I can make my own version of that chair really cheaply with no mold, with LLMs.
Matt Boulos (1:01:08)
That's right. It's important that we try to bring all of these developments in AI together, because you have these incredibly powerful foundation models, and you have a shift in our ability to do things like code or data analysis, where the cost to do those things is now going down. And that, marshalled well, is a real gift. But of course, that's going to matter to labor a lot.
I want to bring us to labor for a couple reasons. One, because I don't know that the mental models that at least the tech community or the AI community uses to talk about labor are right. But also because I think we're in for something kind of shocking. What you do about that is not so obvious to me.
So let me lay out my grievances. There is this idea that AI just gets more and more intelligent, and the critical part to this argument is to never say what intelligent means. And then to say, well, if it gets more intelligent and work is an exercise of intelligence, therefore all labor gets replaced. And then, on that basis to then make this big jump to saying, okay, here's what we need to do now that nobody is useful anymore, nobody's economically productive. And then somebody inevitably raises their hand and says, what about cutting down trees? And like, we'll get robots for that.
The idea is that the end game is zero economic contribution on the part of individuals. Machines do everything, or you have a tiny, tiny sliver who run the machines. And then we jump to all of these ideas around, okay, well, are we all gonna be lying on the beach, and our benevolent billionaire overlords are gonna feed us mango smoothies…
Kanjun Qiu
Like WALL-E.
Matt Boulos
Yeah, exactly, or is it gonna be something else? My confrontation, and it deserves a confrontation, is that this does not account for the fact that a significant swath of labor, whether within a job or as the whole purpose of a job, is about decisions and risk.
Say I make baseball caps and I need to go buy the fabric for the caps, and I have three potential vendors who will sell me the thing. So now we have a procurement bot. What's this decision gonna be based on? Some factors like cost, shipment time, whatever data exists. How you or I would make the decision is, we'd probably meet the person who runs the fabric company, or the representative, and say, he seems shifty, we're not doing that. We'd just use our gut, but, critically, we'd own the responsibility for the decision and the course correction.
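A sketch of what that procurement bot might reduce to, with all numbers and weights invented: it can only weigh the columns it has, like cost and shipping time. The “he seems shifty” gut check, and the ownership of the decision, have no column.

```python
# Hypothetical procurement bot: score vendors on whatever data exists.
# Negative weights mean lower is better. Nothing here captures trust,
# and no one in particular owns the choice it makes.

WEIGHTS = {"cost_per_yard": -0.5, "ship_days": -0.3, "defect_rate": -0.2}

vendors = [
    {"name": "A", "cost_per_yard": 2.10, "ship_days": 14, "defect_rate": 0.01},
    {"name": "B", "cost_per_yard": 1.80, "ship_days": 30, "defect_rate": 0.04},
    {"name": "C", "cost_per_yard": 2.40, "ship_days": 7,  "defect_rate": 0.02},
]

def score(vendor: dict) -> float:
    return sum(vendor[key] * weight for key, weight in WEIGHTS.items())

best = max(vendors, key=score)
print("bot picks vendor:", best["name"])
```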
Why am I saying this? Because there's that decision-and-risk layer, and then there's a human interaction layer, and what automates readily is the very easy stuff underneath them.
But it should be pointed out that managers do not like having people on payroll. They’d gladly fire everybody if they could keep revenue at the same line. Attempts to automate labor have been around for the entirety of my career. A lot of that was this quasi-automation of going to lower cost countries, and then the idea of robotic process automation. We have seen all of these things. What you notice is, certain categories automate really easily, certain categories that ought to be automatable don't automate easily, but critically, you have humans in the mix.
Why does this matter? Because if humans are in the mix, what you really are looking at is not a 95-99% unemployment rate. You're looking at a deeply inflated one, in which there are winners and losers in a society. And it looks completely different. All of these ‘nobody has a job’ solutions imply that we're all in the same boat, but we're not going to be in the same boat.
Kanjun Qiu (1:05:45)
You're saying there's going to be this stratified effect, where different people are affected in different ways by job loss, like all industrialization in a way. Software engineers probably will be impacted quite a lot, because code is actually very automatable; it sits in a closed-loop system.
There are maybe two things that make humans useful. One is liability, and the second is information. So in this example about baseball caps, you were like, okay, if I mess up procurement, I am to blame. Or if I'm a doctor and I mess up the surgery, I am to blame. There is a person who's liable, and they can be legally held accountable. If the machine is ultimately to blame, that's actually really annoying: I can't hold the machine accountable. It can't be punished; I can't fire it. I guess I could get a different machine, but then I'm the one who's responsible, and that sucks.
Matt Boulos (1:06:51)
But also, back to all the dynamics we were talking about before, I go to my doctor. It's not that I'm sitting there saying, I will sue you if you muck this up. Rather, there is this mechanism where that person gets up in the morning and says, I am responsible to my patient. The machine is responsible to no one, and the person who owns it is not thinking about individual responsibilities, but is probably thinking about aggregate ones. Any of us who've kicked around the business world know that these things then just become measures of risk, not even of obligations to individuals.
Kanjun Qiu (1:07:33)
This is really important: aggregates versus individuals. There's a great book called Seeing Like a State. And the core idea is that when you're governing a state, you have to collect data and that data gets collected in aggregates. And because you can only see data in aggregates, you take actions that actually make individual lives a lot worse, but make the aggregates look better. Here, you're saying managers might make decisions in aggregates that make individual lives a lot worse or individual impacts on patients a lot worse, but in aggregate it looks a lot better. And it's really important to point out that when we look at individuals, we're looking at anecdotes, and that's a really different type of information than when we're looking at aggregate measures where we're looking at statistics.
Even I as CEO struggle with this. It's why kings would disguise themselves as villagers and go talk to the villagers to get the anecdotes, because I as CEO get really bad anecdotal information from people. Instead I get a lot of aggregates, and that actually makes it really hard for me to make good decisions.
One way in which humans are really valuable is that we are able to be responsible for an individual person, an individual case, an individual situation. And then there's the question of time scale. I don't think all of our jobs will be automated in 10 years. But in 50 years? That's still within my lifetime. That's not super crazy; look at the change that's happened in the last 50 years, or a hundred. The implication is, if we are building these systems, and they are going to have these effects where a lot of people lose their jobs and it's easier for the managerial class to do things, then the challenge is: not all jobs will get automated immediately, but how do we build a society where people are free and have power? Because there is this leakiness of power from labor to capital.
Matt Boulos (1:09:51)
The reason time horizons matter to me is that longer time horizons are where the substitutive activities start to come in, where we start to generate new economic activity. What I'm really wary of is all of this happening on a 10-year time horizon. That's just insane.
Kanjun Qiu (1:10:12)
Probably programming will get automated on a 10-year time horizon.
Matt Boulos (1:10:21)
The non-engineer's perspective on this one is that I think we're gonna see a stratification of skill level. Hot take: I think we're gonna see an emergent category of developers who are not particularly 'high-skill' (I hate using low-skill, high-skill, but I mean not the sort of people inventing a new programming language). Like the guy who would make your website, things that LLMs can do very easily. Until software kicks in to make it easy for a layperson to use the LLMs to do that, these people are going to act almost as a translation layer. They're not really going to be developers; they're going to be more of an 'I know enough about what a web stack looks like that I can turn it into something.' That's going to flare up and then drop, sort of the way web developers were hot, and then it became either a highly skilled front-end role, or you have Webflow and Squarespace.
Then I think the artisanal middle is going to go away, and the really high-caliber engineers who understand how systems work become absolutely vital. They're augmented by these systems, but they are basically CTO-ing everything.
Kanjun Qiu (1:11:43)
So there are a lot more CTOs. I think that's not unreasonable. And I'll challenge your non-engineer hat, because you are one of the active users of our product, which is a coding tool. Maybe a simple model for thinking about this is that there's always a Pareto front between task difficulty and how well the task works.
As tasks get more difficult, it requires a lot more capability or skill to make them work. Lots of easy tasks will get automated, and it'll be much easier to make web apps and things like that. But we'll probably also see these much more complex, almost 'grown' software systems that someone is managing. One deeply optimistic sense I have, if timelines are slower, and if we can figure out how to make really good tools that are not just captured centrally, is that people can learn how to 'garden' software for themselves, and that becomes a source of power. Computation is power, and people can harness it for themselves because we all have laptops, we all have GPUs; perhaps there's some way to allocate them more equitably. Because we own this laptop, this computation object, we can harness it to grow a bunch of software that does more and more complex, interesting things for us, maybe inside of our jobs as well.
So you might see many people losing jobs, but also many people gaining this capacity to create software that does really weird and unusual things, new things, more powerful things. There's a world in which it's not top-down automation but bottom-up automation, bottom-up as in we are the ones automating our own jobs away. I love automating my job. And when we're the ones automating our jobs, we become personally more valuable. It doesn't solve the full problem, and I'm still confused about the exact dynamics.
Matt Boulos (1:14:05)
I think you're right. There's going to be a really interesting near-term dynamic here, because there's something really beautiful about human ingenuity: you give somebody a tool and they figure out neat stuff. One thing that will be really fun to watch is somebody who has a job that involves a lot of these manual tasks just figuring out how to automate them themselves, and thereby becoming much more valuable to an employer. We'll watch people learn how to do that. There's a digital literacy that I think is going to develop.
Kanjun Qiu (1:14:49)
The education lens. Something that we think a lot about on the product side is: how do you teach someone who doesn't quite understand these software systems what's going on? If we think of agents as top-down automation versus bottom-up automation, the way these agents get implemented is really different. If I'm told as CEO that this technology is gonna automate my workers away and I can fire them, I'm going to run a really different internal process. I'm going to implement processes to measure what people are doing and then try to take that work and automate it. Maybe this is RPA [robotic process automation].
Matt Boulos (1:15:29)
Especially in financial services, there’s a lot of paperwork: boom, boom, get them out of the way.
Kanjun Qiu (1:15:34)
But if I'm told as CEO, hey, you have this technology, and if you hand it to your workers, it will teach them how to use it itself, and your workers are going to become much, much more effective because they will automate their own jobs, then that's a really different perspective.
This is a place where we can make a lot of choices in building the technology that make this go one way or another. When we are building prosumer products, you can either build for the buyer or for the user. If you build for the buyer, you're building something designed to automate people away. If you're building for the user, you're building something that's trying to teach the user how to use it. That's a choice.
Matt Boulos (1:16:33)
It's also an interesting choice because I don't know that, as an economic matter, we know it's better for a large company to automate away its employees versus have higher-productivity employees. The thing everybody wants is higher-productivity employees, and if you can get that, it's a boon, and a more productive economy is actually generative.
Kanjun Qiu (1:17:03)
One of the things people say is that AI doesn't have very good taste, in that it doesn't know what I want, it doesn't know what other people want. As a result, I don't trust it to make certain decisions. I don't trust it to write on my behalf very well.
The reason it doesn't have good taste is that it's not in my head. It does not know about my internal experience, and I have a lot more context than it does about me and my situation. So there's a potential here, to your point that it's not economically clear whether it's better to make your workers more productive or to automate them away: if people are better at spotting opportunities than AI systems, then it may be economically better to make your workers more productive. If systems are better at spotting opportunities than people, then maybe it's the opposite.
Matt Boulos (1:18:13)
This is something that policy leaders have to take seriously. In my conversations with lawmakers, they are sophisticated; it's just coming at them fast. What is very hard is the concerted effort of managers and workers and governments and technologists to build these things in a useful way. To some extent, we have to get that coordination right, and at the center of it would almost have to be the government, because nobody else has accountability to the people.
But at the same time, this is where builders really matter because what are we choosing to build? If you don't build a surveillance system, it doesn't exist, or at least that one doesn't exist.
Kanjun Qiu (1:19:16)
You can choose to build things that teach people, or things that don't; things that are anti-surveillance, getting people out of surveillance systems; things that let people get their data into their own systems. There's a lot of choice in what we build.
Matt Boulos (1:19:32)
I love spreadsheets. I'm not saying I want to spend all my time in them, but when you need a spreadsheet, that's really powerful. I've heard it described that Excel basically made programming available to the wider world. You have a bunch of people doing crazy stuff in Excel and they're like, I can't program, and you’re like, what is that macro? It's incredible what people are able to do with systems that build up their productivity.
Kanjun Qiu (1:19:57)
I want to reframe it. I think it's not about productivity. It may be somewhat about productivity, but this goes to the fourth category, psychic damage. It's about unlocking people's ability to spot opportunities, to learn, to become someone who is innovative and able to become more. You could measure that economically as productivity. But from the builder perspective, when I'm building a product, what I want to think about is: how do I enable people to actually learn how to use these tools, do their jobs better, and see opportunities in the world? There's a lot of upskilling, or different-skilling.
It's not about productivity, because productivity measures the output; it doesn't measure how you get there. And if you measure just productivity, it's easy to argue that an agent is more productive in so many different ways. If you measure the productivity of your workers, it's also easy to argue that workers are hopeless: they're not becoming more productive, it's useless. But in fact, maybe their tools are just not very encouraging.
What is really weird and interesting about LLMs is that you can make tools that are very encouraging, that can be deeply empowering. This goes to your spreadsheet example. A spreadsheet is actually one of the most deeply empowering things that exists, because it has this vast legibility: it's real-time, it's live, you can see the whole system as you're building it. I think a lot of invention is necessary to make the deep capabilities of AI actually accessible to people, in a way that harkens back to Doug Engelbart and 1970s personal computing: how do you let people see so that they can learn?
Matt Boulos (1:22:09)
I don't know anybody well adjusted who says, "I'm highly productive," and is proud of it. I do not measure myself or the people in my life on the basis of productivity. Nobody's eulogy reads: "He was a highly productive individual who helped improve the company's ROI on this project." It's not what we do, and yet that productivity is going to be a determinant of other things in your life, back to your earlier point about what it means to be economically eclipsed. But there's also something about becoming more productive by becoming more able in what you're doing: I show up to work and I have these tools that make me more effective at the thing that I care about doing.
Kanjun Qiu (1:23:11)
Becoming more able is a way that we can think about what the potential of the technology is: that it helps people become more able. But it has to be built a certain way to do that.
Matt Boulos (1:23:24)
There are challenges around productivity, which is that you need healthy and vibrant economies to reward it. If you have one firm that's more productive, it takes over and the others get wiped out, but you don't really have significant growth. If everyone is productive, then you have competition, and then you have this intense growth. I'm not sure how economists would characterize something like Silicon Valley, but I suspect that it's an example of…
Kanjun Qiu (1:23:56)
A highly generative, productive, competitive environment.
Matt Boulos (1:24:19)
A function of the fact that this is where so much tech talent resides, that concentration of this productive accelerant. There may be something we can analogize or extend to the workforce: you go to school, you study the thing that you care about, you go into the workforce, you want to have a job. Your job is a big part of your life; it is not the totality of who you are. One really weird thing about the way we talk about AI is we say, okay, then you don't matter anymore. One, I think that framing is normatively wrong. You still matter, whether or not you can get a job. But two, I think practically it is not a correct rendition, so our solutions have to look different. The startups are all in a tizzy right now about the way a certain R&D tax rule gets applied, but basically it's about how you amortize the cost of software engineering when you figure out your taxable income.
But what's really interesting is: are you gonna give a tax advantage to capital, in the case of corporations automating the stuffing out of things? Or do you tax-advantage labor? What are the incentives that you structure as a society? What do you encourage? You can start to change these societal incentives. I don't know what the answers are, but we have these levers.
Kanjun Qiu (1:25:46)
There's a concrete problem here that could be solved: what is a mechanism that incentivizes increasing the ableness of labor (maybe it comes back to productivity ultimately, but it's fundamentally about the ableness of the workforce), such that labor becomes able to own its own means of production?
Matt Boulos (1:26:17)
Take something like oil pipelines. Right now there's a lot of human inspection of them. With time, I think there are going to be sensors to detect if something is going wrong, and drones to film it.
Kanjun Qiu (1:26:32)
Maybe you still have some human labor, but there's less of it.
Matt Boulos (1:26:35)
Exactly. I don't want to say there aren't going to be labor disruptions; I think there are going to be potentially very large ones. But the thing that we as builders have to build towards is systems that are additive.
Kanjun Qiu (1:26:54)
Systems that enable people.
Matt Boulos (1:26:56)
And that make us more effective. The reason you replace an employee with a machine is that you get an insane productive return. But if you can't do that, and you can instead get a really good productivity increase out of your employee base, that's a wonderful thing. And for you, as someone who works for a company, that's a great thing as well: you get to be a contributor. Where I start to get really worried is that if someone has done something for a long time in a particular way, it's hard for them to be retaught or to change.
Kanjun Qiu (1:27:30)
This is why I think the 'enabling' piece, as a builder, is the most important. I'm in agreement with you on the short term, and maybe the medium term. In the short and medium term, what we're saying is we have solutions that enable people to be part of the labor class for much longer, and for that labor class to be thick and sustainable. That slows things down, perhaps enough to allow us to build laws, to catch up morally, to think about these things. That's where we can have differential impact. Over the long term, say 50 or 100 years, it does seem like these systems are improving at a rate where they can collect enough data, in the digital world or the physical world, that we will be able to do a lot of things in an automated way that aren't done today. So the labor class will thin, and we probably do want this other solution, where people have the ability to own their own means of production. That, to me, is the only long-term stable equilibrium: people have things that produce for them, they don't have to worry about it so much, and they can live their own lives.

When I'm in the capital class, I don't have to think about working and finding a job and making money. I can do what I want with the capital I have. Sometimes I make bad choices and lose it, and then I need some help from the government to get myself set back up, and I can start a different business. It's kind of like the small-business-owner situation. That world doesn't seem too bad. I'm not sure how to get there, but I want to bring us back to freedom, because that's a very optimistic world, one in which people are potentially a lot more free to spend their time the way they want.
But the world we've just painted, where people own these capital-producing objects, is very different from the world being painted by technologists and others today, a utopia that feels very much like a WALL-E utopia: people are somewhat infantilized, the world is abundant, but perhaps we're not free.
Matt Boulos (1:29:48)
I hate the word abundant. I mean, I love abundance, but its usage here is not right. What do you mean by abundant?
Kanjun Qiu (1:30:05)
I have food. I won't die. I have housing. Basic needs are met. Knowledge is accessible.
Matt Boulos (1:30:07)
I don't even buy that we'll get to an abundant world in that regard, because, back to the point from Seeing Like a State, aggregate wealth will shoot up dramatically, but it's going to be hyper-concentrated. The obligations to those who don't hold it are going to be much lower. What do you owe them? One of the really interesting dynamics we've observed is that when wealth concentrates in these extreme ways, an odd detachment starts to set in. It's such a perverse dream to me to count on the beneficence of people who are so insulated from the realities of regular life by the wealth they've been able to concentrate.
Kanjun Qiu (1:30:51)
I think there's one world in which we have this extreme concentration of wealth. Very plausible, but it assumes no distribution. It assumes that the labor distribution we just talked about doesn't happen: we don't keep the labor class useful for longer, and the tools we build are very concentrating.
I want to talk about what you mean when you think about freedom. What is the world you're fighting for? The reason I want to talk about this is because I want to end with what it means to be free in a society where there's powerful AI systems and potentially powerful other actors. Maybe it's possible to have powerful actors and still be free. Maybe there is a way to construct that world.
Matt Boulos (1:31:43)
I'm gonna challenge that. AI is new as a technology, but as a social and political dynamic, living in a society with powerful entities, there's nothing new about that. I think this is really important, because we know the things that make us free: laws, rights as individuals, consistency of their application, representation, just the wonder of modern liberal democracy when it works, and its capacity for self-correction. This is remarkable, and it's really worth highlighting. The difference with a totalitarian regime is that there, when something bad happens, it's just the thing that happened. In a functioning liberal democracy, it happened, but it was wrong, and there is a correction.
Kanjun Qiu (1:32:39)
In theory, a liberal democracy can be anti-fragile.
Matt Boulos (1:32:42)
That's right. And for what it's worth, our liberal democracies have been anti-fragile, and have been for a very long time. I don't know exactly how I would place them within the anti-fragility cycle, but we don't have to give up even when things get bad.
If you go back to Isaiah Berlin and positive and negative liberty, the ability to realize your potential and the ability not to get whacked in the head with a stick, we can continue to work on those two categories. What we need to do is look at where, within the lawless spaces, things are uncovered, and where AI will exacerbate that, and build in those protections.
And in terms of realizing what's possible in our lives, it means accepting the idea that freedom is not an instrumental quality. What I mean by that is freedom is not something that gets justified because then you go and invent the airplane. Freedom is beautiful because you can sit on your couch. It is an end in and of itself. It does not depend on other things.
Kanjun Qiu (1:33:56)
Before we continue, I want to clarify your definition of positive liberty and negative liberty because it's not something I ever thought about before you told me about it. Positive liberty is the idea that you can do things: what are you enabled to do? Negative liberty is this idea that you're protected from being whacked on the head with a stick.
Matt Boulos (1:34:22)
We always need both. The brilliance of this construction is that people would get lost when talking about freedom and say, well, am I really free if I can't open an ice cream manufacturing facility? And the response is: nobody's holding you back, you just don't know anything about ice cream. If you look at modern life, at legitimate and illegitimate grievances in modern politics, they are often about the sense of a constrained positive liberty and an intruded-upon negative liberty. So part of what we have to figure out as a society is, to some extent, how to manage the extremes, but also, forget AI, whether we are actually tending to the broad societal sense that we're free. Within that context, we then ask: what is AI doing, and how is it modifying our society? Part of taking seriously this frustration with the weirdness of the discourse around AI is that if we don't characterize it correctly, if we don't characterize it honestly, then we don't have the ability to work with it.
Kanjun Qiu (1:35:37)
We must characterize it honestly so that we can actually increase our positive liberty and also actually protect our negative liberty.
Matt Boulos (1:35:48)
That's right. Because a lot of the story that will come from the people who have invested huge sums of money into AI—and look, we're a company, we're in this game—is: look at all the positive liberty benefits coming your way.
Kanjun Qiu (1:36:07)
Therefore, don't get in the way; ignore the negative liberties. In the tech industry, my experience of the way people talk about freedom is that it's about lawlessness. The way you talk about freedom is about deep enablement and deep protection. And the world we want to build is one in which humans are deeply protected and deeply enabled; that's what it means to be free.
Matt Boulos (1:36:44)
If you think of the roots of the Valley, setting aside the defense funding, a lot of its origin was this idea that we're going to break free from the constraints of what's around us. I understand that as an ethos. But it's no longer just building personal computers in a garage.
Kanjun Qiu (1:37:16)
Now that we are reshaping society, we have to rethink.
Matt Boulos (1:37:20)
Those obligations are rich, but they're also beautiful, if we can really think about what our neighbors need, and recognize and furnish that. There's a real spiritual cost to our present moment, where the factions are constantly warring. I don't want to pretend there was a golden age where people kissed each other on the way to the voting booths, but technology has exacerbated the way we see each other.
Kanjun Qiu (1:37:50)
We think it's other people who are the problem, but in a lot of ways I think it's the technology.
Matt Boulos (1:37:55)
One of the things that I've experienced on a regular basis is that somebody expresses a bonkers opinion and you sit down with them and you talk, and they're lovely humans. And the fact is we are surrounded by lovely humans. I think it's really important that we resist the urge to vilify the people who have brought us to a place that we might not be thrilled about politically.
But there is a real responsibility that if you're building systems like the ones that we are building, you are not only in this race against other companies to build a successful business. You are also in a race against the other possible ways that these things might be built. It's incumbent on us to not just build something that is better, but also to win, and to have that paradigm win.
I don't have a lot of patience for this sort of like, “technology is going to eat us all, let's give up and let's just keep training our models.” It just feels like an unnecessary abdication.
Kanjun Qiu (1:38:58)
I think this illustrates the beautiful point, which is that as technologists, the opportunity we have today is to create technologies built in a way that deeply respects the actual freedom people can have: deep enablement and deep protection. Not to create technologies for the sake of lawlessness, that contrarian 'againstness.' The opportunity is creating technologies that enable humanity to be deeply free: not lawless, but protected and enabled. That's what we can do.