Sangeet Paul Choudary on Reshuffle
Key Points
- With generative AI, the focus shifts from knowledge retention to skills like curiosity, question framing, and judgment.
- Organizations, including schools, must rethink workflows and systems around AI’s capabilities to maximize impact beyond automation.

Join us on this week’s Getting Smart Podcast as we sit down with Sangeet Paul Choudary, author of Reshuffle: Who Wins When AI Hacks the Knowledge Economy. Together with Tom Vander Ark, Sangeet explores the profound reshuffling of markets, education, and talent development in the age of AI. Discover how the value creation chain—curiosity, curation, and judgment—is shifting and why reimagining systems around AI as an engine, rather than a tool, is the key to thriving in the future. Tune in to unlock insights into coordination, innovation, and the possibilities of AI-driven ecosystems. Listen now!
Outline
- (00:00) Introduction to the Great Reshuffling
- (04:55) Curiosity, Curation, and Judgment
- (15:38) AI as a Tool vs. AI as an Engine
- (23:14) Coordination and the Future of AI
Introduction to the Great Reshuffling
Tom Vander Ark: We’re living through a great reshuffling—the fastest and most profound reshuffling in human history. It includes a reshuffling of markets, business models, and organizations. Eventually, that’s going to mean a reshuffling of education and talent development. You’re listening to the Getting Smart Podcast.
Tom Vander Ark: I’m Tom Vander Ark, and I have the pleasure of being joined by a repeat guest, Sangeet Paul Choudary. He’s the author of Reshuffle. He is going to explain that to us. Reshuffle—the subtitle is Who Wins When AI Hacks the Knowledge Economy. Sangeet, welcome back.
Sangeet Paul Choudary: Thank you, Tom. Absolute pleasure to be here.
Tom Vander Ark: You’re joining from Dubai?
Sangeet Paul Choudary: That’s right.
Tom Vander Ark: We’re really glad to have you on. You and I had a chance to meet and do a little bit of work together seven or eight years ago around the last revolution. You wrote a super important book called Platform Revolution. It came out right while I was writing a book on platform networks in education. So, we were both giving thought to how learning organizations connect on platforms, and your work was really the most important for me in explaining the last economy. That’s why I was so excited to see Reshuffle, because I think it’s the best explanation of what’s happening and what will happen in the new economy.
Tom Vander Ark: Sangeet, I want to jump into an explanation that you made both in your book and on your Substack, which I really want to encourage everybody to subscribe to because it’s terrific. About two months ago, you explained that there’s a value creation chain that’s existed for a long time. It hasn’t always been evident, but the value creation chain is curiosity, knowledge, curation, and judgment.
Sangeet Paul Choudary: Mm-hmm.
Tom Vander Ark: Our audience knows this well, but for thousands of years, education has focused on that middle component of knowledge—knowledge transmission and accumulation. But now, as knowledge and expertise become commoditized with the rise of generative AI, suddenly there’s a shift in the value creation chain. Now, the beginning of that chain with curiosity—problem finding, opportunity recognition—and the back of the chain with curation and judgment are suddenly much more important. Is that a fair characterization?
Sangeet Paul Choudary: Yeah, I think so, because what’s really changed with the rise of generative AI is that we have application-oriented knowledge. We have knowledge that can be synthesized and made relevant in a way that can be directly applied. A lot of people say that the information age has been around since the web came up, and Google has been there for the last 20 years. But having access to more information doesn’t necessarily mean that you have the right access to applicable knowledge. What becomes possible today with generative AI is that you can apply knowledge in very specific ways to actually solve problems. Usable, applicable knowledge is much more accessible than before. It’s been decoupled—or what I call unbundled—from the training, skills, and investment required to acquire that knowledge. Traditionally, that’s what all of us have seen as our path through education and into our careers—a way to train and acquire that knowledge, which we can then charge a skill premium for. Some of that knowledge is now readily available with a prompt. Depending on the type of knowledge and the nature of its application, some of it is getting commoditized. My key argument is that traditionally, the role of education has been to get us better at storing answers and retrieving them at the right point of application. But if the cost of accessing answers and knowledge goes down, then what becomes more valuable is asking the right question. That’s where the idea of curiosity becomes so important.
Curiosity, Curation, and Judgment
Sangeet Paul Choudary: When knowledge was scarce, you could afford to be less curious because you could just compete on the basis of the knowledge you uniquely had. But when knowledge is abundant and everybody has access to very similar truths of applicable knowledge, asking the right questions becomes very important. There are two economic reasons why asking the right question is important. First, you ask the right questions, get to the right answers, and leverage all this AI-enabled knowledge access in the right way. Second, if you don’t ask the right questions, the opportunity cost of asking the wrong questions can be incredibly high. You typically follow a path of inquiry and might just go down the wrong rabbit hole. Despite having access to a lot of applicable, synthesizable knowledge, you may just be on the wrong path and unable to apply it to solving your problems. Curiosity becomes very important for that reason. For a similar reason, curation becomes very important. When you’re asking the right questions and generating answers, synthesizing and developing insights, you need to know what to include and what to exclude. As we’ve seen with AI in general, just because it’s very expressive and verbose does not mean it provides all the right knowledge at the right place and time. You also need to curate it. You need to understand what’s right for your situation and the problem you’re facing. Eventually, the value sits with judgment. Judgment is the ability to isolate the right path of action based on what you’ve learned and then assume the risk associated with taking that action. In a world where knowledge itself becomes commoditized, these aspects become much more important: framing the right question, choosing the right answers, and assuming the risk to back it and take those consequences on yourself.
Tom Vander Ark: I want to go back through that value chain again. We’re talking about this value chain of curiosity, knowledge, curation, and judgment. The way you described curiosity was not boundless curiosity but intentional curiosity, which sounds like agency. Reid Hoffman suggested that we’re invited into a sense of super agency. Do you buy that link to agency, identity, or purposeful, intentional curiosity?
Sangeet Paul Choudary: I believe the idea of curiosity is definitely intentional. What you’re trying to do is compress the solution space toward achieving the specific purpose you’re aiming for. In mathematical terms, you’re trying to reduce the number of axes along which you pursue your exploration. You’re trying to manage the number of parameters based on which you want to explore. Curiosity is less about being generally curious. It’s less about some kind of fuzzy trait of asking the right questions all the time. It’s more about the fact that when you’re faced with uncertainty—and I can get into why the idea of uncertainty is important—you have access to this treasure trove of knowledge. Knowing how to constrain that solution space by identifying the right variables, which issues are worth pursuing, and which ones are not, and hence framing your questions accordingly, is what I mean by curiosity. It is definitely very intentional. When you’re faced with uncertainty, that’s what gives you agency—a forcing function through which you can compress the solution space.
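To make the “compressing the solution space” idea concrete, here is a small back-of-the-envelope sketch (an illustration, not from the book, with hypothetical numbers): every axis of exploration that a sharper question fixes up front shrinks the space you have to search exponentially.

```python
from itertools import product

# Hypothetical numbers for illustration: each open axis of exploration has,
# say, ten plausible options. A sharper question fixes more axes up front.
OPTIONS_PER_AXIS = 10

def solution_space_size(open_axes: int) -> int:
    """Combinations left to explore when `open_axes` variables remain free."""
    return OPTIONS_PER_AXIS ** open_axes

print(f"Vague question, 6 free variables: {solution_space_size(6):,} combinations")
print(f"Sharp question, 2 free variables: {solution_space_size(2):,} combinations")

# The constrained space is small enough to enumerate and evaluate directly.
for candidate in product(range(OPTIONS_PER_AXIS), repeat=2):
    pass  # score each remaining candidate against your actual goal
```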
Tom Vander Ark: Let’s talk about the back half of curation and then judgment. Curation requires a level of expertise because you’re stepping from a primary role as a creator—whether that’s a coder, a writer, or an artist—and shifting your focus to curation. So that’s a different set of skills. Can you do that without a significant cognitive offload? Does curation require more or less cognitive load? I guess we’re worried about that in education—that if you’re stepping out of the primary creation role, is that a higher or lower cognitive load?
Sangeet Paul Choudary: The way I would think about it is that these four elements are very tightly connected. You can’t develop the ability to curate unless you’ve actually worked through solutions, curated them, assumed the risk associated with implementing them, and then seen what the consequences are. When I say risk and consequences, this does not relate to really building products and seeing products fail. It relates to every conversation we have when we are taking a stand in a certain conversation and seeing that stand either rebutted or reinforced. We are constantly getting feedback, which improves our taste in terms of how we curate and make those choices. To a large extent, you can’t be a good curator unless you’ve actually worked through the entire value chain. You can’t just insert yourself into somebody else’s value chain and say, “I’m here to curate all of this information that’s coming through.” You need to go through the entire cycle. You need to go downstream and see the consequence of what you’ve curated in the past. That is what helps you develop that taste, and those signals help you improve that taste over time.
A lot of it is not mechanical. We often associate the idea of curation with algorithms, like when we say Amazon is curating your feed or Instagram is curating your feed. That is all very mechanical and based on pattern recognition. A lot of what human curation involves is more about understanding the narrative that needs to form around diverse data points. It’s about understanding how to bring individual pieces of a solution together and create a coherent outcome. So it’s really the larger coherence in the narrative, the system, and the overall design of whatever you’re building. That’s where curation becomes important, rather than just saying, “Here are the pieces that are important; now let’s bring them together.” It’s very different from the way curation is done by algorithms and feeds.
Tom Vander Ark: I think also included in curation and judgment is a higher expectation for what quality looks like, right? If we could do a five-paragraph essay in an hour last year, the opportunity to produce something better, longer, and more engaging now has to be part of this curation and judgment—a higher expectation of what’s possible. We’re still grappling with that in education, where the predominant mentality now is that using AI is cheating rather than asking what is possible now when working with co-intelligence and curating. What could we do that we couldn’t do a year ago?
Judgment—Tim Dasey called that wisdom and called for colleges to become wisdom factories. Is that part of applying judgment?
Sangeet Paul Choudary: Yeah, I think one of the challenges with thinking about curation and judgment is that they lend themselves less to evaluation—not just standardized testing but any form of evaluation. It’s a lot easier to evaluate what we traditionally think of as knowledge. It’s a lot more difficult to evaluate curation and judgment—not just because they are very contextual (the same decision could be really powerful in one context and disastrous in another) but also because it’s difficult to evaluate the larger coherent system that you want to create. Let me give a simple example.
When machine learning took off, or even before that when data analytics took off in the late 2000s and early 2010s, we had the ability to pull up different types of data visualizations and analytics. The value of pulling up the analytics or the visualization went down, and the value of telling a story—a compelling story that links that to larger business outcomes or links multiple unconnected data points toward a larger story—that’s what became very powerful. That’s where the TED stage became so powerful, and the idea of owning the narrative became so powerful.
My point is that a lot of curatorial framing, as we see in these instances, is more difficult to evaluate and is also more contextual. I believe this also requires us to unbundle how we think about education. When the output of education was a one-size-fits-all knowledge that could then be contextually applied in different situations, you could have one-size-fits-all education systems. But if curation and judgment are so contextual, you also need to move how you teach and train closer to the point of application. That, I think, is one key issue we need to think through in the years ahead.
Tom Vander Ark: We are talking to Sangeet Paul Choudary, the author of the new book Reshuffle: Who Wins When AI Hacks the Knowledge Economy. Thank you for that insight. It’s just one of the examples of how Sangeet explains that we’re in a new era and the value creation chain is being reshuffled. I love how you apply that to the individual, the task, the job, the organization, and the ecosystem. That’s really the structure of the book. Jumping to the end of the book, Chapter 8 has this beautiful discussion of AI as a tool versus AI as an engine. Right now, we see most organizations sort of bolting on and automating some stuff that may or may not be useful, as opposed to thinking about AI as an engine. Talk about those differences.
AI as a Tool vs. AI as an Engine
Sangeet Paul Choudary: The idea of AI as a tool versus AI as an engine really came out of a desire to contrast what the real impact of AI is going to be versus how we are looking at AI today. This is not an issue that’s unique to AI. Whenever there’s a technological shift, the first instinct we have is to apply the new technology to make a faster version of today’s way of doing work or today’s system. My key argument is that in order to really get the benefits of a new technology, you need to view your entire system—whether it’s your workflow, your organization, how you compete, the product you build, or the process through which that product is brought to market. You need to reorient everything around the capabilities of this new technology.
We’ve seen this before. We saw this with the shift to cloud computing. I talk about the example or the contrast between Adobe and Figma. Those listeners who are designers would intimately understand the difference between the two. The key difference was that even though Adobe adopted the cloud and shifted its delivery model and revenue model, its overall business and architecture remained the same. It was all structured around the design file. But what the cloud allowed was a centrally hosted design project, which did not have to be set up at the level of a file. Every single element could be manipulated and moved around, and multiple users could work off the same central version of that project. Figma reimagined what design meant, given the properties of the cloud. Before the cloud, you could not do this element-level management. With the cloud, you could.
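One way to picture the file-versus-element shift is a toy data-model contrast (a hedged sketch with made-up names, not Adobe’s or Figma’s actual architecture): in a file-centric world the unit of collaboration is an opaque blob, while a centrally hosted project exposes each element for independent edits by multiple users.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the architectural difference, not real product code.

# File-centric model: collaboration happens by passing whole files around.
@dataclass
class DesignFile:
    blob: bytes  # the entire design; any edit means replacing the whole thing

# Cloud-centric model: one hosted project whose elements are individually addressable.
@dataclass
class HostedProject:
    elements: dict = field(default_factory=dict)  # element_id -> properties

    def update_element(self, element_id: str, user: str, **props) -> None:
        """Different users can edit different elements of the same central version."""
        element = self.elements.setdefault(element_id, {})
        element.update(props)
        element["last_edited_by"] = user

project = HostedProject()
project.update_element("button_1", user="alice", color="blue")
project.update_element("header", user="bob", text="Welcome")
print(project.elements)
```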
We see this with AI as well. If you look at the rise of TikTok, in hindsight, it seems like a Chinese social network taking off because it was more video-oriented and targeted a different user base. But really, if you look back at history—2016 or 2017, when TikTok was beginning to take off—analysts had largely declared Facebook the winner of social networking and YouTube the winner of video, simply because their network effects were insurmountable and the logic of social networking was built around the idea of a social graph. You needed to connect to your friends before your feed would fill up. TikTok dismantled that whole logic. It allowed you to see value even after a few swipes, even without connecting to anyone. While Facebook and Instagram were structured around the social graph, TikTok is structured around what is called the behavior graph.
Sangeet Paul Choudary: And the behavior graph is made possible because of AI. The ability to track what users are doing and, on the basis of that, build a behavior profile and then connect different attributes of their profile with each other into a behavior graph—that’s really what helps you instantly connect into the universe of activity that TikTok has and yet get the most relevant items sent to your feed. TikTok changed the basis of competition. Figma changed the basis of competition. The key idea is, if you are thinking about the role that AI can have—whether you are in the education industry or any industry for that matter—your first instinct might be to use AI as a tool. Instagram and YouTube did use AI as a tool to make their recommendations better, but the logic of social networking remained the same. Adobe did use the cloud as a tool to deliver better, but the logic of design remained the same. If you want to really build a business that extracts and delivers the value that AI can bring in, you need to reimagine your business around AI as the engine and rethink what it takes to create value and compete in your industry, which is exactly what Figma and TikTok did. So that’s the key insight that I really talk about.
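A toy contrast between the two data structures may help make this concrete (a hedged sketch with hypothetical data, not how TikTok or Meta actually implement their systems): a feed built on a social graph has nothing to rank until you connect to people, while a feed built on a behavior graph can rank from the very first swipe.

```python
from collections import defaultdict

# Illustrative sketch with hypothetical data and names.

# Social graph: user -> set of followed users. Empty until you connect.
social_graph = defaultdict(set)
social_graph["new_user"]  # no follows yet, so nothing to rank

# Behavior graph: (user, item) -> engagement signal, built from the first swipe.
behavior_graph = defaultdict(float)
behavior_graph[("new_user", "video_42")] += 1.0   # watched to the end
behavior_graph[("new_user", "video_17")] += 0.2   # skipped quickly

def social_feed(user: str, posts_by_author: dict) -> list:
    """Only surfaces posts from accounts the user already follows."""
    return [p for a in social_graph[user] for p in posts_by_author.get(a, [])]

def behavior_feed(user: str, candidate_items: list) -> list:
    """Ranks candidates by observed engagement, so value appears immediately."""
    return sorted(candidate_items,
                  key=lambda item: behavior_graph[(user, item)],
                  reverse=True)

print(social_feed("new_user", {"friend_a": ["post_1"]}))               # [] (cold start)
print(behavior_feed("new_user", ["video_17", "video_42", "video_9"]))  # ranked by behavior
```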
Tom Vander Ark: That’s well described in the book Reshuffle. You also have a beautiful recent Substack on Figma versus Adobe. We’ll include a link to that in the show notes. Just to summarize this idea, you wrote, “Strategy is less about how to use AI and more about envisioning the future systems that emerge with AI.” I think that’s super important. As school leaders are going back to school this year, most of them are thinking about where and how they can use AI. Many of them have adopted one of the Swiss Army knife toolsets that have many different little apps that automate this or that. That’s probably of some use, but at best, that will produce some efficiency. I just want to stress, I think your lesson here is that the separate task is about envisioning the future systems that emerge with AI. Would you agree that you need to do a bit of both of these things—adopting and improving while imagining the future system that will emerge?
Sangeet Paul Choudary: Yeah, I think you do need to do a bit of both, and you need to be careful about where you do what. The one thing that I would caution against is doing the former—using AI as a tool to improve what you do today—and betting the house on that, saying that that’s the future school or the future education system you’re building. You need to do all of that just to stay in the game, but that’s not going to give you the ability to win the race. That’s not going to give you the ability to compete at a different level. You need to reimagine what you do around the capabilities that AI provides. I take an example further down in the book about the fact that education is unbundling at three different levels. The unit of education in terms of the course itself is unbundling into smaller modular units. The unit of certification in terms of the degree is breaking down into smaller levels of microcredentials. And the unit of work, as opposed to the traditional nine-to-five, is breaking down into many different forms of work. Education has always been this three-party market of education providers, job providers, and learners. There’s unbundling happening at every level, and AI simultaneously provides the ability to rebundle or restructure these into new combinations. That’s what thinking of it in terms of an engine would look like—where you really reimagine what a learner’s journey would look like around the capabilities that AI provides.
Tom Vander Ark: The big idea in your book, the one that I’m still grappling with, is that you argued, yes, AI can automate some potentially useful stuff, but the power of AI is in supercharging coordination—coordination of complex systems. Explain that, because I haven’t got my arms around that yet.
Coordination and the Future of AI
Sangeet Paul Choudary: The idea of coordination is something that I bring up because very often we associate AI with the idea of automation alone. What I call out right up front is that AI in whatever form—whether it’s today’s generative models, machine learning, or some other statistical form of pattern recognition—all these technologies have two properties when it comes to how they impact work. They impact the execution of work—previously slow, manual processes can be automated, yes. But more importantly, they impact the organization of work through coordination. The reason I place emphasis on coordination is that, by its very nature, all forms of AI do three specific things in order to work. They create a model of reality—it may not be an accurate, high-fidelity model of reality, but it’s a model of reality that helps them solve certain problems. They use that model to make decisions, and on the basis of those decisions, they support action or execution around it. This idea of creating a model of reality lends itself very well to coordinating fragmented systems—whether they’re fragmented bodies of knowledge inside an organization, fragmented notes from different meetings that you’ve attended, or fragmented information across companies in an industry. The ability of AI to model all of this and then support decisions and execution on top of that is actually what I believe the real value of AI is. That’s the idea of AI as a coordination engine.
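A minimal sketch of that three-step loop (build a model of reality, decide against it, then support execution) might look like this; the names and thresholds are hypothetical, purely for illustration.

```python
from dataclasses import dataclass, field

# Minimal sketch of the model-decide-execute pattern; names are hypothetical.

@dataclass
class WorldModel:
    """A deliberately simplified, possibly low-fidelity picture of the system."""
    facts: dict = field(default_factory=dict)

    def update(self, observation: dict) -> None:
        self.facts.update(observation)

def decide(model: WorldModel) -> str:
    """Choose an action based on the current model, not on raw reality."""
    return "escalate" if model.facts.get("backlog", 0) > 10 else "proceed"

def execute(action: str) -> None:
    """Execution/coordination layer: route work, trigger tools, notify teams."""
    print(f"Executing: {action}")

# One pass through the loop: fragmented signals fold into the model,
# the decision is made against the model, then execution follows.
model = WorldModel()
model.update({"backlog": 14, "team": "support"})
execute(decide(model))  # prints "Executing: escalate"
```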
A simple way to think about this is to look at organizations. Organizations have always had a trade-off between team autonomy and cross-team coordination. Coordination problems in organizations are solved using meetings and documentation. Typically, the more autonomous a team gets, the more it imposes a coordination tax on the rest of the organization because it starts moving in its own direction. What AI can help us do is solve that trade-off because the ability to model an organization’s information and knowledge base, and then have every team work off that—whether the employees themselves or, more agentically, the AI tools enabling them—that combination can potentially help us solve that autonomy-coordination trade-off. That’s why I believe the idea of coordination is so important. I repeatedly stress this idea because of that.
Tom Vander Ark: That’s such a powerful set of ideas, and it’s really promising. Organizations, whether they’re corporate or educational, have gone through these phases of centralizing everything and decentralizing everything. I think you’re arguing that we’ve been stuck on a limited continuum and that we actually have the potential now of having high coordination and high autonomy. Is that fair?
Sangeet Paul Choudary: Yeah, you can solve that trade-off where higher autonomy used to lead to low coordination and better coordination used to restrict your autonomy. You can potentially solve that trade-off if you deploy today’s technologies well within your organization.
Tom Vander Ark: Historically, having high coordination and high autonomy has required a high degree of consensus. But I think you even argue that AI enables coordination without achieving that consensus—that you can step into high coordination relatively quickly, right?
Sangeet Paul Choudary: Yeah, I think this was an insight that I gathered after having worked through many AI implementations. The idea is that the ability to model a version of reality and then make decisions and run execution on that basis essentially means that in order for these technologies to be effective at execution, the model has to be really good. Over time, we’ve seen modeling or perception capabilities of these tools improve. What that does is it can take highly fragmented, unstructured information and extract structure—a view of reality on the basis of that.
An example of this is the meeting notes that software like Zoom or Otter provides today if you let it join your meeting. It’s taking highly unstructured, incoherent conversations and abstracting a relevant set of notes. You can structure it in any way you want by setting up the right prompts and getting the right forms of output. If you extend that idea and think about all the unstructured information in an organization—or all the unstructured ways in which companies across an ecosystem exchange information, like PDFs, emails, and calls—if all of that could be fed into an AI system, it could extract structured information and create a structured representation of what is being exchanged. That can then serve as the basis for coordinating parties who have not actually agreed to coordinate in the first instance. Traditionally, the way we coordinate is by agreeing to some standards, setting up information in specific formats, and sharing information in those formats. But AI’s ability to manage and work with unstructured information enables this form of coordination without consensus.
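As a rough sketch of what coordination without consensus could look like in code (the schema and the call_llm placeholder are hypothetical, not any specific product’s API): each party keeps its own unstructured format, and an extraction step maps everything onto one shared structure.

```python
import json

# Hypothetical sketch: two parties never agreed on a data format, but an
# extraction model maps both of their unstructured messages onto the same
# shared schema. `call_llm` is a placeholder, not a real library function.

SHARED_SCHEMA = {"item": "string", "quantity": "integer", "needed_by": "YYYY-MM-DD"}

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; should return schema-shaped JSON."""
    raise NotImplementedError("wire up your model provider here")

def extract_order(unstructured_message: str) -> dict:
    prompt = (
        f"Extract the order from the message below as JSON matching "
        f"{json.dumps(SHARED_SCHEMA)}.\n\nMessage:\n{unstructured_message}"
    )
    return json.loads(call_llm(prompt))

# The same extractor handles an email, a PDF dump, or a call transcript, so
# the parties coordinate on structure without ever negotiating a standard.
# order_a = extract_order("Pls send 40 units of part X-11 before the 14th...")
# order_b = extract_order(open("supplier_fax_scan.txt").read())
```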
Tom Vander Ark: And to make this super tangible, whether you’re in business or education, I think that means it could lead to a level of coordination among organizations using different metrics and measures very quickly—without the year-long process of arguing about which metrics and measures to use. Now we have translation engines that will allow rapid coordination without the arguing to get to a set of agreements about one assessment, one metric, or one measure. So this is really promising.
Sangeet, I just left your book encouraged. It made me think about systems more than tasks. It made me rethink value creation. I think you just nicely moved past many of the limited debates that we’re having today and stepped people up to the balcony to think about what’s possible. We really appreciate your contribution.
Sangeet Paul Choudary: Thank you so much, Tom. I really appreciate it. I’m really glad to hear that. Thank you.
Tom Vander Ark: We’ve been talking to Sangeet Paul Choudary, the author of Reshuffle: Who Wins When AI Hacks the Knowledge Economy. It really should be the new textbook for business education. We think young people thinking about the future ought to read this. Organizational leaders, whether you’re in education or business, could benefit from this. It’s a great study book. It’s challenging. It’s interesting. You do a nice job of using stories—both new and old—to make these topics come alive. So we appreciate it.
Thanks for being with us this week. Thanks to our producer, Mason Pashia, and the whole Getting Smart team that makes this possible. Until next week, keep learning, keep leading, and keep innovating on the big reshuffle. See you next week.
Guest Bio
Sangeet Paul Choudary
Sangeet Choudary is the best-selling co-author of Platform Revolution and the author of the new book Reshuffle. He has advised CEOs at more than 40 Fortune 500 companies as well as pre-IPO tech firms. He is currently a Senior Fellow at the University of California, Berkeley, and has presented at leading global forums, including the G20 Summit, the World50 Summit, and the World Economic Forum.