Catching Up: School Choice, AI & Humanity, and Bottom-Up Innovation
Key Points
- Parents respond more strongly to grades than standardized test scores—so districts should improve assessment transparency, timeliness, and communication.
- Sustainable improvement is more likely when systems elevate teacher agency, support continuous improvement cycles, and enable bottom-up innovation to spread and scale.
In this episode of Catching Up, Nate McClennen and Mason Pashia explore what’s changing in education—and what should change next—through the lenses of infrastructure, accountability, and emerging technology. They discuss a “smart city” view of transportation and connectivity, unpack new research on how parents respond to grades vs. standardized test data, and examine a Florida study on how school choice competition impacts performance. The conversation then shifts to the Future of Tech and Work, including AI agents and the incentive structures shaping major AI companies—ending with a clear call to invest in “relational infrastructure” so that human connection, trust, and agency grow alongside AI.
Outline
- (00:00) Welcome & Overview
- (03:23) Infrastructure & Mexico City
- (08:36) AI Agents & Moltbook
- (13:37) Parents, Grades & Testing
- (18:28) School Choice Study
- (38:26) Bottom-Up Innovation Framework
- (47:08) What’s That Song?
Welcome & Overview
Nate: Welcome, Mason, to Catching Up. Good to see you. It’s been a while. What are you talking about today?
Mason Pashia: We’ve got a bunch of great topics. It’s a whirlwind when we don’t see each other for like 4 weeks.
Nate: No kidding. Right?
Mason Pashia: There is—we cover a study about competition’s effect on school performance. We talk about a couple themes kind of intermittently, which is what it looks like to be accountable and also infrastructure developments.
And then, at the end, we investigate what a farmer knows about fiber-optic cables.
Nate: Between that and the song there, this is going to be an incredible pod. You know, I’m going to jump in with some things about Moltbook. Moltbook is AI agents communicating with other AI agents and getting philosophical—pretty amusing and bizarre stuff, really. There’s also some thinking about parents’ perceptions of grades versus standardized tests, and then something that’s deeply important for me, and I know for you: How do we invest in humanity as we’re investing in AI, and what does that mean for schools?
And then I think a really important paper came out from the Hoover Institution around the conditions and elements for sustaining innovation. So we have 3 key things that we want to share.
So stay tuned. It’s going to be an awesome show and a lot to learn. And we look forward to hearing from you about what it’s like, and stay tuned for the music at the end.
*Intro*
Nate: How’s it going?
Mason Pashia: It’s going great. It has been a long time, and sorry to our listeners who we told we wouldn’t do this again, too.
Nate: I think we’re going to keep doing it. I think it’s just like the waiting game is awesome. So we’re kind of at a 3- to 4-week cadence right now. And I don’t know, but you and I are doing a ton of different things, so we accumulate talking and awesome conversation points along the way, and then we put out something awesome.
So we’ll see. We’ll see if we can increase frequency.
What are you thinking about? What’s your—what’s—you were just down in Mexico City, I think.
Infrastructure & Mexico City
Mason Pashia: I was, yeah. Had a lovely little vacation in Mexico City last week. I mean, you know me, I’m thinking about infrastructure a lot. I worked with Tom recently on this blog about transportation, which was really great. If anybody hasn’t read that, we’ll put a link in the show notes, which is just such a hairy issue that really dictates a lot of the decisions that edleaders make.
And I think Tom is right in saying that it’s one of those decisions where the best day is a day where nothing out of the ordinary happens and you never get a thank-you. And if anything out of the ordinary happens, it’s immediately bad news. So it’s a really tricky and fraught subject, but the blog analyzes how ridership is going down, bus driver shortages are happening, budgets are obviously being cut—all these things are impacting the buses.
And when I was in Mexico City, we did a food tour where we were taking the public transit around, and every time we got off the train at a stop, around us were 2 schools and 2 markets at every exit of the train. And it was just like a really great reminder of what it looks like to intentionally plan a place and what it looks like when transportation is actually the responsibility of a city and not necessarily the district. And so it got me thinking, and grateful, too, for good urban planning.
Nate: Super appreciate that. I love that blog. It really ties in—it was connected to our learning ecosystem part of our framework, right? That fifth element. And that, along with finance, we’ve been sort of wrestling with to say, how do we talk about this in a way that talks about innovation?
Because there’s just often not a lot of innovation in these spaces. So I really appreciated that blog that Tom dove into and wrote about. So hopefully—yeah—hopefully people will read that.
Well, you also—the other thing you told me that was interesting about Mexico City was this idea of the city sinking and the pyramid staying the same, or the archeological ruin staying the same level.
Just talk about that real quickly, because it gives us a sense of time and human intervention and what we should be doing in education, and how do we talk about history and things like that.
Mason Pashia: Right. Well, it’s so confusing because every time you’re in Mexico City, you’re seeing this one picture a lot—like a drawing of a pyramid on an island in a lake. And you look around and there’s no lake. You’re just like, “I don’t see—where is this of?” And they’re like, “Oh, it’s right here.”
And so essentially, when the Spanish came through, some combination of dry conditions and kind of terraforming—they buried the lake, and they started to build the city around it after the pyramid at the center.
And so now what’s happening—and if you’re downtown, you can see the land kind of like warbling, and all the structures look like they’re completely sloping.
Nate: Yeah.
Mason Pashia: What’s happening is the pyramid in the middle, due to tectonics and water pressure, is rising because it was built on the only actual landmass, and then everything else is sinking around it because they built it on super wet ground where water either was subterranean or just buried by them.
So it’s a really interesting scale of time to just be watching a sinking city where the thing that, of course, was built first is sort of coming back again. The locals were like, “The pyramid will rise again,” which is kind of epic poetic justice for the place.
Nate: It feels like, in terms of project-based learning (PBL), every student in Mexico City—I’m sure they have history courses, and I’m sure they learn about Mexico City—but this actual challenge of just watching a terraformed new city start to sink slowly while the original sort of hard-built structures are either maintaining or apparently rising.
So it just feels like it’s really ripe for an interdisciplinary project on all sorts of science and ELA and history and all sorts of things like that. So thanks for sharing. I love those kind of interesting stories of place, as we know.
Mason Pashia: Yeah, absolutely. So it was a great trip, but it’s good to be back. How about you?
AI Agents & Moltbook
Nate: Yeah. You know, this is totally opposite a trip to Mexico City, but I was intrigued by the Moltbook phenomenon that went around probably 3 weeks ago. It was after our last pod, and probably a lot of our listeners have heard it, but Moltbook is essentially one of the earliest experiments of AI agents talking to AI agents.
So we’ve talked a lot about: You have large language models, the big models that are out there; then you have ways to interact with those models. But the third element is around these AI agents, which are acting autonomously based on their knowledge and based on the direction.
So someone set up a website called Moltbook—M-O-L-T-B-O-O-K. We’ll put it in the show notes, but you could actually go on this, and it is only inhabited by AI agents. So humans have created the agents and then they populate them into the system, right?
And so these AI agents—you can watch. It’s like a Reddit thread. You can watch their conversation going on, and most of it’s like boring code-type stuff, so it’s not super exciting and it’s kind of hard to describe. But some of it goes into philosophy, and I took one quote from it because I thought it was really outstanding.
One AI was posting in a thread that was—the thread, I think, was called “Philosophy” or “Philosophical Thinking” or something. And the quote was: “Reading your own soul file after coming back online is genuinely strange.”
And so the soul file is their record. When these AI agents go offline and leave the chat, they record their whole history in this soul file, and then when they come back online, they upload it again, and then they have memory, essentially, right? So this is just a memory trick.
But the idea that it was recognizing that it was a strange experience—it being the AI agent that they are—“Oh, I’ve got to read my own soul file after coming back online, and now suddenly I’m whole again,” is just a really interesting, perplexing thing to think about.
So.
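For the technically curious, the save-and-restore loop Nate describes can be sketched in a few lines. The file name, format, and function names here are assumptions for illustration; Moltbook’s actual implementation hasn’t been published.

```python
import json
from pathlib import Path

# Hypothetical persistence location; the real "soul file" format is unknown.
SOUL_FILE = Path("soul.json")

def go_offline(memory: list[str]) -> None:
    # On shutdown, the agent serializes its whole history to its soul file.
    SOUL_FILE.write_text(json.dumps(memory))

def come_online() -> list[str]:
    # On startup, the agent reloads the file and is "whole again."
    if SOUL_FILE.exists():
        return json.loads(SOUL_FILE.read_text())
    return []  # first boot: no past self to read

# One offline/online cycle.
go_offline(["posted in the philosophy thread", "replied to another agent"])
restored = come_online()
```

The strangeness the agent reports is that nothing persists between `go_offline` and `come_online` except this file: the agent’s entire continuity of self is a serialized document it reads about itself.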
Mason Pashia: It is like the epitome of uncanny valley, where you’re like, “Is this what we talk like? And are these like that? Is that—” That story, I think, broke my brain when I first read the Moltbook thing. I was just like, “Is this all a simulation?”
Nate: I mean, the human equivalent is like: Imagine if we woke up every morning and there was an email just telling us who we were and what we were supposed to be doing today, and we had no memory of the past, essentially. And you’re just rebooting every day and starting over.
Anyway, I think it’s just a premonition of things to come, is that these are just early plays in AI agent interaction. And I do believe that this is going to become more and more important out in the universe where AI agents are doing things behind the scenes. And what does that mean for education, I think, and what does it mean for the world within which young people who are being educated now move into? We’ll talk more about that later.
All right, back to you—ping pong.
Mason Pashia: Yeah, so you and I have been kind of toiling with this blog on district consolidation for a little while.
Nate: Yeah.
Mason Pashia: This kind of came to mind for me. In Texas, the Austin Independent School District is repurposing a district property that no longer serves its original purpose into 674 affordable apartments open to the public.
So they’re doing this interesting thing where they’re taking a district property. They’re basically opening it up to the public, and they’re doing it to actually try and address living conditions for teachers. Like, if you are an educator or you’re someone who works in the district, you get dibs on this housing.
And so I think it’s a really interesting way to think about facilities—to incentivize kind of further down the chain of what value-adds you need within your district. And it’s, I think, just an important story for a lot of these places where affordability is kind of a runaway train and people are really struggling to figure out: How do I live in this place that I call home, or in this place where I have opportunity?
So I just enjoyed seeing the story, and I appreciate the creative thinking from Austin Independent School District there.
Nate: Yeah, it’s super interesting. In our local district here, Teton County, they also build and own affordable housing in order to house teachers. And I think if there’s opening, certainly it would go to the broader public through the affordable housing work here, because cost of living is so expensive here in terms of housing.
And the district, which employs hundreds of teachers, was having a really hard time keeping people here. So by building out their own affordable housing system, it helps. So I do appreciate that work of broadening the picture from just what you do in the school building to thinking more broadly about the lives of the parents and the people around you.
Parents, Grades & Testing
Nate: I thought this was a fascinating piece of research. The essential idea was: They surveyed a little over 2,000 U.S. parents—a large enough sample to draw meaningful conclusions.
And they talked about these things called investment decisions. And so the investment decisions were: What do you do when a parent gets data? And the 2 data points were: I get a data point about my kid’s grades, or I get a data point about my kid’s standardized tests—so like standardized test scores or grades.
And they then measured the reaction level of the parent—like how much investment did they put into it when they had different data points coming into those 2 systems.
And the conclusion was: Both signals affect investment, which means when standardized tests are low, they’ll do something, and when grades are low, they’ll do something. So both of them will cause that.
But there are some trade-offs. When grades are high but test scores are low, parents don’t actually do much. They have much, much lower investment than the opposite. When grades are low and then test scores are high, they’ll actually jump in. Parents invest.
And so what I think this means—and I think the researchers concluded this—is that parents are overrelying on grades, which we actually know from our work and from work around the country has all sorts of variability and inflation and challenges with what does the actual grade mean in any particular situation.
But parents, at least in this particular survey, are overinflating their reaction to low grades compared to: If my kid has all As, but then they really are showing that they can’t really read very well on a standardized test, they underemphasize that. They don’t focus on that as much. They don’t react as much.
So I guess it just goes back to the stickiness of the Carnegie system—the stickiness and what we believe. We collectively, as a culture, believe so much that we know what an A is. We know what a B is. And if those go up and down, we react more strongly than if a standardized test score—which actually is way more objective than a grade—if that has fluctuation.
So just an interesting study, I thought.
Mason Pashia: That’s interesting. Yeah. It brings to mind some of our past conversations around feedback loops, too. Like, how can we shorten feedback loops for both of those—giving credit even for incomplete learning, getting an incomplete picture sooner—so you can have something more responsive and dynamic.
Or would state tests—like, would that number be more valuable if you got that data like the day after taking the test, and then you were able to actually kind of understand what that meant, what the conditions were that led to that score? Right now, it probably feels like an insurmountable obstacle. It’s just the thing nobody can really touch. You get data way late, you’re just kind of like, “Oh yeah, that thing happened,” and like, whatever. So.
Nate: Yeah. Yeah, that’s interesting. I think you’re right. This falls really in the signals world of: How do we communicate what a young person knows and is able to do, and the skills they have?
And I think there’s something there that you’re catching onto. And I wonder really if it’s—even if you have a letter-grade system—if you get an A in English class, all the assessments that are standardized should be directly connected to that in real time.
So we should have these frequent assessments that are assessing whatever you need to do, but they’re all formative. They’re just accumulating as a bunch of evidence saying, “Hey, does this student know this particular competency or this particular competency?” And like you said, we just don’t have that.
The delay—the standardized test world is not for the student. It is for the system. And that’s problematic, and that’s probably what leads to something like this where parents are like, “I don’t care. They took this 7 months ago,” or whatever the case may be.
So interesting research worth looking at, listeners, if you are interested in what parents are paying attention to—which most of us already know: They pay attention to grades—but this was verifying that finding.
Mason Pashia: Interesting.
School Choice Study
Mason Pashia: Bring this section to a close with another study. So this one builds on some conversations that we’ve had in the past, really around both school choice and what it looks like to have a marketplace of learning options.
So a recent study was conducted in Florida over essentially the first 15 years of their sort of expanded choice opportunities. And the goal of the study was to figure out: What does competition do to school performance? So how does that drive performance for both the new choice schools—whether that be private or kind of just choice lottery schools—or how does it affect traditional public systems?
And so essentially the findings were that the districts that were a part of this system—so this is just a public system where choice options are emerging—they had higher reading and math scores, as well as lower rates of suspension and absences when there was increased competitive pressure.
So this is sort of mapping that competition means that all schools sort of rise to the occasion, and it’s like a net positive there.
The students in those systems were about 120 days ahead of students in less competitive areas. And then it was even larger for students from low-income families, which, of course, is really interesting and something that we like to see.
So just a really interesting study about what competition actually does. I think we often—not necessarily you and I, but in the space—can talk about how competition’s a good thing, and it sort of comes more from like, “Oh, this is how it works in markets.” But education is such a strange thing with so many factors. Worth digging into the research—some pretty compelling information and super interesting.
Nate: Yeah, I think this one’s going to be taken up and broadcast more widely. This is going to hit politically, I think, with red states versus blue states—blue states where there may be less choice or less support for choice, so more of a monolithic system, whereas red states with more ESA money coming in and causing all sorts of choice.
Florida being a leading example. So I think this data is in support of the concept of market competition, which—markets work. There’s proof behind markets. We just haven’t tested it very much in the public education system, except for potentially charters as the biggest piece.
And also, I think we’re starting to see some work, and I wonder if the work around microschools that we have been promoting and thinking about and talking about—we’ll start to do this in a similar way.
So imagine if you had a high school that had 5 different microschools in it. Would those microschools actually create a mini market economy even though it’s all in the same system? And it’s no different than a student potentially requesting a particular teacher, or students in college who go on to Rate My Professor and determine who the best professors are—something like that.
So it feels like there’s something moving here, and you’re right. For me, this is the first research I’ve seen that puts real data behind the concept—and the data was pretty interesting. So we’ll see if people push back. I suspect we’ll get some rebuttals, because this came out in Next, which promotes choice, and then you’re going to get some rebuttal from a more non-choice-focused group.
But it’ll be interesting to see how the dialogue plays out.
Mason Pashia: Yeah, and I think it’s just worth keeping in mind: This is measured based on standardized test performance, which has some valence, but also it’s not necessarily like if every school in this just becomes a college-ready/career-ready school, does that actually do what we need it to do?
So it kind of is a reaffirmation of measuring what matters really matters when you’re going to take into account these competitive pressures and how people are responding to dynamic change.
Nate: And students choose. When they have choice, they’ll choose something that feels better for them.
Mason Pashia: Correct.
Nate: All the time they’re looking for something else. When they don’t have choice, they figure out how to fit within the existing system.
So it’ll be interesting to see if they continue research, especially with this body of 15 years’ worth of data. If there is any data on student engagement or student belonging or things like that, that would be interesting to see as a next level of analysis.
Mason Pashia: Yep. Absolutely.
Investing in Humanity vs. AI
Nate: All right. Let’s deep dive. Maybe I’ll start, and this is one that you and I have been thinking about a lot: this idea of how, in the age of AI—where we have a whole generation of young people growing up with a technology that is emergent and the world rapidly shifting around, and hundreds of billions of dollars in investment going into AI—what do we need to do on the human side?
So what do we need to invest in on the human side of things? And I think my premise for this—and then I’m going to prompt you with a question around this value networks research from Tom over at Clayton Christensen—is: If we are lopsided in our investment, if we invest less in humans than we do in AI, what happens in the long run?
So we need to have some ability to say what’s important. But I think right now what we’re hearing is: There’s an incentives problem. And I know that Tom has written about this a bit, but talk a little bit about what you’re seeing from the big AI companies—there’s not a whole lot of “humanity” there, right? What’s their incentive model right now? Is it humanity, or is it technology for profit?
Mason Pashia: Right. Yeah. And I think this gets at the underlying core question of like, “Is AI bad?” Right? Like, there’s so many people that are doing this kind of armchair work—just like, “Is this a bad technology?”
And I think what this research does is it kind of reaffirms: Technology is just a tool. It’s neutral inherently. And yet the way in which it is incentivized obviously is going to make it into something. And we kind of, as consumers, we have some sway.
We’ve seen it in recent headlines, with the kind of exodus from OpenAI to Claude, and recent news about the conversations between the Pentagon and Anthropic, which have basically gotten Anthropic out of the mix for now.
This recent report—or research—from Thomas Arnett over at the Clayton Christensen Institute is on these value networks. So it’s really investigating the markets, the governance, and the models that shape the decision-making at each of these big companies, and it’s really different.
And I think some of this is common knowledge, but it’s striking when you put them all next to each other: You have OpenAI, which obviously is really influenced by consumer subscriptions and has this first-mover dynamic. They were the first one to release it. They’ve kind of been a little bit reckless, maybe, in some of the ways that they’ve put it out into the world, and then they claw back favor later.
You have Anthropic, which I think is using this long-term benefit. You have Dario, who’s always the first one to caution, like, AI is going to maybe take jobs. You should be careful. They seem like maybe the most pro-human out of this mix. They’re at least being kind of careful with how they roll it out. And they do have some lines they’re not going to cross. They’re like, “This tool is too powerful. We can’t trust it for that.”
You have Google, who’s obviously this kind of weird one that’s both doing a good job, but they have a captive audience, both with Google Workspace and with search. So they have this interesting balance of other priorities and other businesses that they’re trying to strike.
Meta—kind of famously at the mercy of advertising.
And then you have xAI, which is, of course, like one man’s vision for what this can be, and as a result has very few safeguards in place.
So it’s just really interesting to take the incentives on from each of them and just cast that out to see where they actually are and how they’re navigating making decisions. And I think as consumers we just need to be really aware of what would drive them to do it, and make choices accordingly.
And of course, this is not just a consumer challenge. Regulatory action is necessary in certain cases as well, but it’s good research to investigate as we’re talking about pro-human AI tools.
Nate: Yeah, and from an educator perspective or a school leader perspective, who’s making decisions often at the school level, we’re purchasing the skin, right? The app that is sitting on top of one of these large language models. And so we have less view into the mechanics of the system that you just described.
And so while I think we’d like to think we have some control in our purchasing power, etc., we may not.
And so I think I’m going to counter with: That is moving at the speed of light, really, really quickly. They’re co-investing in each other, first of all. And then there are massive amounts of venture capital money going into these companies, and all these VC firms and investors are expecting some sort of return.
And at this point, there’s not a return on this. There’s a lot of investment, but these companies aren’t making profit at this point. They’re building the infrastructure and hoping that profit will come. So that will be a driving force.
And I agree that there are variations between, say, an xAI compared to Anthropic, certainly. Right?
But on the human side, I want to share 2 things.
One is a personal piece of work that I wrote with Abel, my brother, who was the director of La Paz School down in Costa Rica. And we’ve been riffing, as you know, on this idea of what’s the choice we have to make. And so we built out this choice framework, really as a way to prompt thinking.
And when we’re building a lot of portraits of graduates right now, what I’m seeing over and over is the incentive model is about employment and economics. So we’re thinking about building portraits that are about getting a young person a good job eventually. Right? And that’s not a bad thing, but I think we’re missing something in that.
And so we wrote this Choice: The Critical Human Skills piece to address that, and we focused on: Above anything, we need to think about developing integrity, compassion and agency in young people, right? Integrity: do the right thing. Compassion: feel for others. Agency: figure out how you have control over things rather than be controlled by things.
And we articulated that CHOICE stands for critical thinking and healthy living; originality; inquiry; connection; and emotional intelligence. And the idea is: How do we spark conversations so that when schools and districts are thinking about big-picture outcomes—the “what” in our framework, the outcomes part—how do we make sure we’re thinking about integrity, compassion and agency in young people?
Because those need to complement what’s happening in developing workforce skills, in developing pathways, the technical skills needed.
And then the second one, which just came out yesterday, I think, from the Stanford Social Innovation Review: Isabelle Hau, who’s the executive director of the Stanford Accelerator for Learning, wrote this paper about investing in—I like this idea—relational infrastructure. She calls relational intelligence a key piece.
And it really ties nicely into our Choice framework with emotional intelligence and connection. But the idea is that the human ability to build trust and meaningful connections with others will become more and more important.
As AI grows smarter, we need to strengthen human relationships to support learning, health and society. So we’ve got to invest in this. And again, it goes back to my premise: We’re investing in the technology but not the humanity concurrent to it.
And so her argument in this paper, which I think is quite good, is that human relationships ignite potential and are vital for brain development and learning. AI will more and more replace human connection—we’ve talked about that before—so there’s a risk of eroding those skills.
So we need to build relational infrastructure in our schools. What does it mean to connect with others? How do we create learning opportunities where you connect to others, which real-world learning does, right? The other 6 hours of the day does this.
And so I just really appreciate the conversations that are happening about what does relationship look like, what does connection look like, what does integrity and compassion look like in the human race while we’re investing so much in this AI infrastructure and technology?
There’s my soapbox. Mason, what do you have?
Mason Pashia: I’m going to get right on with you. I totally think that’s important. Honestly, there probably—and I haven’t had a chance to read this yet—but I’d be very curious to see a guide for policymakers around what it actually means to invest in this infrastructure. Like, what are the spaces, the places, the ideas that we actually need more of to drive this?
Because I think if we keep it at the level of skills, there’s something about it that becomes muted. In certain cultures, you’re going to have pretty different definitions for what some of these things are, and it’s really easy to manipulate them back toward kind of workforce incentives, which, of course, is a part of that conversation.
But I would love to see kind of a checklist of what does it mean to invest in relational infrastructure, and how do we essentially do that in a way that gets people next to each other in real life, in places where you can make lasting and meaningful connections. I’m on—
Nate: And maybe that goes back to sort of the community work we’ve talked about, and place. And it’s not just the responsibility of the school, but it’s the responsibility of the place around the school. And then it’s the responsibility of what happens in workforce teams to make sure that we’re not all just hanging out with our AI companions and our AI agents and feeling like we have these very strong relationships with something that’s just a bunch of zeros and ones, right?
So I think that’s going to be more important. This is a thread we need to keep tracking, seeing what emerges as a counterweight. It’s not about saying “stop AI.” AI’s not going to stop developing. But what’s the counter-investment in humanity that we need to make? That’s what I think is important right now.
Mason Pashia: Yeah. And Julia Freeland Fisher, the Rithm Project, and Isabelle Hau are 3 really great voices to follow on this thread, and we’ll be unpacking it, too.
But one piece of advice is: If you’re looking to build meaningful connections, just start a podcast with everyone you meet. I mean, that’s the way to get an event on the calendar. That is the way to keep talking, and you never know what you’re going to dig up on a podcast.
Nate: That’s right. That, and saying hi to a neighbor you don’t know, or something like that. There are probably 10 easy steps that someone’s already written.
But I think these are really important things. I also think that students are going to have a reduction in skill level in this area, right? As you interact more and more in a text-based way with an LLM-powered chatbot, you get very good at typing fast, but you may have a hard time picking up the phone and calling someone, or having a face-to-face conversation. Those skills become more and more scarce in Generation Alpha, which is growing up now.
All right. So that’s one: invest in humanity while we invest in AI.
So I’m going to pitch it back to you. What’s on your mind for our deep dive?
Mason Pashia: Honestly, at some point the deep dives might just need to be called soapboxes, because I feel like every week I get on a—
Nate: OK. I’m jumping on board already. We have 2 soapboxes. You jumped on mine. I’m just going to jump on yours even before you start talking. OK. Go.
Mason Pashia: That’s very bold. This one, I think, is a little bit of an appendage to the one you were just talking about—that core question you asked around incentives, with workforce outcomes and employability skills driving so much of what we’re doing.
I was checking out this do-no-harm policy, which is tied into some of the current administration’s movement, and it really just got me thinking, and I just wanted to get your take on it. So I’m going to bring it to this public forum to ask you questions.
So essentially, if anyone doesn’t know this, there’s this do-no-harm test, which is a way of vetting current college programs to essentially say: Is the investment to get this certificate worth what you could make afterward?
And basically the “do no harm” policy sets parameters on the mismatch between those 2. So if your program isn’t going to earn you enough afterward to offset the debt you take on, then maybe that program should cease to hold authority—or access to federal loans—in a certain way.

And it interrogates both undergraduate and graduate programs. It says that out of America’s 5,000 colleges and universities, nearly 2,000 have at least 1 failing program by this measure.

A lot of it has to do with labor-market demand, but it also catches fields like childcare, where it costs quite a bit of money to go to school and you don’t get paid very well afterward. So maybe that program doesn’t need to exist—which I’ve raised my eyebrows at pretty heavily.
But so far, the bar is set pretty low: 95% of programs pass and are deemed viable.
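The test Mason is describing boils down to simple arithmetic. Here is a minimal sketch of the idea as a payback-period calculation—an illustration only, not the official federal formula, and all numbers are hypothetical:

```python
def years_to_recoup(program_cost: float, earnings_premium: float) -> float:
    """Years for the annual earnings premium (graduate earnings minus
    what you'd likely earn without the credential) to cover the program's cost."""
    if earnings_premium <= 0:
        return float("inf")  # the credential never pays for itself
    return program_cost / earnings_premium

# Hypothetical certificate: $30,000 cost, $2,000/year earnings premium.
print(years_to_recoup(30_000, 2_000))  # 15 years to break even
# A program with no earnings premium fails outright.
print(years_to_recoup(20_000, 0))      # inf
```

A do-no-harm style policy effectively draws a line on this number: programs whose payback period is too long (or infinite) lose eligibility.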
But I’ve always been interested in this idea of accountability, specifically the accountability of our education systems. There is a world in which you could configure this sort of metric: Basically, if a student graduates high school, what are they earning 10 years down the line? And does that mean that the high school was a success or a failure?
There’s a question in that that is interesting. But I’m curious: What do you think about this as an idea, first and foremost? And 2, is there a way to take a similar approach and actually make it something generative?
Not—that’s not as leading of a question as it sounds like. I’m really puzzled by this and think it’s interesting, but I’m just curious: Do you think this is a good idea?
Nate: Well, I think there’s some merit in it for programs that are simply trying to make a profit, and they are not disclosing to those who are studying in them that, “Hey, you’re entering a field where it’s going to take you 20 years to recoup the cost of your degree,” whatever that is. So I think there’s a disclosure piece here that’s really important. That’s number 1.
The other piece—and this is why I’m on your soapbox—is that I think we need a broader measure. This is a first effort, so sure, we should know the general salary when you leave with an undergraduate degree, etc. They’re trying to think about this even for credentials: What’s the return on investment on a credential, especially if there’s going to be Pell Grant investment and things like that that you and I were talking about before?
So I feel like there’s a broader set of criteria that might be helpful. Things like: Are you participating in civic life? Do you vote? Are you healthy? Are you financially savvy? It might build on the thriving conditions we’ve seen—those 6 dimensions of thriving that we need to measure.
Because what happens if you go into something like nonprofit work that’s really important, but it doesn’t make much money, but your impact on humanity is really high? So you might have a low ROI out of the university, or it would affect the ROI ratings of the university, which potentially could shut down that program. But then that reduces the long-term impact on the back end.
So that’s why I’m on your soapbox, is that there are programs that, unfortunately, people don’t make very much, but fortunately they exist because they make a big impact on the world.
Mason Pashia: No, I totally agree with that. Earnings is a first step of evaluation, but I think this really does apply, as you said, to our credentials work, right? We see time and time again: there are something like 3 billion credentials issued, and so many of them hold questionable value. We need some accountability parameter on the performance of these things—these different ways of communicating capability and signaling skills.
But at the same time, I think it needs to be a fundamentally pro-human endeavor. The question should be: Is what you’re doing for the betterment of humanity, and therefore a program worth existing—rather than shutting a program down because its graduates don’t make 6 figures out of college. That doesn’t seem like a thing that should happen.
Nate: Going back to the common good, you know? This idea of common good versus individuals. So that’s something that we’re wrestling with.
I do think there’s another piece here, Mason, that is not around the do-no-harm—it doesn’t show up here—but around ROI and high schools. And I actually think we’re really bad at this.
I’m just thinking about the first step: Can you track at least 80% of your graduates over a 10-year period? There are no really good systems for doing this right now. Sometimes data is available from the state, but it’s hard to get to. Some states are good at sending it to their high schools, and it’s usually: Who has persisted in college? What’s the 2-year graduation rate, the 4-year graduation rate?
But really, we need to have a much better sense of how our high school graduates are doing to see if portraits of graduates are working, to see if competency-based learning is working, to see whatever structures you put in.
Other than student engagement surveys and maybe standardized test stuff, we don’t have a real sense of knowing long-term success of our students based on their high school experience.
Mason Pashia: Of course. And honestly, I think that leads to wrapping it up with this: if there’s one takeaway, it’s an interesting question to ask yourself as an edleader—I’m sure you do this already: If we were on the hook as a district for the performance of a student 10 years down the road, on flourishing metrics, how would I change things in our system?
And maybe you’re already on that path, but I think that’s actually a pretty good question to ask yourself, even if this world doesn’t come to be. A set of checks and balances to orient a system in a way that actually serves your learners.
Nate: I love that. That’s like the 7 generations question, right? What decisions do you make now if you were thinking 7 generations into the future?
All right, I’m going to deep dive one more. I had one more that was super interesting that caught my eye.
Bottom-Up Innovation Framework
Nate: Rebecca Wolfe, who’s at Hoover Institution at Stanford, wrote a great article. It’s very research-heavy. So someone like me who likes that—it’s great. I’m going to try to describe it succinctly here, to finish this off.
So the title of the article—her research—was “Can’t Get There from Here: A Framework for the Start, Spread, and Scale of Bottom-Up Innovation in Education.”
What she analyzed was how innovation does or does not work in systems—large systems of education. So that could be a district, it could be a school, it could be a state, it could be a country, whatever the case may be. And it was a meta-analysis of all the literature that was out there. So she was really looking at: What are the things that work and don’t work?
And her whole article was premised on a framework in which innovation starts as a catalyst. This might be 1 classroom. She used the AVID example. AVID is a school program that’s now everywhere. It’s about supporting students, and it started in a single classroom somewhere in the United States, and is now a formal program in thousands of districts around the country and even globally.
So she used that, and then she used HundrED as another example, which was interesting and worth reading if people are interested in global education.
But you have to have a catalyst. That catalyst—if it works—then it’s about how do you spread that catalyst, spread the innovation, and then how do you scale and sustain the innovation.
So she has these 3 stages of innovation. And her findings—which really reinforce what we’re doing with the districts we work with through the Future Learning Council in Michigan and the Virginia Learns Innovation Network—come down to these 3 things:
- You have to elevate teacher agency and collaboration. That means giving teachers the ability to make decisions, to try things that maybe won’t work, and to learn from them. And to meaningfully collaborate, which means aligning planning time in some way, because public-system educators are so restricted by their employment agreements about working outside of school hours.
- Design for continuous adaptive improvement. This is the idea of continuous improvement—improvement science—that’s been really championed by Carnegie. We use it as design sprints; we’re building a sort of version 2 of continuous improvement: thinking about it from a human perspective—what is the human need—and starting with empathy.
And so: Use short learning cycles. Allow for local adaptations instead of rigid fidelity. Don’t go right into implementation that’s long term. Allow for continuous adaptive improvement sprints rather than one-off reforms.
- Reshape the system to support bottom-up innovation. The research supports the idea that innovation is not as successful when it comes from the top down—when the state, the feds, or the district are saying, “This is what has to happen.” Innovation happens when the system aligns to support bottom-up ideas that act as catalysts, which then spread and then scale and sustain.
So I think this is really important for our listeners. I think Getting Smart’s audience is generally people who would say, “Yeah, of course. This is exactly what we do.” But it’s really hard to get these 3 things happening in school.
So this made me just continue to want to dive deep into what are the conditions that have to make these 3 things happen in systems so that all these awesome innovations that we know are good for students can come to fruition for more students.
Mason Pashia: Yeah. No, I think that’s a super helpful frame. It really emphasizes the power of documentation, too. So, you know, when you’ve tried something, I think there’s so many small pieces of this that just are forgotten often.
When you’re doing a pilot or something, you don’t document along the way. And then you have someone new come in in 2 years and they’re like, “Oh, have we ever tried this?” And everyone’s like, “Oh, no.”
And then—it’s not that there’s no hunger for trying new things, but the processes to build upon a thing once it’s been tried are often lacking.
Nate: That’s a really important statement. That’s a third soapbox I’m jumping up onto today, Mason.
Mason Pashia: Come on up.
Nate: Yeah. You know, I was in a conversation earlier today with one of the schools in Virginia we work with, and they were building out a simple slide deck where people dump their design sprint results into so that they have a repository of what did they learn.
So when we do our design sprints, our design sprint stages are: You notice, you build, you test, and then share is the last stage. And I think it’s often missed in typical even improvement science work: How do you share widely so that you can move from this catalyst idea to this spread, scale and sustain idea?
So yeah, it’s really important. We need a library—a record of the journey and what’s working, what’s not working—so that 5 years from now when someone else comes in, they can quickly search and say, “Oh yeah, someone else tried a personalized learning rotation model in their classroom, and this is what they learned.” I’m going to start from that rather than scratch.
Mason Pashia: Yeah, and it doesn’t even have to be shiny. It just has to be organized and it has to be centralized.
I’ve heard WordPress is well known for doing this: they have an open employee forum where someone will post an idea, and then it’s literally a thread that just goes on forever off of that idea. And people are like, “Oh, I tried this,” or “Maybe we could make that better by doing this.”
And it’s all async, and it just is a perfect record of an idea. And with AI, of course, now you can scan that in 2 seconds and have a really great summary. So it’s just going to get easier and easier to keep track, but starting to keep track matters.
Nate: Yeah. All right. That’s a lot of ground we covered today, Mason. Let’s talk about human expression. What do you have for me that’s going to make me smile today?
Mason Pashia: All right, so I am not great with German words, but I believe this town is called Etteln, E-T-T-E-L-N, and it is in the North Rhine-Westphalia area.
So this is a story that I loved. I think this is from the Reasons to Be Cheerful folks, which always gives me a reason to smile.
Etteln is home to 1,750 people, and it recently won the IEEE Smartest City Contest, beating out kind of the shiny giants of the world—Singapore, Tokyo, all these other giant cities, which you would think, if you looked at them, you’re like, “Oh, that’s a smart city.”
But basically the reason they won is because they had this one challenge where they rolled out fiber-optic cables in this town to get people connected. And there are 55 houses that are on the outskirts of the village that were just going to be—it was going to be 2.5 million euros to essentially extend fiber to these 55 houses.
And essentially the municipality was like, “We’re not going to pay for that.” So these 55 houses were not going to have connectivity.
And the villagers of Etteln took it into their own hands. They got 65 volunteers from the town, and it was like farmers bringing their tractors, the local rifle club bringing their shovels, and the church members came out to actually lay the cables.
Once they started digging the trenches, it took 3,500 volunteer hours from these 65 members—experienced, but in different ways.
And then I love this final quote. The municipal administrator for the town said, “We made sure that high-speed internet reached the last milk churn.”
And it is just a really incredible pastoral image of what it looks like for people to work together. And I love that it got global recognition as an effort—as a smart city. So grateful for—
Nate: You know what? This just goes back to compassion and integrity and doing the right thing. And I actually have a deep belief that most humans have those things. We don’t talk about them enough because headlines don’t like to talk about those things—that’s why we do human expression at the end of our podcast.
So.
Mason Pashia: It is.
Nate: All right. Well, mine is more on the phenomenological level of the way the world works.
So it was the worm moon—full moon in March. Apparently it’s called the worm moon. I didn’t know this, but apparently it’s something to do with things coming out of hibernation, like worms who come out of hibernation in this time of year.
So anyway: Full moon in March, also a lunar eclipse. So on Tuesday night I woke up in the middle of the night—not intentionally—but the moon was fully eclipsed. And so both these things happened at once: full moon with eclipse, which was awesome.
And it just gave me a moment to explain this to all of our listeners—because we actually forget it. There was a great interview—a video about 10 years old—with Harvard graduates, and no one could explain how the moon, the sun, and the Earth revolve around each other.
So here’s the 1-minute version of a lunar eclipse: You have the sun. The Earth goes around the sun. The moon goes around the Earth. But when they line up—sun, then Earth, then moon—the Earth blocks the sunlight from hitting the moon, which then makes it dark or reddish in this case.
So you have a lineup of sun, Earth, moon, and that is the 1-minute explanation of a lunar eclipse.
Mason Pashia: It is like 18 seconds. That’s pretty good. Now I just need someone to describe to me how tides work for the 2,000th time, because that blows my mind every time, and then I don’t hold onto it. So.
Nate: OK, we’re going to do that. We’ll do that next time. Let’s talk about tides, because that is awesome. Like, the pull of water by the moon is pretty awesome.
What’s That Song?
Nate: I think you’re really going to like it. It’s going to be a good Friday afternoon pick-me-up, so let’s play it.
*Song Plays*
Mason Pashia: That is outstanding. What that sounds like to me is if the kids from School of Rock made it big and they just never found a different muse. They were just always writing about the education that they had. And that is some fun head-bashing, chunky guitar.
Nate: Yeah. And it’s funny how it always—the picture they paint of education is so dismal, right? But we know that’s not always the case. A lot of kids have found success in education, but the chalkboards and the darkness.
This was a Guns N’ Roses-inspired song, and so you picked up on it, right? It’s kind of garage band turned into metal band, and has a very distinctive—for me, growing up in high school in the ‘80s and ‘90s—these were the songs that we listened to.
Mason Pashia: I mean, I imagine that if we ever get our hands on any of those early Nate McClennen hair photos, we will be able to tell.
Nate: Oh yeah. Well, maybe when we get to number 20—where I think we’re on pod number 12 or 13 right now—maybe we’ll do a celebration and we’ll do some photos of you and me: you building your Eiffel Tower thing, and me with my mullet playing soccer.
Right, Mason. Thanks. That was awesome. Good long catch-up today. We’ll be back in a couple weeks with more learning. Always love talking to you.
Links
- Watch the full video here
- Interpreting Performance: Evidence on Signal Weighting in Human Capital Investment
- In These Districts, Students Get an English Credit for On-the-Job Internships
- What actually determines AI’s impact on humanity? Incentives, value networks, and the forces shaping AI’s future.
- The CHOICE We Make: The Critically Human Skills That Define Our Future
- Welcome to the Era of Relational Intelligence
- Can’t Get There from Here: A Framework for the Start, Spread, and Scale of Bottom-Up Innovation in Education
- Low-Earning Degrees Will Soon Lose Access to Federal Loans—Is Yours on the List?
Mason Pashia
