Evaluating educational practices with Dr. Shayne Piasta

00:00:13 Tiffany Hogan: Welcome to See Hear Speak Podcast Episode 43. In this episode I talk with Shayne Piasta about evidence-based educational practices and how randomized control trials provide important evidence. I’m a long-time collaborator with Dr. Piasta and am so excited to have her share her extensive knowledge with you. After listening, don’t forget to check out the website, www.seehearspeakpodcast.com, to sign up for email alerts for new episodes and content, read a transcript of this podcast, access articles and resources that we discussed, and find more information about our guests.

00:00:55 TH: Welcome to SeeHearSpeak podcast, Episode 43. Today, I have my friend and colleague, Shayne Piasta and I'm excited for her to introduce herself.

00:01:04 Shayne Piasta: Thank you so much for having me. I am a professor in the Department of Teaching and Learning at the Ohio State University, and also a faculty associate here at the Crane Center for Early Childhood Research and Policy. My background is as a developmental psychologist, and I study how children develop early literacy skills, but more and more so how we can best support that skill development in classrooms. A lot of my research focuses on generating evidence, especially causally interpretable evidence regarding educational practices broadly defined, so I think of that as including things not just curriculum and intervention, but also thinking about instructional strategies, professional development for teachers, and even teacher knowledge, and I'm really excited to chat with you about that today.

00:01:56 TH: Well, great. Well, we've known each other for a long time; we've worked on the Language and Reading Research Consortium, which started in 2010, but we started working on it before then, when you were a postdoc. So it's exciting now that we are... We had a randomized controlled trial that we ran then, and now we're currently working on a randomized control trial. And when I was working on the LARC randomized control trial, I swore I would never do another randomized control trial. And maybe you could talk a little bit about why I might have said that, but then... Wait, hold on, I'm doing it now. So something must have happened. And you do them all the time. So what is a randomized control trial? Why are they so important for our field?

00:02:37 SP: Yeah, so randomized control trials are really one of the cruxes of my work. So let's start with what that is. So a randomized control trial is a true experiment that's done in a field-based setting. You might have heard this term in the context of medical research, for example, where we have a new medication and we wanna know if that medication has the intended effects. And so we randomize, we assign by lottery people to usually one of two groups, so some people get the new medication and some people don't get the medication, and they might get a placebo instead, so they don't know that they're not getting the medication.

00:03:22 SP: And then what you do is you compare the outcomes. So in education, that becomes a little bit more complex, because we are looking at phenomena that are not quite as simple as whether or not you take a medication. So in the work that I do and the work that we've done together, we have done things like randomized control trials of particular curricula, so the Let's Know curriculum, where we randomly assigned classrooms to try out that curriculum or to not try out that curriculum. And what we're interested in is what we see as the impact of the curriculum on kids' learning, typically at the end of the controlled trial.

00:04:15 SP: One of the reasons that I focus on this so heavily in my work, and that we have used this as a design in the work we've done together, is because it's one of the strongest designs if you're trying to make what's called a causal inference. You're trying to know, does this new intervention, does this new curriculum cause better learning for kids? And the key to that is really this idea of randomizing, because we can't possibly know all the factors that are playing into a given child's learning and development. The best we can do is make sure that all of those factors are equated across these two groups as much as possible, and those factors are things we might know of and think about, but they could also be unknown factors. And so the best way to make sure that those are equivalent is to just use a lottery system to put kids into groups or teachers into groups randomly, and that kind of washes out any effects that we haven't thought of.
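To make the lottery idea concrete, here is a minimal sketch in Python of random assignment at the classroom level. The classroom IDs and the fifty-fifty split are hypothetical, not the procedure from any particular study discussed here:

```python
# A minimal sketch of lottery-style random assignment, as described above.
# The classroom IDs are hypothetical; any unit (child, teacher, school)
# could be substituted.
import random

classrooms = [f"classroom_{i}" for i in range(1, 41)]  # 40 hypothetical classrooms

rng = random.Random(42)  # fixed seed so the assignment is reproducible/auditable
shuffled = classrooms.copy()
rng.shuffle(shuffled)

half = len(shuffled) // 2
treatment = shuffled[:half]   # these classrooms try the new curriculum
control = shuffled[half:]     # these continue business as usual

print(len(treatment), len(control))  # 20 20
```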

00:05:22 TH: Yeah, so you have to be so strict in what you're doing around it, in terms of really keeping the groups separate as much as possible, but they are in the school system, so we have to ask... I always think about this ask of schools, and it is hard to get school partners to do this work, because we are asking them, first, to be a part of a study that's very controlled. That means that we don't have a lot of flexibility in the types of measures we're giving or when we give the measures; we have to be as flexible as we can in terms of seeing the children, but we have to be also pretty rigorous in terms of the timing, so that timing isn't a factor. But then we also have to go in and sell some early work that says that this could be effective, 'cause we can't just go in and say "Well, whatever, we just decided we wanna try this out."

00:06:11 TH: We have to have some pretty... We have to have some sense that this might work based on other studies, and then we go in and we say "Hey, this is a cool thing that might work, like language comprehension stimulation, but only half of your children are going to get it," which is always disappointing. And I have to say, as someone who's gone through controlled trials as a cancer patient, I'm always super disappointed when you don't know what you've got, but you're disappointed when it's a control trial that way, 'cause you're like "I just want the actual intervention." So then you have to try to ask them not to give parts of the intervention. And then they have to give it in a very rigorous way, so you have to take some of that control out, 'cause you have to have good fidelity. So it is a tall order for our school partners.

00:06:56 SP: It's a lot, and I think a couple of things that you were saying there are really key. So when we talk with schools about doing this, really being as upfront as possible that this is our preliminary evidence and this might be helpful, but that might is key, right? So I always go into these conversations knowing that I'm pretty sure this isn't [0:07:17.9] ____. In the type of work that we do, we're not going to do harm to kids, because then we would never try this out. But being also very clear that this might be better than what is typically happening in your school, 'cause that's usually what our control condition is, business as usual, whatever else is happening in the school that they're not getting as part of this new curriculum or intervention.

00:07:48 SP: And then the other piece is also being really clear as to why it's so important that we do the trial in this way. And acknowledging that there are all of these trade-offs, but that the only way we will ever know whether this curriculum, this intervention, this practice does what we're hoping is if we do withhold that from some students initially, with the hope that, if we find out that this is a great thing to do, we can then go in and offer this to more and more students.

00:08:25 TH: And it's so rare that curriculums have gone through this randomized control trial process for many of these reasons, but it's also so critically important with the science of reading movement, that we do have the science behind what we are doing in the classroom. So it is that important trade-off that we see between needing to do this work, but also some of the downsides of it. And another consideration too, with randomized control trials is the number of participants and what is that unit of participation. Can you talk a little bit about that power analysis that we do to get the number of teachers or students we need?

00:09:05 SP: Sure, so this is another one of those trade-offs. So in thinking about what we were just chatting about in terms of, it's really hard to ask a school to give something to some kids and not to other kids. One way of getting around that is to randomly assign schools, instead of randomly assigning teachers within a school, for example, or classrooms within a school or kids within a school. But at the point that you're randomly assigning schools, you now have to have way more schools than you would if you were, for instance, randomly assigning classrooms or kids.

00:09:43 SP: So what you're talking about, statistical power, is basically: if you're gonna run a study like this, you want to know that you have enough participants that you can detect an effect that's meaningful, one that will help us understand the impact of the intervention or curriculum. And so you have to have a certain number of participants. And when those participants are what we call nested, so you can have children within classrooms, within schools, there are certain statistical assumptions that need to be accounted for, and that increases when you increase the levels of nesting.

00:10:34 SP: So where we could do a randomized controlled trial, and if we were individually assigning kids to conditions, we could have... I don't know, you'd have to figure out all the other parameters, but it's conceivable that you could need... You know, I just had a student do a pilot study and they needed 40 children, but when you think about assigning classrooms or assigning schools, you might need that upper level unit to be more like 40 or even more than that. And so now you're thinking about, okay, I need 40 classrooms to participate in this study, and within each of those classrooms, I need five kids so that I'm representing the kids that might be enrolled in that particular classroom.
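The trade-off described here can be made concrete with the standard design-effect formula for cluster-randomized trials, deff = 1 + (m - 1) × ICC, where m is the number of children per classroom and ICC is the intraclass correlation (the share of outcome variance that sits between classrooms). A minimal sketch, with purely illustrative numbers; the 200-child baseline and the ICC values are assumptions, not figures from the studies discussed:

```python
# Why nesting inflates sample-size requirements: the design effect
# deff = 1 + (m - 1) * ICC multiplies the sample size an individually
# randomized study would need. All numbers below are illustrative.
import math

def required_total_n(n_individual: int, cluster_size: int, icc: float) -> int:
    """Total children needed under cluster randomization to match the power
    of an individually randomized study that needs n_individual children."""
    deff = 1 + (cluster_size - 1) * icc  # variance inflation from clustering
    return math.ceil(n_individual * deff)

n_flat = 200  # hypothetical: children needed if randomizing individually
for icc in (0.05, 0.15, 0.25):
    n = required_total_n(n_flat, cluster_size=5, icc=icc)
    print(f"ICC={icc:.2f}: ~{n} children, i.e. ~{math.ceil(n / 5)} classrooms")
```

Even a modest ICC of 0.15 pushes the hypothetical 200-child study to roughly 320 children across 64 classrooms, which is the "way more schools" problem described above.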

00:11:26 SP: So again, there are these trade-offs, where it might be easier and make more sense to have some schools do the intervention and some schools just wait till the next year to do the intervention, so everybody eventually gets it, but then you're trading off in terms of the statistical power and the amount of effort and resources and money you need to actually do the randomized control trial.

00:11:50 TH: Yeah. And I think the key word there is trade-off; there's a constant trade-off. I know in speech language pathology, we do several controlled studies that are in labs, for instance, and the participants are the unit of measurement, and we still need lots of participants, but if we're doing it in the lab, we maybe show an effect, but it's very controlled. Then the next step is to try it out in the clinic, but because of that data we already have, a lot of times it's left to the clinician to just figure out how to really do that intervention in the messiness of the real world. Whereas with the randomized control trials, like the ones we're doing together, we try to tackle them within the schools, but then you have the unit of measurement being the teacher at the school, which means you have to have a lot more... And you have a lot more messiness too, so it's tricky.

00:12:44 SP: Yeah, it definitely is tricky. That's part of what also makes it fun and a [0:12:47.4] ____ 'cause you think you can do it better and better, and I learn from each trial that I run. But it's definitely tricky. And one of the things I think about a lot is this idea that's been put forth of starting with something, like you said, trying out something new in a lab setting and having a true, what's called efficacy trial, where you have very, very tight control, so you know everything that happened, and there are no deviations from what the protocol is supposed to be for implementing that intervention, let's say.

00:13:31 SP: And then you have something called an effectiveness trial, which is the idea of taking an intervention that generally already has evidence of being efficacious and seeing how that works and what the effects are when it's done under more real-world conditions. And then at the kind of far end of this continuum, we have something that we would call scale-up, which is when you're pretty sure that this is an intervention that has an effect, it can work under real-world conditions, and now you're really trying to figure out, okay, what needs to happen in various contexts in order for this to be used on a regular basis and achieve its effects.

00:14:17 SP: And in the work that I do and work that I've collaborated on, I think that this is more of kind of a continuum than it is separate sets of studies, and sometimes it gets a little squishy. For example, you mentioned the LARC and the Let's Know work that we've done together, where we never... We did not do an in-lab study of that; that was not the point, the funding wasn't meant to have us do that, and it doesn't... From some people's perspectives, it doesn't matter if something achieves effects in a lab setting, because that doesn't help you know if it's going to work with real kids and real teachers.

00:15:08 SP: And so in that work, it wasn't really positioned as an effectiveness trial, but it wasn't a true efficacy study either, because we asked teachers to try doing this in their classrooms and the messiness creeps in. And so this is just something I've been thinking about, that there really aren't these hard boundaries among these types of studies, and maybe it's more of a continuum, where you have something that's maybe closer to an efficacy trial, but you're doing it with real teachers and real kids out in the real world, versus something more like an effectiveness trial, where you've given up a lot of control over it and are letting it happen under more authentic conditions, all the way up to scale-up.

00:15:52 TH: I think that makes a lot of sense, that it can seem like it's hard lines, but there's just this continuum, even if you think about the focus of sustainability, which I think is always on our mind, that if something does show to be efficacious, we want it to be something that would be sustainable or at least immediately applicable in the classroom. So even on our randomized control trial, we're always thinking about, even though we have interventionists that we're paying to have more control over the effectiveness that we might find, we're still in the schools, and I feel like we're making decisions almost with the hope... We hope that it will be effective or efficacious, so that we say, "Okay, this has an effect on children's language, and now that we know that because of the minor tweaks we've made in the protocols ahead of time, even with interventionists, maybe that will make it easier for teachers to... Or maybe the uptake, the scientific uptake would be easier if we make those decisions." But that's where that gray area comes in that you mentioned. I don't... I think you brought up a good point about the importance of RCTs and I definitely don't want to...

00:17:00 TH: It does seem like we're talking about a lot of the negatives, but I also think it's important to know that, because you hear people say "Well, why don't we just find something that is effective? Or why don't we just run a study?" I hear that a lot: just run a study on it. But our study, for example, let's talk about how long it took with LARC and how that forms a continuum with the work we're doing now.

00:17:24 SP: Okay. So let's think, we started... Did we receive that grant in 2010?

00:17:29 TH: Yes.

00:17:30 SP: So we probably started writing that in 2009.

00:17:34 TH: Yep, that's right.

00:17:36 SP: We got it in 2010, spent two to three years developing the curriculum and doing pilot testing along the way, and then moved that into a randomized controlled trial for the last two years of that funded grant. It was a five-year grant. So what are we up to now? We're up to... The randomized control trial ended in 2015 or 2016?

00:18:06 TH: Yep, 2015, and it was accelerated because it was a special initiative, so we kind of compressed, I'd say, at least 10 years of work into five, because we had the money.

00:18:16 SP: Only 10.

00:18:17 TH: And a lot of gray hair from that, but you know... Yeah, so then we're... 2015 when it ended.

00:18:24 SP: And then we were still... We collected data right up until the end of that funding, and so we were still processing the data and analyzing it. And I know that just this past year, one of our main impact papers from that team was finally accepted for publication, and that was only in the past six months.

00:18:49 TH: Absolutely, and that impact paper too... There were two primary papers, and the other one was 2019, and we used that to leverage this current grant, which started in 2021. So we're still in our first year of data collection, looking at modifying the curriculum for a smaller group; we took the data to say, "Well, let's modify it for children who have language difficulties." And then if you think about it, it started in 2009, and we're still working on it in 2022, and I imagine our grants, so we have what? Till 2026, and then we'll just be publishing on that for so much longer. So it just takes so much time and money to run these trials; you can see why we have to sometimes do the best with what we have and not use that data, but it's also frustrating when the data is out there to show something that is efficacious and yet isn't used.

00:19:52 SP: Okay. So this makes me think about a paper that Nell Duke published back in 2011, where she makes a distinction between things that are research-based versus things that are research-tested. So what we've been talking about is actually having interventions, curricula, practices that are research-tested: the whole specific intervention or curriculum or what have you has been subjected to a randomized controlled trial, or some other type of design that gives you really good causal evidence of what the effects of that intervention are. A lot of what we do, though, is really what's better referred to as research-based; we don't have these randomized control trials on every intervention, every practice. So instead, we're relying on the best evidence that we do have, and we use that to inform the development of different interventions and curricula. For example, phonemic awareness: we know phonemic awareness is critically important for kids to develop when they're learning to read. Most phonemic awareness curricula or interventions are not research-tested, but they're almost all research-based; they take what we know about effective phonemic awareness instruction, effective targets, effective instructional strategies, and pull that together into a curriculum.

00:21:26 SP: So that's research-based. And I really think about this within the kind of broader context of evidence-based practice, so we have to pull together the best evidence that we have about a practice or an intervention, but also with our professional knowledge and our professional judgment as it relates to knowledge of the learner, knowledge of the context and all of that to figure out what is the best strategy to use in that situation.

00:21:55 TH: That makes total sense, and I'm thinking, even back to the randomized control trial: can you talk about some of the analyses that might get us to think more about moderators, mediators, what conditions might work? And what does it mean to do what we've heard called intent to treat? Can you talk through some of those variables and how that informs not only the evidence, but what teachers might decide to do based on that evidence?

00:22:20 SP: Sure. So a randomized control trial is really aimed at one particular analysis and outcome, and it is what you referred to, this intent to treat analysis. So what that means is that even though we know this is messy work, even though we know some kids are gonna be absent, or, the past two years, some kids are gonna be quarantining, all kinds of things, or we know that sometimes a particular lesson doesn't go the way we anticipated it would go, or there's a fire drill, all these things happen that affect the extent to which kids experience the intervention or the curriculum or the practice.

00:23:09 SP: In a randomized control trial, the main analysis is just what condition they were originally assigned to. So if they were assigned to the intervention condition and they ended up quarantining and missing an entire unit or something like that, we still analyze the data comparing everybody who was originally assigned to that intervention condition, to everybody who was assigned to the control condition. The reason that we do that is because that's what was randomized, and we have to analyze in accordance with the original randomization to have the strongest causal inferences, the strongest claims about impacts or effects.
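As a concrete illustration of the intent-to-treat rule just described, here is a minimal sketch that groups children strictly by original assignment, even when the intervention was only partially received. The toy scores are fabricated for illustration, and a real analysis of nested data would use multilevel models rather than a simple mean difference:

```python
# Intent-to-treat (ITT): analyze by ORIGINAL assignment, regardless of
# what each child actually received. Data below are toy values.
import statistics

children = [
    # (assigned_condition, received_full_dose, outcome_score)
    ("intervention", True, 82), ("intervention", False, 74),  # quarantined, still counted
    ("intervention", True, 88), ("intervention", True, 79),
    ("control", False, 71), ("control", False, 76),
    ("control", False, 69), ("control", False, 73),
]

# Group strictly by original assignment; the dose flag is ignored.
itt_groups = {"intervention": [], "control": []}
for condition, _received, score in children:
    itt_groups[condition].append(score)

effect = statistics.mean(itt_groups["intervention"]) - statistics.mean(itt_groups["control"])
print(f"ITT mean difference: {effect:.1f}")  # 8.5 with these toy numbers
```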

00:23:57 SP: Now, what I think is also important is that we engage in additional analyses, called As Treated analyses, and other types of analyses, you mentioned moderator analyses, so that we can better understand those effects. Because like I said, you're gonna have all kinds of messy situations, and so you might want to think about, okay, what were the effects for kids who got at least 80% of the lessons, for example. In that case, you can still analyze the data that way and see what you find when you compare that group of kids to the control kids, but you can't make nearly as strong of a causal claim about it, so you have to be very forthcoming about the fact that, well, maybe the kids who got at least 80% of the lessons were somehow different from those who didn't get 80% of the lessons, and that might be biasing your results in some way, shape or form. And then you can do things like moderator analyses, thinking about for whom and under what conditions the intervention or the curriculum or practice might be effective. We're actually engaging in some of this work right now in my lab, where we're using this to look at...

00:25:19 SP: I'll take a simple example, boys versus girls; sometimes children of certain genders seem to benefit more or less from certain programs. But what's really interesting to me is to think about the profiles of kids who are experiencing intervention, 'cause you could come up with any number of characteristics to look at, and chances are some of those are tied together. So in the US, we know that race and ethnicity and socioeconomic status and things like this are tied together, and so we are doing some analyses where we're looking at whether effects are moderated by these profiles of kids, and looking at how that might mean certain profiles of kids, kids that share certain characteristics, might benefit more or less from the intervention than kids who have other characteristics. Again, you have to be careful, 'cause we're not randomly assigning kids to their characteristics, we're not randomly assigning them to be from certain socio-economic backgrounds, so you can't make as strong of causal claims, but this really can flesh out and help you better understand some of those intent to treat results and figure out where to go next.
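One common way to operationalize a moderator analysis like this is an interaction term in a regression, as in the following minimal sketch. The simulated data and the binary "profile" indicator are assumptions for illustration only, not the models from these studies:

```python
# Moderator analysis as an interaction term:
#   outcome = b0 + b1*treated + b2*profile + b3*(treated * profile)
# b3 captures whether the treatment effect differs across profiles.
import numpy as np

rng = np.random.default_rng(0)
n = 400
treated = rng.integers(0, 2, n)   # 0/1 assigned condition
profile = rng.integers(0, 2, n)   # 0/1 hypothetical child profile
# Simulate a base effect of 2.0, plus an extra 3.0 only when profile == 1.
outcome = (50 + 2.0 * treated + 1.0 * profile
           + 3.0 * treated * profile + rng.normal(0, 4, n))

X = np.column_stack([np.ones(n), treated, profile, treated * profile])
betas, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"main effect ~ {betas[1]:.2f}, interaction (moderation) ~ {betas[3]:.2f}")
```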

00:26:39 TH: Can you give a really concrete example?

00:26:42 SP: So one of the things that I see... One of the analyses I often see done is looking at the correlation between how many lessons are completed, or fidelity to a certain lesson protocol, and kids' learning, and that can give you really helpful information, because hopefully what you'd see is that kids who get more lessons have better outcomes. The caveat there is that kids could be not getting as many lessons for a lot of different reasons, right? So we have kids who are chronically absent, especially in the work I do, which is oftentimes in the pre-school setting, which is not compulsory. And so kids who might be absent might have other things going on in their lives that are leading them to both not be present for the lessons and also potentially not have as high of outcomes.

00:27:50 SP: Conversely, kids who have really great attendance, there might be other factors there that are leading to that, that also lead to having better learning. So for example, maybe they're coming from homes that put an extra emphasis on early education, so those homes, they're both more likely to make sure that their child is at school every day, but also they're doing other things like going to the library or reading at home, and these other things that could also lead to more positive literacy outcomes. And so just by looking at that correlation, which is kind of a rudimentary As Treated analysis, it gives us some information, but it doesn't let us make as strong of a claim to know that it is the intervention itself and how much of that intervention itself a kid is experiencing that leads to positive outcomes.
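The rudimentary as-treated check described here, correlating dosage with outcomes, might look something like this sketch. The numbers are toy values, and `statistics.correlation` requires Python 3.10 or later:

```python
# Correlating lessons received with outcomes: informative, but confounded,
# exactly as described above. Toy numbers only.
import statistics

lessons = [30, 25, 28, 12, 8, 20, 27, 10]   # lessons each child received
scores  = [85, 80, 84, 70, 66, 76, 82, 68]  # end-of-year outcomes

r = statistics.correlation(lessons, scores)  # Pearson r (Python 3.10+)
print(f"dosage-outcome correlation: r = {r:.2f}")
# A high r is consistent with the intervention working, but also with
# families who ensure attendance doing other literacy-supportive things.
```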

00:28:48 TH: I think that example is very helpful, because it can seem very straightforward, but when you dig just a little bit deeper, you see the complexities. I could even imagine a case, this would maybe be less likely, where attendance could actually look like it's negatively correlated with outcomes: if you have a situation in which the children who are attending most are the ones with more severe deficits, so they attend more, or there's a more compulsory nature to their attendance, versus ones that might just be like, "I'm dropping in, I'm dropping out." So it does seem like you need to dig much deeper in terms of the complexities of what might be driving these effects; it's just not as straightforward as it seems.

00:29:34 SP: Right, exactly. And so again, it's not that these types of additional analyses aren't helpful; they can point you in directions that you then wanna further investigate. And another way of trying to get to that strong causal claim is to rule out alternative explanations. And so, sometimes doing this work can lead you down a path where you think, "Okay, I can't get the causal evidence... I can't do the randomized control trial initially, but I can try to use some of these additional analyses to rule out alternative reasons for positive effects, or to generate ideas of, okay, this is the next study that I need to do because... "

00:30:19 SP: For example, in a recent randomized control trial, I just finished up, we didn't find the effects that we were expecting to see, and so we wanted to try to understand why that might have been. And so we did look at things like dosage and found that it was much, much lower than it had been in previous more efficacy-like trials of this intervention, and then we wanted to see, well, can we predict the children that were experiencing more dosage because those might be points, leverage points that we can better support, for example, during professional development or other things that schools that might be adopting this intervention would want to think about in terms of whether this is a context that facilitates high dosage or at least the dosage that might be needed in order to see the intended impacts.

00:31:19 TH: That's a good point too, in terms of the complexity, 'cause it's not like you do this randomized control trial, you don't find an effect, and you just throw it all out. You say, what really is driving some of the effects we might actually be seeing in these sub-types of analysis, or looking at profiles and those complexities? I think that makes a lot of sense to me, that we wouldn't wanna throw it out. Even just thinking about the timeline of randomized control trials: in our first year, we have a whole other year that we'll need to collect data to get enough schools, enough classrooms, then to analyze the data. Plus we have a blinding procedure, which is important for reducing bias as well, so then the data is going to our third site, and that's where it's getting analyzed, and that takes a lot of time as well. So we might not know those results until we are at the last year or so. Plus we wanted to look at long-term outcomes, which is something we haven't discussed so far but is so important. Can you tell me a bit about long-term outcomes when it comes to randomized control trials and how that might play into findings?

00:32:20 SP: So I'm a big believer that we want to see long-term impacts of any intervention or practice that we're recommending, but this has been a really contentious area in the field lately, not just for older kids, but particularly in my area of early childhood. So there have been some recent studies of early childhood programming showing that although there might have been advantages for children's learning and development early on, during the pre-school year, those... What they say is, they fade out over time. And so, what does that mean? One way of looking at that is, well, if we're investing in these interventions and curricular programming, and they're not showing long-term effects, is it worth investing in them? But really we need more research to kind of disentangle that, because there could be a number of different alternative explanations as to what happens subsequent to the intervention being offered, and if those are differential... Remember again, the whole idea behind the randomized control trial is, we're trying to equate our treatment and control groups on as many measured and un-measured factors as we possibly can, and we do that through randomizing.

00:33:52 SP: But if anything happens differentially between the groups as they progress over time in schooling, then we don't have the same benefit we did of the randomization initially, and so we have to start ruling out explanations. So, were extra services offered to kids who were in the control group because they did not have the chance to get certain opportunities that were offered to the treatment group? Or, teaching and learning is a responsive interaction, right? So were there perhaps interactions and content that were different between kids who came in already with a head start versus kids who maybe didn't, and because that was offered differentially, does that affect the long-term follow-up? So this is a really tricky issue for me, because on one hand, I strongly believe we want to see these long-term impacts, and on the other hand, I'm constantly trying to think through the rigor of the design and the potential alternative explanations, so that if we say this doesn't have long-term benefits, I wanna be really confident in my understanding of it.

00:35:18 TH: And understanding why. If it doesn't have that effect, what was the driver in changing or reducing the impact over time?

00:35:30 SP: Which can lead us in ways of doing things better too, right? So I feel like we often talk about these randomized control trials and other aspects of research as kind of like, you do the study and then you have the findings, but it's so much more of a cyclical and iterative process as we work through it, because we're always asking why.

00:35:52 TH: Absolutely, it is. It's long term, it's [0:40:39.3] ____, it's iterative, as you mentioned. And it costs a lot of money as well. Time and money to do this work. So it's something that has to be done, but it's definitely easier said than done when it comes to this work. So, we've talked a lot about the process; what are some of the highlights of these studies that you've been working on so far in different areas? I know you work in early childhood, as you mentioned, you've worked in alphabet knowledge, you've worked in teacher language and professional development. What are some of the highlights of your findings, content-wise?

00:36:30 SP: So as I've shared, I think, as we've been talking, quite a few of my recent studies have turned out to not have the impacts that we were anticipating them having, and so we're really thinking deeply about that and trying to unpack those results. So for instance, in one randomized control trial that we wrapped up over the past few years, it was looking at a shared book reading intervention that previously had shown effects, and had shown pretty good effects, in a pre-school setting, and we did a conceptual replication of that, meaning that we replicated but also kind of changed a few parameters. And one of the parameters we changed was we used a more rigorous counterfactual, so instead of just a business as usual control condition, we actually had teachers in the control condition read the same books, but just not do the intervention that was associated with reading the books, and we no longer found effects.

00:37:48 SP: And so it's interesting to think about the difference in the counterfactual that we used, and also thinking about what this might mean in terms of where we are in education and as a society. There's been some work that's been put out by others, so Chris Lemons comes to mind, talking about how that business as usual condition that we often use as a control is actually getting better and better as time goes on. We learn more and more about how to support kids' early literacy, and so whereas this particular intervention might have been effective relative to that business as usual control condition in the past, now that we are, across the board, perhaps using more evidence-based practices and have a better idea of how to facilitate kids' early learning, maybe it doesn't show those same effects anymore, because teachers are doing some of these practices as part of their normal shared book reading.

00:38:55 TH: Oh, that's good. That's so... That's really intriguing. I think about that even just when we have to give informed consent and we're telling the teacher why the study is important, and then they're randomly selected into it or not; we've still given them information that's perhaps validated some of the things they might think to do. And so that's really fascinating. What else has been on your mind with teacher knowledge and early childhood lately?

00:39:20 SP: Oh, so many things. I love the teacher knowledge work; I think that that's kind of a different facet of some of the work that I do, and there... What I'm really interested in is understanding the mechanism by which teacher knowledge might exert its influence. So there's been a growing literature on the foundational knowledge that teachers of reading need to have in order to effectively use evidence-based practices. So a good example is, again, going back to phonemic awareness. So the idea that a teacher themselves has to understand what a phoneme is, and has to have their own phonemic awareness, in order to be able to teach this well. And as adults, we often confuse ourselves: we're so knowledgeable about how words are spelled, sometimes that makes it really hard for us to pay attention to the sound structure.

00:40:22 SP: And so there's... Like I said, there's a growing evidence base showing that teachers' knowledge is important and associated with kids' learning and outcomes, but we don't necessarily know exactly how that's working. So there is mixed evidence when we try to manipulate that and do something like a randomized control trial where we use professional development or coursework in order to increase knowledge, and whether or not that does translate to effects for kids. And then in some work that I've been doing with colleagues, we've been looking at this in more of a mediation type of framework, and what we find is that the measures we have of classroom practice only partially mediate the effect of teacher knowledge on kids' outcomes. And so it's like, okay, well, what are the factors that we're not measuring? What are we missing? What are the other pathways that highly knowledgeable teachers are using in order to have that knowledge inform their practice and result in better outcomes for kids? So.
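For readers curious about the mediation logic being described, here is a minimal product-of-coefficients sketch on simulated data. The coefficients are invented for illustration, and this is not the actual model from the studies mentioned:

```python
# Mediation via product of coefficients:
#   knowledge -> practice (path a)
#   practice -> outcome, controlling for knowledge (path b)
#   a*b is the indirect effect; c' is the direct (unmediated) remainder.
import numpy as np

rng = np.random.default_rng(1)
n = 500
knowledge = rng.normal(0, 1, n)
practice = 0.5 * knowledge + rng.normal(0, 1, n)                  # a ~ 0.5
outcome = 0.4 * practice + 0.3 * knowledge + rng.normal(0, 1, n)  # b ~ 0.4, c' ~ 0.3

def ols(predictors, y):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([knowledge], practice)[1]
b, c_prime = ols([practice, knowledge], outcome)[1:3]
print(f"indirect (a*b) = {a * b:.2f}, direct (c') = {c_prime:.2f}")
# When c' stays meaningfully above zero, practice only PARTIALLY mediates
# knowledge's effect -- the pattern described in the episode.
```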

00:41:32 TH: That's really interesting, that's a new take on... Well, it goes to the theory of change that we often have when we're thinking through these, so you're saying the theory of change would be that if a teacher learns more, it should translate to better classroom practice, but you're not finding that necessarily.

00:41:51 SP: Not necessarily, or at least not fully, we find some evidence of that, but I think we are having to think about alternative models and that it's not this straight shot of more knowledgeable teachers are going to exhibit that in the measures of quantity or quality of practice that we currently have, which will lead to better student outcomes, but that there might be other pathways we need to be considering there, or perhaps more nuanced ways of measuring what's happening in the classroom is the other way of thinking about that.

00:42:28 TH: Yeah. There's always the measurement side too, right? It's like it didn't show an effect, but we didn't measure it correctly. That always could be the case.

00:42:35 SP: Well, that's one of the issues we went up against in [0:47:22.6] ____ with reading comprehension; this is a continuing issue in the field.

00:42:45 TH: It is... Yeah, it's both sides: it's not only the intervention, but how are we measuring it to determine whether it's effective or not. And also, even going back to what you mentioned about knowing what to measure now to even look at later, you have to almost have a bit of a crystal ball to think through what could be the factors that would influence this intervention outcome later that we need to examine and measure now.

00:43:12 SP: Yeah, and I always think too, the... And we've talked about this a lot today, but the amount of energy and time and resources that goes into running these studies, not just from the side of the researcher, but also from the side of the schools, the kids, the teachers... And so being able to be really thoughtful about some of those other measures or factors upfront, you don't wanna have just one finding or one paper come out of a study that you put five years into and these teachers were on board with you and willing to do this work with you. So trying to honor all of that time, energy and effort through being able to further explore other aspects of the intervention or sometimes you can learn things about kids development because you're measuring them over time, especially when you have these longitudinal outcomes and things like that.

00:44:10 TH: And that just goes to the importance of our partners, and I love how you said it, just honoring the time they've put into it too. And one way we honor them is to make sure that we're listening to what's happening there and getting some input, and also trying to capture everything that we can in the moment. Well, this has been such a helpful discussion. I'm gonna be mindful of the time, Shayne, and get to my final two questions, but I just really appreciate delving into these complex issues, and I also very much appreciate that these are complexities that you're discussing in a way that's really accessible to our listeners, who are often in those settings trying to determine what evidence to use for a very specific child or group of children or a classroom that's sitting right in front of them. It is so, so tricky, and you've given them more information about that. So, the final two questions I ask, and maybe you've answered this one, I'm not sure, but what are you working on now that you're most excited about?

00:45:11 SP: You're gonna make me pick one huh?

00:45:12 TH: I know it's so hard. I know.

00:45:14 SP: Well, I've already gotten to talk about some of the other things I find exciting, so one of the things... I'm at a little bit of a pause in this work, but one of my areas is how to support children's alphabet knowledge development. And I find this to be a really fun area and something that teachers are really curious about too. And so I get lots of questions about how to teach letter names and letter sounds and letter writing to kids. And this is one of those areas where we don't have as much research, and certainly not as many randomized controlled trials, as you would think. And so my team has worked really hard and we've developed a set of alphabet lessons that are almost modular in nature, and so we can manipulate those lessons pretty easily to test certain aspects of instruction. So let me give you just a quick example. One of the questions I get from teachers a lot is: should I be teaching upper case, should I be teaching lower case, should I be teaching them both at the same time? We don't have a great answer to that.

00:46:29 SP: We have some research that suggests kids are more likely to learn lower case letters when they have already learned the upper case, which suggests that either teaching them simultaneously, or sequentially, upper case then lower case, makes sense. You could also make the argument that kids mostly see lower case letters, so it's important to have them learning those lower case forms fairly early. So this seems like such a minor thing, but I get this question a lot, and so we've designed these lessons, we've now piloted these lessons and shown that in their basic form they are effective, and now we can do things like manipulate teaching upper case first, versus teaching upper case and lower case together, versus starting with lower case, and we can see whether or not there is a reason for picking one type of instructional strategy over another. And what I'm really excited about is that this is an example of an instructional practice as opposed to a particular intervention or curriculum. I love doing that work, but I feel like being able to test something like this, a practice that could be applicable to multiple interventions or multiple curricula, could be really impactful.

00:47:44 TH: That's very cool. That is definitely a question we get asked all the time. So I'll be hanging on every word of that work and trying to think more about it.

00:47:53 SP: Next step is to find funding for it, so once I have that in place then we'll be moving forward.

00:47:58 TH: Yes, always, always looking for the money, takes some money to do the work. That's very cool, very cool. Well, speaking about the alphabet, what is your favorite book from childhood or now?

00:48:10 SP: So, my absolute favorite book is called What's Next Baby Bear, and it's... I don't know if anyone else knows this book, but it's a story about baby bear who goes and has a picnic on the moon, and he goes to his picnic on the moon in a cardboard box with a colander on his head as his helmet. And when I was little, my mom and one of my aunts, we would read the book and they'd make us a picnic lunch, and me and my cousins, or me and my brother, we would get into our box and put the colander on our head, and we would take our own trip to the moon for our picnic, and so that's the story. I still have my dog-eared copy of that book, and I recently was able to send a copy to my niece along with her very fashionable red colander so that she can now start going on picnics to the moon.

00:49:09 TH: Oh Shayne, that is so cool. That's amazing. And you've passed it right down, that's really neat.

00:49:16 SP: Yeah, they changed the title of the book though now, and I can't remember what it was, so if anyone's looking for it, they changed the title somewhat, but the story is pretty much the same, and you can tell by the picture on the cover page, if it's a baby bear with a colander on its head, you know, you've got the right one.

00:49:33 TH: Oh, that's great, I'll link it in the resources, 'cause I like to have those in there for our listeners to grab 'cause... And I get great ideas, I'm gonna go out and get that book too and have some fun with my boys. That's very, very cool. Well Shayne. Thank you so much for your time. I really appreciate learning more about this with you.

00:49:51 SP: I love the opportunity to have this conversation. So thanks for inviting me.

00:49:59 TH: Check out www.seehearspeakpodcast.com for helpful resources associated with this podcast including, for example, the podcast transcript, research articles, and speaker bios. You can also sign up for email alerts on the website or subscribe to the podcast on Apple Podcasts or any other listening platform, so you will be the first to hear about new episodes. Thank you for listening and good luck to you, making the world a better place by helping one child at a time.
