In Chapter 20 of 21 of his 2012 Capture Your Flag interview, design educator Jon Kolko answers "What Roles Have Evaluation and Testing Played in Building Your Design Graduate Program?" Kolko details how testing, evaluation, assessment, and feedback are honing the Austin Center for Design program, describing the iterative and collaborative process taking place in Austin as the school matures and improves how it operates and educates.
Jon Kolko is the founder and director of the Austin Center for Design. He has authored multiple books on design, including "Wicked Problems: Problems Worth Solving." Previously he held senior roles at venture accelerator Thinktiv and at frog design, and was a professor of Interactive and Industrial Design at the Savannah College of Art and Design (SCAD). Kolko earned his Master of Human-Computer Interaction (MHCI) and BFA in Design from Carnegie Mellon University.
Erik Michielsen: What roles have evaluation and testing played in building your design graduate program?
Jon Kolko: So we've treated the building of Austin Center for Design as an iterative design exercise, and part of that process is testing it with real people. And so, we treated our first cohort as co-designers, and to their faces I called them co-founders, and I think that the majority of them would agree that they're co-founders in the venture. The venture is a non-profit. They don't literally own equity in it and neither do I, but they own decision-making power. And it wasn't all democratic, but there were certainly a lot of things that we changed as a result of explicit feedback, implicit feedback, observation, and assessment.
And so I think testing—well, testing means different things in different contexts, but I think it always means trying something, learning from it, and then iterating on it. And in this case, we tested the pedagogy: how we were actually going about teaching and learning. We tested the entrepreneurial idea, the notion that when you leave the program, you've started a company. We tested some professors who had never taught before. We tested some course content that had never been used before. And like anything else with testing, we failed a bunch of times, and that's the point. So, arguably it's better this year, and arguably it will be better next year.
What's really nice about being a new school is that if you're not dealing with bureaucratic organizations like accrediting bodies, you can change on a dime. That changes when you're dealing with those organizational bodies, and probably in my future I will deal with those organizational bodies, because there's a huge benefit to them. But at least for the time being, it means that I can hone this program—content notwithstanding, because the content is always going to change—but I can hone the structure of the program until I feel like there's evidence for it being really, really good. As always with evaluation, you sort of take it with a grain of salt. And so, there are things that I just have pushed back on as changes that were suggested, and there are things that I completely didn't think of, where students were like, "Hey! We should be doing it like this. Why aren't we doing it like this?" So now we're doing it like that.
There is something really, really nice about building a program together with the people who are benefiting from it. I wasn't expecting that at all when I started it. I guess I do think of it as my thing, but I've never felt overly protective of it from outside feedback—and yet I was not ready for how much benefit I got from that outside feedback, I think is what I'm trying to say.