The Missing Link: From Pilot to Purchase

This past week, Digital Promise published a report on the state of procurement in K-12 (http://www.digitalpromise.org/Improving_Ed-Tech_Purchasing). It provides some great insights into how to improve and streamline the procurement process. One of the recommendations hit home for us, as it speaks directly to some of our previous blogs: “Create guidelines for districts to help evaluate evidence about products, leverage peer recommendations, and conduct well-structured pilots that increase rigor and inform purchase decisions without reducing instructional time.”

The report goes on to say:

“In the absence of trusted evidence of product success, it appears districts rely heavily on peer recommendations and ‘pilots’ within the district. And, based on interviews, those pilots are often informal, essentially ‘tryouts.’ Districts do not report using structured, data-driven approaches with clear and inclusive decision-making processes within pilots.”

How is it that data from the front line of teachers and students does not make its way to the decision makers? Why are pilots not producing evidence on which purchasing decisions can be based?

No buy-in at the teacher level: District administrators are presented with a new product or service offering, often for free. It appears to meet their district’s needs. The provider suggests a pilot to try it out and see. Administrators reach out and tap certain schools or principals, who push the ball down the hill. Like a game of telephone, the message gets more and more diluted by the time it reaches the actual tester. Participation is lukewarm or drops off before the pilot is completed.

How to avoid this: As a provider, make sure you have a list of your testers well before your pilot starts. A couple of weeks beforehand, communicate the goals, explain why the testers were selected, what they can expect with regard to providing feedback, what feedback you will be looking for, and how much time you expect them to spend using the product and providing feedback. We also find that a “getting to know you” survey, given before the pilot starts, can provide some insight into who is testing the product while also serving as a test of your feedback mechanisms. It also gives your testers a tangible example of what to expect.

No buy-in at the administrative level: This is the same process in reverse. A teacher finds a product he likes. He goes to his principal, who is happy for him to test out a new product. He tests the product and likes it, but “the district doesn’t have money in the budget this year, so thanks—and make sure to remind us of this product come budget time.”

How to avoid this: Talk with your tester about what success would look like to him. Then ask him how his principal would define success. You will want to make sure that your product’s value translates to the district’s goals. Then set up a communication plan in which the teacher and provider share ongoing information and data during the pilot period. At the end of the pilot, assist your piloting teacher in presenting his findings to his school. The provider and the teacher should work together on that presentation to ensure that the data supports the stated goal of the pilot.

Lack of agreed-upon, stated goals: A teacher looks at data from the pilot product showing that students are working on the chapter-ending activity for an average of 56 minutes and thinks, “Wow. That is way too much time for students to be working on this kind of activity.” Meanwhile, the provider views the same data and thinks, “Wow. Look at how engaged students are in these activities.” Both sides will be very disappointed at the end of the pilot.

How to avoid this: It is crucial that the provider, the pilot testers, and the school/district have articulated and agreed upon what success will look like. Providers must be honest about what their product can and can’t do and design pilot activities that can be measured against the stated goal. Districts must be honest about what measures would lead to a purchase. When goals are stated and measurable, the pilot has a defined path, testers know their role, and the provider can measure and report effectiveness to this district and others.

Lack of communication: This is a true story. A principal says, “Sure, we’ll give it a go. We are going to need 800 books and 50 teacher guides.” The pallets show up. No one is expecting them. Books are scattered about classrooms. The principal says to teachers, “They are free. Try them out. Let me know what you think.” Two months later, the provider shows up to pick up the extra books that are getting in the way. As he checks in with teacher after teacher, he is greeted with, “Yeah. They look fine. I used a couple for some extra lessons. I don’t know that we will be ordering books this year.”

How to avoid this: See all of the above.

We are always happy to share what we know. If you are interested in having EyeLevel help your team design and execute measurable and meaningful pilots, we would be delighted to hear from you.