I've described PM101 in an earlier post. In brief, we use a learning-by-doing approach to teach students who've never worked as product managers the basics of that role (course description here; syllabus here). Students specify functionality for a software application, then oversee its development and launch. This year, two-thirds of PM101 students are working on their own startup ideas; the balance are building apps that will be used only by the HBS community.
Product professionals and entrepreneurs who embrace agile development methods might dismiss the notion of writing an MRD — even a lightweight version like the one we assign — as "old school." We do bring seasoned PMs to class to explain agile's merits, but we also believe that students' understanding of agile is enhanced by experiencing waterfall techniques first-hand. Also, our students spend only one day per week on their project, which makes it impractical for them to serve as product owner on an agile team. Finally, requiring written MRDs and PRDs makes it easier for instructors and over fifty volunteer mentors from the Boston Product Management Association to provide feedback.
So, what patterns emerge from the MRDs?
- Problem vs. Solution Focus. Students were asked to prove the existence of a critical mass of potential customers with strong unmet needs that might be satisfied through a new application or online service. In their MRDs, most students avoided premature lock-in on a single, specific solution. They followed the design discipline of first exploring customer needs in depth, thus improving the odds of generating differentiated solutions.
- Research Methods. Only a modest fraction of the MRDs provided detailed descriptions of research methods employed. I accept blame for this, since I didn't include a research methods section in the MRD outline. But MRD authors should be aware that it is difficult for readers to assess the validity of claims about customer needs without understanding the quality of data behind those claims.
- Customer Interviews. Most students did a good job of summarizing what they learned from one-on-one interviews with prospective users. It's more difficult to discern whether the students followed interviewing best practices described in assigned readings, since only a handful submitted interview guides/protocols along with their MRDs. Next year, we'll insist that such guides be included in a research methods appendix to the MRD.
- Focus Groups. Only one student conducted a focus group. This research technique has well-known limitations, but focus groups can generate powerful insights with products that have strong emotional, status, or lifestyle associations. With such products, a comment from one focus group participant can trigger responses from others that might not be forthcoming in a one-on-one interview.
- Surveys. In class, we emphasize that: 1) surveys should be used to explore associations between prospective customers' beliefs and behaviors, and to document the prevalence of unmet needs in a population; and 2) these objectives can only be met through surveys after a researcher has gained a deep understanding of customer needs through interviews, ethnography, and other qualitative methods. This message — along with a strong admonition to avoid biased (e.g., "leading the witness") survey questions — evidently hit the mark, because most of the students' surveys were reasonably well designed. However, not many surveys were systematic about including questions that measured both the strength of respondents' needs (i.e., how important is attribute X to you?) and perceptions of the degree to which existing solutions satisfy those needs. This combination of questions can be very helpful in identifying customer segments with distinct needs, and in estimating the size of those segments.
- Mockups/Prototypes. Only a few students showed wireframes or other low-fidelity prototypes to interviewees. Many students thus forfeited opportunities to get prospective users' reactions to mockups of potential solutions and thereby gain a richer understanding of their needs. Since students had only four weeks to complete their MRDs, and since we haven't yet covered UX design basics in class, this omission is understandable — and one that we'll address next year.
- Concierge MVPs. Again, time constraints probably made it difficult for students to conduct "concierge MVP" tests, through which small numbers of customers are served via manual, makeshift methods. Only one team conducted such a test, evaluating their idea for soliciting feedback on job candidates' practice interviews. Many other PM101 projects lend themselves to concierge MVP tests, and we've encouraged students to attempt them in coming weeks.
- Personas and Use Cases. Students were exposed to personas in class; they made good use of them in their MRDs, especially in identifying use cases for their application.
- Competitor Research. Not surprisingly, given MBAs' penchant for analysis, most MRDs included strong sections describing the strengths and limitations of existing solutions. With rare exceptions, however, students did not conduct usability research with rival products, missing a learning opportunity.
- Market Size Estimation. Likewise, most students did a good job gauging the size of the potential market for their application. There was some confusion about whether this exercise should end with an estimate of the Total Addressable Market (TAM) or with a rough sales forecast reflecting projected share of the TAM. Obviously, projecting market share so early in the product development process entails lots of guesswork. But even an informed guess can provide helpful guidance on whether the envisioned product can plausibly generate enough sales to warrant further development. Only one student analyzed the magnitude of switching costs that would be incurred by customers adopting a new solution. Such analysis can significantly improve the quality of market share projections.
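The importance-versus-satisfaction pairing described under Surveys above can be sketched concretely. This is a minimal illustration, not a prescribed analysis; the attribute names and scores below are invented, and the 1.0 gap threshold is an arbitrary cutoff for the sketch.

```python
# Hypothetical illustration: pairing "how important is attribute X?" with
# "how well do existing solutions satisfy X?" to flag unmet needs.
# All attributes and scores are invented for this sketch (1-5 scales).
responses = [
    {"attribute": "scheduling speed",   "importance": 4.6, "satisfaction": 2.1},
    {"attribute": "price transparency", "importance": 4.2, "satisfaction": 3.8},
    {"attribute": "mobile access",      "importance": 3.1, "satisfaction": 3.0},
]

# A large gap means the need is important but poorly served today --
# a candidate unmet need worth probing further in interviews.
for r in responses:
    r["gap"] = r["importance"] - r["satisfaction"]

underserved = [
    r["attribute"]
    for r in sorted(responses, key=lambda r: -r["gap"])
    if r["gap"] > 1.0  # arbitrary threshold for this illustration
]
print(underserved)
```

Grouping respondents by where their largest gaps fall is one simple way to surface segments with distinct needs, as the Surveys bullet suggests.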
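The market-sizing arithmetic in the last bullet — TAM first, then a rough sales forecast as projected share of TAM — can be made concrete with a back-of-envelope sketch. Every figure below is invented for illustration, not drawn from any student MRD.

```python
# Hypothetical back-of-envelope market sizing, in the spirit of the MRD
# exercise. All numbers are invented for illustration.
target_users = 500_000          # prospective users with the unmet need
annual_revenue_per_user = 60    # e.g., a $5/month subscription

# Total Addressable Market: everyone with the need, at full price
tam = target_users * annual_revenue_per_user

# An informed guess at attainable share turns TAM into a rough forecast;
# high switching costs would argue for shaving this number down.
projected_share = 0.02
sales_forecast = tam * projected_share

print(f"TAM: ${tam:,}; rough sales forecast: ${sales_forecast:,.0f}")
```

Even with this much guesswork, the forecast answers the go/no-go question the bullet raises: whether plausible sales justify further development.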
Students will submit their Product Requirements Documents (PRDs) in early December. I'll report back then on lessons from that exercise and also on what we've learned about the design and delivery of a learning-by-doing course like PM101.