People have been building complex software for over sixty years, but until recently, only a handful of researchers had studied how it was actually done. Many people had opinions—often very strong ones—but most of these were based on personal anecdotes, or on the kind of “it’s obvious” reasoning that led Aristotle to conclude that heavy objects fall faster than light ones.
To make matters worse, many of the studies that were done were crippled by artificiality, small sample sizes, or a lack of generality. As a result, while software engineering billed itself as a “hard” science, rigor was much less common than in “soft” disciplines like marketing, which has gone from the gut instincts of Mad Men to being a quantitative, analytic discipline.
Over the last fifteen years, though, there has been a sea change. Instead of just inventing new tools or processes, describing their application to toy problems in academic journals, and then wondering why practitioners ignored them, a growing number of software development researchers have been looking to real life for both questions and answers. In doing so, some are increasing the sophistication of their quantitative research toolkit, putting the power of statistics and data mining to good use as they plow through massive amounts of electronic records. Others have used rigorous qualitative techniques from anthropology and business studies to deal with complexities that t-tests and data mining algorithms cannot handle.
Sadly, most people in industry still don’t know what researchers have found out, or even what kinds of questions they could answer. One reason is their belief that software engineering research is so divorced from real-world problems that it has nothing of value to offer them. That impression is reinforced by how irrelevant most popular software engineering textbooks seem to the undergraduates who are forced to wade through them, and by how little software most software engineering professors have ever built.
Another reason is many programmers’ disdain for qualitative research methods, which are often dismissed out of hand (and out of ignorance) as “soft”. A third reason is ignorance—often willful—among practitioners themselves. People will cling to creationism, refuse to accept the reality of anthropogenic climate change, or insist that vaccines cause autism; it is therefore no surprise that many programmers continue to act as if a couple of pints and a quotation from some self-appointed guru constitute “proof” that one programming language is better than another.
The aim of this blog is to be a bridge between theory and practice. Each week, we will highlight some of the most useful results from studies past and present. We hope that this will encourage researchers and practitioners to talk about what we know, what we think we know that ain’t actually so, why we believe some things but not others, and what questions should be tackled next.