This question appears often in research papers in psychology and philosophy. They tend to explore the dissonance created in interpersonal relationships when one party ‘digs in’ because of their belief in being “right”. Or the research might philosophize about what it means to be “right” in the sense of correct, accurate, or merely self-assured. I want to take a few minutes to set up a different paradigm and confine it to the ideal of empowering decision makers by handing them tools for analysis.
For over a decade, whether focusing strictly on Business Intelligence (BI) practices or more recently Rapid Application Delivery (RAD), the end goal has been to minimize the barriers between decision makers and the data. The thinking is that if business users could interact with the data directly, without waiting on developers and analysts, those decision makers could transform that data into actionable information with greater velocity and agility.
This is all true. That is why there are suites of tools dedicated to this ideal. But something has gotten lost along the way that I feel is important to consider and explore:
How ‘right’ is that interpretation of data into information really, and how important is it to be ‘right’?
I’m not talking here about well-modeled data warehouses or metadata layers. I’m focusing on the results produced by users of self-service tools that allow the business user to define their own data linkages.
If we simply eliminate those layers of analysts and developers once a tool has been enabled to let decision makers merge disparate data, we are giving up a critical piece of the puzzle: Data Integrity. This is my primary concern. What if the rules they define in their self-created models or reports accidentally apply an inner join where an outer join was needed, returning fewer records than the ‘right’ result, to take just one example? For a low-value decision, that’s not very impactful, but what about mid- and high-value decisions? Without properly assessing the integrity of the results, ‘wrong’ decisions can and will be made. The value of being right, therefore, mirrors the cost of being wrong.
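To make the join pitfall concrete, here is a minimal sketch using Python’s built-in `sqlite3` module. The table names and data are invented for illustration: an order whose region code has no match in the lookup table silently disappears under an inner join, while an outer join keeps it.

```python
import sqlite3

# Hypothetical tables for illustration only: orders and a region lookup.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE orders (id INTEGER, region_id INTEGER, amount REAL);
    CREATE TABLE regions (id INTEGER, name TEXT);
    INSERT INTO orders VALUES (1, 10, 100.0), (2, 20, 250.0), (3, 99, 75.0);
    INSERT INTO regions VALUES (10, 'East'), (20, 'West');
""")

# INNER JOIN silently drops the order whose region_id (99) has no match.
inner = cur.execute(
    "SELECT COUNT(*) FROM orders o JOIN regions r ON o.region_id = r.id"
).fetchone()[0]

# LEFT (outer) JOIN keeps every order, matched or not.
left = cur.execute(
    "SELECT COUNT(*) FROM orders o LEFT JOIN regions r ON o.region_id = r.id"
).fetchone()[0]

print(inner, left)  # 2 3 -- the inner join quietly lost a record
```

Nothing errors out and no warning is raised; a report built on the inner join simply totals one fewer order, which is exactly why validation processes matter.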
Without good process controls, training, and oversight, transforming data into ‘right’ actionable information is in jeopardy. Build processes to validate results for any decision that might be considered mid-level value or higher. Train decision makers not only in the use of the tools but in analysis techniques. I’m not suggesting every decision maker should be a Six Sigma Black Belt (though wouldn’t that be nice sometimes!), but most are not equipped with an understanding of how to translate data into information (and finally knowledge, but that is a topic for another day). You must invest in honing the analysis skills of all decision makers. You can’t just assume that everyone understands how to separate the signal from the noise.
Lastly, I challenge the idea that you have to be ‘right’ 100% of the time. Right is relative to begin with, or psychologists and philosophers wouldn’t debate the connotation and etymology of the word. Pragmatically, we need to teach our decision makers to recognize the anomaly of a single data point, to judge when a trend is really a trend versus out-of-control variation, and to weigh qualitative factors alongside the quantitative before decisions are reached and actions taken.
If I had a concrete answer to the question of valuing being ‘right’, I would have books published by now rather than blog posts. But I hope this gets you thinking about the investments made in shaping the culture of an organization in a way that enhances the analytic skills of decision makers. The importance of this as part of the maturation of an information organization should not be forgotten when handing those enabling tools over to them.