(Million-dollar academic jargon right there, isn’t it?)
Much of the labor done in today’s digital economy is intellectual. Economists point to intellectual capital, psychologists promote emotional intelligence, and management gurus flaunt terms like knowledge management and organizational learning (though apparently not as much as they used to). Certainly, work is still done and “stuff” is still produced, but technology, networked thinking, and machine learning are perpetually encroaching on the realm of work and labor. This shift toward acknowledging “intellect as the key productive [economic] force” (Brennan, 2009) brings with it myriad questions about gaining knowledge, making sense of information, and acquiring expert or referent power (Johnson, 2005) among workgroups and social networks.
Weinberger (2011) – in a nod to Marshall McLuhan via his profile of Jay Rosen’s long-form/web-form blog – proposed that the network itself is responsible for the emergence of new knowledge and new ways of thinking. Just as literacy re-oriented humanity’s working memory and cognitive capacity, so too has the proliferation of the “ecology of temptation” (p. 117). The net is limitless; it has no edges. The lines between experts and laypeople have been almost completely erased as content becomes ever more democratized. We are forever bombarded by links to one more resource, and it becomes difficult to determine where to stop (and whether to sufficiently trust the information we’ve discovered). This presents a challenge for workers, teams, and leaders as we struggle to “filter forward” (p. 11) the information we need to do our jobs.
Daniel Kahneman and Amos Tversky developed many ideas about the mental shortcuts we take in order to make sense of the information that regularly overwhelms us. Our brains fill in the gaps in what we know about a given situation or problem by way of “heuristics and biases” (Tversky & Kahneman, 1974). For example, we use “representativeness” (p. 1124) to make a judgment based on how well we believe something fits an existing category of things we already know about. We use what we (think we) know to make cognitive leaps, but these leaps aren’t always correct. Uncertainty is amplified in the networked ecosystem, and, just as we have in physical space, we must learn to deal with that missing information and figure out ways to find “stopping points” (Weinberger, 2011) and trusted information sources.
The new digital heuristic model is complicated by the fact that so much of our knowledge generation is social. If, as media ecologists like Weinberger and Rosen suggest, knowledge is moving from paper and our heads to “the cloud,” our ability to make sense of complex information now relies heavily on what others know and what we know about others. To shed some philosophical light on the topic, Stephen Turner (2012) explores the notions of “double heuristics” and “social epistemology.” Turner suggests “that individuals, each with their own heuristics, each with cognitive biases and limitations, are aggregated by a decision procedure, like voting, and this second order procedure produces its own heuristic, with its own cognitive biases and limitations” (p. 1). In this way, learning and sensemaking are inherently social: an epistemology ideally situated for the networked digital ecosystem.
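Turner’s two-level structure can be made concrete with a toy simulation (a sketch of my own, not from Turner’s paper; every function name and number below is illustrative). Each agent judges a yes/no question through its own noisy, biased heuristic, and the group aggregates those judgments with a second-order heuristic: majority voting. The vote washes out independent noise, but a bias the agents share passes straight through it — the aggregation procedure has its own characteristic limitation, just as Turner describes.

```python
import random

def individual_judgment(truth, bias, noise, rng):
    """One agent's heuristic: a noisy, biased reading of the truth.
    Returns a True/False judgment on a yes/no question."""
    signal = truth + bias + rng.gauss(0, noise)
    return signal > 0.5

def majority_vote(judgments):
    """The second-order heuristic: aggregate individual judgments by voting."""
    return sum(judgments) > len(judgments) / 2

def simulate(shared_bias, n_agents=25, trials=2000, seed=1):
    """Fraction of trials in which the group's vote lands on the truth."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        truth = 1.0  # the right answer is "yes" (signal above the 0.5 threshold)
        votes = [individual_judgment(truth, shared_bias, noise=0.8, rng=rng)
                 for _ in range(n_agents)]
        if majority_vote(votes):
            correct += 1
    return correct / trials

# Unbiased but noisy agents: voting cancels the individual noise.
print(simulate(shared_bias=0.0))
# A shared (correlated) bias: voting entrenches it instead of correcting it.
print(simulate(shared_bias=-1.0))
```

The design choice worth noticing is that the group procedure is itself just another heuristic: swapping majority voting for, say, deference to a designated expert would produce a different second-order bias, not no bias.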
Turner (2012) uses Michael Polanyi’s example of a group assembling a puzzle to demonstrate the collective heuristic. The optimal method of solving the puzzle (i.e., gaining new knowledge) would be a system in which “each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated” (Polanyi, 1962). This requires social interaction, but Turner (2012) argued that the true nature of the knowledge here still comes from the individual: each piece fits only those adjacent to it, and placing it is the individual’s contribution. In contrast, he proposed the notion of “bilateral asymmetric consilience” (p. 11) as a means of generating knowledge that can spring forth only from the interaction of two knowing entities. The example he uses is that of a doctor and patient. Both have knowledge (bilateral) of the presenting symptoms, but in different ways (asymmetry). Only when patient and doctor collaborate on identifying the disease does the answer emerge (consilience). The doctor knows the frameworks in which such symptoms might exist (“expertise”), but the patient knows which are present for him. Together, their interaction produces and verifies knowledge about the patient that neither could have produced independently.
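Polanyi’s puzzle dynamic can also be sketched in code (again, a toy of my own devising, not a model from either author). In this simplified linear puzzle, a piece can be placed only next to one already on the board, so every helper’s move depends on the group’s latest achievements — and yet each placement remains an individual’s contribution, exactly Turner’s point.

```python
import random

def rounds_to_finish(n_pieces, n_helpers, rng):
    """Toy model of Polanyi's puzzle: pieces 0..n_pieces-1 form a line, and a
    piece may be placed only adjacent to one already placed. Each helper acts
    once per round, responding to what the others have just achieved."""
    placed = {n_pieces // 2}          # a seed piece to build outward from
    rounds = 0
    while len(placed) < n_pieces:
        rounds += 1
        for _ in range(n_helpers):    # each helper takes one turn per round
            frontier = [p for p in range(n_pieces)
                        if p not in placed
                        and (p - 1 in placed or p + 1 in placed)]
            if frontier:
                placed.add(rng.choice(frontier))
    return rounds

rng = random.Random(7)
print(rounds_to_finish(60, 1, rng))   # one solver working alone
print(rounds_to_finish(60, 6, rng))   # six helpers watching one another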
In his theory of Wirearchy, Husband (n.d.) stressed the importance of networked social interactions as a means of developing social norms and, in particular, power. He asserted that “command-and-control” (para. 4) hierarchy is losing ground to the more effective methods of “champion-and-channel” (para. 5) leadership. This echoes Turner’s (2012) discussion of planned science and the idea of top-down, individually biased leadership decision-making. The command-and-control model leads to information bottlenecks that are not needed in organizations with evolved social-epistemology systems. I believe that in such environments, a leader can assist in the development and distribution of heuristic learning. We can develop systems in which “bilateral asymmetric consilience” might occur, generating knowledge (or, hopefully, wisdom) that no leader, no matter how specialized, could ever have predicted or planned for. Experience and expertise will continue to hold value, I believe, but will shift to become tools in the facilitation of collective learning.
Brennan, T. (2009). Intellectual labor. South Atlantic Quarterly, 108(2), 395–415.
Husband, J. (n.d.). What is wirearchy? Wirearchy [website]. Retrieved from http://wirearchy.com/what-is-wirearchy/
Johnson, C. E. (2005). Meeting the ethical challenges of leadership: Casting light or shadow (5th ed.). Thousand Oaks, CA: Sage.
Polanyi, M. (1962). The republic of science. Minerva, 1(1), 54–73.
Turner, S. (2012). Double heuristics and collective knowledge: The case of expertise. Studies in Emergent Order, 5, 64–85.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Weinberger, D. (2011). Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. New York, NY: Basic Books.