Ethical Questions in Net Neutrality

Sir Tim Berners-Lee, generally considered the “inventor” of the World Wide Web.

In the late 1980s and early 1990s, Sir Tim Berners-Lee was hard at work crafting a series of tools that would help revolutionize modern communication.  While working at CERN in Switzerland, Berners-Lee developed HTTP (hypertext transfer protocol) and HTML (hypertext markup language), which enabled him to publish a set of basic “web pages” to the public from a server in his office.

These two technologies became the backbone of today’s World Wide Web, and are the foundation for many network-based tools such as Facebook, Netflix, Spotify, and Google.  Developments like these, according to supporters of net neutrality, were possible because of the open, democratic, and decentralized sources of power in the emerging internet.  In a blog post directed to the members of the European Parliament in 2015, TimBL (as he’s often called) wrote, “The Web evolved into a powerful and ubiquitous platform because I was able to build it on an open network that treated all packets of information equally. This principle of net neutrality has kept the Internet a free and open space since its inception” (Berners-Lee, 2015).

Opponents of net neutrality would argue that in fact, more innovation, more new technology, and stronger infrastructure investment would occur if only the pesky government would leave major telecom corporations alone and let the power of the “free market” rule the direction of the net.  Proponents and detractors both offer compelling arguments for a democratic, fair, and open internet, but ethical issues begin to emerge when the conversation shifts to policy design and implementation.

What Exactly is Net Neutrality?

Net neutrality is the idea that “the internet” – comprising many protocols such as HTTP, FTP, and email (IMAP/POP/SMTP) – should be equally accessible for all users. Further, all publishers or networked services should be equally available to all customers.  The neutrality of the network has been put to the test recently as major telecom providers and ISPs (internet service providers) have begun to develop plans to “throttle” network bandwidth (at their own whim), or to provide preferred access to pay-to-play sites and services.  When these large conglomerates provide preferential treatment to certain sites, others struggle.  When bandwidth is throttled by the ISPs, not all customers experience the same internet.  In this arrangement, it won’t be long until we’re careening down the slippery slope of open-market greed and corruption.

Feds to the Rescue.

In 2015, facing huge public outcry, the Obama administration’s FCC (Federal Communications Commission) set up a series of regulations designed to preserve network neutrality and prevent large corporations from exercising profit-driven decisions about network performance and content.  CNN Money poetically reported that “The FCC just granted itself the power to defeat a raging, fire-breathing monster: the monopolistic network owners who can kill Internet freedom by blocking websites — or by creating an Internet fast lane for the privileged, few, rich tech companies that can pay for it” (Pagliery, 2015).

Feds on the Attack.

As the current administration settles into Pennsylvania Avenue, net neutrality protections may suffer a quick and painful death at the hands of new FCC Chair, Ajit Pai.  Pai, according to tech website CNET, is the “man who will dismantle net neutrality ‘with a smile’” (Reardon, 2017).  Chairman Pai has long been an opponent of government oversight of the internet, suggesting that any such oversight would unnecessarily choke corporate operations, limit infrastructure investments (e.g., fiber & broadband), and ultimately harm consumers by way of higher prices and limited choice.  Of course, defenders of net neutrality (the current author included) recognize that these telecom behemoths already have the monopolistic clout to out-invest any competitor and very quickly seize self-interested control of the delicate ecosystem of an open network.  When a handful of companies controls the distribution channels AND the content, we all lose.

The Moral Authority?

According to scholar Norbert Wiener, we can begin assessing the ethics of technology by identifying and clarifying how the technology (or related actions) affects society (Bynum, 2001).  In the net neutrality discussion, this can be difficult because there are few (if any) demonstrable correlations on either side of the argument.  That is, we can’t necessarily prove that a truly open network has more or less ethical utility than one with tiers, ISP gatekeepers, or “zero-rated” services.  We also cannot demonstrate that all corporations that may exert control in a non-regulated network are inherently evil and looking to destroy civilization in search of profits.  However, examples already exist of telecoms policing traffic, discriminating against certain protocols, and favoring preferred content providers.

Once we’ve clarified the underlying issue (here, I believe: who’s in control, the government or the telecoms? or, is government intervention ethical?), Wiener’s model suggests that we attempt to apply existing ethical frameworks to the problem.  A teleological framework seems appropriate, as both sides are ultimately promoting an open network for the benefit of innovation, user experience, and a healthy economy. More specifically, a utilitarian approach that suggests the greatest good for the greatest number would demand a network in which no site/service, user, or packet of data is favored over another; this type of network, I believe, requires some intervention and cannot be left to trickle-down technology (Balkan, 2013).

The open and global nature of the internet means that every node in the network operates at a unique level of utility.  A casual Facebook user might certainly appreciate faster connectivity (via ISP-sanctioned “fast lanes”), but the small business owner who can’t afford to buy into the ISPs’ “specialized services” might lose valuable customers who experience slow load times on his website.  Thus, we begin a discussion of relative utility and the value that each user adds to or receives from equal participation in the net.  This is murky ethical territory. Within the context of the social contract framework, the casual (utilitarian) Facebook user might be willing to give up a few megabytes per second of connectivity (individual liberty) for the sake of a network regulation that ensures equal opportunity for everyone on the network.

The nature of the open, borderless, limitless network (Weinberger, 2011) also means that ethical decision-making must adopt a global, if not universalist, approach (Bynum, 2001).  In contrast to the consequentialist approach mentioned above, perhaps the nature of the net calls for a deontological framework that does not attempt to account for the many possible outcomes, but rather judges the morality of the action on our duty to preserve equality and fairness on the internet.  A majority of engaged internet citizens seem to believe that equal access is “right.”  Only 1% of more than 800,000 comments made to the FCC online were found to be “clearly against” net neutrality.

Judgement?

The net neutrality discussion presents a wide range of ethical concerns.  Judging the decision on potential outcomes is philosophically impossible (and irresponsible).  Applying moral relativism to decisions of network regulation is difficult because of the global nature of the network itself.  I believe that a deontological framework is most helpful for tackling this discussion.  I believe that equal access (not just access) to all of the internet is a right, and that defending that right is morally right.  Corporations and ISPs have rights too, of course, but when they are given free rein to control, filter, and eliminate data from the network, they begin to infringe upon perhaps the more basic right of equal internet access.  In this case, I think the access right is more fundamental than the profit right. In that way, minimal government regulation provides a great good for a huge number of internet users.

 —

References

Balkan, A. (2013). Trickle down technology and why it doesn’t work [web log]. Retrieved from: https://ar.al/notes/trickle-down-technology/

Berners-Lee, T. (2015). Net neutrality in Europe: A statement from Sir Tim Berners-Lee. World Wide Web Foundation [website]. Retrieved from http://webfoundation.org/2015/10/net-neutrality-in-europe-a-statement-from-sir-tim-berners-lee/

Bynum, T. (2001). Computer and information ethics. Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.

Pagliery, J. (2015, February 26). FCC adopts historic Internet rules.  CNN Tech. Retrieved from http://money.cnn.com/2015/02/26/technology/fcc-rules-net-neutrality/

Reardon, M. (2017, February 14). Meet the man who’ll dismantle net neutrality ‘with a smile’. CNet. Retrieved from https://www.cnet.com/news/fcc-chairman-ajit-pai-dismantle-net-neutrality-with-a-smile/

Weinberger, D. (2011). Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. New York: Basic Books.

Technology is Leveling the Organizational Playing Field (but not always).

The age of the networked economy is upon us, and with it comes a slew of issues for leaders to address in their organizations. Digital, network-enabled technology now sits at the center of many human interactions, from personal relationships to complex economic structures. This near ubiquity (at least in the modernized countries of the world) has led to new ways of interacting with co-workers, sharing information, and understanding power.

Fundamental advantages of a connected workforce are greater productivity and collaboration. Weinberger (2011), among scores of others, asserts that networks can generate larger quantities of quality innovation than any single (so-called) expert. Information can be brought to bear from all corners of humanity and analyzed by individuals with limitless perspectives and life experiences. In my opinion, however, this broad assertion fails to account for the broad socioeconomic heterogeneity of internet users.

Growth of global internet users since 1993: http://www.internetlivestats.com/internet-users/

Internet Live Stats (2016) estimates that only 40% of the world’s population has access to the internet, up from less than 1% in 1995. The same site shows that, in contrast, nearly 89% of those living in the US have access to the internet (in some form). My point here is that there is still a significant part of the globe that is not yet connected, or not as connected as we might believe. And those of us who are connected – despite contentious Facebook arguments in the last 6 months – are living lives that are more similar to one another than different. I think this is important in the discussion of globalization and the connected global workforce. Like all forms of communication, digital communication is subject to all sorts of cultural norms, biases, and levels of media literacy. I think it’s important to understand just how different our experiences of connectivity are from those of a majority of the rest of the world.

That said, it is very evident that even within the more “mature” user base of, say, the United States, there are still huge gaps in levels of experience, proficiency, and digital fluency. In a multi-generational digital workforce, it may be critical for leaders (titular or otherwise) to assist team members with limited technology experience or lower levels of proficiency. In this way, I think the leader continues to shoulder the burdens of facilitating learning beyond the content of the work. Teaching/implementing technology, I think, presents an important opportunity for developing organizational culture. That is, how are employees taught to use technology in their roles, how are norms and expectations communicated, and how do co-workers deal with each other in light of digital acculturation (Jarc, 2015).

From http://www.pewinternet.org/2015/11/19/searching-for-work-in-the-digital-era/

In this same sense, digital workers may use the same tools very differently. The information presented by Smith (2015) illustrates a few of the ways in which different users manipulate their digital experiences. For example, Smith outlines the differences between smartphone users who execute job applications on their devices and those who are using smartphones for “analog” tasks like calling (I mean really, who even does that anymore?!). Importantly, users across the spectrum expect different things from themselves and their technology. This can cause problems in networked teams, when individuals want or need to communicate in dramatically different ways. So, even if we are connected, we are connected in different ways.

Finally, I think the broadening of the networked workforce raises interesting questions about the nature of power and how individual nodes within the network make sense of relationships with others. Husband’s (n.d.) definition of wirearchy addresses part of this question. For Husband, power in the network is “a dynamic two-way flow of power and authority, based on knowledge, trust, credibility and a focus on results, enabled by interconnected people and technology” (para. 8). I think that this is a fine definition of a self-governing structure, but I wonder if this definition can play out through to the end of the human transaction. As a colleague points out in her blog this week, sometimes teams need someone who can overtly assert power, make a decision, and drive teams out of counterproductive ruts. In terms of the networked workforce, with its heavy reliance on tech, does the best, most proficient tech user assume this role? If I can, say, type faster, or produce reports more quickly or research more efficiently, will others begin to look to me as the leader of the networked team? For me, this discussion of perceived power is an incredibly important one, and one that I will certainly continue thinking about!

 

References:

Husband, J. (n.d.) What is wirearchy? Wirearchy [website]. Retrieved from http://wirearchy.com/what-is-wirearchy/

Internet Live Stats (2016). Internet users [webpage]. Retrieved from http://www.internetlivestats.com/internet-users/

Jarc, J. (2015). Parlance, perception and power: An inquiry into non-standard language use in digital media (master’s thesis). Retrieved from Proquest: http://search.proquest.com/docview/1684659688

Smith, A. (2015, November 19). Searching for work in the digital era. Pew Research Center. Retrieved from http://www.pewinternet.org/2015/11/19/searching-for-work-in-the-digital-era/

Weinberger, D. (2011). Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. New York: Basic Books.

Bilateral Asymmetric Consilience and Networked Leadership

(Million-dollar academic jargon right there, isn’t it?)

Much of the labor that is done in today’s digital economy is intellectual.  Economists point to intellectual capital, psychologists promote emotional intelligence, and management gurus flaunt terms like knowledge management and organizational learning (though, apparently not as much as they used to).  Certainly, work is still done and “stuff” is still produced, but technology, networked thinking, and machine learning are perpetually encroaching on the realm of work and labor.  This shift to acknowledging  “intellect as the key productive [economic] force” (Brennan, 2009) brings with it myriad questions about gaining knowledge, making sense of information, and gaining expert or referential power (Johnson, 2005) among workgroups and social networks.

Weinberger (2011) – in a nod to Marshall McLuhan via his profile of Jay Rosen’s long form/web form blog – proposed that the network itself is responsible for the emergence of new knowledge and new ways of thinking.  Just as literacy re-oriented humanity’s working memory and cognitive capacity, so too has the proliferation of the networked “ecology of temptation” (p. 117) reshaped how we think.  The net is limitless.  It has no edges.  Lines between experts and laypeople have been almost completely erased as content becomes more and more democratized.  We are forever bombarded by links to one more resource, and it becomes difficult to determine where to stop (and sufficiently trust the information we’ve discovered).  This presents a challenge for workers, teams, and leaders, as we struggle to “filter forward” (p. 11) the information we need to do our jobs.

Daniel Kahneman and Amos Tversky developed many ideas about the ways in which we take mental shortcuts in order to make sense of the information that overwhelms us on a regular basis.  The gaps in what we know about a given situation or problem are filled in by our brains by way of “heuristics and biases” (Tversky & Kahneman, 1974).  For example, we use “representativeness” (p. 1124) to make a judgment based on how well we believe something fits an existing category of things that we already know about.  We use what we (think we) know to make cognitive leaps, but these leaps aren’t always correct.  Uncertainty is amplified in the networked ecosystem, and, as we have in physical space, we must learn to deal with that missing information and figure out ways to find “stopping points” (Weinberger, 2011) and trusted information sources.

The new digital heuristic model is complicated by the fact that so much of our knowledge generation is social.  If, as media ecologists like Weinberger and Rosen suggest, knowledge is moving from paper and our heads to “the cloud,” our ability to make sense of complex information now relies heavily on what others know and what we know about others.  In an effort to shed some philosophical light on the topic, philosopher Stephen Turner (2012) explores the notion of “double heuristics” and “social epistemology.” Turner suggests “that individuals, each with their own heuristics, each with cognitive biases and limitations, are aggregated by a decision procedure, like voting, and this second order procedure produces its own heuristic, with its own cognitive biases and limitations” (p. 1). In this way, learning and sensemaking are inherently social: an epistemology that’s ideally suited to the networked digital ecosystem.

Turner (2012) uses Michael Polanyi’s example of a group assembling a puzzle to demonstrate the collective heuristic. The optimal method of solving the puzzle (i.e., gaining new knowledge) would be a system in which “each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated” (Polanyi, 1962).  This requires social interaction, but Turner (2012) argued that the true nature of knowledge here still comes from the individual.  There’s one piece that fits, and only fits, those adjacent to it, and that is the individual’s contribution.  In contrast, he proposed the notion of “bilateral asymmetric consilience” (p. 11) as a means of generating knowledge that can only spring forth from the interaction of two knowing entities.  The example he uses is that of a doctor and patient.  Both have knowledge (bilateral) of the presenting symptoms, but in different ways (asymmetry).  Only when patient and doctor collaborate on identifying the disease does the answer emerge (consilience).  The doctor knows the frameworks in which such symptoms might exist (“expertise”), but the patient knows which are present for him.  Together, their interaction has produced and verified knowledge about the patient that could not have previously existed independently.

In his theory of Wirearchy, Husband (n.d.) stressed the importance of social interactions (networked) as a means of developing social norms and specifically power.  He asserted that “command-and-control” (para. 4) hierarchy is losing ground to the more effective methods of “champion-and-channel” (para. 5) leadership.  This echoes Turner’s (2012) discussion of planned science and the idea of top-down, individually biased leadership decision-making.  The command-and-control model leads to information bottlenecks that are not needed in organizations with evolved social-epistemology systems.  I believe that in such environments, a leader can assist in the development and distribution of heuristic learning.  We can develop systems in which “bilateral asymmetric consilience” might occur; generating knowledge (or hopefully wisdom) that no leader, no matter how specialized, could have ever predicted or planned for.  Experience and expertise will continue to hold value, I believe, but will shift to become tools in the facilitation of collective learning.

 

References:

Brennan, T. (2009). Intellectual labor. South Atlantic Quarterly, 108(2), 395-415.

Husband, J. (n.d.) What is wirearchy? Wirearchy [website].  Retrieved from http://wirearchy.com/what-is-wirearchy/

Johnson, C. E. (2005). Meeting the ethical challenges of leadership: Casting light or shadow. (5th ed.). Thousand Oaks, CA: Sage.

Polanyi, M. (1962). The republic of science.  Minerva, 1(1), 54–73.

Turner, S. (2012). Double heuristics and collective knowledge: The case of expertise. Studies in Emergent Order, 5, 64-85.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

Weinberger, D. (2011). Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. New York: Basic Books.

Knowledge Management and Collective Learning in the Age of Google

This week’s readings were fascinating to me.  I’m hoping this area will be part of my dissertation research.  I’ve been curious about communicating leadership via technology, as well as facilitating (teaching) creativity through technology.  In my role as an educator in several creative disciplines, I’ve wrestled with how leaders can foster communities of practice and open up dialogues in on-the-ground classrooms and online courses.  Specific to the knowledge management discussion, I think there needs to be a balance between archived, best-practice, explicit knowledge (Dixon, 2009a) and postmodern, socially generated heuristics developed in the cloud (Weinberger, 2011).  I would tend to agree with Jarche’s (2016) recent assertion that, “While people learn from formal instruction, they also learn in the workflow and outside work” (para. 8).

In my classrooms and online classes, I believe it’s still important to “deposit” – as Freire (1994) might describe it – explicit knowledge into the minds of the learners.  In graphic design, there is a specific tool to use when attempting to accomplish a specific task.  In web development, syntax is extremely important… a single missing semicolon in a stylesheet or script can break an entire web page, not to mention frustrate students and instructors alike!  In this way, I think that knowledge management serves an important function.  Organizing and disseminating (Dixon, 2009a) the proper tools and processes is a requirement that must be satisfied before we can even begin to think about creativity or adaptive challenges (Heifetz, Grashow, & Linsky, 2009).  Once a student or employee has access to a baseline amount of information, then the leader can begin to shape and use sociotechnical systems (Burke, 2014) to optimize social learning (Jarche, 2010).
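The missing-semicolon failure is easy to demonstrate.  Here is a minimal, hypothetical sketch (my own illustration, not from any course material) of JavaScript’s automatic semicolon insertion (ASI) pitfall, where a single omitted semicolon changes how the parser reads a page script:

```javascript
// Two versions of the same function. The only difference is one semicolon,
// but the broken version throws at runtime and can take a page down with it.

function withSemicolon() {
  const total = 42;
  [1, 2, 3].forEach(n => n); // two separate statements: works fine
  return total;
}

function withoutSemicolon() {
  const total = 42
  // Without the semicolon, ASI does NOT kick in (a `[` can continue the
  // expression), so the parser joins the lines into `42[1, 2, 3].forEach(...)`.
  // Indexing into the number 42 yields undefined, and calling .forEach on
  // undefined throws a TypeError.
  [1, 2, 3].forEach(n => n)
  return total
}
```

In Node or a browser console, `withSemicolon()` returns 42, while `withoutSemicolon()` throws a TypeError: exactly the kind of one-character bug that frustrates students and instructors alike.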

In terms of confronting those adaptive challenges (Heifetz, Grashow, & Linsky, 2009), I do believe that collective experiential knowledge (Dixon, 2009b) is powerful (and necessary).  In my web development level III course, for example, we work a lot on troubleshooting obscure problems that don’t necessarily have a right or wrong answer.  There’s no way I can teach the solution to every potential technical problem.  In this case, I need to manage the process of identifying aspects of the challenge, thinking critically about paths to resolution, and, most important to the new KM discussion, how to seek out and apply the expertise of the multitudes.

Just yesterday, I got an error on a client’s website: “php.mailer.Mzh.517.UNOFFICIAL FOUND.” The first place we go is online to the “expert exchanges.” Sure enough, we landed among the discussion forums and GitHub pages posted by experts, and wove our way through different resources to find the right places to look to fix the error.  Through this network of professionals, and my own decade of experience with the system we were using, we were able to locate the right file and fix the problem.  This is an example of socially generated knowledge that is then archived in the online format.  I’m curious to think about how this changes as storage and access methods evolve.

To Davenport’s (2015) point, I would imagine that few companies in the digital media space would ever attempt to deploy a traditional KM system in light of the social knowledge already available online.  I know some organizations do maintain code repositories and other such silos of information that can be readily accessed.  I think a major difference in the new knowledge management world is context.  Most of Davenport’s critiques seem to boil down to a forced flow or a context-agnostic adoption of a system for the sake of having a system.  I appreciate Jarche’s (2010) comment that “Stock [archived information] on the internet is everywhere, and the challenge is to make sense of it through flows and conversation [context]” (para. 23).  Thus, I think the role of the leader is to help maintain a basic level of explicit knowledge through systems and institutional memory, as well as to allow for the creation of knowledge networks and organizational learning.  Not easy, by any stretch of the imagination, but doable with the right tools and human resources!

References:
Burke, W. W. (2014). Organization change: Theory and practice (4th ed). Thousand Oaks, CA: Sage.
Davenport, T.H. (2015). Whatever happened to knowledge management? The Wall Street Journal: CIO Report.
Dixon, N. (2009a, May 2).  Where knowledge management has been and where it is going- part one. Conversation Matters [blog]. Retrieved from http://www.nancydixonblog.com/2009/05/where-knowledge-management-has-been-and-where-it-is-going-part-one.html
Dixon, N. (2009b, July 30).  Where knowledge management has been and where it is going- part three. Conversation Matters [blog]. Retrieved from http://www.nancydixonblog.com/2009/07/where-knowledge-management-has-been-and-where-it-is-going-part-three.html
Freire, P. (1994). Pedagogy of the oppressed.  New York, NY: Continuum.
Heifetz, R.A., Grashow, A., Linsky, M., (2009). The practice of adaptive leadership: Tools and tactics for changing your organization and the world. Brighton, MA: Harvard Business Press.
Jarche, H. (2016). Closing the learning-knowledge loop. Harold Jarche: Adapting to a world in perpetual beta [blog]. Retrieved from http://jarche.com/2016/12/closing-the-learning-knowledge-loop/
Jarche, H. (2010). A framework for social learning in the enterprise. Harold Jarche: Adapting to a world in perpetual beta [blog]. Retrieved from http://jarche.com/2010/02/a-framework-for-social-learning-in-the-enterprise/
Weinberger, D. (2011). Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. New York: Basic Books.

Week 2 – Tool Analysis

This week, our task is to experiment with and analyze one of Jane Hart’s “Top Tools for Learning 2016.”  Considering the tool from both a technical perspective and a leadership and organizational perspective, this post will discuss the pros and cons of integrating Clarify into one’s digital toolbox.

What is Clarify?

Clarify is a screenshot generation and management tool.  The application is designed to streamline the process of taking screenshots of your computer and compiling them into a format that can be useful for training, client presentations, customer service, or any number of other applications.

The app consists of the screenshot engine, which allows you to simply drag your cursor over a section of your screen to capture the content into an image format.

Once you have the capture, Clarify lets you add formatted text, annotations, highlighting graphics, and links. (This document was created through Clarify).

 

Thanks for downloading Clarify for Mac

I went to clarify-it.com to download the app.  Super simple, attractive site that was easy to navigate and quick to download from.

(Interesting feature sidenote… when I screenshot the browser window, Clarify pulled the <TITLE> tag from the website’s HTML and automatically added it to the document.  This is super handy!)
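As a rough sketch of how a feature like that might work under the hood (my own hypothetical illustration, not Clarify’s actual code), pulling the `<TITLE>` text out of a page’s raw HTML is nearly a one-liner:

```javascript
// Hypothetical sketch: extract the <title> text from raw HTML, the way a
// screenshot tool might auto-caption a browser capture. Not Clarify's code.
function extractTitle(html) {
  // Case-insensitive match on the first <title>...</title> pair;
  // the capture group grabs everything up to the closing tag.
  const match = html.match(/<title[^>]*>([^<]*)<\/title>/i);
  return match ? match[1].trim() : null;
}
```

For example, a page whose HTML contains `<title>Thanks for downloading Clarify for Mac</title>` would yield that string as the document heading.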

The application needs to be added to your applications folder, and the installer provides a simple shortcut for doing so by a drag and drop. (left)

The editing application has a very comfortable, easy-to-use interface, offering the consistent user experience that most computer users are used to.  Right away, you’ll recognize formatting options like bold, italics, paragraph justification, and font formatting.  A few of the icons were unfamiliar to me, but the app offers tooltips when you hover your mouse over an icon (very helpful!).

 

Leadership and Organizational Applications

A tool like Clarify can be extremely helpful in an organizational setting.  The application allows you to create documents that can help you communicate policy and/or procedure.  Implementing new practices in an organization can be very difficult for many reasons.  Providing clear, accurate, and attractive documentation can help leaders and change managers improve adoption rates and reduce employee (or customer) frustrations!  They say that a picture is worth a thousand words, and in this case, a picture (screenshot) may help organizations save time and money in the development of their training, policy, or technical support documentation.

As a real world example:

A few years ago, my team was consulting for a retail outlet that sold locally made products from vendors across the state.  They sold items mostly on consignment, though some were sold wholesale.  We recognized that the retail floor staff was bogged down with managing vendors rather than selling product and interfacing with customers.  With 100+ vendors looking for payments, sales numbers, and inventory counts, things were getting out of control.

We devised a web-based vendor portal that would allow individual vendors to log in and see all of their data in (nearly) real time.  The portal connected to the store’s point-of-sale terminal system and updated the data approximately every six hours.  It would alert vendors about low inventory, and provided numerous customizable reports such as monthly sales, most popular products, trend maps, and so forth.

I developed an internal training document for all retail staff that included step-by-step instructions for many different functions.  Given the wide range of technical expertise on the staff, the document had to be clear, concise, and VISUAL!  Thus, it contained screenshots of nearly every section of the portal, with annotations, highlights, and special instructions or tips.  Clarify would have been an ideal tool to use!

Similarly, I created a user guide that was distributed to all vendors along with their portal credentials.  In this case, it was very important for us to brand the organization.  Many vendors sold products at many other locations around the state.  We wanted to make sure that we were distancing ourselves from the competitors, and did so by creating a positive brand association with this cutting-edge tool (that no other retailer offered).  For this initiative, Clarify would not have been sufficient, as it does not have the depth of customization and control that I needed.

Do I really need/want this tool?

I’m always hesitant to download more third-party apps because I’ve found that more often than not, they do not provide a significantly better solution than either built-in OS tools or tools I’m already using.  For me personally, apps like Clarify take time to learn, time to build a habit around, and seem unnecessarily disruptive to my process.  I recognize this is a biased opinion, shaped by my professional experience with titles like Photoshop and other image manipulation/presentation/training tools.

In order to create, for example, a simple tutorial for a client on editing their website, I would do the following:

  • Take screenshots using Mac’s built-in capture utility (Cmd+Shift+4 from ANYWHERE within macOS).
  • Drag and drop the screenshot from the Finder into either a document or directly into an email.
    • If I need to add annotations, graphics, or anything like that, I would use Photoshop or Illustrator (the latter for multi-page docs).
    • If it’s a simple demonstration of what something looks like or which button to press, I would drop it in an email and add formatting, bullets, numbered lists, etc.
  • In general, I would prefer to use screenshots embedded in an email rather than creating entirely separate documents, but I can see the benefit of the latter, especially in illustrating long or complex processes.

I am a firm believer in reducing digital bloat by way of fewer apps, generating fewer unnecessary documents, and taking fewer steps in my processes.  In the example above, I would rather not complicate things for myself or my client by creating another document that we both have to save.  I can accomplish almost anything I can do in Word (for example) in my email client.  Now that information is stored in the cloud, accessible via multiple email clients, and perhaps more collaborative.  We don’t complicate the issue with software compatibility questions, firewall attachment restrictions, file size limitations, or untrustworthy formatting.

However, as I mention above, email may not be suitable for creating long or complex documents from your screenshots.  If this is the case, and the client/customer need warrants the creation of a new document, I would want to have a lot of control and flexibility over customizing the document.

Challenges in Using the Software

  • Clarify documents are saved with a *.clarify file extension, making them unreadable by any other application.
  • You can export a Clarify file to a doc/x format, but that basically produces a Word doc that you could have just created to start with.  The doc/x export provides three out-of-the-box formatting options.  When using those, the document includes some “styling” for headers, colors, and fonts.  (Example 1 below)
  • The PDF export option does the same, but offers four formatting options.  (Example 2)  For me, this is moderately useful.  Clarify allows for some customization through the onboard PDF template generator: you can add a logo and specify fonts, colors, page sizes, margins, etc.
  • There is also an HTML output option (Example 4), which could be useful. When exporting as HTML, you can select from 7 “themes.”  The resulting HTML document is created, but so are three additional folders that contain dependent files such as JavaScript, UI images, and CSS stylesheets.  (As I noted above, I don’t like this type of bloat.)
    • You can apparently build your own custom HTML templates!  This process, however, is terribly complicated. In my opinion, if you can follow these steps, you could probably build your own HTML pages from scratch, again using existing tools.
    • Yes, in a large organization, an IT specialist or instructional designer could set up these templates for less tech-savvy content creators for publishing on an intranet, for example.
    • To make the most use of an HTML output, you’d want to post it to a web server, which you cannot do natively from within the app.  Thus, you need a separate FTP client and an available web server.
  • Preferences: Clarify offers some customizations for things like user interface and export options.
  • Export options are tied to some popular services like Dropbox, WordPress, and Evernote.
    • Sharing to the above services requires several steps of setup and authorization.
    • Publishing to Evernote seems redundant.
    • Sharing the .clarify file on Dropbox means that your collaborators need the Clarify app as well.
    • WordPress sharing options are limited and seem to be complicated by many technical issues associated with publishing to the web (image uploads, permissions, updating, WordPress cache and image management).
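The FTP step noted above could at least be scripted.  Below is a minimal sketch (my own, not a Clarify feature) that walks an HTML export folder — the index page plus its dependent asset folders — and pushes everything to a server with Python’s standard `ftplib`.  The folder name, host, credentials, and remote path are all placeholders.

```python
import os
from ftplib import FTP

def collect_export_files(root):
    """Walk an HTML export folder and return (local_path, remote_path) pairs,
    preserving the relative layout (index.html plus asset subfolders)."""
    pairs = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            local = os.path.join(dirpath, name)
            remote = os.path.relpath(local, root).replace(os.sep, "/")
            pairs.append((local, remote))
    return sorted(pairs)

def upload_export(root, host, user, password, remote_dir="/public_html/docs"):
    """Upload every exported file over FTP, creating subdirectories as needed."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        for local, remote in collect_export_files(root):
            path = remote_dir
            for part in remote.split("/")[:-1]:
                path = f"{path}/{part}"
                try:
                    ftp.mkd(path)          # create the subdirectory...
                except Exception:
                    pass                   # ...or ignore if it already exists
            with open(local, "rb") as fh:
                ftp.storbinary(f"STOR {remote_dir}/{remote}", fh)

if __name__ == "__main__":
    # Placeholder values — substitute your own export folder and server.
    upload_export("clarify-export", "ftp.example.com", "user", "secret")
```

Of course, needing to write a script at all rather underscores the point: the export still depends on tooling outside the app.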

Conclusions

In conclusion, I will not be adopting this tool into my regular workflow.  For some individuals and organizations, Clarify may provide a quick and easy way to share annotated screenshots.  However, I would caution leaders against trying to force the adoption of this system.  As illustrated above, the features of this program are well covered by other existing programs and utilities.  The benefits found in Clarify’s convenient packaging, in my opinion, are not sufficient for the effort that might be required to spread adoption.

Author Didier Bonnet, in a 2015 article at Harvard Business Review, noted that clear communication about the (comparative) benefits of a new tool is key to ensuring adoption in organizations: “Employees need to understand why [the new technology] is an improvement from what they had before” (Knight, 2015). In the case of Clarify, I’m not entirely convinced that the solution is better than what I had before. I would be hard pressed to champion the adoption of this tool in my organization.  It’s also not the type of tool that would necessitate organization-wide adoption.

As a piece of software, it’s a fine application: easy to use, fast, and very niche in purpose.  It does what it sets out to do.  The problem Clarify solves, however, may not need to be tackled with yet another app on your hard drive.


References:

Knight, R. (2015, March 19). Convincing skeptical employees to adopt new technology.  Harvard Business Review.  Retrieved from https://hbr.org/2015/03/convincing-skeptical-employees-to-adopt-new-technology

Week 1 – Technology and Leadership

This is my first blog post for a course at Creighton University entitled “Technology and Leadership.”  The course is a part of the Interdisciplinary Doctor of Education program, in which I am a student (clearly).  Throughout the semester, I will be posting more thoughts and reflections from the course… to my classmates who are reading along this semester, I look forward to sharing these ideas with you and learning from you as always.


Is the world “flat” or “spiky”?  And what happens when machines become smarter than we are?

In The World Is Flat: A Brief History of the Twenty-first Century, Friedman (2007) suggests that the earth (in the social, economic, technological sense) is “flattening” because of improvements to communication and travel technology.  The assertion is that technological advances since 1989 (para 10) have allowed individuals and businesses to dramatically increase productivity, expand markets, and enhance innovation through broader talent networks.  In many ways, Friedman is correct.  Technology has decreased the time it takes to do complicated tasks and increased our abilities to connect with one another.  The internet’s beginnings as a government/university research and collaboration network speak to the origins of how we use network technology.

Taken from the technological determinist perspective (e.g., McLuhan, 1964), Friedman might have us believe that the seemingly ubiquitous nature of technology and the internet is responsible for major shifts in both human consciousness and subsequent economic shifts – on par with the effects written language had on society (e.g., Ong, 1982; Shlain, 1998).  That it pervades our lives in such a way as to change the way we think, what we perceive as real, how we do business, and even how we conceive of human relationships.  If that were truly the case – or when it inevitably becomes the case – I would agree that networked technology will indeed fundamentally alter human communication and consciousness.  We have already seen its effects in countries with advanced communication infrastructures like the US.  In other countries, however, where even literacy hasn’t yet completely transformed society, technology will not thrive in the same ways it has elsewhere.  (I’d be interested to explore the notion of skipping literacy in the shift to technology further…)

What Friedman may miss, and what Florida (2005) asserts, is that the technology (no matter how good) is not sufficient for the complete “flattening” of the world. The social construction of technology approach (e.g., Bijker, Hughes, & Pinch, 1987) would suggest that the technology is only as influential as its host environment allows it to be.  That is, technologies (in all forms) emerge and thrive because of economics, culture, and people’s readiness to adopt the innovation.  The printing press, for example, became a pivotal technological innovation in human history because of the climate of late medieval Europe at the time of its invention.  Scientific work was blooming, and intellectual curiosity was creating a demand for information.  Economics and trade were calling for standards of language and measure and were pushing innovation from the financial side.  Exploration and travel meant that more people across Europe, Asia, and Africa were being exposed to written language in the form of mass replications of text via the printing press.  I believe that literacy has indeed fundamentally changed many societies, yet may still be a novelty in some others.  If we plotted it, I think we’d see a trend of technological hotspots across medieval Europe similar to those presented in Florida’s (2005) article.

Does superintelligence eliminate the social constructivist narrative entirely?  In some senses, I think that if the human condition is eliminated from the equation, and AI is given free rein over further development and adoption of “technologies,” we may see a very different topology.  As Bostrom (2015) discussed in his talk, when machines begin to learn and adapt, the potential exists for them to move beyond even the limitations of the physical (or sociocultural) environments in which they operate.  If self-preservation becomes a value of learning machines, it could follow that they would devise ways of guaranteeing technological adoption and subsequent control of societies through a kind of determinism.  If, as Shlain (1998) argues, the invention of writing systems led to the rise of hegemonic masculinity and the subjugation of women, who’s to say that a similarly nefarious plan couldn’t be hatched by the machines to subjugate humanity?  I don’t know if I believe this (or just don’t want to), but I recognize the power of technology as a shaper of and respondent to humanity.  I sincerely hope that we figure out, as Bostrom (2015) suggests, how to manage our continued exploration of technology!

I’ve failed to address how this practically applies to my work… as a teacher, the influence of networked life on students is profound.  Simple examples like language fluency show how changing communication technology shapes learning. The creation of unrealistic social expectations through MMORPGs and other gaming systems impacts how students interact with one another in the classroom. Children who develop cognitively in fixed-rules gaming/app environments become college students with a fixed understanding of what’s possible in the world.  I think this last example (anecdotally) creates myriad problems in creative fields such as graphics, web design, and advertising; an area I plan to continue exploring!

Can you tell I love this stuff?!

James

References

Bijker, W. E., Hughes, T. P., & Pinch, T. J. (Eds.). (1987). The social construction of technological systems: New directions in the sociology and history of technology. Cambridge, MA: MIT Press.

Bostrom, N. (2015, March). What happens when our computers get smarter than we are? [Video file]. Retrieved from https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are

Florida, R. (2005, October). The world is spiky. The Atlantic Monthly, 48–51.

Friedman, T. (2007). The world is flat, 3.0. London: Picador.

McLuhan, M. (1964). Understanding media: The extensions of man. New York: McGraw-Hill.

Ong, W. (1982). Orality and literacy: The technologizing of the word. New York: Methuen.

Shlain, L. (1998). The alphabet versus the goddess: The conflict between word and image. London: Penguin Books.