Part 2 of my reading notes on Manovich’s Software Takes Command; part one hashed out the categories of cultural software.
Lev Manovich divides his book into three parts: 1) “Inventing media software,” 2) “Hybridization and evolution,” and 3) “Software in action.”
Because my New Text Report will be centered on Manovich’s text, I am going to focus primarily on the “Inventing media software” section since that will not feature as much in my report. So let’s start with what Manovich sees as the “secret history of software” and look briefly at the major movers-and-shakers of the software/hardware world:
Though Manovich does not spend a lot of time discussing Alan Turing and the Universal Turing Machine, he does make it clear that Turing is one of the key foundational figures who made today’s computers and World Wide Web possible. Manovich states that Turing’s work “theoretically defined a computer as a machine that can simulate a very large class of other machines, and it is this simulation ability that is largely responsible for the proliferation of computers in modern society” (Kindle Locations 1286-1288). To supplement Manovich’s scattered comments about Turing, I turned to other sources: 27Stars’ blog entry on Turing, Andrew Hodges’ biographical website on Turing, and the BBC’s section on the mathematician. One article I found absolutely fascinating, on the UK’s Daily Mail website, covers the work academics are still doing in connection with the Turing film “The Imitation Game.”
**Side note: He was definitely not the most humanely treated man on the planet; he was subjected to chemical castration for being a gay man and only recently received a posthumous pardon from the Queen of England.
Douglas Engelbart and Ted Nelson are probably the coolest computer techies I have read about in Manovich’s text, as they helped shape the kind of culture we have on the interwebs. While Engelbart is famous for inventing the computer mouse along with Bill English, he is also known for his team’s development of “the ability for multiple users to collaborate on the same document” (Kindle Locations 1309-1310). The collaborative nature of that second development is something we use heavily in the New Media course as we work together on Google Docs (along with other software available through Google Drive) and sites like Wikipedia (and the horde of smaller wikis that are cropping up, like this one on New Media). Manovich also explores Nelson’s parallel work (alongside Engelbart’s) on designing a way to link documents together, in what is now known as hyperlinking, though Manovich points out that the hyperlinks we use today are just one of the options Nelson laid out in his theoretical works.
Despite Turing, Engelbart, and Nelson being superstars in the computer world, Manovich spends much of his time centered on Alan Kay and his “universal media machine” (the name being a play on the Universal Turing Machine): “Kay wanted to turn computers into a ‘personal dynamic media’ which could be used for learning, discovery, and artistic creation. His group achieved this by systematically simulating most existing media within a computer while simultaneously adding many new properties to these media” (Kindle Locations 1196-1198). In essence, Kay and his Learning Research Group at Xerox PARC set about simulating existing media (such as print, film, and sound) within a single machine (rather than watching a movie on your TV, using a typewriter, or turning on a radio, and so on) while also adding new dimensions to what could be done with each of these media, for “while visually, computational media may closely mimic other media, these media now function in different ways” (Kindle Locations 1206-1207). But what does this mean? How can existing media now have different functions than before they were accessible on a computer?
Let’s work through an example Manovich brings up: the word processor. Because my computer is such a prevalent part of my life and my work (especially as a grad student), I take using Microsoft Word for granted. The software will never do ALL of the things I want it to, but it functions and I know how to use most of its features. So why is a word processor on a computer something to take notice of? Well, think about your relationship with your writing when you write with a pen/pencil and paper compared to when you compose on a computer screen. Each has limitations and affordances that the other may share, but not always. Personally, writing by hand is my preference because I can move the papers every which way I want without being constrained by screen size, and I can have as many pages as I want scattered about me without needing one to overlap another. On the other side, though, composing on a computer allows me to copy and paste without extra effort on my part (clicking a few buttons vs. rewriting entire sections). And then there are issues of distribution. Yes, I could physically hand over a copy of my handwritten work to a professor or colleague or whoever else would see my work, but a computer with access to the interwebs allows me to email work, upload documents to learning sites, share work through this blog, and so on instantaneously (in most cases, though not always). Composing on the computer also feels less permanent in the way that pushing delete a few times will erase what I had previously written without leaving a visible mark (we’ll leave that thought here because that would be one hell of a rabbit hole to fall through), but there is also a deeper sense of permanency, because what goes onto the interwebs and now the Cloud is archived so long as there is an archive.
Whew, that was quite a tangent, and that was only looking at a few aspects of word processing software that many of us use but don’t always take the time to thoroughly consider. And this is exactly Manovich’s point in this first section of the book. Much of our Web culture is founded on software that is invisible to us so long as it is functioning. Once something breaks down (a site not working, a blog entry not saving, a browser freezing up, a digital game glitching), we start to take notice of the software running our work, hobbies, shopping experiences, and information gathering.
Collaborative writing is another space where the developments in this “secret history of software” make looking at the current Web’s affordances interesting. Manovich talks about collaborative writing/editing spaces on the Web (spaces that include pictures, video, sound files, and text), which have altered approaches to information: “By harvesting the small amounts of labor and expertise contributed by a large number of volunteers, social software projects—most famously, Wikipedia—created vast and dynamically updatable pools of knowledge which would be impossible to create in traditional ways. (In a less visible way, every time we do a search on the Web and then click on some of the results, we also contribute to a knowledge-set used by everybody else. In deciding in which sequence to present the results of a particular search, Google’s algorithms take into account which among the results of previous searches for the same words people found most useful)” (Kindle Locations 1317-1321). These sites (and search engines) are not static texts waiting for the next edition. They are constantly being updated, reviewed, changed, expanded, and deleted as people access them as readers, writers, and editors. And anyone with access to the interwebs can potentially access these sites and become a writer/editor (though there are practices in place where the sites’ moderators attempt to review information for accuracy). We are consumers and producers in the information age.
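Manovich’s parenthetical about search clicks feeding back into ranking can be made concrete with a toy sketch. To be clear: this is emphatically not Google’s actual algorithm, and every name here (ClickAwareRanker and friends) is invented for illustration. It just shows the feedback loop he describes: each searcher’s click is “harvested” and nudges the ordering that later searchers see.

```python
from collections import defaultdict

class ClickAwareRanker:
    """Toy illustration of click-feedback ranking (not Google's real
    algorithm): results that earlier searchers clicked get nudged up."""

    def __init__(self):
        # (query, url) -> number of clicks recorded so far
        self.clicks = defaultdict(int)

    def record_click(self, query, url):
        """Harvest one searcher's tiny contribution of 'labor'."""
        self.clicks[(query, url)] += 1

    def rank(self, query, results):
        # Most-clicked results first; ties keep the original order
        # because Python's sort is stable.
        return sorted(results, key=lambda url: -self.clicks[(query, url)])

ranker = ClickAwareRanker()
results = ["siteA", "siteB", "siteC"]
ranker.record_click("turing", "siteC")
ranker.record_click("turing", "siteC")
ranker.record_click("turing", "siteB")
print(ranker.rank("turing", results))  # ['siteC', 'siteB', 'siteA']
```

The design point is the one Manovich makes: no single click matters much, but aggregated across everyone, the clicks become a shared knowledge-set that reshapes what everybody else sees.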
Here’s a terrible example of collaboration, but an example nonetheless. I do love watching Stephen Colbert, though, that crazy man.
Is the Web a truly democratic space? Yes and no. Manovich states that, “at least in Kay’s and Nelson’s vision, the task of defining new information structures and media manipulation techniques—and, in fact, new media as a whole—was given to the user, rather than being the sole province of the designers. This decision had far-reaching consequences for shaping contemporary culture. Once computers and programming were democratized enough, many creative people started to focus on creating these new structures and techniques rather than using the existing ones to make ‘content’” (Kindle Locations 1484-1488). There may have been some democratization of computers and programming, but there are still obstacles to learning the code underlying software: the financial ability to purchase the hardware, time to learn to code, access to external resources (guide books, forums, wikis), mental capability/interest, and (at times) familial/societal/cultural expectations about whether such a thing is a worthy pursuit (or a waste of time). There is a definite learning curve in any attempt at programming. If you are anything like me, all of the zeroes and ones make your brain swirly and you scurry back to the comfort of letters.