Originally written in April 2007. Minor edits: March 2010.
Preface
In the past 50 years the digital user-interface has become a major field of cultural production: from the innovations of Douglas Engelbart in the sixties (mouse/keyboard/video-screen), through the personal computer revolution in the eighties, to the rise of the World Wide Web in the nineties and the wider trend of social web applications since the turn of the century. Producers of hardware and software systems have been attempting to develop interfaces that will direct users to produce the interaction desired by the system they represent.
Discussions about interface design have constantly revolved around the axis of experience and usability, presented sometimes as contradictory and sometimes as complementary assets of ‘good interface design’. As a tool, the success of an interface is defined by its ability to generate the desired interaction on behalf of the user and to have the user understand and act by the set of rules the system defines.
It is important to mention, though, that interfaces existed long before the personal or the institutional (academic/military) computer. Actually, they have been around longer than culture or man-made tools have. Yet the rapid development and consumption of interfaces have made them an important and influential part of contemporary culture.
Interface is defined as a point of interconnection between two independent systems. This definition sheds a different light on the way we have learned to know the interfaces around us. If the sides interacting through the interface are two independent systems, then one would expect the interface itself to maintain that balance and not favor one system over the other. This essay will address the question of control and agency embedded within interfaces and attempt to find where interface is situated within the map of power. It will also use several examples and attempt to propose tactical and strategic approaches to act within this conflict.
Encoded/Decoded
One of the first fundamental interfaces we all use is language. Semiotics is occupied with questions of interface down to the level of the building blocks of meaning. At that level, interface is both what differentiates symbols as independent units and ‘the glue’ that connects them into new units.
Linguist Tanya Reinhart explored the low-level interface between syntax and systems of sound. She researched the counter-influences of context and meaning and the role of linguistic interfaces on multiple levels of language. Her work is very influential on the margins of human and computer languages. Reinhart’s work investigated both the interface between low-level symbols and the very high level of media theory, where information (and disinformation) lies on the interface between context and meaning.
Researchers working at the intersection of computer science and linguistics try to analyze processes both in human and in computer systems. At the highest level we find the human-computer interface – a point at which the two differentiate both as independent systems and as a new constructed unit. This requires us to question both the interface and the nature of the new unit it constructs.
The oral communication circuit as defined by Ferdinand de Saussure involves a symmetrical feedback loop: a message expressed through speech from Alice’s mouth and received through Bob’s listening ears. Then the feedback loop continues as Bob constructs a new message and expresses it through speech, to be received by Alice. This communication circuit depends on equal sharing and use of the interface – in this case, spoken language.
British Sociologist and cultural theorist Stuart Hall rejected this model of what he called ‘textual determinism’ and suggested that the code used by Alice to encode a concept into an oral message is not necessarily the same code used by Bob to decode the heard message into a concept. In an essay from 1980 titled ‘Encoding/Decoding’ he suggests that rather than being a passive action of receiving, the recipient of the message is actively involved in the communication circuit and decodes the message into a concept. The use of code in the communication circuit will be key for examining questions of interface.
We have established that language is a communication interface for sending and receiving messages, but it involves other cognitive interfaces – the codes responsible for encoding and decoding the messages and concepts. While language must be shared for the Saussurean circuit to maintain itself, the codes used for encoding and decoding can be, and often are, different.
Mass media communicates not only the message but the recommended code to decode it with. Here is an example:
Sender’s concept: Buy Nike products.
Sender’s encoding: Nike stands for the free spirit of sport. Rather than only fashion, Nike products embody sport itself, and the athletic abilities they stand for are at the core of the American dream.
The program / (meaningful) discourse: A clean and aesthetic TV ad showing Michael Jordan in an empty basketball court shooting hoops and slam-dunks in slow motion with only the sound of his shoes squeaking on the floor. Nike’s logo appears at the last 3 seconds of the ad.
Up to this moment the sender has complete control over the message. The sender hopes for the receivers not only to receive the message but to actually identify with the proposed identity and to adopt it as their own. Indeed, adopting the suggested identification model would lead to decoding the message with the original code encoded into it and would get the desired message across. This is what Hall calls the Dominant Code. In a case like this the receiver can be expected to continue the process in this fashion:
Receiver’s decoding: Nike stands for everything I believe in – the free spirit of sport, creativity, the love of the game, athletic excellence and the American dream.
Receiver’s (produced) message: Nike is a brand for me, I should buy their products.
According to Hall, another proposed option for decoding is the Negotiated Code. It means the receivers understand the message and the encoding process; they do not dismiss it altogether, but do not automatically buy into it either. In this case the circuit might continue like this:
Receiver’s decoding: Nike have a fancy ad, it sure is aesthetic. I guess they have a new line of shoes as well. I like sports but that has nothing to do with my taste in fashion.
Receiver’s (produced) message: At the end of the day, I wouldn’t mind buying their shoes, Nike are as good as the next brand, when I do need to make this decision I will choose shoes based on my own reasons, not because I like the Chicago Bulls.
The third code would be the Oppositional Code. The receiver understands the content of the message and the code it is embedded with, but chooses to dismiss that code altogether and use another code to decode the message in opposition to the initial intent of the sender:
Receiver’s decoding: Nike attempt to buy me with Michael Jordan, his Slam-Dunk and the American dream, while all I see are sweatshops and labor exploitation of the worst kind.
Receiver’s (produced) message: I will never buy any Nike product, I might even consider switching to the side of the Lakers.
These three interpretations of the same encoded message reveal the complexity of the communication circuit. Saussure spoke of oral communication, which creates a symmetrical circuit of interaction; Hall spoke of mass media and specifically television, a unidirectional medium. The communication interface is crucial in the process of encoding and decoding – it is the structure that formalizes the message and defines the nature of the communicated relationship. Yet in both speech and television the interface is invisible to us: it has very explicit rules and we are aware of its abilities and limitations. When in oral dialog mode, we expect to be given the chance to speak back and to use the same interface for discussion as our correspondent uses – we expect a symmetrical one-to-one relationship. With television we expect a one-to-many asymmetrical relationship – we consume the audio-visual televised message with no response expected from us. We can always switch to another channel, but that would not directly change the message of the broadcaster, merely switch the broadcaster itself.
The Web’s Communication Diagram
Language is a common interface; television is not. We do not respond to the television since it does not provide us with an input interface. The internet is a many-to-many platform which, through formalized interfaces, allows different types of communication. We have the one-to-one communication diagram of a video-chat or a networked chess game, and the many-to-many diagram of chat rooms and IRC channels – but what is the case of the web?
The web is celebrated for dramatically lowering the threshold for publishing, with new creative platforms and accessible interfaces potentially turning any user into a media producer. The relatively low price of hosting, the simplicity and flexibility of HTML and the interconnectivity model of the hyperlink have made the web a revolutionary tool for gaining ownership of media.
The web contains interfaces that allow for one-to-one, many-to-many or one-to-many interactions, unidirectional or bidirectional. This multiplicity complicates the web’s communication diagram. Here again the key to exposing the diagram is the question of identity. In both the symmetrical one-to-one diagram of oral dialogue and the one-to-many broadcast diagram of television, the identity of the communicating systems is defined, and so is their role in the communication circuit. In the case of the web this identity is a bit harder to distinguish. Let’s look at a few examples.
Even today, with the ubiquity of comment interfaces on blogs and social media sites, many websites function in a classic one-to-many broadcast format without offering interfaces for user input. This is similar to the case of television – the user’s interaction defines the form of consumption: which pages to browse, at what pace, when to scroll the page and so on. All the content is predefined by an identified system – the site’s editing board. It is in the site’s benefit to fit its content to the model of the audience, just like the Nike TV ad fit its message to its audience’s value system. Yet the audience can be abstracted as a general public, since its passive consumption of information is not very relevant to the nature of the communication cycle. The only identity represented through this dominant interface is that of the publisher.
Other sites allow visitors to post text comments. Most blogs are built on this model. In this case the owner allows her audience to be active consumers of the information and to take part as authors of content within it, through a predefined interface. The communication cycle is still one-to-many, though a second layer of feedback is added, and the audience of the blog can develop many-to-many interaction between themselves based on the context set by the blogger. The identities in action are first and foremost that of the blogger, and then those of the community of followers that has gathered around the blogger and her writing.
Other so-called social web services, such as Flickr and Twitter, are based on user generated content and primarily offer interface and hosting as their product rather than content. The user becomes the author and assumes a perceived ownership over the content. The webpage is empty without the participatory content and is dependent on it. This diagram might appear identical to that of the second model, but it is in fact inherently different. The identity of the author is merely that of a privileged audience member. The actual identity in power, formalized through its interface, is that of the hosting site. The owner’s interface, again, much like in the non-interactive example, sees the members as an abstract public rather than a defined and identified community. Defined communities might emerge within this interface, but the choice of interface – and in that sense the context and format of the interaction – is totally dependent on the service provider.
We can see in all these models that the website’s owner may give away some control over the content but always maintains control over the interface itself. Even the highest level of interactive web content does not allow authorship of the interface – and so while content can be authored by the owner of the site or its audience, the rules of engagement are always defined by one side of the communication cycle.
Commons-Based Peer Production – A New Ideology
One of the most radical interfaces of the web is that of Wikipedia – The Free Encyclopedia. What makes Wikipedia’s model so exceptional, and has made it the subject of extensive discourse and research, is again the relationship between content, interface and identity. In Wikipedia’s case there is no single identified author identity but peer-produced content – the Wikipedia article. Yale Law professor Yochai Benkler, one of the most influential theorists of free culture and network production, defines this phenomenon as a new force in the market. He coined for it the term Commons-Based Peer Production:
I call this commons-based peer production. Commons (as opposed to property) because no one person controls how the resource is used, they are either open to the public or a defined group. Peer production because it is done through self-selected, decentralized individual action.
The Wealth of Networks: How Social Production Transforms Markets and Freedom / Yochai Benkler (2006)
Benkler mentions Wikipedia as a prominent example in his writing, but stresses that a wiki is not just some kind of magical interface ingeniously designed to generate high-quality content. It has been the community of editors and moderators that, from the early days of Wikipedia, made sure the work of vandals, spammers, pranksters or even just inadequate contributors is balanced by moderation, and that this valuable yet vulnerable network-produced good is protected.
Wikipedia is a collective identity involving a complex governance structure. It might be the most liberal example of a successful web application we can see today, and is an inspiring proof of how alternative social structures can emerge on the web. Still, its communication circuit has a pattern similar to that of the Nike commercial:
Sender’s concept: We want you to edit content only if you can really make a constructive contribution to better the quality and accuracy of the article and towards the shared goal. Wikipedians should work towards a wide consensus; the articles are not meant to present individual expression or discussion.
Sender’s encoding (embedded in the interface): Wikipedia is a common effort and a valuable resource to all its users. The page you are browsing is the product of hard voluntary work by a group of people dedicated to a mutual goal. We invite you to be a constructive part of this group. Should you decide that your input can benefit this work, then and only then should you click the edit button, learn the unique editing syntax and make the edit. We expect you to respect the power we invested in you and to double and triple check before pushing your edits live. Remember your edit is always temporary and can be changed or reverted immediately by any other user or moderator. We trust you and believe you would act in the benefit of the greater good.
The program / (meaningful) discourse: Minimal interface, very rational and utilitarian. The article page is not editable in itself; edit links are available for different parts of the page. The link leading to a discussion page appears at the top and is deemphasized (only a fraction of Wikipedia’s users have ever noticed its existence). The interface is unique – it is not impossible to understand, but it does require learning, adjusting and a bit of trial and error. The interface allows you to preview your edit prior to submitting it.
Wikipedia’s ideology is deeply encoded into its interface. It is run by a not-for-profit organization and is built on the practices and ethos of the Free Software movement. This carefully crafted and tightly policed ideology is the main source of its success. We know for a fact that Wikipedia’s dominant code is widely exercised by tens of thousands of editors who follow the message and practice the ideology. We can firmly say that this ideology is also practiced by the millions and millions of Wikipedia users who do not edit entries, feeling not knowledgeable enough to contribute, or not worthy of taking part in this almost religious practice.
The Revolution Will Not Be Verified
Attempts at oppositional or even negotiated decoding of Wikipedia’s participation ideology, like spam and vandalism, are strictly reverted and blocked by Wikipedia’s moderators – volunteers who have climbed up the Wikipedia hierarchy by proving to be loyal to the shared ideology and worthy of more authoritarian powers.
On June 17th, 2005, inspired by the way Wikipedia successfully maintains a dominant code in an open and critical environment, the Los Angeles Times launched a new feature on their site which they called Wikitorials. The idea was that editorial articles would be offered as wiki articles for the readers to participate in and collaboratively edit. On June 19th, after two days of seeing their editorials spammed and vandalized, this brave initiative in open journalism was canceled. The following message was left on the page: “Unfortunately, we have had to remove this feature, at least temporarily, because a few readers were flooding the site with inappropriate material. Thanks and apologies to the thousands of people who logged on in the right spirit.”
“If you’re going against what the majority of people perceive to be reality, you’re the one who’s crazy”
Stephen Colbert in The Colbert Report
An interesting example of leadership and of conflicting codes happened around the Wikipedia article on elephants. In the TV show The Colbert Report, Stephen Colbert plays a satirical character: a right-wing television host dedicated to defending Republican ideology by any means necessary. For example, he constructs ridiculous arguments denying climate change, unconcerned that this completely ignores reality – which, he claims, “has a Liberal bias”.
On July 31st, 2006, Colbert ironically proposed the term Wikiality as a way to alter the perception of reality by editing a Wikipedia article. Colbert analyzed the interface in front of his audience and performed a live edit to the Elephants page, adding a claim that the Elephant population in Africa had tripled in the past 6 months.
Colbert proposed his viewers follow a different social pact. He suggested that if enough of them helped edit the article on Elephants to preserve his edit about the number of Elephants in Africa, then that would become the reality, or the Wikiality – the representation of reality through Wikipedia. He also claimed that this would be a tough “fact” for the Environmentalists to compete with, retorting “Explain that, Al Gore!”
It was great TV, but created problems for Wikipedia. So many people responded to Colbert’s rallying cry that Wikipedia locked the article on Elephants to protect it from further vandalism. Furthermore, Wikipedia banned the user stephencolbert for using an unverified celebrity name (a violation of Wikipedia’s terms of use).
If we refer back to our definition of interface as a point of interconnection between two independent systems, we can understand how both Wikitorials and Wikiality pushed the wiki interface beyond its ideological context – exposing the delicate ideological balance it is situated in. Wikipedia (as an independent system) strives to maintain a productive relationship with each of its users (the other independent systems) through its ideologically encoded interface.

The LA Times Wikitorials experiment attracted an audience similar to Wikipedia’s – Jimmy Wales, the co-founder of Wikipedia, actually contributed one of the first edits to Wikitorials – but it attempted to borrow the interface model without understanding that the LA Times in itself represents a different ideology. The LA Times is not a not-for-profit and it does not stand for Commons-Based Peer Production. It has a history of representing exclusive authority and maintaining tight control over content. The message it was encoding could not conceal these inherent differences of identity and ideology between its own and Wikipedia’s. The LA Times could not invest the patience and endure the growing pains that Wikipedia suffered in its early stages of self-definition. After only two days it used its ultimate authoritarian power as the owner of the interface – it called the experiment off.

The Colbert Wikiality attack used Wikipedia as a model and a platform for a mutually constructed ideology, and created a spectacle of information vandalism along the lines of the Yes Men’s Dow Chemical TV prank. It was in a way the opposite of Wikitorials: the same system (Wikipedia) offers the same interface (the wiki interface) to an audience that is dedicated to a different ideology than its own. Hosting the wiki interface under the LA Times domain encoded a different message into the interface and generated oppositional decoding. In the Colbert Report case the same Wikipedia message encoded into the interface was at play, yet a deliberately oppositional decoding led the Colbert fans’ edits, practicing the dominant code of Colbert’s televised Wørd.
We can see by now that ideology is embedded in the interface, and that often the interface acts as a message in itself. When McLuhan wrote “the medium is the message” he was implying that much of the communicated message is embedded in the medium of choice. He was referring to the media of atoms, a time when a medium, like television, had a defined interface and a stable control mechanism. Today he might have revisited this quote and gone for “the interface is the message”. In the case of the web this message is almost always broadcast in the model we are familiar with from earlier, less interactive forms of mass media. In both these wiki cases, though control was definitely distributed through the wiki interface, one side of the communication diagram always held the keys. In both cases this side chose to exercise its authoritarian power to ‘call the deal off’ when the desired participation was not achieved. The fact that one side can break the deal and the other can’t is a part of the interface and reveals its bias – a bias that has become the foundation of how we know the web.
Unknown Knowns in On-line Urban Space
We can now frame the paradox of user-interface in the age of proprietary software. While interface attempts to stand between two independent systems, to define their borders and their rules of engagement, the user-interface in software is almost always defined by the side of the software developer. In the diagram of software/interface/user, the interface is controlled by the side of the software. This often proves to be an efficient model, but we should look beyond efficiency. As a form of cultural practice, user-interfaces teach us how to interact with systems and how to comply with their rules. The paradigm of user interface as compliance with biased rules of engagement is a way of manufacturing consent.
“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”
Donald Rumsfeld, March 2003
In an essay titled Design As An Ideological State Apparatus, Lacanian philosopher Slavoj Zizek reacted to Rumsfeld’s attempt at amateur philosophy and suggested: “What he forgot to add was the crucial fourth term: the ‘unknown knowns,’ things we don’t know that we know, which is precisely the Freudian unconscious, the ‘knowledge which doesn’t know itself,’ as Lacan used to say.” Zizek claims that it is with these unknown knowns that design deals. It is also the unknown knowns that are embedded into interface design. We “don’t know we know” that we don’t make the rules of the interface. We “don’t know we know” that we follow the dominant ideology encoded into the interface. We “don’t know we know” that our compliant use of interface is also defining our “know-how” of interfacing with other systems of control.
The great promise of the web was that it lowered the threshold of accessibility to media publishing, both for the consumer and the producer. Writing HTML is fairly easy and does not require any programming skills. The most basic and most powerful interface of the web – the hyperlink – is in the grasp of any user. HTML is an open standard and is not exclusively controlled by a single party. If so, what are the unknown knowns of the web? What are the constructs of the web that we have come to take for granted?
HTML is the common denominator for web development and is involved to some extent in any interface on the web – that is, of course, as long as you are the owner of the website. The construct of the Domain Name System (DNS) as it is used on the web creates a link between three elements: identity, control and space. While our experience of everyday life in the physical world formalizes these three elements and unites them in the body (as identity, as control and as space), on the web they are projected onto the webpage. Control over the space of the webpage is in the hands of the identity behind it.
Unlike the body, though, the online space is experienced as an information retail space – inviting people to wander through it and shop for information. As in retail space, private space is maintained and the rules of engagement are defined by the identity in power. Unlike in physical space, though, private control is not contrasted by other forms of control; it is the only control diagram on the web. Every space is owned and controlled. The web has developed like a hive of networked benevolent dictatorships which practice their control through interface.
“A unitary urbanism — the synthesis of art and technology that we call for — must be constructed according to certain new values of life, values which now need to be distinguished and disseminated.”
Gil J. Wolman, September 1956
Lettrist International delegate to the Alba Conference
There is currently no public space on the web. In the fifties and sixties, Wolman and the Situationist International warned us of the shift from public to privately controlled urban spaces. This accurate prediction about urban space has materialized perfectly in cyberspace – a social space completely controlled and privately owned.
While this critique of the web might share a lot with the Situationist ideas of Unitary Urbanism, we must distinguish these two social spaces. Unitary Urbanism and ideas advocated by newer movements like Reclaim The Streets are based on a somewhat romantic idea of preserving and reclaiming the city’s public spaces. Yet, the web has never had any public space.
The closest thing to public space on the web, in my view, would be Wikipedia, which offers an alternative to the identity/property paradigm and offers a democratic governance system that potentially allows any user to gain access to positions of power. But as we have already established, Wikipedia itself does not make its interface (and ideology) as accessible as its content. Wikipedia’s governance is complex to the level of a bureaucratic catch-22 – to attempt to change the system you need to become its greatest disciple. In physical-world terms Wikipedia would be a public service institution. While unitary urbanists speak of the transition of the city’s social life from the town square to the mall, the entire web has been built as a mall and currently has no model for a town square.
Moreover, in our context the web is formally closer to ideology – in its immateriality, in its artificiality, in its detachment of body from identity and in its practice of information. This similarity between ideology and interface is exactly what makes the web an important field for social and political practice.
Our compliance as web users is the web’s biggest unknown known. We can’t think of the web in any other way. Wikipedia proved this perception can be challenged when it comes to user generated content, but we have yet to see a substantial challenge to the dominance of interface. This might be a call for a new approach – user generated interfaces.
Cracks in the Walls
It seems the call for more openness, further information mobility and a challenge to the bordered-website paradigm is coming from all ends of the web, and new initiatives and technological trends are being promoted.
The past few years have seen a growing tendency to embrace web standards that keep the content, the structure and the presentation of the page separate from each other. The content of, let’s say, a blog post can be structured using the title, an excerpt from the text body, a mention of the time of publishing, the category and so on – and then presented with a different composition, colors and fonts.
Using web standards (as promoted by the W3C) has many benefits for web development, but it is also the propagator of the huge information mobility that we see on the web today. When the content can be extracted from the context of the page, it can be published in different formats and can ‘travel’ beyond the walls of the webpage.
One of the main developments making use of this information mobility is the RSS feed – a way to store content in a file that can then be easily structured and presented in different contexts. And so new interfaces are being built to deliver information and serve it from different sources on the web. This content mobility beyond the private webpage not only does not deprive websites of the desired incoming ‘traffic’, but often generates more traffic and greater exposure for previously anonymous sites.
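As a rough sketch of what this ‘travel’ looks like in code: the few lines of JavaScript below pull the items out of an RSS feed and re-present them in a foreign context, stripped of the source site’s layout. The feed URL is an invented placeholder, and the snippet assumes a browser context where the feed may be fetched (same-origin or CORS-enabled).

```javascript
// A minimal sketch: extract items from an RSS feed and re-present them
// in a new context. The feed URL is hypothetical; assumes the browser
// is allowed to fetch it (same-origin or CORS-enabled).
fetch('https://example.com/blog/feed.rss')
  .then(function (response) { return response.text(); })
  .then(function (xml) {
    var doc = new DOMParser().parseFromString(xml, 'application/xml');
    doc.querySelectorAll('item').forEach(function (item) {
      // Only the structured content travels: title and link.
      // Composition, colors and fonts stay behind on the source page.
      var link = document.createElement('a');
      link.href = item.querySelector('link').textContent;
      link.textContent = item.querySelector('title').textContent;
      document.body.appendChild(link);
      document.body.appendChild(document.createElement('br'));
    });
  });
```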
Information feeds have become a standard that has generated many innovations in the field of interface design and has promoted the idea of the web as a continuous information space rather than a collection of segregated private spaces. Today there is a demand for many web services to open up and provide hooks for external interfaces to use the data without having to browse to the website and conform to the specific structure and presentation used there. This demand expands beyond just text feeds: media is also being fed through image, audio and video aggregators. Even interfaces are moving beyond the borders of the webpage. The most prominent example in this field is Google Maps. It provides both a powerful online map interface and a hook into that interface – what is called an Application Programming Interface (API). Unlike the User Interface (UI), the API allows simple ways to programmatically request services from a piece of software. What this means is that the powers of one piece of software can be shared by another. In the case of Google Maps, it has created an explosion of online mapping applications and, together with GPS technology, has substantially contributed to the renewed interest in mapping and geography.
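To illustrate the UI/API distinction, here is a minimal sketch against the Google Maps JavaScript API. It assumes the standard Maps API loader script has already been added to the page (exposing the global google.maps namespace); the coordinates and element id are invented placeholders.

```javascript
// A sketch of driving Google Maps through its API rather than its UI.
// Assumes the Maps JavaScript API loader <script> is already on the
// page, so the global `google.maps` namespace is available.
function showMap() {
  // The page supplies its own container; Google supplies the map engine.
  var map = new google.maps.Map(document.getElementById('map'), {
    center: { lat: 40.7128, lng: -74.006 }, // hypothetical coordinates
    zoom: 12
  });
  // Place a marker programmatically - no clicks in Google's own UI.
  new google.maps.Marker({
    position: { lat: 40.7128, lng: -74.006 },
    map: map,
    title: 'A user-generated point of interest'
  });
}
```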
We cannot suspect Google of being just plain generous with its services; obviously Google benefits from having user generated content routed through its API and copied into its databases for its ongoing total-information project. Proprietary software companies have an incentive to invest in APIs to extend their product’s penetration and their users’ dependency on the service. It allows them to offer hooks into the service while maintaining control over the source code and not having to open it – enjoying both worlds, the open and the proprietary.
While search engines have always attempted to survey the information on webpages and to ‘point at them’ from afar, a number of services have developed to formalize website metadata based not on web-crawler algorithms but on user generated content. A major trend in that field is social bookmarking. Social bookmarking services gather links, classified and tagged by users and shared between them. Various service models like Del.icio.us, Digg and to a certain degree even Twitter are working in this field.
Social bookmarking has become a standard process for collaboratively gathering metadata about pages and, to some extent, a way to annotate a site from afar. It further emphasizes the tendency towards interconnectivity and towards more user authorship in the websites users browse.
Another technological field of research that has been around for a while but has yet to reach its full potential is the metaweb. Metaweb stands for web applications and platforms that attempt to expand the interactive features offered by webpages. The most common metaweb applications are social annotation applications, allowing users to leave text on pages (in many cases using the sticky-note metaphor). Another common metaweb application resembles the function of a highlighter pen and allows users to save marked-up text on a page.
Metaweb applications often use a browser-extension architecture to add the meta functionality to the page, or sometimes work through a proxy page – a copy of the original webpage content under another domain that includes the meta interface. Metaweb applications are pushing the envelope on the way we have learned to experience the web, as they offer us the chance to carry our own interface with us as we browse.
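As an illustration of the sticky-note metaphor, a few lines of JavaScript – run, say, from a bookmarklet or an extension’s content script – are enough to lay a visitor’s own annotation layer over any page. This is a toy sketch of the idea, not any particular product; everything stays client-side, so the host server never sees the note.

```javascript
// A toy sticky-note: an annotation layer the user carries onto any page.
// Client-side only - the host server never sees the note.
function addStickyNote(text, x, y) {
  var note = document.createElement('div');
  note.textContent = text;
  note.contentEditable = 'true';     // the visitor can keep editing it
  note.style.position = 'absolute';
  note.style.left = x + 'px';
  note.style.top = y + 'px';
  note.style.background = '#ffef9e'; // classic sticky-note yellow
  note.style.padding = '8px';
  note.style.zIndex = '9999';
  document.body.appendChild(note);
  return note;
}

// Example: annotate the page without the owner's permission or knowledge.
addStickyNote('I disagree with this paragraph.', 120, 340);
```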
It seems that even though webpages have always been built on a model of individualism, ownership and privatization, more and more users are demanding a public space on the web. There are several ways to challenge interfaces, to reexamine the privatized model of the web and to promote these user generated interfaces. They require an understanding of current technological trends and an open discussion of the power structures behind interface. We can spot two approaches to this task: one in the practice of tactical media and the other in what I would refer to as strategic media. Each can be used, apart or in conjunction, to reclaim user agency in the interface, and to claim interface as a proposition rather than a construct.
Something To Do: I – Tactical Media
There are many definitions of tactical media. All of them speak of this practice as a short-lived ‘hit-and-run’ engagement in opposition to a target of power.
“The goal is not to destroy technology in some neo-Luddite delusion, but to push it into a state of hypertrophy, further than it is meant to go. Then, in its injured, sore, and unguarded condition, technology may be sculpted anew into something better, something in closer agreement with the real wants and desires of its users. This is the goal of tactical media.”
Alexander Galloway
Protocol: How Control Exists After Decentralization
In the case of interface, the goal of tactical media is not to refrain from engagement with systems, but rather the opposite – to extend it. One of these approaches is hacking. Hacking is more than just a technical skill; it is an important approach to the world we are living in today. In a world that becomes more controlled and consolidated from day to day, hacking stands for examining relationships with a fresh eye – an approach very close to Hall’s negotiated decoding. And indeed, what we should be promoting is for interface to become more negotiated.
Hackers attempt to exploit the system, and this does not have to involve complex programming. For example, an interesting tactical hack on interface is the Google Bomb. In 2003 Anthony Cox created a parody of the “404 – page not found” browser error message in response to the war in Iraq. The page looked like the error page but was titled “These Weapons of Mass Destruction cannot be displayed”. The rest of the page mimicked the original text of the error message, twisting it into a prank concerning the lack of proof that Saddam possessed any WMDs.
The page became a successful, amusing meme and gained a lot of popularity and traffic in its first couple of weeks. Four months later, after the meme had already died, it was reborn in another form: it turned out that when searching for the term “weapons of mass destruction”, Google returned Cox’s prank site as the first result. Google’s PageRank algorithm had tallied all the pages linking the term “weapons of mass destruction” to Cox’s site and ‘assumed’ that this meant the site would be the most relevant result in a search for “weapons of mass destruction”. The prank’s metaphorical search-result placement was initially pure coincidence, but users decided to embrace it, and a big grassroots campaign was launched across the blogosphere to mention the words “weapons of mass destruction”, link them to Cox’s prank page, and thereby ensure the hack was sustained.
Google prides itself on its unbiased algorithms and their mathematical accuracy, but the Google PageRank technology – the heart of Google’s search engine – can in fact be decoded as a latent interface. It is designed to crawl the web and survey its content to decide which site is considered by enough other sites to be a reliable source. In the case of Google Bombing, data-mining is deliberately appropriated to inject a specific page as a search result. Google’s top search results are a desired goal and are not at all meant to be interacted with. Yet the Google Bomb of mass distraction managed to divert the system and not only oppose the Bush administration through a political parody, but also oppose Google’s administration of the web and game its so-called unbiased rationale. And so, thanks to the Google Bomb hack, according to Google circa 2003 the most relevant answer to the search for weapons of mass destruction was “These Weapons of Mass Destruction cannot be displayed”.
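For reference, the PageRank score as published by Brin and Page (given here in its normalized form) makes clear why inbound links are the lever a Google Bomb pulls; the anchor-text matching the bomb actually exploits is a further signal layered on top of this score:

\[
PR(p) \;=\; \frac{1-d}{N} \;+\; d \sum_{q \in B(p)} \frac{PR(q)}{L(q)}
\]

where \(B(p)\) is the set of pages linking to \(p\), \(L(q)\) is the number of outbound links on page \(q\), \(N\) is the total number of pages, and \(d\) (typically 0.85) is the damping factor. Every new page linking “weapons of mass destruction” to Cox’s prank grew the set \(B(p)\) and fed the campaign.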
Rather than using code, Google Bombing uses reverse engineering – an analysis of a system’s structure in order to learn its processes and possibly introduce change into the system. Reverse engineering is a practice at the heart of hacking, but it is also widely practiced in political activism and by movements for social change. If we can practice reverse engineering on software, maybe we can apply the same approach to reverse social engineering.
Tactical media should question interfaces and promote a critical discussion of their role in society. Tactical media practitioners should offer hacking spectacles such as the Google Bomb, but also inspire and educate others in the approaches of hacking – hacking of software, hacking of hardware, hacking of interface and hacking of the social structure.
Something To Do: II – Strategic Media
“For there to be such a thing as tactical media implies that there are also strategic and logistic media. These terms go together, and describe 3 different levels at which contestation can take place. If the tactical is local and contingent, the strategic involves planning and coordination. The logistic would then refer to systematic, global and long range organizations of forces.”
McKenzie Wark
Strategies for Tactical Media – Realtime (Oct/Nov 2002)
Strategic media is a different approach from the short-lived hit-and-run; it is a “hit-and-stay” method of opposition. It often shares some of the goals of tactical media and sometimes even involves tactical practices as part of a larger-scope strategy. Strategic media is a more complex practice, as when you “hit-and-stay” you risk being called to take responsibility for your actions – not only by the target you oppose or other authorities, but maybe even by your peers in the struggle. Unlike its tactical younger brother, strategic media requires patience and leadership. Strategic media comes from an inclusive approach to social and political conflicts – practitioners of this strategy don’t see themselves as external to the culture they are attempting to change. I would argue that identifying oneself within the system she opposes makes her even more committed to the struggle. Strategic media is indeed harder to execute, requiring further commitment and offering less immediate satisfaction. But it promises a more sustainable approach to system building – a system that can mature and grow and not only oppose power, but actually propose viable amendments.
Strategic media shares a lot of the values of parasitic media in its attempt to influence the system from within. It is always a conflicted practice and is bound to produce some miserable failures – but while tactical media wins some battles, it often loses the war.
One of the main strategic media practices we have seen bloom, especially in the past two decades with the rise of the internet, is Free and Open Source Software. This movement is not led by top-down ideology, but rather by a very basic tendency of people to seek freedom within systems. The Linux project, the Firefox browser, the Apache server, the Creative Commons licenses and (once again) Wikipedia are all part of what Benkler defined as the new ideological force in the market – commons-based peer production. All of these examples, and thousands of others, can be referred to as strategic media practices.
Greasemonkey is a good example of how a tactical media practice in the field of interface has turned strategic. Greasemonkey is an extension for the Firefox browser that allows users to install “userscripts” – JavaScript hacks that execute automatically and modify the webpage on the fly. That is, they change the page that is displayed to the user without affecting the source of the website on the host server. Greasemonkey allows users with coding skills to add, remove or fix features on the page they are browsing; it also allows them to integrate content from other sites and web services into the page.
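To make this concrete, here is a minimal, hypothetical userscript (the site, URLs and class name are invented placeholders): the ==UserScript== metadata block tells Greasemonkey which pages to run it on, and the body rewrites the page in the visitor’s browser after it loads.

```javascript
// ==UserScript==
// @name        De-clutter Example News
// @namespace   http://example.com/userscripts
// @include     http://news.example.com/*
// ==/UserScript==
// (The @namespace and @include URLs above are hypothetical placeholders.)

// Runs in the visitor's browser after the page loads; the copy of the
// page on the host server is never touched.
(function () {
  // Remove every element this user has decided is clutter.
  var banners = document.getElementsByClassName('banner');
  // getElementsByClassName returns a live collection, so copy it first.
  Array.prototype.slice.call(banners).forEach(function (el) {
    el.parentNode.removeChild(el);
  });

  // Add a small feature the site never offered: a word count in the title.
  var words = document.body.textContent.split(/\s+/).length;
  document.title = document.title + ' (' + words + ' words)';
})();
```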
It was first published in December 2004 by Aaron Boodman, who according to Wired magazine is “…a software engineer who got sick of dealing with the Web on other people’s terms”, and it has been developed since day one as an open source project. Three months later Boodman started receiving code contributions from other developers; another three months later it had become the third most popular extension for the Firefox browser. Later that year a book titled ‘Greasemonkey Hacks’ was published in the O’Reilly series, and the community of hackers, which has developed hundreds of thousands of userscripts, has been growing ever since.
Greasemonkey, like any other successful open-source project, would never have succeeded without the initial leadership of Boodman and the other emergent hackers who got the community excited about the process. The very nature of JavaScript is its openness – since it runs on the client side (on the user’s computer), any user can easily open a userscript and, depending on her JavaScript skills, adjust and modify the script to fit her own needs. Greasemonkey did not only offer a channel to easily hack interface – it also made sure that the new hacks are themselves easily hackable.
We can think of userscripts (most of them consisting of just a few lines of code) as tactical media interventions in webpages, while Greasemonkey, as a platform, would definitely be a strategic media initiative – offering a standard and a committed leadership that was well received by the hacker community.
Conclusions
The web has become possibly our major interface to globalization; it has inspired us to engage with it and has been teaching us how to. Interface is an extremely important field of political action today, since it is not only our engagement with software and networks that is on the line. It is our perception of engagement and responsibility in a world that is drifting away from social structures based on human relationships – involving mutual dependencies and trust – and further into formalized technocratic structures based on numbers and statistics, leading us to segregation, privatization and profit/loss-based relationships.
Interface is the key to responsibility in political structures. Democracies offer an interface to governance, not only as a way to base the government on the will of the people, but as an interdependent system that implies distributed responsibility. Processes of privatization and segregation have been affecting the way we perceive democracies. The latent interface rendering the single vote almost completely powerless has resulted in alienation and a lack of trust between governments and the public they represent.
I see the crisis of democracy as an interface problem. When groups of power can interface with governance through finance, the idea of equal representation is broken. The way the current political system is set up in the US, allowing political lobbying and fundraising for candidates, is practically an interface for corruption (not rendered as such only because of its legality).
There is no doubt that voting through money is an anti-democratic interface. At the beginning of the 20th century the perception of power was different – women were not allowed to vote in the US until 1920. Since World War II, questioning the capitalist democratic model was considered treason and was a social and ideological taboo. The fall of the Berlin wall and the collapse of the Soviet bloc (which also marks the rise of tactical media) have provided a new opportunity to reexamine the interfaces of American democracy. But the ruling ideology of this era is well embedded into the structure of the web; it is one of privatization and passivity in front of systems.
Bureaucracy became the interface between citizens and governance. The mall replaced the town square as an interface to public space. Financial power became an interface to democracy, and to some extent the term ‘democracy’ became an interface for financial power.
Today, interfaces are designed to channel our behavior and the way we interact with the systems behind them. They are revealed to us as tools. We have learned to trust them and have grown dependent on them. We have gotten so used to our interfaces that we forget to critically examine them and reveal their biases. We forget to ask who designed the interface, and on whose behalf? How was it introduced to us? What is our desired interaction with the system and how is it channeled or not channeled through the interface?
While they offer us formalized interaction, software interfaces teach us not to expect to define those rules of engagement. This is a call to regain agency through hacking, open source and media activism. We should use the practices of tactical media and strategic media to oppose the logistic media of global power. There is an inherent conflict in interface – a conflict we need to engage with and attempt to subvert. New ideologies are developing from global interconnectivity, from the Free Software and Free Culture movements and from the different facets of DIY and hacker cultures. These new ideologies are developed from the bottom up – from communities sharing mutual goals rather than from those in power defining an arbitrary abstract public. This new action demands a renewed social dependency, openness, creativity, leadership and trust. The power balance of interface can be reconsidered. It is time for us to sit down and rewrite our rules of engagement.