What does "mnemonic" mean in relation to computers?

Mnemonics in graphical user interfaces

by Oliver Wrede
April 1996
published in "formdiskurs - magazine for theory and design", January 1997

[available in German]

Columbus, it was prophesied, would fall off the edge of the earth at the end of the sea. The first railway travelers were met with the superstition that, if they traveled too fast, they would leave their souls behind. Memory travelers of the seventeenth century were believed to risk flooding their heads with images, and today there are people who argue that virtual reality alienates the imagination: insofar as one interacts with it, the machine is reduced to the perceptual and thereby compensates for a lack of creative ability in the user, an ability that not infrequently peaked in toddlerhood.

At first glance this seems a naive thesis, but the concerns raised do not necessarily arise from conservative or anti-technology attitudes. They can also be combined with a progressive assessment that finds a self-reflective and emancipatory approach to the new media lacking, not only at the level of design but also, and above all, at the level of use. This may be one reason why a number of articles have recently taken up the subject of remembrance and memory: in multimedia, which digitization and networking are freeing from technical restrictions, new conditions for forms of use have to be explored. Mnemonics (also known as the art of memory), as a special method of accessing memory, plays a particular role in this context because it is not in principle dependent on the constant changes of technological development.

Information and communication are becoming direct, omnipresent and simultaneous; they no longer represent the authentic and permanent but are potential, real only in their effect [1]. In the existing information pools, electronification is leading to a veritable bursting of the dam, even if the proportion of electronically published information is still small. The information catastrophe predicted by critics of this development is already underway. In public opinion, everything seems as informative as it is illustrative. The most durable lifebuoy in the flood of information is not infrequently a watertight strategy of ignorance, closely followed by intelligent agents trained to retrieve information.
Steve Jobs explained in an interview [2] that electronic information will no longer consist exclusively of characters representing language, but of objects able to influence one another. Jim White of General Magic predicts [3] that messages will become programs and thereby turn the matrix into a universal machine whose overall performance can be tapped from any pocket computer. Like messenger substances, information diffuses through the networks and is collected by appropriately designed agents, which need only a payment confirmation in order to make the encrypted readable again or to provide an encoded service. This cycle does not exclude the established carriers of information (radio, publishing, libraries); it can integrate them, use them and build on their stability.

For interface design, three areas are decisive, and the pace of their development is largely determined by economic conditions:

1. Knowledge building methods and learning media

2. Artificial intelligence and interaction paradigms

3. Forms of cooperation with other people

Each of these areas is subject to constant change, and with it the conditions for design change constantly as well. It is already apparent that these areas increasingly overlap and may at some point constitute a single overall problem. In the control loop between human and machine [4] there will always remain a decisive aspect that can, at best, be approached with models. Such models are discussed in the cognitive sciences, which understand the computer as a malleable material, an ideal terrain for constructivist knowledge formation with direct consequences for communicative action.

The vigor of the information media could be used to revitalize education. Electronic learning media are at once an opportunity and a practical necessity, because they initially promise more efficient learning through individually adapted didactics. Teachers in today's educational institutions are forced to reconstruct increasingly disintegrating forms of cooperation and socialization with fewer and fewer resources [5]. Compensation for inadequate pedagogical competence is therefore not to be expected from the use of electronic learning media. A shift in the standards for assessing learning success is foreseeable: an individual will have to absorb and evaluate more information than before in order to arrive at relevant knowledge, and the information to be dealt with is of varying quality. Because of its digital character, writing is transferred easily and wholesale into multimedia, while at the same time the possibilities of manipulating and constructing images are multiplied by their decomposition into barely perceptible pixels. The conflict between the relativity of atomized alphabetical articulations and symbolic image construction is passed on to the recipient, who must not lose his bearings in an ambiguous mass of information, which becomes more difficult as the possibilities increase [7]. The limit for information design is set not by perceptibility but by how much of the information can be transformed into knowledge. One has indeed learned to outsource parts of memory with the help of technical storage, but one must now be able to find and recall that information and to create or reconstruct a great number of judgments about its relationships.

The first records of mnemonics date from antiquity. At the core of the art of memory as a method is the creation of a memorial system in which memories are stored in the form of memory images at imagined places (loci or topoi). In ancient rhetoric (Cicero, De oratore), free speech was facilitated by such imagined places, for example the rooms of a house, through which the speaker could wander in memory while calling up the mental images in order to control the progress of the speech. Other imagined topologies, such as landscapes, paintings, body parts, stories and the like, could also serve as memorial systems. Mnemonics was long considered a secret art which, in the absence of external knowledge stores, was the scholars' key to wisdom. After preparatory work by medieval authors, who had taken up the forgotten memory technique again, Johann Heinrich Alsted undertook in 1610 an encyclopedic ordering of mnemonics in the "Systema Mnemonica". Alsted's urge to systematize was at odds with his own realization that memory images should act on the soul and were therefore of fundamental importance for psychology. In some dissertations on the origins of learning theory, Alsted's student Johann Amos Comenius is credited with a decisive step: he responded to the criticism of the educators of the Enlightenment, who were driven by a fear of the power of images. In his "Bohemian Didactics" he reworked mnemonics into a representational-symbolic order. Comenius suggested, for example, teaching human anatomy with a model in the form of an inscribed leather replica. In this way he achieved a correspondence between the facts to be remembered and the memorial system used for them, which in turn was furnished with symbols by the labeling. Overrun by the triumphant advance of literal memories, made possible by the invention of the printing press, mnemonics ultimately came to be considered dubious and obscure.
Its rediscovery was promoted, among other things, by the electric mass media, which increased the share of non-literary media in the experience of the world and revived a conflict, dormant since the seventeenth century, between topological-sensual imagination and logical-symbolic representationism.
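The loci method described above can be read as a simple structure: an ordered sequence of imagined places, each bound to a vivid image, traversed in order at recall. The following sketch is purely illustrative; the class and all place names are invented for the example and stand in for what is, in the original technique, an act of imagination.

```python
# Illustrative sketch of the method of loci: items to be remembered are
# bound to an ordered sequence of imagined places (loci), and recall
# consists of "walking" through the places in order. All names invented.

class MemoryPalace:
    def __init__(self, loci):
        self.loci = list(loci)   # the ordered imagined places
        self.images = {}         # locus -> memory image stored there

    def place(self, locus, image):
        if locus not in self.loci:
            raise ValueError(f"unknown locus: {locus}")
        self.images[locus] = image

    def walk(self):
        """Recall the items by wandering through the places in order."""
        return [self.images[l] for l in self.loci if l in self.images]

# A speaker preparing the points of a speech, as in Cicero's rooms of a house:
palace = MemoryPalace(["entrance", "hall", "study", "garden"])
palace.place("entrance", "greeting the audience")
palace.place("hall", "first argument")
palace.place("study", "supporting evidence")
palace.place("garden", "closing appeal")
print(palace.walk())
```

The essential point the sketch preserves is that the order of recall is carried by the topology of the places, not by the items themselves.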

Knowledge of mnemonics plays a role in the design of future graphical user interfaces [8] above all in relation to those "cognitive tools" that fulfill their purpose only with the use of the knowledge and intelligence of the user. The point is not to reduce a discussion of information design to an instrumental consideration of mnemonics, but it should be noted that a large proportion of theories have been formed with recourse to it. The concept of hypermedia offers the possibility of organizing content topologically, while virtual reality can be used to reproduce memory places visually.

But one does not have to go that far in looking for mnemonic constructions in the computer: the first widespread VR application of this kind on a computer was a highly abstracted and relatively flat (but by no means two-dimensional) replica of an office with a desk. It is not a pure metaphor, as the now widespread term "desktop metaphor" suggests. Its degree of explicitness, created by metonymic and iconographic elements, is superior to a metaphor, which tends to hide the unknown in the known. Even if the desktop metaphor still fulfills its function today (after all, a large proportion of computers are used in office situations), its primary advantage lies in ease of use, mainly because the organization of the computer is illustrated and explained in the interface at the same time. This manageable organization is, however, reduced to absurdity by the amorphous overall machine with its several million CPUs, which reveals its topology in several layers (geographic, chronological, dialogic, etc.). This multidimensionality is not primarily a design deficiency; it is downright necessary if a constructive handling of one's own memory performance is to be possible, insofar as knowledge arises with the creation of connections and is documented and consolidated with the creation of structures [9].
With the invention of the programmable machine, computer science was born out of mathematical logic. The conflict ignited in the discussion about the possibilities of algorithmizing thought and digitizing knowledge. The discourse on the relationship between man and machine continues to this day: on the one hand, certainly, because ever faster and cheaper processors suddenly make the unthinkable appear feasible, but also because of the thesis advocated by many AI researchers that psychological processes and functions of thought can in principle be traced back to physiological conditions and are therefore in principle mechanizable.

The externalization of large parts of memory, whose existence is a prerequisite for intelligence, seems at least conceivable in the form of global information stores used, with the help of telematics, as a kind of computerized collective memory. But between memory and knowledge lies undiscovered country, even if the radical representatives of AI research leave no stone unturned to uncover this secret and dismiss as sentimental exaggeration and a mythologizing of man all speculation that departs from a causality principle and a purely empirical knowledge paradigm. Alan Turing's definition of artificial intelligence as perfect imitation, which presupposes human judgment but is no longer fixated on defining intelligence as the result of physical functions, provided a liberation from the constraints and concerns of the humanities of the time. At the experimental level, one tries to set up hypothetical sets of rules that stand in for unknown functions of human intelligence, and then compares whether the knowledge gained from the processed information is similar or even identical to the human. By permanently correcting these sets of rules, an approximation is to be achieved that goes beyond what can be algorithmized: the information collected in the course of processing is used for its autonomous further development. In contrast to Turing's definition of an intelligent machine, this approach regards the ability to learn as the basis of intelligence.

At this point, information scientists begin to take an interest in learning theories, or better, knowledge models [10]. The aim is not necessarily to make a computer "human", but very much to enable the machine to behave in ways one would normally expect from social partners. This includes, for example, that the occasional violation of rules is compensated for and that individual characteristics (habits, preferences, level of knowledge, tolerance limits, etc.) are taken into account. The machine changes from a purely reactive to an active system. Large software companies are funding projects whose names, "Social Interface" or "Knowledge Agent", suggest that cognitive science matters for the next generation of interfaces. In his book "Mentopolis", Marvin Minsky tries, in the sense of Alsted's approach, to create a kind of "Systema Cognita" and to describe the phenomenon of intelligence as the result of combined mechanisms. Minsky calls these entities of the mind agents [11]; they need memories in order to achieve consistency and to repeat past actions. A focal point arises here insofar as, for example, sociologists do not rule out the existence of a collective intelligence if a form of collective memory emerges through appropriate networking and externalization and group-specific agencies are formed. Minsky distinguishes between polynemic and isonomic concepts of communication between different agents. While polynemes produce an individual effect in each recipient, isonoms imprint the same idea on different things [12]. Minsky writes: "Both isonoms and polynemes have to do with memories - but polynemes are essentially memories themselves, while isonoms control how memories are used. […] So the power of the polynemes comes from the way they learn to stimulate many different processes at the same time, while the isonoms derive their power from exploiting those skills that are already common to many agencies."
If our communication is based on isonomic concepts (conventions and sign systems), the question arises how collective polynemes (memories), which are supposed to come from a collective memory, are transported at all. If this concept of agencies can be carried over in this way at all, the construction of the polynemes must be a joint effort.

Because of its popularity, the desktop metaphor may again serve as an example here. The elements of familiar interfaces are conventionalized, but increasingly they offer the possibility of making individual adjustments and creating a unique order (or disorder) that provides external clues for the internal memory. Someone who maintains a complex information structure will avail himself of these possibilities, just as he will insist on organizing his office or workplace (or maintaining a mess). Such an arrangement results only from the relationships between the components (structural definition). In interface design it is a perennial problem to be able to make only functional definitions [13], because one must always imply a user who is defined only very generally and is nevertheless to be brought to understanding. Just as remembering the relationships between pieces of information is a structure-building achievement, the relationships between objects in interfaces must be definable by the user if they are to serve as mnemonic systems.
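The distinction drawn above between a functional and a structural definition can be made concrete in a small sketch. The example and all of its names (the objects, the relation labels) are hypothetical, invented only to show the contrast: a functional definition states what an object is for, while a structural definition specifies it through its relationships to other objects, which is what lets an arrangement carry mnemonic clues.

```python
# Hypothetical sketch: the same interface object defined functionally
# (by its useful application) versus structurally (by its relations to
# other objects). Only the structural form encodes an arrangement that
# the user himself can read as an external memory clue.

# Functional definition: the object is specified by its use alone.
functional = {"folder:projects": "stores documents"}

# Structural definition: the object is specified by its relationships,
# e.g. what it contains and what it lies next to on the desktop.
structural = {
    "folder:projects": {
        "contains": ["draft.txt", "notes.txt"],
        "next_to": ["folder:archive"],
    },
}

def related(obj, relation):
    """Read off one kind of relationship from a structural definition."""
    return structural.get(obj, {}).get(relation, [])

print(related("folder:projects", "contains"))
```

The point of the sketch is only that the structural form has something the functional form lacks: relations between components that a user can define and later use to reorient himself.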

While storage in an external memory requires only a motor performance (cutting a notch into a stone with a chisel, say, or clicking with a computer mouse), storage in biological memory requires an intellectual performance, one that at the same time prevents one from getting lost in disconnectedness [14]. Motor storage is of little use as long as storage and medium cannot be merged into an intermediate medium that allows the intellectual performance to be "written into" the information; only in this way does the act of storage become a structure-creating act. Users of information media, the future computer-literate and producers of information, must see to it that these structures acquire a communicative value so that they can act as agents in the matrix with whose help new forms of cooperation can be established and the ability of all participants to act can be improved. Graphical interfaces can be a hindrance if they are not designed for this.

The sum of the properties that a user can reveal about an object (a piece of stored information or its representation) is often very small. Yet there are situations in which far more properties are available than are assigned to an object in the interface and can be made visible or audible. This applies in particular to the relationships between objects, in which a form of context can be represented in the first place. The interaction grid [15] is usually set so coarsely that the user cannot instrumentalize his ability to make statements about the objects. This is partly because the first requirement of the graphical user interface was to replace command-line input and to visualize the file systems. From today's point of view, the command line had more to do with mnemonics than the graphical user interfaces that replaced it, because the user could hardly orient himself anywhere but in his own memory when he wanted to know what to enter next: an ability to remember was almost a prerequisite for operation. This only appears to contradict the demand to allow more differentiated forms of externalization. The difference between the command line and the graphical user interface lay not primarily in better or worse support of mnemonics, but in the effort required to learn their use and in the tasks one had to solve with them; a direct comparison would therefore be inappropriate. Just such a redefinition of the tasks a user is supposed to solve with the help of a computer is currently taking place: the computer serves not only as a tool for the production of media but also as a tool for the production of information itself.

But the thesis that progressive externalization ultimately diminishes the ability to create memory images is not unfounded, if by externalization one means that the task of memory images can be taken over entirely. What matters is to allow forms that provoke and activate moments of memory. Such mnemonic constructions are not always transsubjectivable [16] or determinable outside the context of use, which may be one reason why interface design has so far been hesitant to take up this challenge. Part of the task has to be handed over to the user, who thereby participates in the design process. As long as the concepts of interfaces are based on functional models of the computer rather than on the thinking models of the user, one will not be able to break away from a definition of knowledge as an instrumental form of understanding. The user's cognitive potential is squandered on learning mechanisms and procedures inherent only in the medium, unless individual characteristics and ways of working are admitted as influencing variables for interface design [17].

If interface design does not face this problem and maintains its previous exclusive claim, the growing audiovisual differentiation and designability of the new media will be used to force the user into an ever more passive role, in which merely an elegant form of zapping is made possible, instead of letting him understand the information supplied to him as raw material to be actively worked on and processed. Interface designers will then expose themselves to the criticism that they serve not the needs of users but at best their own. The use of the computer medium becomes an end in itself, and legitimation by reference to technical inevitability [18] takes on an alibi function.


Kittler, Friedrich (ed.), Matejovski Dirk: Literature in the Information Age, Frankfurt / Main 1996

Kuhlen, Rainer: Hypertext, a non-linear medium between book and knowledge bank, Berlin 1991

Schulmeister, Rolf: Basics of hypermedia learning systems: Theory - Didactics - Design, Bonn 1996

Minsky, Marvin: Mentopolis, Stuttgart 1994 (original edition: The Society of Mind, New York 1994, CD-ROM)

Weizenbaum, Joseph: The power of computers and the powerlessness of reason, Frankfurt / Main 1994, 9th edition (original edition: Computer Power and Human Reason, 1976)


Bartels, Klaus: The world as memory - mnemonics and virtual spaces, in: Traces, No. 41, April 1993, p. 31 ff.

Spangenberg, Peter M.: Observations on a media theory of memorylessness, in: Kunstforum - Konstruktionen des Erinnerns, Vol. 127, July-September 1994, pp. 120-123


[1] Cf. on this Kuhlen, On the virtualization of libraries and books, in Kittler / Matejovski 1996, p. 116

[2] Steve Jobs interview with Gary Wolf: The next insanely great thing, from Wired 4.02, February 1996, p. 102-107 and p. 158-163

[3] Cf. The Postscript of Telecommunications, from MACup, February 1994, p. 22 f.

[4] With regard to possible criticism of the information sciences by humanities scholars, it should be noted here that there is only a gradual difference between the model of human-computer interaction and that of computer-aided human-human interaction. Both models are often used to represent similar facts, though they set different priorities. The model of human-computer interaction can no more exclude a communicative exchange between individuals than the model of computer-aided human-human interaction can exclude segments in which only one human and one machine play a role. The term human-computer interaction has been adopted from the information sciences, with the addition that a form of interaction with a computer can go beyond an instructional input/output model and that the control loop mentioned need not have a known systematics.

[5] In some final reports of research projects one finds statements on observed changes in the social behavior of test persons or groups. The observations differ radically depending on the experimental setup and the project sponsor. There is no fundamental determination of whether computer-aided media produce negative or positive changes in social behavior. The novelty of the media makes plausible positions anywhere between technology euphoria and cultural pessimism, and these are not atypical in this context.

[7] Critics of hypertext systems see the burden placed on the user by orientation and navigation activities as one of the greatest weaknesses of the concept. Authors of hypertexts take this criticism as an occasion to think about the design of the structures and about new possibilities of representation in order to counteract ambiguous navigation and orientation cues. The "serendipity" effect describes the phenomenon that, while navigating hypertexts, a new search target often becomes more dominant than the original one, which is then lost from sight. In most cases, however, the result is an undesirable loss of orientation, although in certain situations it can be viewed as a special degree of freedom (cf. Kuhlen 1991, chap. 2.3.2, pp. 132-136).

[8] Mnemonics can also be found in discourses about other areas of perception; the restriction to graphical user interfaces here is exemplary. It should not be ruled out that comparable design tasks arise in auditory or tactile interfaces (or in combinations of them) and that mnemonics plays a role there as well.

[9] Structures here mean several concepts of networks: Information networks denote the arrangement and design of information for content development; Communicative networks denote defined flows of information between individual people; institutional networks denote defined relationships between groups of people and bodies. This list also supports the thesis that these areas define each other.

[10] Knowledge is meant here not in the sense of wisdom but of factual knowledge.

[11] Minsky's definitions of agent and agency should be explained here (see Minsky, 1994, p. 328):
Agent: Any part or process of the mind that is simple enough on its own to be understood - although interactions in groups of such agents can produce phenomena that are far more difficult to understand.
Agency: Any collection of parts in terms of what it can accomplish as a unit, regardless of what each of its parts does on its own.

[12] cf. Minsky 1994, p. 227

[13] Functional definition:
to specify an object in terms of its possible useful application rather than in terms of its parts and their relationships to one another (see Minsky 1994, p. 330)

[14] This becomes particularly clear when one realizes that it is almost impossible to quote a language one does not command, or to copy from memory characters whose meaning was unknown at the time of reading. The children's game Memory (remembering the positions of face-down pairs of cards), for example, trains the ability to construct an orientation in an unrelated system (abstract images in random arrangement) by developing a translational strategy for inferring a position from an image.

[15] Interaction grid: what is meant is a collection of rules that determines the form of interaction on the software side. It essentially specifies which steps a user must take for a certain action and which alternatives are available to him. The existence of such an interaction grid allows certain courses of action to be transferred analogously to other situations and thus flattens the learning curve.
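Such an interaction grid can be sketched as a small rule table that maps a situation and a user action to a follow-up situation; the states and actions below are invented purely for illustration, not taken from any particular interface.

```python
# Illustrative sketch of an "interaction grid": a rule table fixing which
# steps a user can take in each situation and which alternatives are open
# to him. All states and actions are invented for the example.

RULES = {
    ("desktop", "open_document"): "editing",
    ("desktop", "open_settings"): "settings",
    ("editing", "save"):          "desktop",
    ("editing", "cancel"):        "desktop",
    ("settings", "close"):        "desktop",
}

def alternatives(state):
    """The actions the grid offers the user in a given situation."""
    return sorted(action for (s, action) in RULES if s == state)

def step(state, action):
    """Carry out one user action; actions outside the grid change nothing."""
    return RULES.get((state, action), state)

print(alternatives("desktop"))
print(step("desktop", "open_document"))
```

Because the table is uniform, a course of action learned in one situation ("cancel returns to the desktop") transfers analogously to others, which is exactly the learning-curve effect the note describes.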

[16] transsubjective: outside the subjective (without the metaphysical meaning that designates the object as distinct from the subject)

[17] Kuhlen explains the basic principle of information science (the pragmatic primacy) "by which the action relevance of information is to be taken into account. According to this basic understanding, information is knowledge in action. As a rule, information can only become relevant for action if the contextual conditions of use, e.g. individual information-processing capacity or organizational goals, are taken into account. To redeem the pragmatic primacy in hypertext systems, the dialogic principle is proposed in addition to direct manipulation. For this, among other things, the user models developed in the context of artificial intelligence are useful." (cf. Kuhlen, 1991, p. 338).

[18] According to Weizenbaum, computer scientists evade contestability by pointing to the technical inevitability of developments, to which there is supposedly no alternative, instead of accounting for their actions ethically (see Weizenbaum, 1976).

Further information on the Internet

(unsorted selection, added later)

William H. Calvin and George A. Ojemann
Conversations with Neil’s Brain
The Neural Nature of Thought & Language

Andreas Dieberger
Navigating Textual Virtual Environments using a City Metaphor

F. Heylighen
From the World-Wide Web to Super-Brain
(from Principia Cybernetica Web)

Mind Tools Ltd.
Memory Techniques and Mnemonics

Jeff Conklin
Designing Organizational Memory: Preserving Intellectual Assets in a Knowledge Economy

Terje Norderhaug
The Effect of the Media User Interface on Interactivity and Content

Workshop on Information Theory and the Brain Abstracts
(collected by Peter Hancock)

Wayne L. Abbott
The power of the human brain
A Computer Hardware and Software Representation

Paul J. Werbos
Optimization methods for brain-like intelligent control

Hartmut Winkler
The metaphor of the 'network'

Hermann Rotermund
From cuneiform to the Internet
Do the subjects disappear in memory?