Structural information theory
Structural information theory (SIT) is a theory about human perception and, in particular, about perceptual organization, that is, about the way the human visual system organizes a raw visual stimulus into objects and object parts. SIT was initiated, in the 1960s, by Emanuel Leeuwenberg [Leeuwenberg, E. L. J. (1968). "Structural information of visual patterns: an efficient coding system in perception." The Hague: Mouton.] [Leeuwenberg, E. L. J. (1969). Quantitative specification of information in sequential patterns. "Psychological Review, 76," 216-220.] [Leeuwenberg, E. L. J. (1971). A perceptual coding language for visual and auditory patterns. "American Journal of Psychology, 84," 307-349.] and has been developed further by Hans Buffart, [http://www.nici.ru.nl/~peterh Peter van der Helm], and [http://www.nici.kun.nl/~robvl Rob van Lier]. It has been applied to a wide range of research topics, mostly in visual form perception but also in, for instance, visual ergonomics, data visualization, and music perception.
SIT began as a quantitative model of visual pattern classification. Nowadays, it also includes quantitative models of symmetry perception and amodal completion, and it is theoretically founded in formalizations of visual regularity and viewpoint dependency. SIT has been argued [Palmer, S. E. (1999). "Vision science: Photons to phenomenology." Cambridge, MA: MIT Press.] to be the best defined and most successful extension of Gestalt ideas. It is the only Gestalt approach providing a formal calculus that generates plausible perceptual interpretations.
The simplicity principle
Although visual stimuli are fundamentally multi-interpretable, the human visual system usually has a clear preference for only one interpretation. To explain this preference, SIT introduced a formal coding model starting from the assumption that the perceptually preferred interpretation of a stimulus is the one with the simplest code. A simplest code is a code with minimum information load, that is, a code that enables a reconstruction of the stimulus using a minimum number of descriptive parameters. Such a code is obtained by capturing a maximum amount of visual regularity and yields a hierarchical organization of the stimulus in terms of wholes and parts.
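As a toy illustration of this idea (not SIT's actual coding language), one can compare the information loads of two candidate codes for the same symbol string. Counting load as the number of remaining pattern symbols, with operators treated as free, is a simplifying convention assumed here:

```python
# Toy illustration (not SIT's full coding model): information load is
# counted as the number of pattern symbols in a code; operators such as
# the iteration marker are assumed to be free -- a simplification.

def load(symbols):
    """Information load: number of descriptive parameters in a code."""
    return len(symbols)

stimulus = "aabaab"

literal_code = list(stimulus)   # no regularity captured: a a b a a b
iterated_code = list("aab")     # 2*(aab): the repeated part described once

assert load(literal_code) == 6
assert load(iterated_code) == 3
```

The simplicity principle then predicts that the 2*(aab) organization is preferred, and its code simultaneously yields a part structure: the stimulus is organized into two identical "aab" parts.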
The assumption that the visual system prefers simplest interpretations is called the simplicity principle. [Hochberg, J. E., & McAlister, E. (1953). A quantitative approach to figural "goodness". "Journal of Experimental Psychology, 46," 361-364.] Historically, the simplicity principle is an information-theoretical descendant of the Gestalt law of Prägnanz, [Koffka, K. (1935). "Principles of gestalt psychology." London: Routledge & Kegan Paul.] which was based on the natural tendency of physical systems to settle into stable minimum-energy states. Furthermore, like the later-proposed minimum description length principle in algorithmic information theory (AIT), it can be seen as a formalization of Occam's Razor, on which the best hypothesis for a given set of data is the one that leads to the largest compression of the data.
Structural versus algorithmic information theory
Since the 1960s, SIT (in psychology) and AIT (in computer science) have evolved independently as viable alternatives to Shannon's classical information theory, which had been developed in communication theory. [Shannon, C. E. (1948). A mathematical theory of communication. "Bell System Technical Journal, 27," 379-423, 623-656.] In Shannon's approach, things are assigned codes whose lengths are based on their probability in terms of frequencies of occurrence (as, e.g., in Morse code). In many domains, including perception, however, such probabilities are hardly quantifiable, if at all. Both SIT and AIT circumvent this problem by turning to the descriptive complexities of individual things.
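The contrast can be made concrete in a small sketch: a Shannon-style code length requires a probability model over symbols, whereas a descriptive complexity can be computed for a single string in isolation. The run-length measure below is only a crude stand-in for a real complexity measure, chosen for brevity:

```python
import math

def shannon_length(symbol, probs):
    # Shannon-style code length in bits: requires a probability model.
    return -math.log2(probs[symbol])

def descriptive_complexity(s):
    # Crude stand-in for descriptive complexity: size of a run-length
    # description of the individual string, no probabilities needed.
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return 2 * len(runs)  # each run described by a (symbol, count) pair

probs = {"a": 0.5, "b": 0.25, "c": 0.25}
assert shannon_length("a", probs) == 1.0   # frequent symbol, short code
assert shannon_length("b", probs) == 2.0   # rarer symbol, longer code

assert descriptive_complexity("aaaabbbb") == 4    # two runs: regular
assert descriptive_complexity("abababab") == 16   # eight runs: (by this
# crude measure) irregular, even though both strings are equally probable
# under the frequency model above
```

The point of the sketch is only that the right-hand measure is defined per individual string, which is what makes the SIT/AIT route viable where frequencies of occurrence are unavailable.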
Although SIT and AIT share many starting points and objectives, there are also several relevant differences:
* First, SIT makes the perceptually relevant distinction between structural and metrical information, whereas AIT does not;
* Second, SIT encodes for a restricted set of perceptually relevant kinds of regularities, whereas AIT encodes for any imaginable regularity;
* Third, in SIT, the relevant outcome of an encoding is a hierarchical organization, whereas in AIT, it is a complexity value.
Simplicity versus likelihood
In visual perception research, the simplicity principle contrasts with the Helmholtzian likelihood principle, [von Helmholtz, H. L. F. (1962). "Treatise on Physiological Optics" (J. P. C. Southall, Trans.). New York: Dover. (Original work published 1909)] which assumes that the preferred interpretation of a stimulus is the one with the highest probability of being correct in this world. As shown within a Bayesian framework using AIT findings, [van der Helm, P. A. (2000). Simplicity versus likelihood in visual perception: From surprisals to precisals. "Psychological Bulletin, 126," 770-800.] the simplicity principle would imply that perceptual interpretations are fairly veridical (i.e., truthful) in many worlds rather than, as assumed by the likelihood principle, highly veridical in only one world. In other words, whereas the likelihood principle suggests that the visual system is a special-purpose system (i.e., dedicated to one world), the simplicity principle suggests that it is a general-purpose system (i.e., suited to many worlds).
Crucial to the latter finding is the distinction between, and integration of, viewpoint-independent and viewpoint-dependent factors in vision, as proposed in SIT's empirically successful model of amodal completion. [van Lier, R. J., van der Helm, P. A., & Leeuwenberg, E. L. J. (1994). Integrating global and local aspects of visual occlusion. "Perception, 23," 883-903.] In the Bayesian framework, these factors correspond to prior probabilities and conditional probabilities, respectively. In SIT's model, however, both factors are quantified in terms of complexities, that is, complexities of objects and spatial relationships, respectively. This approach is consistent with neuroscientific ideas about the distinction and interaction between the ventral ("what") and dorsal ("where") streams in the brain. [Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), "Analysis of Visual Behavior" (pp. 549--586). Cambridge, MA: MIT Press.]
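Under this correspondence, a minimal sketch of such a completion model weighs each candidate interpretation by the sum of an object complexity (the viewpoint-independent, prior-like term) and a relation complexity (the viewpoint-dependent, conditional-like term). The candidate labels and complexity values below are purely hypothetical:

```python
# Hypothetical complexity values (arbitrary units) for two candidate
# completions of a partly occluded shape; the numbers are illustrative
# only and do not come from the published model.
candidates = {
    "global completion (regular shape)":       {"object": 4, "relation": 6},
    "local completion (contour continuation)": {"object": 7, "relation": 5},
}

def total_complexity(c):
    # Viewpoint-independent (object) plus viewpoint-dependent (relation)
    # complexity; the lowest total marks the preferred interpretation.
    return c["object"] + c["relation"]

preferred = min(candidates, key=lambda k: total_complexity(candidates[k]))
assert preferred == "global completion (regular shape)"   # 10 < 12

# Bayesian reading: with p = 2**(-complexity), minimizing the complexity
# sum is equivalent to maximizing prior * conditional probability.
def posterior(c):
    return 2 ** -c["object"] * 2 ** -c["relation"]

assert preferred == max(candidates, key=lambda k: posterior(candidates[k]))
```

The equivalence in the last lines is the bridge between the two frameworks: adding complexities corresponds to multiplying the associated probabilities.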
SIT versus connectionism and dynamic systems theory
On the one hand, a representational theory like SIT seems the opposite of dynamic systems theory (DST). On the other hand, connectionism can be seen as something in between: it flirts with DST when it comes to the use of differential equations, and it flirts with theories like SIT when it comes to the representation of information. In fact, the analyses provided by SIT, connectionism, and DST correspond to what Marr called the computational, the algorithmic, and the implementational levels of description, respectively. According to Marr, such analyses are complementary rather than opposite.
What SIT, connectionism, and DST have in common is that they describe nonlinear system behavior, that is, a minor change in the input may yield a major change in the output. Their complementarity expresses itself in that they focus on different aspects:
* First, DST focuses primarily on how the state of a physical system as a whole (in this case, the
brain) develops over time, whereas both SIT and connectionism focus primarily on what a system does in terms of information processing; according to both SIT and connectionism, this information processing (which, in this case, can be said to constitute cognition) thrives on interactions between bits of information.
* Second, regarding these interactions between bits of information, connectionism focuses primarily on the nature of concrete interaction mechanisms (assuming existing bits of information suited for any input), whereas SIT focuses primarily on the nature of the (assumed to be transient, i.e., input-dependent) bits of information involved and on the nature of the outcome of the interaction between them (modelling the interaction itself in a more abstract way).
In SIT, candidate interpretations of a stimulus are represented by symbol strings, in which identical symbols refer to identical perceptual primitives (e.g., blobs or edges). Every substring of such a string represents a spatially contiguous part of an interpretation, so that the entire string can be read as a reconstruction recipe for the interpretation and, thereby, for the stimulus. These strings then are encoded (i.e., they are searched for visual regularities) to find the interpretation with the simplest code.
In SIT's formal coding model, this encoding is modelled by way of symbol manipulation. In psychology, this has led to critical statements of the sort "SIT assumes that the brain performs symbol manipulation". Such statements, however, fall in the same category as statements such as "physics assumes that nature applies formulas such as Einstein's E = mc² or Newton's F = ma" and "DST models assume that dynamic systems apply differential equations". That is, these statements ignore that the very concept of formalization means that things are represented by symbols and that relationships between these things are captured by formulas or, in the case of SIT, by simplest codes.
To obtain simplest codes, SIT applies coding rules that capture the kinds of regularity called iteration, symmetry, and alternation. These have been shown [van der Helm, P. A., & Leeuwenberg, E. L. J. (1991). Accessibility, a criterion for regularity and hierarchy in visual pattern codes. "Journal of Mathematical Psychology, 35," 151-213.] to be the only regularities that satisfy the formal accessibility criteria of
* (a) being so-called holographic regularities that
* (b) allow for so-called hierarchically transparent codes.
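A minimal sketch of detectors for these three kinds of regularity on single symbol strings might look as follows. It ignores hierarchy, mixed codes, and most of SIT's notation; the output strings merely imitate common SIT conventions such as 3*(ab) for iteration, S[(a)(b)(c)] for (even) symmetry, and <(a)>/<(b)(c)(d)> for alternation:

```python
def iteration(s):
    # N*(x): s is some substring x repeated n >= 2 times.
    for k in range(1, len(s) // 2 + 1):
        if len(s) % k == 0 and s[:k] * (len(s) // k) == s:
            return f"{len(s) // k}*({s[:k]})"
    return None

def symmetry(s):
    # S[(x1)(x2)...]: even-length mirror symmetry (pivot case omitted
    # for brevity; SIT's notation also covers odd symmetry).
    if len(s) >= 2 and len(s) % 2 == 0 and s == s[::-1]:
        half = s[: len(s) // 2]
        return "S[" + "".join(f"({c})" for c in half) + "]"
    return None

def alternation(s):
    # <(y)>/<(x1)(x2)...>: the same symbol recurs in every odd position
    # (left-alternation only, in this sketch).
    if len(s) >= 4 and len(s) % 2 == 0:
        heads, tails = s[0::2], s[1::2]
        if len(set(heads)) == 1:
            return ("<(" + heads[0] + ")>/<"
                    + "".join(f"({c})" for c in tails) + ">")
    return None

assert iteration("ababab") == "3*(ab)"
assert symmetry("abccba") == "S[(a)(b)(c)]"
assert alternation("abacad") == "<(a)>/<(b)(c)(d)>"
```

A full SIT encoder would, in addition, apply these rules recursively to the arguments of other rules and select, from all resulting codes, one with minimum information load.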
A crucial difference with respect to the traditional, so-called transformational, formalization of visual regularity is that, holographically, mirror symmetry is composed of many relationships between symmetry pairs rather than one relationship between symmetry halves. Whereas the transformational characterization may be better suited to object recognition, the holographic characterization seems more consistent with the buildup of mental representations in object perception.
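The structural difference can be illustrated by simply counting relationships in a mirror-symmetric string: transformationally there is one mapping between the two halves, while holographically there is one identity relation per symmetry pair. The sketch below illustrates only this count, not the full holographic model:

```python
def transformational_relations(s):
    # One relation: the mirror image of the first half equals the
    # second half (counts 1 if the string is mirror symmetric).
    n = len(s) // 2
    return 1 if s[:n] == s[len(s) - n:][::-1] else 0

def holographic_relations(s):
    # One identity relation per symmetry pair (s[i], s[-1-i]).
    n = len(s) // 2
    return sum(1 for i in range(n) if s[i] == s[-1 - i])

s = "abcddcba"
assert transformational_relations(s) == 1   # one half-to-half mapping
assert holographic_relations(s) == 4        # four symmetry pairs
```

On the holographic view, a representation of symmetry can thus grow pair by pair as a stimulus is processed, which is what makes it a natural fit for the buildup of mental representations.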
The perceptual relevance of the criteria of holography and transparency has been verified in the so-called holographic approach to visual regularity. [van der Helm, P. A., & Leeuwenberg, E. L. J. (1996). Goodness of visual regularities: A nontransformational approach. "Psychological Review, 103," 429-456.] This approach provides an empirically successful model of the detectability of single and combined visual regularities, whether or not perturbed by noise. Furthermore, the transparent holographic regularities have been shown to lend themselves to transparallel processing, which means that, in the process of selecting a simplest code from among all possible codes, O(2^N) codes can be taken into account as if only one code of length N were concerned. [van der Helm, P. A. (2004). Transparallel processing by hyperstrings. "Proceedings of the National Academy of Sciences USA, 101 (30)," 10862-10867.] This supports the computational tractability of simplest codes and, thereby, the feasibility of the simplicity principle in perceptual organization.
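The size of the code space is easy to illustrate: even counting only the ways to chunk a string of N symbols into contiguous parts already yields 2^(N-1) candidate organizations, one for each subset of the N-1 internal cut points. The enumeration below demonstrates this exponential growth; it does not implement hyperstrings, which are precisely the device for avoiding such explicit enumeration:

```python
from itertools import combinations

def chunkings(s):
    # All ways to split s into contiguous chunks: choose any subset of
    # the N-1 internal cut points, giving 2**(N-1) candidates in total.
    n = len(s)
    out = []
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            bounds = (0, *cuts, n)
            out.append([s[a:b] for a, b in zip(bounds, bounds[1:])])
    return out

assert len(chunkings("abcd")) == 2 ** 3       # 8 chunkings of 4 symbols
assert len(chunkings("abcdefgh")) == 2 ** 7   # 128 chunkings of 8 symbols
```

Every candidate code corresponds to at least one such chunking, so any selection procedure that inspected candidates one by one would face this exponential space, which is why a result allowing them to be treated as a single string of length N matters for tractability.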
Wikimedia Foundation. 2010.