Languages are systems of symbols used to encode knowledge in exchangeable artefacts. A language that carries rich semantics within the context of a specific team or system is called a jargon, because those semantics are typically unfamiliar to teams and systems outside that context.
Humans are familiar with spoken “natural” language, and with written “natural” language encoded in one of the established symbol systems (or alphabets) that predate the invention of modern computers. Humans have also developed specialised symbol systems for notating music, expressing mathematics, signalling traffic, describing electronic circuit designs, and so on. All of these symbol systems are considered languages in the mathematical discipline of model theory. In fact, symbol systems predate humans by billions of years: the genetic code is clearly a language in the model-theoretic sense, and even pheromones constitute a language.
Without delving into the formal mathematical details, the significance of model theory is best appreciated intuitively by considering the following observations:
- Linguistics as pioneered by Noam Chomsky in the 1950s and 1960s, as well as George Lakoff's work on generative semantics and metaphors, can be formalised via model theory.
- The work of model theorists goes back to the beginning of the 20th century, and was motivated by mathematicians who were concerned about potential logical inconsistencies in the mathematical symbol system and the conventions governing its use.
- The resulting introspective research into symbol systems has led to a mathematical theory that can be used to formalise any symbol system, not limited to the languages invented by humans, and including the genetic code.
- The diagramming notations used on this web site constitute a language as well, and they can easily be formalised mathematically, using a specialised software tool such as the S23M Cell Platform.
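The model-theoretic view of a language can be made concrete as a set of well-formed symbol sequences (syntax) together with an interpretation that assigns them meaning (semantics). The sketch below illustrates this with a small fragment of the standard genetic code mentioned above; it is a purely illustrative construction and does not reflect the S23M Cell Platform or any other tool's actual API.

```python
# A language in the model-theoretic sense: a set of well-formed
# symbol sequences (syntax) plus an interpretation (semantics).
# Illustrated with a tiny fragment of the genetic code.

# Syntax: well-formed sentences are strings over {A, C, G, T}
# whose length is a multiple of three (sequences of codons).
ALPHABET = set("ACGT")

def is_well_formed(sequence: str) -> bool:
    return len(sequence) % 3 == 0 and set(sequence) <= ALPHABET

# Semantics: an interpretation maps each codon to an amino acid.
# Only a small subset of the standard codon table is shown here.
CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "TTT": "Phe",
    "AAA": "Lys", "GGG": "Gly",
}

def interpret(sequence: str) -> list[str]:
    """Map a well-formed sentence to its meaning, codon by codon."""
    if not is_well_formed(sequence):
        raise ValueError("not a sentence of the language")
    codons = [sequence[i:i + 3] for i in range(0, len(sequence), 3)]
    return [CODON_TABLE[c] for c in codons]

print(interpret("ATGTTTAAA"))  # → ['Met', 'Phe', 'Lys']
```

The same two-part pattern, a membership test for the syntax and an interpretation function for the semantics, applies equally to alphabets, musical notation, traffic signs, or diagramming notations.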
A very short history of human language development
If human cognitive abilities are better utilised by visual language, why does natural language have a linear structure? The linear structure of natural language is no accident: it is the only viable structure for human oral communication. Any attempt to exploit the spatial positioning of sounds to encode semantics quickly reaches cognitive and practical limits.
Even the emergence of written language did little to improve the exploitation of human visual abilities. Written language exploits visual cognition at the level of small syntactic elements (characters and words) that follow the patterns of spoken language, but not at the level of larger syntactic elements that carry their own unique semantics.
Human eyes are much better than human ears at picking up positional cues – hence the significance of body language, and the human ability to learn hand-sign languages. The untapped potential of visual languages can be extrapolated from the ease with which human children learn to operate visual graphical user interfaces.
Of course, all programming languages used by software developers are also languages in the model-theoretic sense. However, the usefulness of such technical languages is increasingly running up against practical limits. It is somewhat ironic that most programming languages are confined to a purely linear format and do not exploit human visual abilities in any significant way.
Often, a picture is worth a thousand words, and there are very good reasons why architects and engineers make extensive use of two- and three-dimensional models. Computers and programming languages allow the creation of powerful graphical user interfaces and notations, but to date nearly all of these user interfaces and notations are encoded, and must be maintained, in a linear format. Modern software tools such as the S23M Cell Platform pave the way for visual languages that are easily understood by humans, but that are not restricted to the linear, one-dimensional structure of natural language.
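The point about linear encodings can be illustrated with a minimal sketch: even a two-dimensional diagram is ultimately persisted by most tools as a one-dimensional character stream. The structure below is a generic illustration invented for this example, not the S23M Cell Platform's actual model representation.

```python
import json

# A simple two-dimensional diagram: boxes with spatial positions,
# connected by labelled arrows. On screen it is visual and spatial.
diagram = {
    "nodes": [
        {"id": "order",   "label": "Order",   "x": 40,  "y": 40},
        {"id": "invoice", "label": "Invoice", "x": 220, "y": 40},
    ],
    "edges": [
        {"from": "order", "to": "invoice", "label": "billed as"},
    ],
}

# Serialising flattens the spatial structure into a linear character
# stream, which is the form in which the "visual" model is actually
# stored, compared, and versioned by conventional tooling.
linear_form = json.dumps(diagram)

# The round trip recovers the diagram, but every intermediate
# representation along the way is strictly one-dimensional.
assert json.loads(linear_form) == diagram
```

The spatial semantics (the `x`/`y` positions and the topology of the arrows) survive the round trip, but the maintained artefact itself remains a line of text, which is exactly the limitation the paragraph above describes.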