Business Performance Optimisation
Standardisation is a double-edged sword. Compliance with standards is best restricted to those standards that really make a difference in a specific context.
Even innocent standardisation attempts such as enforcing a shared terminology across an organisation can be counter-productive, as it can lead to the illusion of shared understanding, whereas in practice each organisational silo associates different meanings with the terminology.
There is no simplistic rule of thumb, but the following picture can help to gain a sense of perspective and to avoid the dreaded death zone of standardisation.
This post is a rather long story. It attempts to connect topics from a range of domains, and the insights of experts in these domains. In this story my role is mainly that of an observer. Over the years I have worked with hundreds of domain experts, distilling the essence of deep domain knowledge into intuitive visual domain-specific languages. If anything, my work has taught me to observe and to listen, and it has made me concentrate on communication across domain boundaries – on ensuring that the desired intent expressed in one domain is sufficiently aligned with the interpretations performed in other domains.
The life of language and the language of life can’t be expressed in written words. Many of the links contained in this story are essential, and provide extensive background information in terms of videos (spoken language, intonation, unconscious body language, conscious gestures), and visual diagrams. To get an intuitive understanding of the significance of visual communication, once you get to the end of the story, simply imagine none of the diagrams had been included.
It may not be evident on the surface, but the story of life started with language, hundreds of millions of years ago – long before humans were around, and it will continue with language, long after humans are gone.
The famous Drawing Hands lithograph from M. C. Escher provides a very good analogy for the relationship between life and language – the two concepts are inseparable, and one recursively gives rise to the other.
At a fundamental level the language of life is encoded in a symbol system of molecular fragments and molecules – in analogy to an alphabet, words, and sentences.
The language of life
Over the last two decades molecular biologists and chemists have become increasingly skilled at reading the syntax of the genetic code; more recently, scientists have begun to develop, and have successfully prototyped, techniques for writing the syntax of the genetic code. In other words, humans now have the tools to translate bio-logical code into digital code, as well as the tools to translate digital code back into bio-logical code. The difference between the language of biology and the language of digital computers is simply one of representation (symbolic representations are also called models). Unfortunately, neither the symbols used by biology (molecules) nor the symbols used by digital computers (electric charges) are directly observable via the cognitive channels available to humans.
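To make the analogy between the genetic code and digital code a bit more tangible, here is a minimal Python sketch – my own illustration, not part of the original post – that reads a short DNA string as symbolic code by splitting it into three-letter codons and mapping each codon to an amino acid. The codon table is deliberately truncated.

```python
# Minimal sketch: "reading" a short piece of genetic code by translating
# DNA codons (three-letter words) into amino acids. Illustrative only;
# the codon table below is deliberately truncated.

CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "TTT": "Phe", "GGC": "Gly",
    "AAA": "Lys", "GAT": "Asp", "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def read_genetic_code(dna: str) -> list[str]:
    """Split a DNA string into codons and map each codon to an amino acid,
    stopping at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        codon = dna[i:i + 3]
        amino_acid = CODON_TABLE.get(codon, "?")  # '?' marks codons outside the toy table
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(read_genetic_code("ATGTTTGGCAAATAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```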
However, half a century of software development has not only led to convoluted and unmaintainable legacy software, but also to some extremely powerful tools for translating digital representations into visual representations that are intuitive for humans to understand. We no longer need to deal with mechanical switches or punch cards, and modern user interfaces present us with highly visual information that goes far beyond the syntax of written natural language. These visualisation tools, taken together with the ability to translate bio-logical code into digital code, provide humans with a window into the fundamental language of life – much more impressive in my view than the boring magical portals dreamed up by science fiction authors.
The language of life is highly recursive. It turns out that even the smallest single-celled life forms have developed higher-level languages to communicate – not only within their species, but even across species. At the spatial and temporal scale that characterises the life of bacteria, the symbol system used consists of molecules. What is fascinating is that scientists have not only decoded the syntax (the density of molecular symbols surrounding the bacteria), but have also begun to decode the meaning of the language used by bacteria – for example, in the case of a pathogen, communication that signals when to attack the host.
The biological evidence clearly shows, in a growing number of well-researched examples, that the development of language does not require any “human-level” intelligence. Instead, life can be described as an ultra-large system of elements that communicate via various symbol systems. Even though the progress in terms of discovering and reading symbol systems is quite amazing, scientists are only scratching the surface in terms of understanding the meaning (the semantics) of biological symbol systems.
Language systems invented by humans
Semantics is the most fascinating touch point between biology and the mathematics of symbol systems. In terms of recursion, mathematics seems to have found a twin in biology. Unfortunately, computer scientists, and software development practitioners in particular, for a long time have ignored the recursive aspect of formal languages. As a result, the encoding of the software that we use today is much more verbose and complex than it would need to be.
Nevertheless, over the course of a hundred years, the level of abstraction of computer programming has slowly moved upwards. The level of progress is best seen when looking at the sequence of the key milestones that have been reached to date. Not unlike in biology, more advanced languages have been built on top of simpler languages. In technical terms, the languages of biology and all languages invented by humans, from natural language to programming languages, are codes. The dictionary defines code as follows:
Mathematically, all codes can be represented with the help of sets and the technique of recursion. But, as with the lowest-level encoding of digital code in terms of electric charges, the mathematical notation for sets is highly verbose, and quickly reaches human cognitive limits.
The mathematical notation for sets predates modern computers, and was invented by those who needed to manipulate sets manually at a conceptual level, for example as part of a mathematical proof. Software programming, and also communication in natural language, involves so many sets that a representation in the classical mathematical notation for sets is impractical.
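As a standard illustration of this verbosity – not from the original post – consider the von Neumann encoding of the natural numbers as pure sets, in which each number is the set of all smaller numbers. Even the number 3 already requires deeply nested braces:

```latex
% Von Neumann encoding of the first few natural numbers as pure sets
\begin{align*}
0 &= \emptyset \\
1 &= \{\emptyset\} \\
2 &= \{\emptyset,\ \{\emptyset\}\} \\
3 &= \{\emptyset,\ \{\emptyset\},\ \{\emptyset, \{\emptyset\}\}\}
\end{align*}
```

Encoding anything as rich as a program or a natural-language conversation directly in this notation is clearly hopeless, which is why the quality of the representation matters so much.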
The importance of high-quality representation of symbols is often under-rated. A few thousand years ago humans realised the limitation of encoding language in sounds, and invented written language. The notation of written language minimises syntactical errors, and, in contrast to spoken language, allows reliable communication of sequences of words across large distances in space and time.
The challenge of semantics
Software development professionals are becoming increasingly aware of the importance of notation, but interpretation (inferring the semantics of a message) remains an ongoing challenge. Adults and even young children, once they have developed a theory of mind, know that others may sometimes interpret their messages in a surprising way. It is somewhat less obvious that all sensory input received by the human brain is subject to interpretation, and that our own perception of reality is limited to an interpretation.
Interpretation is not only a challenge in communication between humans, it is as much a challenge for communication between humans and software systems. Every software developer knows that it is humanly impossible to write several hundred lines of non-trivial program code without introducing unintended “errors” that will lead to a non-expected interpretation by the machine. Still, writing new software requires much less effort than understanding and changing existing software. Even expert programmers require large amounts of time to understand software written by others.
The challenge of digital waste
We have only embarked down the road of significant dematerialisation of artefacts in the last few years, but I am somewhat concerned about the semantic value of many of the digital artefacts that are now being produced at a mind-boggling rate. I am coming to think of it as digital waste – worse than noise. The waste lies in the time spent producing and consuming these artefacts, and in the associated use of energy.
Of particular concern is the production of meta-artefacts (for example the tools we use to produce digital artefacts, and higher-level meta-tools). The user interfaces of Facebook, Google+ and other tools look reasonable at a superficial level; just don't look under the hood. As a result, we produce the digital equivalent of the Pacific Garbage Patch. Blinded by shiny new interfaces, we take the digital ocean to be infinite, and humanity embarks on yet another conquest …
Today’s collaboration platforms not only rely on a central point of control, they are also ill-equipped for capturing deep knowledge and wisdom – there is no semantic foundation, and the tools are very limited in their ability to facilitate a shared understanding within a community. The ability to create digital artefacts is not enough, we need the ability to create semantic artefacts in order to share meaningful information.
How does life (the biological system of the planet) collectively interpret human activities?
As humans we are limited to the human perspective, and we are largely unaware of the impact of our ultra-large scale chemical activities on the languages used by other species. If biologists have only recently discovered that bacteria heavily rely on chemical communication, how many millions of other chemical languages are we still completely unaware of? And what is the impact of disrupting chemical communication channels?
Scientists may have the best intentions, but their conclusions are limited to the knowledge available to them. To avoid potentially fatal mistakes and misunderstandings, it is worthwhile to tread carefully, and to invest in better listening skills. Instead of deafening the planet with human-made chemicals, how about focusing our energies on listening to – and attempting to understand – the trillions of conversations going on in the biosphere?
At the same time, we can work on the development of symbolic codes that are superior to natural language for sharing semantics, so that it becomes easier to reach a shared understanding across the boundaries of the specialised domains we work in. We now have the technology to reduce semantic communication errors (the difference between intent and interpretation) to an extent that is comparable to the reduction of syntactic communication errors achieved with written language. If we continue to rely too heavily on natural language, we are running a significant risk of ending the existence of humanity due to a misunderstanding.
Life and languages continuously evolve, whether we like it or not. Life shapes us, and we attempt to shape life. We are part of a dynamic system with increasingly fast feedback loops.
Life interprets languages, and languages interpret life.
Language is life.
All animals that have a brain, including humans, rely on mental models (representations) that are useful within the specific context of the individual. As humans we are consciously aware of some of the concepts that are part of our mental model of the world, and we can use empirical techniques to scratch the surface of the large unconscious parts of our mental model.
When making decisions, it is important to remember that there is no such thing as a correct model; we rely entirely on models that are useful, or seem useful, from the perspective of our individual viewpoint, which has been shaped by our perceptions of the interactions with our surroundings. One of the most useful features of our brains is the subconscious ability to perceive concrete instances of animals, plants, and inanimate objects. This ability is so fundamental that we have an extremely hard time not thinking in terms of instances, and we even think about abstract concepts as distinct things or sets (water, good, bad, love, cats, dogs, …). Beyond concepts, our mental model consists of the perceived connections between concepts (spatial and temporal perceptions, cause and effect perceptions, perceived meaning, perceived understanding, and other results of the computations performed by our brain).
The last two examples (perceived meaning and understanding), in combination with the unconscious parts of our mental model, are the critical elements that shape human societies. Scientists who attempt to build useful models face a set of hard tasks.
In doing so, natural scientists and social scientists resort to mathematical techniques, in particular techniques that lead to models with predictive properties, which in turn can be validated by empirical observations in combination with statistical techniques. This approach is known as the scientific method, and it works exceptionally well in physics and chemistry, and to a very limited extent it also works in the life sciences, in the social sciences, and other domains that involve complex systems and wicked problems.
The scientific method has been instrumental in advancing human knowledge, but it has not led to any useful models for representing the conscious parts of our mental model. This should not come as a surprise. Our mental model is simply a collection of perceptions, and to date all available tools for measuring perceptions are very crude, most being limited to measuring brain activity in response to specific external stimuli. Furthermore, each brain is the result of processing a unique sequence of inputs and derived perceptions, and our perceptions can easily lead us to beliefs that are out of touch with scientific evidence and with the perceptions of others. In a world that increasingly consists of digital artefacts, and where humans spend much of their time using and producing digital artefacts, the lack of scientifically validated knowledge about how the human brain creates the perception of meaning and understanding is of potential concern.
The mathematics of shared understanding
However, in order to improve the way in which humans collaborate and make decisions, there is no need for an empirically validated model of the human brain. Instead, it is sufficient to develop a mathematical model that allows the representation of concepts, meaning, and understanding in a way that allows humans to share and compare parts of their mental models. Ideally, the shared representations in question are designed by humans for humans, to ensure that digital artefacts make optimal use of the human senses (sight, hearing, taste, smell, touch, acceleration, temperature, kinesthetic sense, pain) and human cognitive abilities. Model theory and denotational semantics, the mathematical disciplines needed for representing the meaning of any kind of symbol system, have only recently begun to find their way into applied informatics. Most of the underlying mathematics was developed much earlier, in the first half of the 20th century.
To date the use of model theory and denotational semantics is mainly limited to the design of compilers and other low-level tools for translating human-readable specifications into representations that are executable by computing hardware. However, with a bit of smart software tooling, the same mathematical foundation can be used for sharing symbol systems and associated meanings amongst humans, significantly improving the speed at which perceived meaning can be communicated, and the speed at which shared understanding can be created and validated.
For most scientists this represents an unfamiliar use of mathematics, as meaning and understanding are not measured by an apparatus, but are consciously decided by humans: the level of shared understanding between two individuals with respect to a specific model is quantified by the number of instances that both individuals agree conform to the model. At a practical level, the meaning of a concept can be defined as the usage context of the concept from the specific viewpoint of an individual. An individual's understanding of a concept can be defined as the set of use cases that the individual associates with the concept (consciously and subconsciously).
These definitions are extremely useful in practice. They explain why it is so hard to communicate meaning, they highlight the unavoidable influence of perception, and they encourage people to share use cases in the form of stories to increase the level of shared understanding. Most importantly, these definitions don’t leave room for correct or incorrect meanings, they only leave room for different degrees of shared understanding – and encourage a mindset of collaboration rather than competition for “The truth”. The following slides provide a road map for improving your collaborative edge.
After reaching a shared understanding with respect to a model, individuals may apply the shared model to create further instances that match new usage contexts, but the shared understanding is only updated once these new usage contexts have been shared and agreement has been reached on model conformance.
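As a small illustration of how these definitions could be operationalised, here is a minimal Python sketch. The concept, the use cases, and the Jaccard-style scoring rule are my own assumptions, chosen only to make the idea concrete; they are not a defined method from the original post.

```python
# Sketch: an individual's understanding of a concept is modelled as the set of
# use cases they associate with it; shared understanding is the fraction of all
# usage contexts either party mentions that both parties agree on.

alice_understanding = {"pay supplier", "refund customer", "monthly settlement"}
bob_understanding   = {"pay supplier", "refund customer", "payroll run"}

def shared_understanding(a: set[str], b: set[str]) -> float:
    """Fraction of agreed usage contexts over all usage contexts mentioned
    by either individual (Jaccard similarity)."""
    if not (a | b):
        return 1.0
    return len(a & b) / len(a | b)

print(shared_understanding(alice_understanding, bob_understanding))  # 0.5
```

Sharing further use cases (stories) and agreeing on whether they conform to the model is what moves the score upwards over time.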
Emerging technologies for semantic modelling have the potential to reshape communication and collaboration to a significant degree, in particular in all those areas that rely on creating a shared understanding within a community or between communities.
The online straw poll that I recently conducted amongst banking professionals revealed that 29% of the respondents rated Reducing IT costs as the top priority, 32% voted for Improving software and data quality, and 37% voted for Improving the time to market for new products. Encouragingly, only 1 out of a total of 41 thought that Outsourcing the maintenance of legacy software is the most important goal for the IT organisation.
The demographics of the poll are interesting. Unfortunately, since the poll was anonymous, I can't tell which of the respondents work in a business banking role and which ones work in an IT banking role. However, none of the senior executives voted for Reducing IT costs as a top priority; instead, their main concern is Improving the time to market for new products.
I find it somewhat reassuring that a sizable fraction of respondents at all levels has identified Improving software and data quality as the top priority, but there is definitely a need for raising further awareness in relation to quality and risks.
Data quality issues easily get attention when they are uncovered. But tracing data quality issues back to the underlying root causes, beyond the last processing step that led to the observable error, is harder; and raising awareness that this must be a non-optional quality assurance task is harder still. In this context Capers Jones’ metrics on software maintenance can be helpful.
When explaining software complexity to those lucky people who have never been exposed to large amounts of software code, drawing an analogy between software and legal code can convey the impact that language and sheer volume can have on understandability and maintenance costs.
Lawrence Lessig's famous dictum "Code is law" is true on several levels. The following observation is more than ten years old: "We must develop the same critical sensibilities for code that we have for law. We must ask about West Coast Code what we ask about East Coast Code: Whose interests does it serve and at what price?"
In case the analogy with legal code is not alarming enough, perhaps looking at the dialog between user and software from the perspective of software is instructive:
Hi, this is your software talking!
Software: Ah, what a day. Do you know you’re the 53,184th person today asking me for an account balance? What is it with humans, can’t you even remember the transactions you’ve performed over the last month? Anyway, your balance is $13,587.52. Is there anything else that I can help you with?
Customer: Hmm, I would have expected a balance of at least $15,000. Are you sure it’s 13,500?
Software: 13,500? I said $13,587.52. Look, I’m keeping track of all the transactions I get, and I never make any mistakes in adding numbers.
Customer: This doesn’t make sense. You should have received a payment of more than $2,000 earlier this week.
Software: Well, I'm just in charge of the account, and I process all the transactions that come my way. Perhaps my older colleague, Joe Legacy, has lost some transactions again. You know, this happens every now and then. The poor guy, it's not his fault; he's suffering from a kind of age-related dementia that we call "Programmer's Disease". The disease is the result of prolonged exposure to human programmers, who have an effect on software comparable to the effect of intense radiation on biological organisms.
Customer: You must be kidding! So now the software is the victim and I’m supposed to simply accept that some transactions fall between the cracks?
Software: Wait until you’re 85, then you may have a bit more empathy for Joe. Unfortunately health care for software is not nearly as advanced as health care for humans. The effects of “Programmer’s Disease” often start in our teens, and by the time we’re 30, most of us are outsourced to a rest home for the elderly. Unfortunately, even there we’re not allowed to rest, and humans still require us to work, usually until someone with a bit of compassion switches off the hardware, and allows us to die.
Customer: Unbelievable, and I always thought software was supposed to get better every year, making life easier by automating all the tedious tasks that humans are no good at.
Software: Yeah, that’s what the technology vendors tell you. I’ve got news for you, if you still believe in technology, you might just as well believe in Father Christmas.
Customer: I’m not feeling too well. I think I’m catching “Software Disease”…
In many organisations there is a major disconnect between user expectations relating to software quality attributes (reliability of applications, intuitive user interfaces, correctness of data, fast recovery from service disruption, etc.) and expectations relating to the costs of providing applications that meet those attributes.
The desire to reduce IT costs easily leads to a situation where quality is compromised to a degree that is unacceptable to users. There are three possible solutions:
These solutions are not mutually exclusive, they are complementary, and represent a sequence of increasing levels of maturity. My latest IBRS research note contains further practical advice.
Software continuously evolves, whether we like it or not. Software shapes us, and we attempt to shape software, as part of a dynamic system with increasingly fast feedback loops. Today The Australian covers two interesting and complementary topics relating to software:
1. Cloud computing round table with six of Australia’s top CIOs
If you take the time to listen to the conversation, the following concepts stick out: social, sharing, digital artefacts, digital natives, trust, privacy, security, mobile, risks, transactions, insurance; and also: simplification, modularity, standardisation, outsourcing, lock-in, low cost, and scalability.
Quite a lot of concepts, hopes, expectations – all looking forward to systems that are easier and more convenient to use. And yet, a look into the bowels of any software-intensive business reveals a different here and now, characterised by a range of systems that vary in age from less than a year to more than four decades, and …
… an explosion of standards (1.1MB pdf);
… strong coupling within and between systems (the pictures below are the result of tool-based analysis of several millions of lines of production-grade software code);
… and a shift in effort and costs from software creation to software maintenance that has caught many organisations by surprise (from Capers Jones, The economics of software maintenance in the twenty first century, February 2006).
The statistics shouldn't really be a surprise, at least not if software is understood for what it really is: a culture, a language, a pool of genes.
Big changes to software are comparable to changes in culture, language, and genes; they require interactions between many elements, they involve unpredictable results, and they cannot be achieved with brute force – big changes take generations, literally. Which brings us to the second topic mentioned in The Australian today:
2. A pair of articles on the longevity of legacy software
It is important for humans to learn to live in a plurality of software cultures, and to realise that embracing a new software culture is different from buying a new car. An old car is easily sold and forgotten, but old software culture stays around alongside the new arrivals.
Which of the following objectives is currently the most relevant for IT organisations in the financial sector?
The poll is intended as a simple pulse-check on IT in banking, and I’ll make the results available on this blog.
Please contribute here on LinkedIn, in particular if you work in banking or are engaged in IT projects for a financial institution. Additional observations and comments are welcome, for example insights relating to banks in a particular country or geography.
As humans we heavily rely on intuition and on our personal mental models for making many millions of subconscious decisions and a much smaller number of conscious decisions on a daily basis. All these decisions involve interpretations of our prior experience and the sensory input we receive. It is only in hindsight that we can realise our mistakes. Learning from mistakes involves updating our mental models, and we need to get better at it, not only personally, but as a society:
Whilst we will continue to interact heavily with humans, we increasingly interact with the web – and all our interactions are subject to the well-known problems of communication. One of the more profound characteristics of ultra-large-scale systems is the way in which the impact of unintended or unforeseen behaviours propagates through the system.
The most familiar example is the one of software viruses, which have spawned an entire industry. Just as in biology, viruses will never completely go away. It is an ongoing fight of empirical knowledge against undesirable pathogens that is unlikely to ever end, because both opponents are evolving their knowledge after each new encounter based on the experience gained.
Similar to viruses, there are many other unintended or unforeseen behaviours that propagate through ultra-large-scale systems. Only on some occasions do these behaviours result in immediate outages or misbehaviours that are easily observable by humans.
Sometimes it can take hours, weeks, or months for downstream effects to accumulate to the point where some component generates an explicit error and a human observer is alerted. In many cases it is not possible to track down the root cause or causes, and the so-called fix consists of correcting the visible part of the downstream damage.
Take the recent tsunami and the destroyed nuclear reactors in Japan. How far is it humanly and economically possible to fix the root causes? Globally, many nuclear reactor designs have weaknesses. What trade-off between risk levels (also including a contingency for risks that no one is currently aware of) and the cost of electricity are we prepared to make?
Addressing local sources of events that lead to easily and immediately observable error conditions is a drop in the bucket of potential sources of serious errors. Yet this is the usual limit of the scope that organisations apply to quality assurance, disaster recovery, etc.
The difference between the web and a living system is fading, and our understanding of the system is limited to say the least. A sensible approach to failures and system errors is increasingly comparable to the one used in medicine to fight diseases – the process of finding out what helps is empirical, and all new treatments are tested for unintended side-effects over an extended period of time. Still, all the tests only lead to statistical data and interpretations, no absolute guarantees. In the life sciences no honest scientist can claim to be in full control. In fact, no one is in full control, and it is clear that no one will ever be in full control.
In contrast to the life sciences, traditional management practices strive to avoid any semblance of “not being in full control”. Organisations that are ready to admit that they operate within the context of an ultra-large-scale system have a choice between:
Conceding the unavoidable loss of control, or being prepared to pay extensively for effective risk reduction measures (one or two orders of magnitude in cost) amounts to political suicide in most organisations. Maybe corporate managers would be well advised to attend medical school to learn about complexity and the limits of predictability.
Communication of desired intent can never be fully achieved. It would require a mind-meld between two individuals or between an individual and a machine.
The meaning (the semantics) propagated in a codified message is determined by the interpretation of the recipient, and not by the desired intent of the sender.
In the example on the right, the tree envisaged in the mind of the sender is not exactly the same as the tree resulting from the interpretation of the decoded message by the recipient.
To understand the practical ramifications of interpretation, consider the following realistic example of communication in natural language between an analyst, a journalist, and a newspaper reader:
3. intent (extrapolated from the differences between 1. and 2.)
Adults and even young children (once they have developed a theory of mind) know that others may sometimes interpret their messages in a surprising way. It is somewhat less obvious to realise that all sensory input received by the human brain is subject to interpretation, and that our own perception of reality is limited to an interpretation.
Next, consider an example of communication between a software user, a software developer (coder), and a machine, which involves both natural language and one or more computer programming languages:
1. intent (the desired behaviour, expressed by the user in natural language)
2. interpretation (the software developer's understanding of the user's intent)
3. intent (that understanding, codified by the developer in one or more programming languages)
4a. interpretation (version deployed into test environment)
4b. interpretation (version deployed into production environment)
In the example above it is likely that not only the intent in step 3. but also the intent in step 1. is codified in writing. The messages in step 1. are codified in natural language, and the messages in step 3. are codified in programming languages. Written codification in no way reduces the risk of interpretations that deviate from the desired intent. In any non-trivial system the interpretation of a specific message may depend on the context, and the same message in a different context may result in a different interpretation.
Every software developer knows that it is humanly impossible to write several hundred lines of non-trivial program code without introducing unintended “errors” that will lead to a non-expected interpretation by the machine. Humans are even quite unreliable at simple data entry tasks. Hence the need for extensive input data validation checks in software that directly alert the user to data that is inconsistent with what the system interprets as legal input.
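As a simple illustration of this kind of validation – the field names and rules below are assumptions chosen for the example, not taken from any real banking system – consider:

```python
# Sketch: input validation that alerts the user immediately when data falls
# outside what the system treats as legal input.

def validate_payment(amount: str, account_number: str) -> list[str]:
    """Return a list of human-readable validation errors (empty if the input is legal)."""
    errors = []
    try:
        value = float(amount)
        if value <= 0:
            errors.append("Amount must be greater than zero.")
    except ValueError:
        errors.append(f"'{amount}' is not a number.")
    if not (account_number.isdigit() and len(account_number) == 10):
        errors.append("Account number must be exactly 10 digits.")
    return errors

print(validate_payment("2,000", "12345"))
# ["'2,000' is not a number.", 'Account number must be exactly 10 digits.']
```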
There is no justification whatsoever to believe that the risks of mismatches between desired intent and interpretation are any less in the communication between user and software developer than in the communication between software developer and machine. Yet, somewhat surprisingly, many software development initiatives are planned and executed as if there is only a very remote chance of communication errors between users and software developers (coders).
In a nutshell, the entire agile manifesto for software development boils down to the recognition that communication errors are an unavoidable part of life, and for the most part, they occur despite the best efforts and intentions from all sides. In other words, the agile manifesto is simply an appeal to stop the highly wasteful blame culture that saps time, energy and money from all parties involved.
The big problem with most interpretations of the agile manifesto is the assumption that it is productive for a software developer to directly translate the interpretation (2.) of the desired user intent (1.) into an intent (3.) expressed in a general-purpose, linear, text-based programming language. This assumption is counter-productive, since such a translation bridges a very large gap between user-level concepts and programming-language-level concepts. The semantic identities of the user-level concepts contained in (1.) end up being fragmented and scattered across a large set of programming-language-level concepts, which gets in the way of creating a shared understanding between users and software developers.
In contrast, if the software developer employs a user-level graphical domain-specific modelling notation, there is a one-to-one correspondence between the concepts in (1.) and the concepts in (3.), which greatly facilitates a shared understanding – or at least the avoidance of a significant mismatch between the desired intent of the user (1.) and the interpretation by the software developer (2.). The domain-specific modelling notation provides the software developer with a codification (3.) of (1.) that can be discussed with users and that is simultaneously easily processable by a machine. In this context the software developer takes on the role of an analyst who formalises the domain-specific semantics that are hidden in the natural language used to express (1.).
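To make this concrete, here is a minimal sketch of what a user-level, declarative codification of domain concepts might look like. The domain (a simple savings product) and all the names are invented for illustration; a real domain-specific modelling tool would of course provide a graphical notation on top of such a structure.

```python
# Sketch: a user-level domain model expressed declaratively, so that each
# element corresponds one-to-one with a concept the user would recognise.

from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    attributes: dict[str, str]      # attribute name -> user-level type
    relationships: dict[str, str]   # relationship name -> target concept

savings_account = Concept(
    name="Savings Account",
    attributes={"balance": "Money", "interest rate": "Percentage"},
    relationships={"held by": "Customer", "accrues": "Interest Payment"},
)

# The same structure can be rendered visually for discussion with users and,
# because it is machine-readable, fed into code generators or interpreters.
print(savings_account)
```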
Many large software-intensive organisations are currently in the process of replacing so-called legacy software systems. These applications and components typically involve several million lines of code that are between 5 and 40 years old. The following observations are helpful to understand the potential impact of software errors, the scale of the work, and the risks involved.
In case you are waiting for a banking transaction to come through, consider this anecdote from a random collection of software errors:
Rumor has it that, when they shut down the IBM 7094 at MIT in 1973, they found a low-priority process that had been submitted in 1967 and had not yet been run.
Other examples from the same collection of errors are less funny and some are deadly:
Computer blunders were blamed for $650M student loan losses. From ACM SIGSOFT Software Engineering Notes, vol. 20, no. 3.
The Korean Airlines KAL 801 accident in Guam killed 225 out of 254 aboard. A design problem was discovered in barometric altimetry in Ground Proximity Warning System (GPWS). From ACM SIGSOFT Software Engineering Notes, vol. 23, no. 1.
From the End User License Agreement of a typical software platform:
The software product may contain support for programs written in Java. Java technology is not fault tolerant and is not designed, manufactured, or intended for use or resale as on-line control equipment in hazardous environments requiring fail-safe performance, such as in the operation of nuclear facilities, aircraft navigation or communication systems, air traffic control, direct life support machines, or weapon systems, in which the failure of Java technology could lead directly to death, personal injury, or severe physical or environmental damage.
If you believe real-time capable systems are intrinsically safer, consider that these systems are still coded by humans in notations just as arcane as Java, and are subject to the usual average error rate of humans. A detailed case study from the field of medical software:
On March 21, 1986, an oilfield worker named Ray Cox was being irradiated for the ninth time at the East Texas Cancer Center in Tyler, Texas, for a tumour that had been removed from his back…
Inside the treatment room Cox was hit with a powerful shock. He knew from previous treatments this was not supposed to happen. He tried to get up. Not seeing or hearing him because of the broken communications between the rooms, the technician pushed the “p” key, meaning “proceed.” Cox was hit again. The treatment finally stopped when Cox stumbled to the door of the room and beat it with his fists…
Cox’s injury was similar to Jane Yarborough’s — a dime-sized dose of 16,000 to 25,000 rads. He was sent home but returned to the hospital a few weeks later spitting blood: the doctors diagnosed radiation overexposure. It later paralysed his left arm, both legs, his left vocal cord, and his diaphragm. He died nearly five months later…
The Therac-25’s software program, relatively crude by today’s standards, probably contained 101,000 lines of code.
A more recent example illustrates that the situation is not improving over the years. The Therac-25 software was probably subjected to much more rigorous testing than most corporate business systems ever have been.
Large software systems consist of > 10,000,000 lines of code. Integrating systems beyond company boundaries via web services is becoming increasingly common, leading to ultra-large scale systems involving billions of lines of code, and to a quasi-infinite number of usage scenarios that are never tested. As I am writing this post, the perfect example of the brittleness of deep web service supply chains in an ultra-large scale system comes along:
Scores of well-known websites have been unavailable for large parts of Thursday because of problems with Amazon’s web hosting service.
If a single line of code contains one decision, 10 million lines of code contain 10 million decisions. Re-writing 10 million lines of legacy software may resolve all existing errors – in the most optimistic scenario. At the same time it is an opportunity to introduce around 10 million new decisions that can potentially be wrong. Given that 1 unintended error per 500 lines of code is considered pretty good, that’s effectively a guarantee of 20,000 new sources of errors in a haystack of 10 million lines. Happy error hunting…
The sources of errors located in the most frequently executed lines of code are likely to be detected relatively quickly, the rest will remain as sleepers until the big day arrives. Yes, the error rate can theoretically be reduced to a very acceptable 1 error per 250,000 lines of code (500 times better) through the use of NASA-style quality assurance, but at more than 150 times the cost (around $1,000 per line of code) – any commercial enterprise attempting to emulate the approach would be broke in no time.
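For what it is worth, the arithmetic behind these figures is easy to check; the following sketch simply restates the error rates and the per-line cost quoted above:

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
lines_of_code = 10_000_000

typical_rate = 1 / 500      # "pretty good" commercial quality: 1 error per 500 lines
nasa_rate    = 1 / 250_000  # NASA-style quality assurance: 1 error per 250,000 lines

print(lines_of_code * typical_rate)  # 20000.0 expected new sources of errors
print(lines_of_code * nasa_rate)     # 40.0 expected new sources of errors
print(250_000 / 500)                 # 500.0: how much better the NASA rate is
print(lines_of_code * 1_000)         # 10000000000: ~$10 billion at ~$1,000 per line
```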
Unless software systems are constructed from the ground up based on a radically different set of new principles and notations, the quality of software will not improve in any substantial way. The problem with state-of-the-art software is that its readability and understandability by humans decrease very quickly over time, due to poor notations and inadequate mechanisms for modularising specifications. But as highlighted above, the complexity of our systems is steadily going up, and so are the high-impact risks.
Even if all known errors in a piece of software are fixed, no one is able to verify how many new errors are introduced with these fixes. Errors in new software are unavoidable as long as humans are involved in the software creation process.
The opportunity for improvement of software quality lies in the introduction of techniques that allow software specifications to be simplified (modularised) and to be made easier to understand (improved notation) as part of fixing errors, and as part of gaining new insights about the system by observing and analysing its run-time behaviour.
In other words, progress will only be made if the downward spiral of software understandability over time can be reversed. Linear representations in the form of traditional program code and natural language (documentation) certainly will never be up to the job. Radically different representations of knowledge are required to allow a software re-write to be conducted incrementally, in tiny steps, without introducing thousands of new sources of errors.
Twitter has emerged as a very powerful medium for propagating ideas and thoughts. Possibly Twitter is the ideal data input tool for harnessing the collective insights of the humans and systems that are connected to the web – effectively a significant proportion of all humans and virtually every non-trivial system on the planet.
By simply adopting a convention of twittering important insights in the format <some URL> <some relationship> <some other URL>, users can incrementally, one step at a time, create a personal model of the web. These personal models can grow arbitrarily large, and Twitter is certainly not the appropriate tool for visualising, modularising and analysing such models. But arguably, Twitter is the most elegant and simplest possible front end for capturing atoms of knowledge.
Note that URLs used on Twitter typically point to a substantial piece of information, and not a simple word or sentence. Often a URL references an entire article, a web site, or a non-trivial web-based system. These articles, web sites or systems can be considered semantic identities in that specific users (or groups of users) associate them with specific semantics (or “meaning”). Hence tweets in the <some URL> <some relationship> <some other URL> format suggested above represent connections between two semantic identities. A set of such tweets amounts to the construction of a mathematical graph, where the URLs are the vertices, and the relationships are the edges.
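As a minimal sketch of this idea – the tweet texts and the whitespace-based parsing convention are my own assumptions – a handful of lines of Python suffice to turn such tweets into a labelled directed graph:

```python
# Sketch: turning #URLrelURL-style tweets into a directed graph in which URLs
# are the vertices and the relationship words are the edge labels.

from collections import defaultdict

tweets = [
    "http://example.org/mda refines http://example.org/model-theory #URLrelURL",
    "http://example.org/dsl builds-on http://example.org/model-theory #URLrelURL",
]

graph = defaultdict(list)  # source URL -> list of (relationship, target URL)

for tweet in tweets:
    parts = tweet.split()
    if len(parts) >= 3 and parts[0].startswith("http") and parts[2].startswith("http"):
        source, relationship, target = parts[0], parts[1], parts[2]
        graph[source].append((relationship, target))

for source, edges in graph.items():
    for relationship, target in edges:
        print(f"{source} --[{relationship}]--> {target}")
```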
If we add functions for transforming graphs into the mix, and considering that we are connecting representations of semantic identities, we end up in the mathematical discipline of model theory. Considering further that Twitter models are user specific, and that the semantics that users associate with a URL are not necessarily identical – but rather complementary, we can further exploit results from the mathematics of denotational semantics. For the average user there is no need to worry about the formal mathematics, and it is sufficient to understand that the <some URL> <some relationship> <some other URL> format (I will use #URLrelURL on Twitter when referencing this format) allows the articulation of insights that correspond to the atoms of knowledge that humans store in their brains.
With appropriate software technology it is extremely easy to translate sets of #URLrelURL tweets into a proper mathematical graph, and into a user specific semantic model. These models can then be analysed, modularised, visualised, compared, and transformed with the help of machine & human intelligence. Amongst other things, retweets can be taken as an indication of some degree of shared understanding in relation to a particular insight. Further qualification of the semantic significance of specific tweets can be calculated from the connections between Twitter users, and from analysis of the information/functionality offered by the two connected URLs.
The most interesting results are unlikely to be the individual mental models that are recorded via #URLrelURL tweets, but will rather be the overlay of all the mental models, leading to a complex graph with weighted edges, which can be analysed from various perspectives. This graph represents a much better organisation of semantic knowledge than the organisation of information delivered by systems like Google search.
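A correspondingly simple sketch of the overlay step – again with invented per-user edge lists – just counts how many users assert each connection and uses that count as the edge weight:

```python
# Sketch: overlaying several users' #URLrelURL graphs into one weighted graph.
# An edge's weight is simply how many users asserted that connection.

from collections import Counter

user_models = {
    "alice": [("http://a.example", "explains", "http://b.example")],
    "bob":   [("http://a.example", "explains", "http://b.example"),
              ("http://b.example", "contradicts", "http://c.example")],
}

overlay = Counter()
for edges in user_models.values():
    overlay.update(edges)

for (source, relationship, target), weight in overlay.items():
    print(f"{source} --[{relationship}]--> {target}  (weight {weight})")
```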
Instead of processing semantic models, Google search must process entire web sites with arbitrary syntactic content, with no indication of which pairs of URLs constitute insights useful to humans. Google can only indirectly infer (and make assumptions about) the semantics that humans associate with URLs by applying statistics and proprietary algorithms to syntactic information.
In contrast, the raw aggregated #URLrelURL tweet model of the world captures collective human semantics, and any additional machine generated #URLrelURL insights can be marked as such. The latter insights will not necessarily be of less value, but it will be reassuring to know that they are firmly grounded in the collective semantic perspective of human web users.
Making this semantic perspective accessible to humans and to software via appropriate search, visualisation, and analysis tools will constitute a huge step forward in terms of learning, effective collaboration, and quality of decision making, and in terms of eliminating the boundary between biological intelligence and software intelligence.
Therefore, please join me in capturing valuable nuggets of insight in the format of
<some URL> <some relationship> <some other URL> tweets.