Are you a model builder or a story teller?

Have you ever wondered why “storytelling” is such a trendy topic? If this question bothers you and makes you uncomfortable, your perspective on human affairs and your cognitive lens are rather unusual.

Once upon a time, in the 1970s, model building gained popularity and nearly caught up with the older art of storytelling. But half a century later the popularity of model building is back to what it was in the 1960s, and according to Google Ngram data “storytelling” has reached new heights, with the word now being used twice as often as the word “agile”.


The “art of storytelling” and “agile software” are strong contenders for being the catchphrase of the current millennium. It is not surprising that the latter term was not in circulation in the 20th century, but it is perhaps somewhat surprising that the “art of storytelling” was pretty much non-existent before the 20th century.

What has happened to model building?

Unfortunately the public interface to Google Ngram only provides access to data up to 2008, but it seems that model building has given way to machine learning – and I would guess to “artificial intelligence” in recent years.
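For readers who want to reproduce the comparison, a minimal sketch in Python follows. It relies on the unofficial JSON endpoint behind the Google Ngram Viewer – an undocumented interface that may change without notice – and the corpus identifier is an assumption:

    import requests

    def ngram_frequencies(phrases, year_start=1900, year_end=2008):
        """Fetch relative usage frequencies for a list of phrases from the
        (unofficial, undocumented) Google Ngram Viewer JSON endpoint."""
        response = requests.get(
            "https://books.google.com/ngrams/json",
            params={
                "content": ",".join(phrases),
                "year_start": year_start,
                "year_end": year_end,
                "corpus": 15,  # identifier of an English corpus (assumption)
                "smoothing": 3,
            },
            timeout=30,
        )
        response.raise_for_status()
        # The endpoint returns one entry per phrase, each with a timeseries
        # of relative frequencies, one value per year
        return {entry["ngram"]: entry["timeseries"] for entry in response.json()}

    series = ngram_frequencies(["storytelling", "agile", "model building"])
    for phrase, timeseries in series.items():
        print(phrase, "relative frequency in 2008:", timeseries[-1])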

Model building is linked to the success of the scientific method. Researchers create, validate, and refine models to improve our level of understanding about various aspects of the world we live in, and to articulate their understanding in formal notations that facilitate independent critical analysis.

What is the usefulness of models that are only understandable for a very few humans?

The scientific revolution undoubtedly led to a better understanding of some aspects of the world we live in by enabling humans to create more and more complex technologies. But it also created new levels of ignorance about externalities that went hand in hand with the development of new technologies, fuelled by specific economic beliefs about efficiency and abstractions such as money and markets.

In the early days of the industrial revolution modelling was concerned with understanding and mastering the physical world, resulting in progress in engineering and manufacturing. Over the last century formal model building was found to be useful in more and more disciplines, across all the natural sciences, and increasingly as well in medicine and the social sciences, especially in economics.

With 20/20 hindsight it becomes clear that there is a significant lag between model building and the identification of externalities that are created by systematically applying models to accelerate the development and roll-out of new technologies.

Humans are biased to thinking they understand more than they actually do, and this effect is further amplified by technologies such as the Internet, which connects us to an exponentially growing pool of information. New knowledge is being produced faster than ever whilst the time available to independently validate each new nugget of “knowledge” is shrinking, and whilst the human ability to learn new knowledge at best remains unchanged – if it is not compromised by information overload.

Those who engage in model building face the challenge of either diving deep into a narrow silo, to ensure an adequate level of understanding of a particular niche domain, or restricting their activity to modelling the dependencies between subdomains and coordinating the model building of domain experts across a number of silos. As a result:

  • Many models are only understandable for their creators and a very small circle of collaborators.
  • Each model integrator can only be effective at bridging a very limited number of silos.
  • The assumptions associated with each model are only understood locally, some of the assumptions remain tacit knowledge, and assumptions may vary significantly between the models produced by different teams.
  • Many externalities escape early detection, as there is hardly anyone or any technology continuously looking for unexpected results and correlations across deep chains of dependencies between subdomains.

When the translation of new models into new applications and technologies is not adequately constrained by the level to which models can be independently validated and by application of the precautionary principle, potentially catastrophic surprises are inevitable.

Does it make sense to talk about models that are not understandable for any human?

Good models are not only useful, they are also understandable and have explanatory power – at least for a few people. Additionally, from the perspective of a mathematician, many of the most highly valued models also conform to an aesthetic sense of beauty, by surfacing a surprising degree of symmetry, by bringing non-intuitive connections into focus, and simply by their level of compactness.

Scientific model building is a balancing act between simplicity and usefulness. An overly complex model is less easy to understand and therefore less easy to integrate with complementary models, and an over-simplified model may be easy to work with, but may be so limited in its scope of applicability that it becomes useless.

What is not widely recognised beyond the mathematical community is that the so-called models generated by machine learning algorithms / artificial intelligence systems are not human understandable, for the same reasons that the physical representations of knowledge within a human brain are not understandable by humans. Making any sense of knowledge representations in a brain requires not only highly specialised scanning technologies but also non-trivial visualisation technologies – and the resulting pictures only give us a very crude and indirect understanding of what a person experiences and thinks about at any given moment.

Do correlation models without explanatory power qualify as models? There are many useful applications of machine learning, but if the learning does not result in models that are understandable, then the results of machine learning should perhaps be referred to as digital correlation maps to avoid confusion with models that are designed for human consumption. Complex correlation maps can be visualised in ways similar to the results of brain scans, and the level of insight that can be deduced from such visualisations is correspondingly limited.
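To make the notion concrete, here is a minimal sketch of a digital correlation map, using synthetic data streams (the stream names and the injected dependency are purely illustrative). The resulting heat map shows which streams move together, but – much like a brain scan – offers no explanation of why:

    import numpy as np
    import matplotlib.pyplot as plt

    # Four synthetic data streams; stream "b" secretly depends on stream "a"
    rng = np.random.default_rng(42)
    streams = {name: rng.normal(size=500) for name in ["a", "b", "c", "d"]}
    streams["b"] += 0.8 * streams["a"]

    # Pairwise Pearson correlations between all streams
    data = np.vstack(list(streams.values()))
    corr = np.corrcoef(data)

    # Render the correlation map as a heat map
    labels = list(streams)
    plt.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
    plt.xticks(range(len(labels)), labels)
    plt.yticks(range(len(labels)), labels)
    plt.colorbar(label="correlation")
    plt.title("Digital correlation map")
    plt.show()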

It is not yet clear how to construct conscious artificial intelligence systems, i.e. systems that can not only establish correlations between data streams, but that are also capable of developing conceptual models of themselves and their environment that can be shared with and understood by humans. In particular, current machine learning systems are not able to explain how they arrive at specific conclusions.

The limitations of machine learning highlight what is being lost by neglecting model building and by leaving modelling entirely to individual experts working in deep and narrow silos. Model validation and integration have largely been replaced with over-simplified storytelling – the goal has shifted from improving understanding to applying the tools of persuasion.

What’s the story with storytelling?


The art of storytelling is linked to the rise of marketing and persuasive writing. Edward Bernays was one of the original shapers of the logic of marketing:

Bernays’ vision was of a utopian society in which individuals’ dangerous libidinal energies, the psychic and emotional energy associated with instinctual biological drives that Bernays viewed as inherently dangerous given his observation of societies like the Germans under Hitler, could be harnessed and channelled by a corporate elite for economic benefit. Through the use of mass production, big business could fulfil the cravings of what Bernays saw as the inherently irrational and desire-driven masses, simultaneously securing the niche of a mass production economy (even in peacetime), as well as sating what he considered to be dangerous animal urges that threatened to tear society apart if left unquelled.

Bernays touted the idea that the “masses” are driven by factors outside their conscious understanding, and therefore that their minds can and should be manipulated by the capable few. “Intelligent men must realize that propaganda is the modern instrument by which they can fight for productive ends and help to bring order out of chaos.”

The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. …In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons…who understand the mental processes and social patterns of the masses. It is they who pull the wires which control the public mind.

Propaganda was portrayed as the only alternative to chaos.

The purpose of storytelling is the propagation of beliefs and emotions.

What is the usefulness of stories if they do nothing to improve our level of understanding of the world we live in?

Sure, if stories help to increase the number of shared beliefs within a group, then the people involved may understand more about the motivations and behaviours of the others within the group. But at the same time, in the absence of building improved models about the non-social world, the behaviour of the group easily drifts into more and more abstract realms of social games, making the group increasingly blind to the effects of their behaviours on outsiders and on the non-social world.

Stories are appealing and hold persuasive potential because their role in cultural transmission is the result of gene-culture co-evolution, in tandem with the human capability for symbolic thought and spoken language. In human culture stories serve two functions:

  1. Transmission of beliefs that are useful for the members of a group. Shared beliefs are the catalyst for improved collaboration.
  2. Deception in order to protect or gain social status within a group or between groups. In the framework of contemporary competitive economic ideology deception is often referred to as marketing.

Storytelling thus is a key element of cultural evolution. Unfortunately cultural evolution fuelled by storytelling is a terribly slow form of learning for societies, even though storytelling is an impressively fast way of transmitting beliefs to other individuals. Not entirely surprisingly, some studies find the prevalence of psychopathic traits in the upper echelons of the corporate world to be between 3% and 21%, much higher than the 1% prevalence in the general population.

Storytelling with the intent of deception enables individuals to reap short-term benefits for themselves to the longer-term detriment of society

The extent to which deceptive storytelling is tolerated is influenced by cultural norms, by the effectiveness of institutions and technologies entrusted with the enforcement of cultural norms, and by the level of social inequality within a society. The work of the disciples of Edward Bernays ensured that deceptive storytelling has become a highly respected and valued skill.

However, simply focusing on minimising deception is no fix for all the weaknesses of storytelling. When a society with highly effective norm enforcement insists on rules and behavioural patterns that create environmental or social externalities, some of which may be invisible from within the cultural framework, deception can become a vital tool for those who suffer as a result of the externalities.

Furthermore, even in the absence of intentional deception, the maintenance, transmission, and uncritical adoption of beliefs via storytelling can easily become problematic if beliefs held in relation to the physical and living world are simply wrong. For example some people and cultures continue to hold scientifically untenable beliefs about the causes of specific diseases.

All political and economic ideologies rely on storytelling

Human societies are complex adaptive systems that can’t be described by any simple model. More precisely, it is not possible to develop long-range and detailed predictive models for social and economic behaviour. However, in a similar way that extensive sensor networks and modern computing technology allow the development of useful short-range weather forecasts, it is possible to use social and economic data to look for externalities and attempts at corruption.
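As a sketch of what such monitoring could look like in its simplest form, the following flags unexpected shifts in a social or economic indicator using a rolling z-score; the window size, threshold, and synthetic data are illustrative assumptions:

    import numpy as np

    def flag_anomalies(series, window=30, threshold=3.0):
        """Return the indices at which a value deviates strongly from the
        mean of the preceding window (a rolling z-score test)."""
        flagged = []
        for i in range(window, len(series)):
            recent = series[i - window:i]
            mean, std = recent.mean(), recent.std()
            if std > 0 and abs(series[i] - mean) > threshold * std:
                flagged.append(i)
        return flagged

    # A year of synthetic daily readings with one unexpected shock
    rng = np.random.default_rng(0)
    indicator = rng.normal(100.0, 2.0, size=365)
    indicator[200] = 130.0
    print(flag_anomalies(indicator))  # -> [200]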

Nothing stands in the way of monitoring the results of significant social and economic changes with a level of diligence that is comparable to the diligence expected from researchers when conducting scientific experiments in the medical field. Of course the pharmaceutical industry also has a reputation for colourful storytelling, and the healthcare sector is not spared from ethical corruption and the tools of marketing. But at least the healthcare sector is heavily regulated, academic research is an integral part of the sector, and independent validation of results is part of the certification process for all new products and treatments.

One has to wonder why economic and social policies are not subject to a comparable level of independent oversight. The model of governance in modern democracies typically includes a separation of power between legislature, executive, and judiciary, but the question is whether effective separation of power can be maintained over decades and centuries.

Human societies and social structures are far from static. Concepts such as the nation state are only a couple of hundred years old, and the lifespan of economic bubbles and the structures created within such bubbles is measured in years rather than centuries. And yet, many people and institutions are incapable of considering possible economic or social arrangements that lie outside consumerism and the cultural norms that currently dominate within a particular nation state. Cultural inertia is beneficial for societies whenever the environment in which they are embedded is highly stable, but it becomes problematic when the environment is undergoing rapid change.

Historically a rapidly changing environment used to be associated with local wars or local natural disasters such as extended periods of drought or earthquakes. The industrial revolution has significantly shifted the main triggers of rapid change:

  1. Improvements in technology, hygiene and medicine have facilitated significant population growth and ushered in a new geological era – the Anthropocene: human activity is changing the physical environment faster than ever before
  2. Machine powered technology has enabled wars of unprecedented scale, speed, and levels of destructiveness
  3. The paradigm of growth based economics fuelled by interest bearing debt and aggressive marketing dominates on all continents in most societies, and facilitates global economic bubbles
  4. Carbon emissions and other physical externalities of modern economic activity have no physical boundaries

Given this context it is extremely tempting for professional politicians within government and corporations to subscribe to the elitist logic of Edward Bernays and to exploit storytelling for local or personal gains. An alien observer of human societies would probably be amazed that some humans (and large organisations) are given a platform for virtually unlimited storytelling at a scale that affects hundreds of millions, even billions, of people, and that delusional and misleading stories are let loose on the population of a species that is the local champion of cultural transmission on this planet.

Within growth based economics the effectiveness of marketing can never be good enough. Desperate corporations are hoping machine learning algorithms can take storytelling to yet another level. High frequency trading is one example of “successful” automated marketing, where algorithms try to trick each other into believing stories that are beyond human comprehension.

End of story?

If we continue to believe that the world is shaped exclusively by human delusions, then the human story may come to a fairly unspectacular end rather soon. It also won’t help us if we focus on building technologies that provide even more powerful delusions.

If there is anything that has led to significant improvements in human well-being and life expectancy in the last thousand years it would undoubtedly have to be model building and the scientific method. The power tools of systematic experimentation and modelling facilitated much of what we call progress but they also facilitated dangerous social games at a planetary scale.


Just as medical science no longer relies on unsubstantiated stories, the stories that we tell each other in business, government, and academic administration need to be subjected to critical analysis. The public needs to be made aware of the evidence (or lack thereof) that underpins the claims of politicians and corporate executives, so that experiments are clearly identified and, most importantly, carefully monitored and subjected to independent review before being sold as solutions.

In this context lessons can be learned from the fast moving world of digital technology. On the positive side, the software development community is acutely aware of the need to conduct experiments; on the negative side, outside a few life-critical industries, the lack of rigour when conducting experiments in the development and deployment of new software solutions is embarrassing. In the software development community conducting multiple independent experiments is generally considered a waste of time, and the interests of financial investors determine the kinds of “solutions” that receive funding:

All human artefacts are technology. But beware of anybody who uses this term. Like “maturity” and “reality” and “progress”, the word “technology” has an agenda for your behaviour: usually what is being referred to as “technology” is something that somebody wants you to submit to.

“Technology” often implicitly refers to something you are expected to turn over to “the guys who understand it.” This is actually almost always a political move. Somebody wants you to give certain things to them to design and decide. Perhaps you should, but perhaps not.

– Ted Nelson, a pioneer of information technology, philosopher, and sociologist who coined the terms hypertext and hypermedia in 1963.


The software industry is an interesting economic subsystem for observing human social behaviour at large scale. Today this sector is interwoven with virtually all other economic subsystems and even with the most common tools that we use for communicating with each other.

David Graeber has analysed the phenomenon of “bullshit jobs” in detail.

“In the year 1930, John Maynard Keynes predicted that technology would have advanced sufficiently by century’s end that countries like Great Britain or the United States would achieve a 15-hour work week. There’s every reason to believe he was right. In technological terms, we are quite capable of this. And yet it didn’t happen. Instead, technology has been marshalled, if anything, to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless. Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it. …”

Silicon Valley innovation pop-culture?

Students of software engineering and computer science are often attracted by the idea of “innovation” and by the prospect of exciting creative work, contributing to the development of new services and products. The typical reality of software development has very little, if anything, to do with innovation and much more with building tools that support David Graeber’s “bullshit jobs” and Edward Bernays’ elitist “utopia” of conscious manipulation of the habits and opinions of the masses by a small number of “leaders” suffering from narcissistic personality disorder.


The culture within the software development community is shaped much less by mathematics and scientific knowledge about the physical world than by the psychology of persuasion – and by an anaemic conception of innovation based on social popularity and design principles that encourage planned obsolescence. A few years ago Alan Kay, a pioneer of object-oriented programming and windowing graphical user interface design, observed:

It used to be the case that people were admonished to “not re-invent the wheel”. We now live in an age that spends a lot of time “reinventing the flat tire!”

The flat tires come from the reinventors often not being in the same league as the original inventors. This is a symptom of a “pop culture” where identity and participation are much more important than progress. … In the US we are now embedded in a pop culture that has progressed far enough to seriously hurt places that hold “developed cultures”. This pervasiveness makes it hard to see anything else, and certainly makes it difficult for those who care what others think to put much value on anything but pop culture norms.

Mainstream software development practices are geared towards dealing with the characteristics of big ball of mud architectures and the reality of the curse of software maintenance.

Do we need a better language for model building?

Making model building accessible to a wider audience may require developing a cognitively simple visual language for articulating resource and information flows in living and economic systems in a format that is not influenced by any particular economic ideology.

Many of the languages of mathematics already make use of visual concept graphs. Digital devices open up the possibility of highly visual languages and user interfaces that enable everyone to create concept graphs that are formal in a mathematical sense, understandable for humans, and easily processable by software tools. The only formal foundations needed to implement such a visual language system are axioms from model theory, category theory, and domain theory.

In terms of usability, a formal software-mediated visual language system that takes into consideration human cognitive limits has the potential to:

  1. Improve the speed and quality of knowledge transfer between human domain experts
  2. Improve the speed and quality of knowledge transfer between human domain experts and software tools
  3. Facilitate innovative approaches to extracting human understandable semantics from informal textual artefacts, in a format that is easily processable by software tools
  4. Facilitate innovative approaches to unsupervised machine learning that deliver results in a format that is compatible with familiar representations used by human domain experts, enabling the construction of knowledge repositories capable of receiving inputs from:
    • human domain experts
    • informal textual sources of human knowledge
    • machine learning systems

All scientists, engineers, and technologists are familiar with a language that is more expressive and less ambiguous than spoken and written language. The language of concept graphs with highly domain and context-specific iconography regularly appears on white boards whenever two or more people from different disciplines engage in collaborative problem solving. Such languages can easily be formalised mathematically and can be used in conjunction with rigorous validation by example / experiments.
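To illustrate how little machinery such a formalisation requires, here is a minimal sketch of a concept graph with typed nodes and typed directed edges, equally processable by humans and by software tools. The categories, relation names, and the example fragment are illustrative assumptions, not part of any specific standard:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Concept:
        name: str
        category: str  # the domain-specific "type" of the concept

    @dataclass
    class ConceptGraph:
        concepts: set = field(default_factory=set)
        links: set = field(default_factory=set)  # (source, relation, target)

        def add_link(self, source: Concept, relation: str, target: Concept):
            """Record a typed, directed relationship between two concepts."""
            self.concepts.update({source, target})
            self.links.add((source, relation, target))

        def neighbours(self, concept: Concept):
            """All outgoing relationships of a concept."""
            return [(rel, tgt) for (src, rel, tgt) in self.links if src == concept]

    # A fragment of a cross-disciplinary whiteboard discussion
    graph = ConceptGraph()
    sensor = Concept("soil sensor", "device")
    reading = Concept("moisture reading", "measurement")
    graph.add_link(sensor, "produces", reading)
    print(graph.neighbours(sensor))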

Model building and digital correlation maps can go hand in hand

Machine learning need not result in opaque systems that are as difficult to understand as humans, and a formal visual language may represent the biggest breakthrough for improving the understanding between humans since the development of spoken language.

… And storytelling and social transmission need not result in a never-ending sequence of psychopathic social games if we get into the habit of explicitly tagging all stories with the available supporting evidence, so that untested ideas and attempts at corruption become easier to identify.

In all domains where decisions and actions may have significant impact on others and on the environment we live in, adopting a more autistic mindset in relation to human stories may improve human decision making. In the Asch conformity experiment, autists were found to resist changing their spontaneous judgement of an array of graphic lines, despite social pressure to conform to the erroneous judgement of an authoritative confederate.

Mathematics – the language of explanation and validation

Paul Lockhart describes mathematics as the art of explanation. He is correct. Mathematical proofs are the one type of storytelling that is committed to being entirely open regarding all assumptions and to systematically exploring all the possible implications of specific sets of assumptions. Foundational mathematical assumptions are usually referred to as axioms.

Formal proofs are parametrised formal stories (sequences of reasoning steps) that explore the possibilities of entire families of stories and their implications. Mathematical beauty is achieved when a complex family of stories can be described by a small elegant formal statement. Complexity does not melt away accidentally. It is distilled down to its essence by finding a “natural language” (or “model”) for the problem space represented by a family of formal stories.

A useful model encapsulates all relevant commonalities of the problem space – it provides an explanation that is understandable for anyone who is able to follow the reasoning steps leading to the model.

The more parameters and relationships between parameters come into play, the more difficult it typically is to uncover cognitively simple models that shed new light onto a particular problem space and the underlying assumptions. If a particular set of formal assumptions is found to have a correspondence in the physical or living world, the potential for positive and negative technological innovation can be profound.

Whether the positive or negative potential prevails is determined by the motivations, political moves, and stories told by those who claim credit for innovation.

Any hope of progress beyond stories?

From within a large organisation, culture is often perceived as static or very slow moving locally, whilst changes in the environment are perceived as dynamic and fast moving. This is an illusion. It is easy to lose sight of the bigger picture.

Outside the context of “work” the people within a large organisation are part of many other groups and part of the rapidly evolving “external” context. The larger an organisation, the greater the inertia of the internal organisational culture, and the faster this culture disconnects from the external cultural identities of employees, customers, and suppliers.

The resulting cognitive dissonance manifests itself in terms of low levels of employee engagement, high levels of mental illness, and the increasingly short life expectancy of large corporations. Group identities and concepts such as intelligence and success are cultural constructs that are subject to evolutionary pressure and phase transitions.

Marketing may well become a taboo in the not-too-distant future

Over the next few weeks, to my knowledge, there are at least four dedicated conferences on the topics of redefining intelligence, new economics, and cultural evolution.


Conference on Interdisciplinary Innovation and Collaboration

Melbourne, Australia – 2 September 2017
Auckland, New Zealand – 16 September 2017

These events are part of the quarterly CIIC unconference series, addressing challenges that go beyond the established framework of research in industry, government and academia. The workshops in September will build on the results from earlier workshops to explore the essence of humanity and how to construct organisations that perform a valuable function in the living world.

The historic record of societies and large organisations being aware of the limitations of their culture is highly unimpressive. Redefining intelligence is our chance to break out of self-destructive patterns of behaviour. It is a first step towards a better understanding of the positive and negative human potential within the ecological context of the planet.

More information on CIIC and the theme for the upcoming unconference:


Thanks to Pete Rive and Arthur Shelley at AUT and RMIT for providing CIIC with superb venues!

New Economy Conference

Brisbane, Australia – 1-3 September 2017

Building on the inaugural 2016 conference held in Sydney, the 2017 gathering invites people to come together to share stories of success, address challenges and join the broader movement so we can continue working together to build a ‘new’ economic system. The 2017 New Economy Conference will bring together hundreds of people and organisations to launch powerful new collective strategies for creating positive social and economic change, to achieve long term, liveable economies that fit within the productive capacity of a healthy environment.

More information on NENA and the Building the New Economy conference:


Thanks to Donnie Maclurcan and Tirrania Suhood for making me aware of this event!

Inaugural Cultural Evolution Society Conference

Jena, Germany – 13-15 September 2017

The Cultural Evolution Society supports evolutionary approaches to culture in humans and other animals. The society welcomes all who share this fundamental interest, including in the pursuit of basic research, teaching, or applied work. We are committed to fostering an integrative interdisciplinary community spanning traditional academic boundaries from across the social, psychological, and biological sciences, and including archaeology, computer science, economics, history, linguistics, mathematics, philosophy and religious studies. We also welcome practitioners from applied fields such as medicine and public health, psychiatry, community development, international relations, the agricultural sciences, and the sciences of past and present environmental change.

More information on CES and the related conference:


Thanks to Joe Brewer and his team for coordinating this unique event!

Beyond busyness as usual

I have always been irritated by people for whom business is first and foremost about “monetisation”. Extrinsically motivated busyness people are incapable of understanding any non-trivial innovation. The worship of monetisation often goes hand-in-hand with the introduction of so-called “organisational values” as hollow slogans, with no thoughts spared for how these values are going to be enacted, and how they might create something that people within and beyond the organisation actually recognise and appreciate as valuable.

Existing approaches like the highly popular business model canvas or the OMG’s business motivation model miss the bigger picture of cultural evolution in the context of zero marginal cost communication and assume a very traditional business mindset.

Value systems

We live in a context of rapid and multidimensional cultural evolution. A few years ago the need to agree on what constitutes a useful direction and the need to assess progress prompted me to design a simple modelling language for purpose and value systems.


The semantic lens is a simple tool for agreeing what is considered valuable, and it assists in identifying suitable metrics for keeping track of output or progress. As a nice side effect the metrics encouraged by the semantic lens prevent results from being dumbed down prematurely to easily corruptible monetary numbers.


Example of an instantiated semantic lens

The semantic lens is a visual language for describing human motivations. Four of the five core concepts directly relate to the outputs of human creativity and to nature, and the fifth concept is directly connected to the first four. The element of critical self-reflection invites the questioning of established values and the consideration of alternative candidate values.

A configured semantic lens assists in surfacing the cultural context and assumed value system that underpins the value proposition of a potential innovation. In the absence of an explicit value system it is impossible to reason about innovation in any meaningful way – the discussion is limited to thinking within the established cultural box and very easily deteriorates into a discussion of “ingenious ways of monetising data”.


The S23M semantic lens explains why S23M exists

  1. Critical self-reflection: regarding all other elements of the semantic lens (in no particular order) towards sustainability, resilience, and happiness
  2. Symbols: Co-creating organisations and systems which are understandable by future generations of humans and software tools
  3. Nature: Maximising biodiversity
  4. Artefacts: Minimising human generated waste
  5. Society: Creating a more human and neurodiversity friendly environment
    • Generating more trust – less surprising misunderstandings, more collaborative risk taking, less exploitation, more mutual aid
    • Generating more learning – more open knowledge sharing, less indirect language, less hierarchical control, deeper understanding
    • Generating more diversity – more tolerance of difference, less coercion, more curiosity
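A configured lens can also be captured as plain data for processing by software tools. The encoding below is only an assumption about one possible representation – the semantic lens itself is a visual notation:

    # Hypothetical plain-data encoding of the S23M semantic lens shown above
    s23m_lens = {
        "critical self-reflection": "question all other elements towards "
            "sustainability, resilience, and happiness",
        "symbols": "organisations and systems understandable by future "
            "generations of humans and software tools",
        "nature": "maximising biodiversity",
        "artefacts": "minimising human generated waste",
        "society": "a more human and neurodiversity friendly environment",
    }

    def unexamined_values(initiative, lens):
        """Lens elements for which an initiative declares no contribution."""
        return [element for element in lens if element not in initiative]

    # Surfacing the value dimensions a proposal has not yet considered
    proposal = {"artefacts": "packaging redesign halves production waste"}
    print(unexamined_values(proposal, s23m_lens))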

We are in the business of strengthening / weakening specific feedback loops

The S23M semantic lens is supported by 26 principles that form the backbone of our operating model, and which assist us in building out a unique niche in the living world.

Value creating activities

To go beyond motivations and intent, and to describe the value creating activities within an economic system, or the activities of a specific organisation or individual economic agent, requires a dedicated modelling language beyond the semantic lens.


Understanding the human value creation process is not helped by the multitude of completely arbitrary and internally overlapping categorisation schemes that economists and business people use to talk about industries and sectors. The logistic lens has the potential to put an end to the distracting proliferation of jargon via five simple categories. In the logistic lens models can be nested in a fractal structure as needed to reflect the reality of complex systems.

Four of the five core concepts of the logistic lens deal with activities that produce observable results in the physical and natural environment, and all human cultural activities that are one or more levels removed from being measurable in the physical and natural environment are confined to the culture concept.

  1. Energy and food production provide the fuel for all our human endeavours.
  2. Design and engineering are the focus of many human creative endeavours, and have resulted in the tools that power our societies.
  3. Transportation and communication allow human outputs, both in terms of concrete and abstract artefacts, to be shared and made available to others, and allow resources and knowledge to be deployed wherever they are needed.
  4. Maintenance and quality related activities are those that are needed to keep human societies and human designed technologies operational.

Example of an instantiated logistic lens to structure and optimise activities within a given culture
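The fractal nesting can be illustrated with a short sketch; the five category names mirror the list above (with culture as the fifth concept), and the example activities are illustrative assumptions:

    # The five logistic lens categories (culture being the fifth concept)
    CATEGORIES = {
        "energy and food",
        "design and engineering",
        "transportation and communication",
        "maintenance and quality",
        "culture",
    }

    def activity(name, category, parts=()):
        """A categorised activity that may contain nested sub-activities."""
        assert category in CATEGORIES, f"unknown category: {category}"
        return {"name": name, "category": category, "parts": list(parts)}

    # A hypothetical economic agent, modelled at two levels of nesting
    farm = activity("market garden", "energy and food", parts=[
        activity("irrigation design", "design and engineering"),
        activity("produce delivery", "transportation and communication"),
        activity("soil regeneration", "maintenance and quality"),
        activity("seasonal harvest festival", "culture"),
    ])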

Economic progress and value creation can be understood in terms of the cultural activities of playing and learning, and related design and engineering activities that lead to technological innovation.

Truly disruptive innovations not only result in a new player in the economic landscape, they also trigger or tap into a shift in value systems. Thus the semantic lens is a useful gauge for identifying and exploring potentially disruptive innovations.

Taken together the semantic and logistic lenses provide a very small and powerful language for reasoning about human behaviour and human creativity – even beyond the confines of established social norms and best business practices.

Innovation and cultural change can only be transformative if it substantially redefines social norms and so-called best practice.

The antidote to misuse of mathematics and junk data

Depending on who you ask, perceptions of mathematics range from an esoteric discipline that has little relevance to everyday life to a collection of magical rituals and tools that shape the operations of human cultures. In an age of exponentially increasing data volumes, the public perception has increasingly shifted towards the latter perspective.

On the one hand it is nice to see a greater appreciation for the role of mathematics, and on the other hand the growing use of mathematical techniques has led to a set of cognitive blind spots in human society:

  1. Blind use of mathematical formalisms – magical rituals
  2. Blind use of second hand data – unvalidated inputs
  3. Blind use of implicit assumptions – unvalidated assumptions
  4. Blind use of second hand algorithms – unvalidated software
  5. Blind use of terminology – implicit semantic integration
  6. Blind use of numbers – numbers with no sanity checks

Construction of formal models is no longer the exclusive domain of mathematicians, physical scientists, and engineers. Large and fast flowing data streams from very large networks of devices and sensors have popularised the discipline of data science, which is mostly practiced within corporations, within constraints dictated by business imperatives, and mostly without external and independent supervision.

The most worrying aspect of corporate data science is the power that corporations can wield over the interpretation of social data, and the corresponding lack of power of those that produce and share social data. The power imbalance between corporations and society is facilitated by the six cognitive blind spots, which affect the construction of formal models and their technological implementations in multiple ways:

  1. Magical rituals lead to a lack of understanding of algorithm convergence criteria and limits of applicability, to suboptimal results, and to invalid conclusions. Examples: Naive use of frequentist statistical techniques and incorrect interpretations of p-values by social scientists, or naive use of numerical algorithms by developers of machine learning algorithms.
  2. Unvalidated inputs open the door for poor measurements and questionable sampling techniques. Examples: use of data sets collected by a range of different instruments with unspecified characteristics, or incorrect priors in Bayesian probabilistic models.
  3. Unvalidated assumptions enable the use of speculative causal relationships, simplistic assumptions about human nature, and create a platform for ideological bias. Examples: many economic models rest on outdated assumptions about human behaviour, and consciously ignore evidence from other disciplines that conflicts with established economic dogma.
  4. Unvalidated software can produce invalid results, contradictions, and unexpected error conditions. Examples: outages of digital services from banks and telecommunications service providers are often treated as unavoidable, and computational errors sometimes cost hundreds of millions of dollars or hundreds of lives.
  5. Unvalidated semantic links between mathematical formalisms, data, assumptions and software facilitate further bias and spurious complexity. Examples: Many case studies show that formalisation of semantic links and systematic elimination of spurious complexity can reduce overall complexity by factors between 3 and 20, whilst improving computational performance.
  6. Unvalidated numbers can enable order of magnitude mistakes and obvious data patterns to remain undetected. Example: Without adequate visual representations, even simple numbers can be very confusing for a numerically challenged audience.

Whilst a corporation may not have an explicit agenda for creating distorted and dangerously misleading models, the mechanics of financial economics create an irresistible temptation to optimise corporate profit by systematically shifting economic externalities into cognitive blind spots. A similar logic applies to government departments that have been tasked to meet numerically specified objectives.

Mathematical understanding and numerical literacy are becoming increasingly important, but it is unrealistic to assume that the majority of the population will become sufficiently proficient in mathematics and statistics to be able to validate and critique the formal models employed by corporations and governments. Transparency, including open science, open data, and open source software, is emerging as an essential set of tools for independent oversight of cognitive blind spots:

  1. Mathematicians must be able to review the formalisms that are being used
  2. Statisticians must be able to review measurement techniques and input data sources
  3. Scientists and experts from disciplines relevant to the problem domain must be able to review assumptions
  4. Software engineers familiar with the software tools that are being used must be able to review software implementations
  5. Mathematicians with an understanding of category theory, model theory, denotational semantics, and conceptual modelling must be able to review semantic links between mathematical formalisms, terminology, data, assumptions, and software
  6. Mathematicians and statisticians must be able to review data representations

In a zero marginal cost society, transparency allows scarce and highly specialised mathematical knowledge to be used for the benefit of society. It is very encouraging to note the similarity in knowledge sharing culture between the mathematical community and the open source software community, and to note the decreasing relevance of opaque closed source software.

The more society depends on decisions made with the help of mathematical models, the more important it becomes that these decisions adequately accommodate the concrete needs of individuals and local communities, and that the language used to reason about economics remains understandable, and enables the articulation of economic goals in simple terms.

The big human battle of this century

The big human battle of this century is going to be the democratisation of data and all forms of knowledge, and the introduction of digital government with the help of free and open source software

Whilst the planet’s reaction to the explosion of human activity – climate change and other symptoms – is undoubtedly the largest change process in the physical realm that has ever occurred in human history, the exponential growth of the Internet of Things and of digital information flows is triggering the largest change process in the realm of human organisation that societies have ever experienced.

The digital realm


Sensor networks and pervasive use of RFID tags are generating a flood of data and lively machine-to-machine chatter. Machines have replaced humans as the most social species on the planet, and this must inform the approach to the development of healthy economic ecosystems.


Sensors that are part of the Internet of Things

When data scientists and automation engineers collaborate with human domain experts in various disciplines, machine-generated data is the magic ingredient for solving the hardest automation problems.

  • In domains such as manufacturing and logistics the writing is on the wall. Introduction of self-driving vehicles and just a few more robots on the shop floor will eliminate the human element in the social chatter at the workplace within the next 10 years.
  • The medical field is being revolutionised by the downward spiral of the cost of genetic analysis, and by the development of medical robots and medical devices that are hooked up to the Internet, paving the way for machine learning algorithms and big data to replace many of the interactions with human medical professionals.
  • The road ahead for the provision of government services is clearly digital. It is conceivable that established bureaucracies can resist the trend to digitisation for a few years, but any delay will not prevent the inevitability of automation.

The social implications

Data driven automation leads to an entirely new perspective on the purpose of the education system and on the role of work and employment in society.

Large global surveys show that more than 70% of employees are disengaged at work. It is mainly in manufacturing that automation directly replaces human labour. In many other fields the shift in responsibilities from humans to machines initially goes hand in hand with the invention of new roles and a loss of clear purpose.

Traditional work is being transformed into a job for a machine. Exceptions are few and far between.

Data that is not sufficiently accessible is only of very limited value to society. The most beneficial and disruptive data driven innovations are those that result from the creative combination of data sets from two or more different sources.

It is unrealistic to assume that the most creative minds can be found via the traditional channel of employment, and it is unrealistic to expect that such minds can achieve the best results if data is locked up in organisation-specific or national silos.

The most valuable data is data that has been meticulously validated, and that is made available in the public domain. It is no coincidence that software, data, and innovation are increasingly produced in the public domain. Jeremy Rifkin describes the emergence of a third mode of commons-based digitally networked production that is distinct from the property- and contract-based modes of firms and markets.

The education system has a major role to play in creating data literate citizen-scientists-innovators.

The role of economics

It is worthwhile remembering the origin of the word economics. It used to denote the rules for good household management. On a planet that hosts life, household management occurs at all levels of scale, from the activities of single cells right up to processes that involve the entire planetary ecosystem. Human economics is part of a much bigger picture that has always included biological economics and that now also includes economics in the digital realm.

To be able to reason about economics at a planetary level the planet needs a language for reasoning about economic ecosystems, only some of which may contain humans. Ideally such a language should be understandable by humans, but must also be capable of reaching beyond the scope of human socio-economic systems. In particular the language must not be coloured by any concrete human culture or economic ideology, and must be able to represent dependencies and feedback loops at all levels of scale, as well as feedback loops between levels of scale, to enable adequate representation of the fractal characteristic of nature.

The digital extension of the planetary nervous system

In biology the use of electrical impulses for communication is largely confined to communication within individual organisms, and communication between organisms is largely handled via electromagnetic waves (light, heat), pressure waves (sound), and chemicals (key-lock combinations of molecules).

The emergence of the Internet of Things is adding to the communication between human made devices, which in turn interact with the local biological environment via sensors and actuators. The impact of this development is hard to overestimate. The number of “tangible” things that might be computerised is approaching 200 billion, and this number does not include large sensor networks that are being rolled out by scientists in cities and in the natural environment. Scientists are talking about trillion-sensor networks within 10 years. The number of sensors in mobile devices is already more than 50 billion.

Compared to chemical communication channels between organisms, the speed of digital communication is orders of magnitude faster. The overall effect of equipping the planet with a ubiquitous digital nervous system is comparable to the evolution of animals with nervous systems and brains – it opens up completely new possibilities for household management at all levels of scale.

The complexity of the Internet of Things that is emerging on the horizon over the next decade is comparable to the complexity of the human brain, and the volume of data flows handled by the network is orders of magnitude larger than anything a human brain is able to handle.

The global brain

Over the course of the last two centuries, starting with the installation of the first telegraph lines, humans have embarked on the journey of equipping the planet with a digital electronic brain. To most human observers this effort has only become reasonably obvious with the rise of the Web over the last 20 years.

Human perception and human thought processes are strongly biased towards time scales ranging from those that matter to humans on a daily basis up to the time scale of a human lifetime. Humans are largely blind to events and processes that occur in sub-second intervals and to processes that are sufficiently slow. Similarly, human perception is strongly biased towards living and physical entities that are comparable to the physical size of humans, plus or minus two orders of magnitude.

As a result of their cognitive limitations and biases, humans are challenged to understand non-human intelligences that operate in the natural world at different scales of time and different scales of size, such as ant colonies and the behaviour of networks of plants and microorganisms. Humans need to take several steps back in order to appreciate that intelligence may not only exist at human scales of size and time.

The extreme loss of biodiversity that characterises the Anthropocene should be a warning, as it highlights the extent of human ignorance regarding the knowledge and intelligence that evolution has produced over a period of several billion years.

It is completely misleading to attempt to attach a price tag to the loss of biodiversity. Whole ecosystems are being lost – each such loss is the loss of a dynamic and resilient living system of accumulated local biological knowledge and wisdom.

Just like an individual human is a complex adaptive system, the planet as a whole is a complex adaptive system. All intelligent systems, whether biological or human created, contain representations of themselves, and they use these representations to generate goal directed behaviour. Examples of intelligent systems include not only individual organisms, but also large scale and long-lived entities such as native forests, ant colonies, and coral reefs. The reflexive representations of these systems are encoded primarily in living DNA.

From an external perspective it almost seems as if the planetary biological brain – powerful, but thinking slowly in chemical and biological signals over thousands of years – has shaped the evolution of humans for the specific purpose of developing and deploying a faster thinking global digital brain.

It is delusional to think that humans are in control of what they are creating. The planet is in the process of teaching humans about their role in its development, and some humans are starting to respond to the feedback. Feedback loops across different levels of scale and time are hard for humans to identify and understand, but that does not mean that they do not exist.

The global digital brain is still under development, not unlike the brain of a human baby before birth. All corners of the planet are being wired up and connected to sensors and actuators. The level of resilience of the overall network depends on the levels of decentralisation, redundancy, and variability within the network. A hierarchical structure of subsystems as envisaged by technologist Ray Kurzweil is influenced by elements of established economic ideology rather than by the resilient neural designs found in biology. A hierarchical global brain would likely suffer from recurring outages and from a lack of behavioural plasticity, not unlike the Cloud services from Microsoft and Amazon that define the current technological landscape.

Global thinking

The ideology of economic globalisation is dominated by simplistic and flawed assumptions. In particular the concepts of money and globally convertible currencies are no longer helpful and have become counter-productive. The limitations of the monetary system are best understood by examining the historic context in which money and currencies were invented, which predates the development of digital networks by several thousand years. At the time a simple and crude metric in the form of money was the best technology available to store information about economic flows.

As the number of humans has exploded, and as human societies have learned to harness energy in the form of fossil fuels to accelerate and automate manufacturing processes, the old monetary metrics have become less and less helpful as economic signals. In particular the impact of economic externalities that are ignored by the old metrics, both in the natural environment as well as in the human social sphere, is becoming increasingly obvious.

The global digital brain allows flows of energy, physical resources, and economic goods to be tracked in minute detail, without resorting to crude monetary metrics and assumptions of fungibility that open the door to suppressing inconvenient externalities.

A new form of global thinking is required that is not confined to the limited perspective of financial economics. The notions of fungibility and capital gains need to be replaced with the notions of collaborative economics and zero-waste cycles of economic flows.

Metrics are still required, but the new metrics must provide a direct and undistorted representation of flows of energy, physical resources, and economic goods. Such highly context specific metrics enable computational simulation and optimisation of zero-waste economics. Their role is similar to the role of chemical signalling substances used by biological organisms.
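A sketch of such a context-specific metric: a ledger of physical flows with a simple balance check, rather than a single monetary figure. The process names, quantities, and units are illustrative assumptions:

    # Each flow: (source process, destination process, resource, quantity, unit)
    flows = [
        ("orchard", "packing shed", "apples", 1200.0, "kg"),
        ("packing shed", "market", "apples", 1100.0, "kg"),
        ("packing shed", "compost", "apple waste", 100.0, "kg"),
    ]

    def unaccounted_material(node, flows):
        """Material entering a process that is not matched by outgoing flows -
        a candidate externality in a zero-waste economy."""
        inflow = sum(qty for (_, dst, _, qty, _) in flows if dst == node)
        outflow = sum(qty for (src, _, _, qty, _) in flows if src == node)
        return inflow - outflow

    print(unaccounted_material("packing shed", flows))  # 0.0 -> fully accounted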

Global thinking requires the extension of a zero-waste approach to economics to the planetary level – leaving no room for any known externalities, and encouraging continuous monitoring to detect unknown externalities that may be affecting the planetary ecosystem.

The future of human economics

The real benefits of the global digital brain will be realised when massive amounts of machine generated data become accessible in the public domain, enabling disruptive innovation, and are used to solve complex optimisation problems in transportation networks, distributed generation and supply of power, healthcare, recycling of non-renewable resources, industrial automation, and agriculture.

Five years ago Tim O’Reilly predicted a war for control of the Web. The hype around big data has led many organisations to forget that the Web – and social media in particular – is already saturated with explicit and implicit marketing messages, and that there is an upper bound to the available time (attention) and money for discretionary purchases. A growing list of organisations is fighting over a very limited amount of potential revenue, unable to see the bigger picture of global economics.

Over the next decade one of the biggest challenges will be the required shift in organisational culture, away from simplistic monetisation of big data, towards collaboration and extensive data and knowledge sharing across disciplines and organisational boundaries. The social implications of advanced automation across entire economic ecosystems, and a corresponding necessary shift in the education system need to be addressed.

The future of humans

Human capabilities and limitations are under the spotlight. How long will it take for human minds to shift gears, away from the power politics and hierarchically organised societies that still reflect the cultural norms of our primate cousins, and from myopic human-centric economics, towards planetary economics that recognise the interconnectedness of life across space and time?

The future of democratic governance could be one where people vote for human-understandable open source legislation that is directly executable by intelligent software systems. Corporate and government politicians will no longer be deemed an essential part of human society. Instead, any concentration of power in human hands is likely to be recognised as an unacceptable risk to the welfare of society and the health of the planet.

Humans have to ask themselves whether they want to continue to be useful parts of the ecosystem of the planet or whether they prefer to take on the role of a genetic experiment that the planet switched on and off for a brief period in its development.

Quality of service in the digital age

Oh the irony. Last week I wrote an article on the role of service resilience in shaping a positive user experience, and today I am trying to use a basic digital service to charge up a mobile with credit before travelling overseas – only to receive the following notification, along these lines:

Dear customer, unfortunately the opening hours of our digital service are top secret.

Not even an indication of when it may be worthwhile trying again. The local 0800 number is also not of much help to a traveller. This particular incident is just one example of the typical quality of service in the digital realm. Last week, before this wonderful user experience, I wrote:

The digitisation of services that used to be delivered manually puts the spotlight on user experience as human interactions are replaced with human to software interactions. Organisations that are intending to transition to digital service delivery must consider all the implications from a customer’s perspective. The larger the number of customers, the more preparation is required, and the higher the demands in terms of resilience and scalability of service delivery. Organisations that do not think beyond the business-as-usual scenario of service delivery may find that customer satisfaction ratings can plummet rapidly.

Promises made in formal service level agreements are easily broken. Especially if a service provider operates a monopoly, the service provider has very little incentive to improve quality of service, and ignores the full downstream costs of outages incurred by service users.

All assurances made in service level agreements with external service providers need to be scrutinised. Seemingly straightforward claims such as 99.9% availability must be broken down into more meaningful assurances. Does 99.9% availability mean one outage of up to 9 hours per year, or a 10 minute outage per week, or a 90 second outage per day? Does the availability figure include or exclude any scheduled service maintenance windows?
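
The arithmetic behind these questions is easy to make explicit. A minimal sketch in Python (standard library only) converts an availability figure into the downtime it actually permits:

```python
# Convert an availability percentage into equivalent downtime budgets.
def downtime_budget(availability_pct):
    unavailable = 1 - availability_pct / 100
    return {
        "seconds per day": 24 * 3600 * unavailable,
        "minutes per week": 7 * 24 * 60 * unavailable,
        "hours per year": 365 * 24 * unavailable,
    }

for pct in (99.9, 99.99):
    print(pct, downtime_budget(pct))

# 99.9%  allows ~86 seconds/day, ~10 minutes/week, or ~8.8 hours/year
# 99.99% allows ~8.6 seconds/day, ~1 minute/week, or ~53 minutes/year
```

All three figures for 99.9% are the same 0.1% budget spread over different windows; what matters to customers is whether it arrives as one long outage or as many short ones.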

My recommendation to all operators of digital services: Compute the overall risk exposure to unavailability of services and make informed decisions on the level of service that must be offered to customers. As a rule, when transitioning from manual services to digital services, ensure that customers benefit from an increase in service availability. The convenience of close to 24×7 availability is an important factor to entice customers to use the digital channel.
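
As a back-of-the-envelope sketch of such a risk computation (all numbers are hypothetical):

```python
# Hypothetical annual risk exposure of a digital service to unavailability.
hours_down_per_year = 8.76      # the 99.9% downtime budget computed above
affected_customers = 50_000     # customers relying on the service
cost_per_customer_hour = 2.0    # downstream cost of an outage, $/customer/hour

risk_exposure = hours_down_per_year * affected_customers * cost_per_customer_hour
print(f"annual risk exposure: ${risk_exposure:,.0f}")  # annual risk exposure: $876,000
```

Even crude numbers of this kind make it possible to weigh the cost of additional redundancy against the cost of doing nothing.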

Big data blah $ blah $ blah $

Why does LinkedIn feed me big data hype from 2011?
By only talking about dollar metrics, potential big data intelligence is turned into junk data science.

blunt abstraction of native domain metrics into dollars is a source of junk data

All meaningful automation, quality, energy efficiency, and resilience metrics are obliterated by translating them into dollars. Good business decisions are made by understanding the implications of domain-specific metrics (a short sketch after the following list illustrates the contrast):
  1. Level of automation
  2. Volume of undesirable waste
  3. Energy use
  4. Reliability and other quality of service attributes
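
As a toy illustration (all field names and numbers are hypothetical), compare a dollar-only view of a process change with its native domain metrics:

```python
# Two snapshots of the same production line, before and after an "improvement".
before = {"cost_usd": 100_000, "automation_pct": 40, "waste_kg": 900,
          "energy_kwh": 50_000, "defect_rate_pct": 2.0}
after  = {"cost_usd":  85_000, "automation_pct": 40, "waste_kg": 900,
          "energy_kwh": 50_000, "defect_rate_pct": 3.5}

# A dollar-only report sees a 15% cost reduction and declares success.
print("cost delta:", after["cost_usd"] - before["cost_usd"])  # -15000

# The native metrics tell the real story: nothing structural improved,
# and quality got worse -- the savings came from cutting corners.
for key in before:
    if key != "cost_usd" and after[key] != before[key]:
        print(f"{key}: {before[key]} -> {after[key]}")
```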

Any practitioner of Kaizen knows that sustainable cost reductions are the result of improvements in concrete metrics that relate directly to the product that is being developed or the service that is being delivered. The same domain expertise that is useful for Kaizen can be combined with high quality external big data sources to produce insights that enable radical innovation.

Yes, often the results have a highly desirable effect on operating costs or sales, but the real value can only be understood in terms of native domain metrics. The healthcare domain is a good example. Minimising the costs of high quality healthcare is desirable, but only when patient outcomes and quality of care are not compromised.

When management consultants only talk about results in dollars, there is a real danger of goals only being expressed in financial terms. This leads down the slippery slope of tinkering with outcomes and accounting procedures until the desired numbers are within range. By the time experts start to ask questions about outcomes, and the missing native domain metrics expose the reduction in operational costs as a case of cutting corners, it is too late.

Before believing a big data case study, always look beyond the dollars. If in doubt, survey customers to confirm claims of improved outcomes and customer satisfaction. The referenced McKinsey article does not encourage corner cutting, but it fails to highlight the need for setting targets in native domain metrics, and it distracts the reader with blunt financial metrics.

Let’s talk semantics. Do you know what I mean?

Over the last few years the talk about search engine optimisation has given way to hype about semantic search.

context matters

The challenge with semantics is always context. Any useful form of semantic search would have to consider the context of a given search request. At a minimum, the following context variables are relevant: industry, organisation, product line, scientific discipline, project, and geography. When this context is known, a semantic search engine can realistically tackle the following use cases (a sketch of a context-carrying request follows the list):

  1. Looking up the natural language names or idioms that are in use to refer to a specific concept
  2. Looking for domain knowledge; i.e. looking for all concepts that are related to a given concept
  3. Investigating how a word or idiom is used in other industries, organisations, products, research projects, geographies; i.e. investigating the variability of a concept across industries, organisations, products, research projects, and geographies
  4. Looking up all the instances where a concept is used in Web content
  5. Investigating how established a specific word or idiom is in the scientific community, to distinguish between established terminology and fashionable marketing jargon
  6. Looking up the formal names that are used in database definitions, program code, and database content to refer to a specific concept
  7. Looking up all the instances where a concept is used in database definitions, program code, and database content
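
A minimal sketch of such a context-carrying request (hypothetical types; no real search engine exposes this interface today):

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical context-aware search request: the point is simply that the
# context variables are explicit rather than deduced from browsing history.
@dataclass
class SearchContext:
    industry: Optional[str] = None
    organisation: Optional[str] = None
    product_line: Optional[str] = None
    discipline: Optional[str] = None
    project: Optional[str] = None
    geography: Optional[str] = None

@dataclass
class SemanticQuery:
    concept: str  # a concept identifier, not a bag of words
    context: SearchContext = field(default_factory=SearchContext)

# Use case 3: investigate how a concept varies across industries.
query = SemanticQuery(concept="solution_architecture",
                      context=SearchContext(industry="banking", geography="NZ"))
```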

These use cases relate to the day-to-day work of many knowledge workers. The following presentation illustrates the challenges of semantic search, with examples that show how search based on concepts differs from search based on words.

semantic search

Do you know what I mean?

The current semantic Web is largely blind to the context parameters of industry, organisation, product line, scientific discipline, and project. Google, Microsoft, and other developers of search engines consider a fixed set of filter categories such as geography, time of publication, application, etc. and apply a more or less secret sauce to deduce further context from a user’s preferences and browsing history. This approach is fundamentally flawed:

  • Each search engine relies on an idiosyncratic interpretation of user preferences and browsing history to deduce the values of further context variables, and the user is only given limited tools for influencing the interpretation, for example by articulating “likes” and “dislikes”
  • Search engines rely on idiosyncratic algorithms for translating filters, “likes”, and “dislikes” into search engine semantics
  • Search engines are unaware of the specific intent of the user at a given point in time, and without more dynamic and explicit mechanisms for users to articulate intent, relying on a small set of filter categories, user preferences, and browsing history is a poor choice

The weak foundations of the “semantic Web”, which evolved from a 1994 keynote by Tim Berners-Lee, compound the problem:

“Adding semantics to the Web involves two things: allowing documents which have information in machine readable forms, and allowing links to be created with relationship values.”

The W3C standards developed subsequently are the result of design by committee, with the best of intentions.

All organisations that have high hopes of turning big data into gold should pause for a moment and consider the full implications of “garbage in, garbage out” in their particular context. Ambiguous data is not the only problem. Preconceived notions about semantics are another big problem. Implicit assumptions are easily baked into analytical problem statements, thereby confining the space of potential “insights” gained from data analysis to conclusions that are consistent with preconceived interpretations of so-called metadata.

The root cause of the limitations of state-of-the-art semantic search lies in the following implicit assumptions:

  • Text / natural language is the best mechanism for users to articulate intent, i.e. a reliance on words rather than concepts
  • The best mechanism to determine context is a limited set of filter categories, user preferences, and browsing history

words vs concepts

Semantic search will only improve if and when Web browsers rely on explicit user guidance to translate words into concepts before executing a search request. Furthermore, to reduce search complexity, a formal notion of semantic equivalence is essential.
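
As a toy sketch of both ideas (made-up data; a real system would let the user pick the intended concept interactively):

```python
# The same word can denote several concepts; the user disambiguates explicitly.
word_to_concepts = {
    "jaguar": ["concept:animal/jaguar", "concept:car/jaguar"],
    "big cat": ["concept:animal/jaguar"],
}

# Semantic equivalence: labels that denote the same concept are grouped into
# one equivalence class, which shrinks the space a search has to cover.
equivalence_classes = [
    {"concept:animal/jaguar", "concept:animal/panthera_onca"},
]

def resolve(word, user_choice):
    """Translate a word into the equivalence class of the chosen concept."""
    concept = word_to_concepts[word][user_choice]
    for cls in equivalence_classes:
        if concept in cls:
            return cls
    return {concept}

# The search request is then executed over concepts, not words:
print(sorted(resolve("jaguar", user_choice=0)))
# ['concept:animal/jaguar', 'concept:animal/panthera_onca']
```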

semantic equivalence

Lastly, the mapping between labels and semantics depends significantly on linguistic scope. For example, the meaning of solution architecture in organisation A is typically different from the meaning of solution architecture in organisation B.

linguistic scope

If the glacial speed of innovation in mainstream programming languages and tools is any indication, the main use case of semantic search is going to remain:

User looks for a product with features x, y, and z

The other use cases mentioned above may have to wait for another 10 years.