Understanding IoT (Part 2)

28.8.2018 | 11 min read

Summary of Part 1

In the first part of this blog post we postulated that an IoT device in general is an abstract real-world interface. Subsequently, a general definition of the concept of innovation was elaborated and we found that three aspects in particular are decisive for whether a novelty will prevail over time: maturity, accessibility and need, the latter meaning the size of the problem space that can be solved by the innovation. This thesis statement was further illustrated using the example of the so-called “Industrial Revolutions.” The first part was then concluded with the observation that humanity has today reached a point of technological advancement where at least two of the three characteristics that qualify a novelty as an innovation – maturity and accessibility – are fulfilled for the IoT.

The need for IoT

The need for IoT is somewhat complicated to explain, as it does not derive directly from, e.g., customer demand. Rather, it is a direct result of technological progress up to this point. To better understand this, it makes sense to once again take a look at the aforementioned “Industrial Revolutions”. As discussed in the first part of the post, each of the Industrial Revolutions had a unique, distinguishing characteristic at its core. For the first one, this was mechanization. The second one was electrification, followed by digitization for the third. What was left out of this discussion, however, was what kind of problem space (on a meta-level) they actually addressed. This will be elaborated in the following.

The first two revolutions mainly influenced the “blue collar” domain, as they solved simple (mechanization) and complicated (electrification) mechanical (as in “touchable”) problems. Digitization, on the other hand, primarily affected the “white collar” sector. Still, the problems it helped to solve fall into the same categories – simple and complicated. After assimilating these developments across the board, humanity became rather effective at automating repetitive tasks and those defined by intrinsic rules or interdependencies.

So, assuming that this development continues, where does it lead? So far, humankind has managed to find solutions for simple and complicated tasks, both in the mechanical and in the digital domain. With those largely solved, and based on a problem-driven motivation, the next logical step from a systems theory point of view is to open up the area of complex problems.

Now there is a significant distinction between simple and complicated problems on the one hand and complex problems on the other. While the former are characterized by an inherent linearity and can therefore be handled very effectively with rule-based cause-and-effect approaches, this does not hold for the latter. Complex systems are non-linear at their core. They require a test-driven methodology with short feedback cycles (comparable to an agile workflow, which is, in the end, nothing more than an approach to address exactly this issue). Yet, to generate feedback for a test, one needs metrics and, above all, measurement data. The better the data basis, the higher the quality of the conclusions that can be drawn from it. This means that as humanity moves up the complexity scale, the need for data grows exponentially.
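To make this concrete, the test-driven handling of a complex system can be sketched in a few lines: adjust a parameter, measure the outcome, keep only what the data supports. The following Python sketch is purely illustrative – the `measure` function is a made-up stand-in for any noisy real-world metric, not part of any actual system:

```python
import random

def measure(setting):
    # Hypothetical noisy real-world metric; the (unknown) optimum is at setting = 7.
    return -(setting - 7.0) ** 2 + random.gauss(0, 0.1)

def feedback_loop(setting=0.0, step=0.5, cycles=200):
    """Iteratively adjust a parameter based on measured feedback only --
    no rule-based cause-and-effect model of the system is assumed."""
    best_value = measure(setting)
    for _ in range(cycles):
        candidate = setting + random.choice([-step, step])
        value = measure(candidate)
        if value > best_value:  # keep only changes the data supports
            setting, best_value = candidate, value
    return setting

print(feedback_loop())
```

The point is that no model of the system is needed; the loop converges on a good setting purely through short measure-and-adjust cycles – which is exactly why the quality and quantity of measurement data matters so much in the complex domain.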

This, in consequence, requires an equally growing number of data sources, be it virtual ones as part of software systems (e.g. to analyze customer behavior in an online shop) or real-world interfaces for physical processes. From this point of view, the need for IoT derives directly from humankind’s problem-driven endeavor to conquer the domain of the complex.

Interestingly, this rather abstract reasoning directly matches current observations from the free market. At the latest, the rise of companies such as Amazon has demonstrated that a shift is currently taking place from an industrial to a post-industrial market. Along with it came a paradigm shift away from cost optimization towards customer orientation: it is no longer sufficient to simply minimize production costs. Instead, what matters most is adapting to the customer’s wishes and needs as fast as possible. For that, one needs feedback – ways to measure, e.g., user behavior and habits. Again, this development implies an increase in data sources. So here, too, the need for IoT is justified by the shift of the market from a complicated (industrial) to a complex (post-industrial) state.
The same trend can be observed in the manufacturing industry, where increased connectivity, data generation, and monitoring along the value creation flow – and even beyond it, to the end customer – lead to new levels of automation, flexibility, and customizability, as they allow for situation-aware and anticipatory action.

So, to summarize this post up to this point and to answer the questions formulated at the beginning of the first part: is IoT of (long-term) relevance? Definitely yes. Is the topic currently overhyped? Maybe. Is it the next multi-billion dollar market? Probably (at least for some industries). The last section of this post is going to focus on possible future developments in this domain.

Future developments and constitutive technologies

To complete this post after analyzing past and present developments in the previous sections, we will look at possible future trends in the field of IoT. This discussion should, however, be treated with caution, as the subject is time-variant in nature: it is an evolutionary process that is highly intertwined with many other domains rather than a rigid, encapsulated state. Even the best prediction is but an extrapolation of a complex system and thus, by definition, never completely accurate.

Now, when trying to extrapolate the future development of a process, there are two dimensions to consider: vertical and horizontal. The former describes the internal evolution of an individual subject, while the latter focuses more on the synergies between different topics and how they influence each other’s development. Of course, both dimensions are closely related, as they are ultimately two points of view on the same matter, but for reasoning purposes this separation can be quite helpful in terms of clarity.

As for the vertical development, the rough path is relatively clear. As discussed before, it is the shift from the complicated to the complex domain that causes the need for IoT, as a symptom of the exploding demand for data. Since this development is still at the very beginning and humanity has so far exploited only a small fraction of the problem space it can address, it is highly likely that the number of real-world interfaces will continue to increase exponentially in the near future.

In the long run, this development inevitably leads to what is known as ubiquitous computing, in an attempt to virtually map the entire complexity of the physical domain (just as is being done for simple and complicated tasks today). That means IT will be almost everywhere in the future, yet at the same time increasingly invisible, as it seamlessly integrates into our daily lives as part of the accustomed service landscape. Conversely, this implies that real-world interfaces will have a growing influence on how software is developed, just as IT will become more and more relevant for (non-software) engineering processes. In the long run, both historically separate domains will probably begin to merge to a certain degree (comparable to the traditional electrician and mechanic), e.g. leading to the rise of so-called Cyber-Physical Systems (CPS). In a sense, one could even label IoT a “domain breaker”. The long-term result of this development will probably be a much more holistic, problem-focused way of programming than is common today.

Even so, IoT itself will most likely not be the front-facing part. As stated before, it tackles the foundation – the data aggregation and (real-world) interfacing layer – and as such is more likely to take the role of a catalyst for technologies that build upon it, such as AI.

This leads directly to the horizontal dimension. Several current technologies build upon IoT and are in high synergy with it. An overview of some especially noteworthy topics follows:

  • Contextual Computing: The term “Contextual Computing” describes an environment that adapts its behavior depending on who is present or using it. This relates strongly to the ongoing trend of service orientation, as it adds another level of individuality to the user experience. Even though this field is still quite new, it is already common practice in software development – especially in web and mobile development – to have certain features adapt based on the person accessing the resource. This is usually closely connected to machine learning and behavioral analysis. An example would be the personalized purchase recommendations on e.g. Amazon or eBay.

    Recently, however, this trend has also been expanding more and more into the real-world domain. Automobile seats that automatically switch configurations depending on who is driving, or coffee machines that learn and recognize the user’s preferences, are just some of the more common examples. In this context, the second dimension of Contextual Computing – devices that adapt to their environment (instead of the other way around) – becomes interesting as well. In combination with the ongoing development towards ubiquitous computing, this trend allows service orientation to expand into the physical world.

  • Human-Centered Interfaces (HCI): Recent developments have shown once again that there are many use cases where textual input is clearly not the most suitable form of interaction. Instead, new forms of human-machine interfaces (HMIs) with the user at their center are being researched. A current example is the voice interface, e.g. to control smart home devices or to create reminders – in short, minor tasks where the interface relieves the user by not demanding his or her active focus.
    At the moment, this field is undergoing major development. A good summary of its current state, along with an interesting perspective on its future (with a strong focus on Brain-Machine Interfaces [BMIs]), can be found in [1]. In any case, IoT devices are the foundation for this development, as they represent the interface used to interact with the environment – both sensors and actuators.
  • Artificial Intelligence (AI): This might not be surprising for anyone who follows current trends in the software world, but it may very well be the most important horizontal development in terms of synergies with the continuous increase of IoT devices. In the end, both AI (or, more specifically, Machine Learning [ML]) and ubiquitous computing target the same goal: both aim to help conquer the complex domain – IoT on the data side and AI on the computing/logic side. Thus the harmony between these topics is, to a certain degree, almost inherent.

    The problem with IoT alone is that it provides access to a huge amount of data, but data itself is worthless if no information is extracted from it. As it is not possible to process quantities of this order of magnitude by hand, automated algorithms are used. Yet common algorithms for data processing reach their limits when it comes to complex interrelations, as it is an inherent characteristic of complexity that it cannot be grasped on the basis of rules (conversely, if “normal” algorithms were sufficient, the problem would not be complex in nature). This is where AI algorithms shine, because they are capable of learning and recognizing the complex connections inherent in the underlying data. Conversely, this kind of algorithm synergizes well with IoT: with increasing problem complexity and a rising number of input dimensions, there is an exponentially growing need for data to train the model.

    So, to sum it up, there are multiple areas in which both fields can greatly benefit from each other. Two examples that demonstrate this potential are AWS Greengrass (cf. [2]) and Azure IoT Edge (cf. [3]), both of which combine the concepts of IoT and AI to enable automated monitoring of live data streams.
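The underlying idea – automatically extracting information from a live sensor stream – can be illustrated without any particular platform. The following Python sketch is a hypothetical stand-in, not the Greengrass or IoT Edge API: it flags readings that deviate strongly from a rolling window, the simplest possible form of automated stream monitoring:

```python
from collections import deque
import math

def stream_monitor(readings, window=20, threshold=3.0):
    """Flag readings that deviate strongly from the recent rolling window --
    a minimal stand-in for the automated live-stream monitoring that edge
    platforms pair with trained models."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            std = math.sqrt(sum((x - mean) ** 2 for x in recent) / window)
            if std > 0 and abs(value - mean) / std > threshold:
                anomalies.append((i, value))
        recent.append(value)
    return anomalies

# A simulated temperature stream with one sensor fault at index 30.
stream = [20.0 + 0.1 * (i % 5) for i in range(60)]
stream[30] = 35.0
print(stream_monitor(stream))  # → [(30, 35.0)]
```

A real edge platform would replace the simple rolling statistics with a trained ML model deployed next to the data source, but the division of labor is the same: IoT delivers the data, the algorithm turns it into information.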


Conclusion

Although IoT is currently a somewhat hyped topic and there seem to be some misunderstandings regarding its inherent nature, it is nevertheless a subject of lasting relevance. As an element in the process of technological evolution, it fulfills the characteristics of an innovation. Furthermore, a strong need for it can be derived directly from the currently ongoing fundamental shift from the complicated to the complex domain – not only in the free market, but also in production, logistics, etc. Thus it is highly probable that IoT will drastically influence the (real-world) interfacing and data aggregation layer, serving as a foundation for many subsequent topics such as AI and eventually leading to ubiquitous computing as well as to a boost of technologies synergistically related to this domain. Which areas will be affected, and to what degree, will become apparent over time.
