
Tag Archives: integration

Automation and Orchestration – a Conclusion

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Automation and Orchestration are core capabilities in any IT landscape.

Traditionally, there’d be classical on-premises IT, composed of multiple enterprise applications and (partly) based on old-style architecture patterns like file exchange, asynchronous time-boxed export/import scenarios, and historic file formats.

At the same time, the era of Cloud hype has come to an end: Cloud is now ubiquitous, as present as the Internet itself has been for years, and its descendants – mobile, social, IoT – form the nexus for the new era of Digital Business.

For enterprises, this means an ever-increasing pace of innovation and a constant advance of business models and business processes. As this paper has outlined, automation and orchestration solutions form the core for IT landscapes to efficiently support businesses in their striving for constant innovation.

Let’s once again repeat the key findings of this paper:

  • Traditional “old-style” integration capabilities – such as file transfer, object orientation, or audit readiness – remain key criteria even for a cloud-ready automation platform.
  • In an era where cloud has become a commodity, just like the internet itself, service-centered IT landscapes demand maximum scalability and adaptability as well as multi-tenancy in order to create a service-oriented ecosystem for the advancement of the businesses using it.
  • Security, maximum availability, and centralized management and control are fundamental necessities for transforming an IT environment into an integrated service center supporting business expansion, transformation, and growth.
  • Service orchestration might be the ultimate goal for an IT landscape, but system orchestration is a first step towards creating an abstraction layer between basic IT systems and business-oriented IT services.

Therefore, for IT leaders, choosing the right automation and orchestration solution to support the business efficiently may be the most crucial decision in either becoming a differentiator and true innovation leader or (just) remaining the head of a solid – yet commodity – enterprise IT.

The CIO of the future is a Chief Innovation (rather than “Information”) Officer – and Automation and Orchestration together form the core basis for innovation. Outlining what to look at in arriving at the right make-or-buy decision was the main purpose of this paper.

 


System Orchestrator Architecture

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

This final chapter addresses the architecture components of typical system orchestrators; comparing these with the blueprints for high-grade, innovative automation solutions presented earlier in this paper reveals an obvious close resemblance between automation and system orchestration patterns.

The first figure below is a deployment architecture diagram describing the main physical components of a system orchestrator:

System Orchestration Deployment Architecture

Note that:

  1. The database normally needs to be set up in clustered mode for high availability; most orchestrator solutions rely fully on the database (at least at design time).
  2. The Management Server’s deployment architecture depends on the availability requirements for management and control.
  3. The Runtime Server nodes should be highly distributed (ideally geographically dispersed); the better the product architecture supports this, the more reliably orchestration will support IT operations.
  4. The Web Service deployment depends on availability and web service API needs (product- and requirement-dependent).

Logical architecture

The logical architecture builds on the previous description of the deployment architecture and outlines the different building blocks of the orchestration solution. The logical architecture diagram is depicted in the following figure:

System Orchestration Logical Architecture

Notes to “logical architecture” figure:

  1. The Orchestrator DB holds runtime and design-time orchestration flows, action packs, activities, plugins, logs, …
  2. The Management Server controls access to orchestration artefacts.
  3. The Runtime Server provides the execution environment for orchestration flows.
  4. The Orchestration Designer (backend) provides the environment for creating orchestration flows using artefacts from the database (depending on the specific product architecture, the designer and management components could be integrated).
  5. The Web Service exposes the Orchestrator’s functionality to external consumers (ideally via REST; see the sketch after these notes).
  6. Action packs or plugins are introduced through installation at design time (normally integrated into the DB).
  7. The Orchestrator’s admin console is ideally implemented as a web application, hence accessible via browser.
  8. The Design Client UI could either be web-based or a dedicated client application installed locally, using a specific protocol for backend communication.

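To illustrate note 5, here is a minimal sketch – deliberately not any specific product’s API – of how an orchestrator’s web service layer might expose flows via REST. The endpoint paths, the run_flow() helper, and the choice of Flask are assumptions for illustration only:

```python
# Minimal sketch of a REST facade for an orchestrator (hypothetical endpoints).
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for design-time artefacts held in the orchestrator DB.
FLOWS = {"nightly-backup": {"id": "nightly-backup", "status": "idle"}}

def run_flow(flow_id, params):
    # Placeholder: a real orchestrator would delegate to a runtime server node.
    return {"flow": flow_id, "params": params, "state": "queued"}

@app.route("/api/flows", methods=["GET"])
def list_flows():
    return jsonify(list(FLOWS.values()))

@app.route("/api/flows/<flow_id>/executions", methods=["POST"])
def execute_flow(flow_id):
    if flow_id not in FLOWS:
        return jsonify({"error": "unknown flow"}), 404
    execution = run_flow(flow_id, request.get_json(silent=True) or {})
    return jsonify(execution), 202

if __name__ == "__main__":
    app.run(port=8080)
```
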
Of course, these building blocks can vary from product to product. However, what remains crucial to successful orchestration operations (much as with automation) is to have lightweight, scalable runtime components capable of supporting a small-scale, low-footprint deployment just as efficiently as a large-scale, multi-site, highly distributed orchestration solution.

 


System versus Service Orchestration

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

One of the most well-known blueprints for service orchestration is its representation as seen from the perspective of service-oriented architecture (SOA).

The following figure in principle describes this viewpoint:

Service Orchestration, defined

Operational components – such as “commercial off-the-shelf” (COTS) or custom applications, possibly with a high level of automated functionality (see previous chapters) – are orchestrated into simple application services (service components), which in turn are aggregated into atomic or composite IT services that subsequently support the execution of business processes. The latter are presented to consumers of various kinds without directly disclosing any of the underlying services or applications. In well-established service orchestration, functionality is often defined top-down: business processes are modelled and their requirements defined first, and the necessary services are then leveraged or composed to fulfil the process’s needs.

A different approach is derived from typical definitions in cloud frameworks; the following figure shows this approach:

System Orchestration: Context

Here, the emphasis lies on the automation layer, which forms the core aggregation. The orchestration layer on top creates the system and application services needed by the framework to execute its functional and operational processes.

The latter approach can be seen as a subset of the former, which will become clearer when discussing the essential differences between system and service orchestration.

Differences between system and service orchestration

System Orchestration

  • could in essence consist of an advanced automation engine
  • leverages atomic automation blocks
  • eases the task of automating (complex) application and service operations
  • often directly supports OS scripting
  • supports application programming interfaces (APIs) through a set of plugins
  • may offer a REST-based API itself for integration and SOA

Service Orchestration

  • uses SOA patterns
  • is mostly message-oriented (focuses on the exchange of messages between services)
  • supports message topics and queues
  • leverages a message broker and an (enterprise) service bus
  • can leverage and provide APIs
  • composes low-level services into higher-level, business-process-oriented services

Vendor examples of the former are vRealize Orchestrator, HP Operations Orchestration, Automic ONE Automation, BMC Atrium, System Center Orchestrator, and ServiceNow (unsurprisingly, some of these products have an essential say in the field of automation as well).

Service orchestration examples would be vendors or products like TIBCO, MuleSoft, WSO2, Microsoft BizTalk, OpenText Cordys or Oracle Fusion.

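To make the contrast tangible, the following sketch shows the two styles side by side; all script paths, queue names, and the choice of RabbitMQ’s pika client (standing in for any message broker) are assumptions for illustration:

```python
# Hypothetical side-by-side sketch: system vs. service orchestration styles.
import subprocess

import pika  # RabbitMQ client, standing in for any message broker

def run_cleanup_on_host():
    # System orchestration style: directly drive an atomic OS-level
    # automation block on a concrete system.
    subprocess.run(["sh", "/opt/automation/cleanup.sh"], check=True)

def publish_order_event(payload: bytes):
    # Service orchestration style: exchange messages between services via
    # a broker and queues instead of touching operating systems directly.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="orders")
    channel.basic_publish(exchange="", routing_key="orders", body=payload)
    connection.close()
```
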
System orchestration key features

System orchestrators are mainly required to support a huge variety of underlying applications and IT services in a highly flexible and scalable way:

  • OS-neutral installation (depending on specific infrastructure operations requirements)
  • Clustering or multi-node setup for scalability and availability
  • Ease of use; low entry threshold for orchestration/automation developers
  • Support quality; support ecosystem (community, online support access, etc.)
  • Minimal database dependency; major databases supported equally
  • Built-in business continuity support (backup/restore without major effort)
  • Northbound integrability: REST API
  • Southbound integrability and extensibility: either built-in, by leveraging APIs, or by means of a plugin ecosystem
  • Plugin SDK to support vendor-external plugin development (see the sketch after this list)
  • Scripting possible but not necessarily needed
  • Ease of orchestrating vendor-external services (as vendor-neutral as possible, depending on the landscape to be orchestrated/integrated)
  • Self-orchestration possible
  • Cloud orchestration: seamless integration with major public cloud vendors

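As a rough illustration of the plugin SDK item above – purely an assumed interface, not any vendor’s actual SDK – a plugin contract might look like this:

```python
# Hypothetical plugin contract for a system orchestrator's SDK.
from abc import ABC, abstractmethod
from typing import Any, Dict

class ActionPlugin(ABC):
    """Contract that vendor-external plugins would implement."""

    name: str = "base"

    @abstractmethod
    def validate(self, params: Dict[str, Any]) -> None:
        """Raise ValueError if the supplied parameters are invalid."""

    @abstractmethod
    def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
        """Run the action against the target system and return its result."""

class RestCallPlugin(ActionPlugin):
    """Example plugin: perform a GET against a target system's REST API."""

    name = "rest-call"

    def validate(self, params):
        if "url" not in params:
            raise ValueError("missing 'url'")

    def execute(self, params):
        import urllib.request
        with urllib.request.urlopen(params["url"]) as response:
            return {"status": response.status, "body": response.read()[:1024]}
```
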
Main requirements for a service orchestrator

In contrast to the above, service orchestration solutions focus mainly on message handling and integration, since their main purpose is to aggregate lower-level application services into higher-level composite services that support business process execution. Typical demands on such a product would therefore involve:

  • Support for major web service protocol standards (SOAP, REST)
  • Support for “old-style” enterprise integration technologies (RMI, CORBA, (S)FTP, EDI, …) to integrate legacy applications
  • A central service registry
  • Resilient message handling (mediation, completion, message persistence, …)
  • Flexible and easy-to-integrate data mapping based on modelling and XSLT (see the sketch below)
  • Message routing and distribution through topics, queues, etc.
  • An integrated API management solution
  • An integrated business process modelling (BPM) solution
  • An integrated business activity monitoring (BAM) solution
  • Extensibility through low-threshold, commonly accepted software development technologies

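As a tiny illustration of XSLT-based data mapping (the message format, field names, and stylesheet are invented for this example), a mapper built on lxml could transform an incoming legacy message into a canonical service payload:

```python
# Minimal XSLT data-mapping sketch using lxml (hypothetical message format).
from lxml import etree

STYLESHEET = etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/order">
    <serviceRequest>
      <customer><xsl:value-of select="custNo"/></customer>
      <amount><xsl:value-of select="total"/></amount>
    </serviceRequest>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(STYLESHEET)
incoming = etree.XML(b"<order><custNo>4711</custNo><total>99.90</total></order>")
print(etree.tostring(transform(incoming), pretty_print=True).decode())
```
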
As a rule of thumb to delineate the two effectively from each other: it is – to a certain extent – possible to create service orchestration by means of a system orchestrator, but it is (mostly) impossible to do system orchestration with only a service orchestrator at hand.

For this reason, we will continue with a focus on system orchestration as a way to leverage basic IT automation for the benefit of higher level IT services, and will address vanilla architectures for typical system orchestrator deployments.

 


Homogeneous end-to-end Automation Integration

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Whether looking to extend existing IT service capabilities through innovative service orchestration and delivery, or trying to increase the level of automation within the environment, one would always want to examine the following core features of a prospective automation solution:

  • Automation
  • Orchestration
  • Provisioning
  • Service definition and catalogue
  • Onboarding and subscription
  • Monitoring and metering
  • Reporting and billing

Though not all of those might be utilized at once, the automation solution will definitely play a major role in aggregating them to support the business processes of an entire enterprise.

Either way, homogeneity represents a key element when it comes to determining the right solution, with the right approach and the right capabilities.

Homogeneous UX for all integrations

First, the automation platform one chooses must offer a unified user experience (UX) across all targeted applications. This doesn’t mean that the exact same user interface needs to be presented for every component in the system; it’s more important that there is a unified pattern across all components. This should start with the central management elements of the solution and extend to both internal and external resources, such as the Automation Framework IDE for 3rd-party solutions discussed previously.

In addition, the core automation components must also match the same UX. Introducing an automation system with standard user interfaces and integration concepts ensures rapid implementation, since subject matter experts (SMEs) can focus on automating system processes rather than being bogged down with training on the automation solution itself.

A single platform for all automation

The more products that make up the automation solution, the greater the effort required to integrate them all into an IT landscape. Software and system architecture throughout history has never proposed one single technology, standard, or design guideline for all functional capabilities, non-functional requirements, or interface definitions. Therefore, a bundled system composed of multiple products will in 95% of cases come with a variety of inter-component interfaces that need to be configured separately from the centralized parameterization of the overall solution.

Project implementation experience shows that the learning curve associated with a solution is directly proportional to the number of different components contained in the automation suite (see figure below):

Automation: Adoption time by number of platform components

Functional integration with target systems

Finally, the solution’s core functionality should be able to integrate with target systems using industry standards such as common OS scripting languages, REST, and JMS, or quasi-standards of target applications like RPC and EJB. At minimum, the solution should support 90% of these industry standards out-of-the-box.

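As an illustration of such out-of-the-box integration (the management endpoint and script path are invented), a single automation action might drive a target system via REST and OS scripting alike; the widely used requests library stands in here for a built-in REST adapter:

```python
# Sketch: one automation action integrating a target system in two ways.
import subprocess

import requests  # common HTTP client, standing in for built-in REST adapters

def restart_app_server(base_url: str) -> None:
    # REST-based integration: call the target application's management API.
    response = requests.post(f"{base_url}/management/restart", timeout=30)
    response.raise_for_status()

def rotate_logs(script: str = "/opt/scripts/rotate_logs.sh") -> None:
    # OS-scripting integration: run an administrative script on the target.
    subprocess.run(["sh", script], check=True)
```
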
In addition, an enterprise-grade automation solution should provide:

  • Multiple action/workflow templates (either bundled with the core solution or available for purchase)
  • Easy implementation of integrations with target systems’ core functionality at a very detailed level – such as administrative scripting control from within the automation core through scripting integration (to be discussed in the following chapter)

 


Managing Tenants in Automation

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Another decision one needs to make during the selection process is whether the automation platform needs to support multi-tenancy and/or multi-client capability. How you choose can have a significant financial impact.

Multi-tenancy versus multi-client

Multi-tenancy is closely related to the Cloud. Although not strictly a cloud pattern, it has become one of the most discussed topics in application and service architectures due to the rise of Cloud Computing. That is because multi-tenancy ultimately enables some of the essential cloud characteristics, namely virtually endless scaling and resource pooling.

Multi-tenancy

Multi-tenancy partitions an application into virtual units. Each virtual unit serves one customer while being executed within a shared environment together with the other tenants’ units. It neither interacts nor conflicts with other virtual units, nor can a single virtual unit drain all resources from the shared environment in case of malfunction (resource pooling, resource sharing).

Multi-client

In contrast, a multi-client system splits an application into logical environments by separating functionality, management, object storage, and permission layers. This enables setting up a server that allows logons by different users, each having their own separate working environment while sharing common resources – file system, CPU, memory. However, in this environment there remains the possibility of users impacting each other’s work.

Importance of multi-tenancy and multi-client capability

These concepts are critical because of the need to provide separate working environments, object stores, and automation flows for different customers or business lines. Therefore, one should be looking for an automation solution that supports this capability out-of-the-box. In certain circumstances, you may not require strict customer segregation or the ability to offer pooling and sharing of resources out of one single environment; this clear differentiation might become a cost-influencing factor in such cases.

Independent units within one system

Whether your automation solution needs to be multi-tenant or not depends on the business case and usage scenario. Normally, in enterprise environments with major systems running on-premises, multi-tenancy is not a major requirement for an automation solution. Experience shows that even when automation systems are shared between multiple organizational units or automate multiple customers’ IT landscapes in an outsourcing scenario, multi-tenancy isn’t required, since management of all units and customers is controlled through the central administration and architecture.

Multi-client capabilities, though, are indeed a necessity in an enterprise-ready automation solution, as users from multiple different organizations want to work within the automation environment.

Multi-client capabilities would include the ability to (see the simplified sketch after this list):

  • Split a single automation solution instance into 1,000+ different logical units (clients)
  • Add clients on demand without downtime and without changing the underlying infrastructure
  • Segregate object permissions by client and enable user assignment to clients
  • Segregate automation objects and enable their assignment to specific clients
  • Allow delegation of centrally implemented automation workflows simply by assigning them to specific clients (assuming the required permissions have been set)
  • Re-use automation artifacts between clients (including clear and easy-to-use permission management)
  • Share the use of resources across clients (but not necessarily secure and scalable resource pooling across clients; see the differentiation above)

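A highly simplified sketch of what such client segregation could look like inside one instance (the data model and all names are invented for illustration):

```python
# Simplified sketch of multi-client segregation in one automation instance.
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class Client:
    """One logical unit (client) within a single automation instance."""
    name: str
    users: Set[str] = field(default_factory=set)
    workflows: Set[str] = field(default_factory=set)  # objects assigned here

class AutomationInstance:
    def __init__(self):
        self.clients: Dict[str, Client] = {}

    def add_client(self, name: str) -> Client:
        # Clients can be added on demand, without touching infrastructure.
        self.clients[name] = Client(name)
        return self.clients[name]

    def can_execute(self, client: str, user: str, workflow: str) -> bool:
        # Segregation check: the user must belong to the client, and the
        # workflow must have been assigned (delegated) to that client.
        c = self.clients[client]
        return user in c.users and workflow in c.workflows

instance = AutomationInstance()
crm = instance.add_client("client-crm")
crm.users.add("alice")
crm.workflows.add("nightly-backup")
assert instance.can_execute("client-crm", "alice", "nightly-backup")
```
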
Segregation of duties

Having multiple clients within one automation solution instance enables servicing multiple external as well as internal customers. This allows for quick adaptation to changing business needs. Each client can define separate automation templates, security regulations, and access to surrounding infrastructure. Having a simple transport/delegation mechanism between clients at hand allows implementing a multi-staging concept for the automation solution.


Integrated File Transfer still required for Automation

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

There is a wide range of sophistication in the IT systems that businesses operate. Some came online around 2000, while others have been in use much longer. Some businesses have constructed their systems to maintain high-quality services for years to come; still others constantly adapt their systems to take advantage of the latest technology.

Because of this wide disparity, an automation solution must be able to handle current and future innovations in integration, orchestration and performance. It must also be backwards compatible so it can support legacy technologies.

That said, one of the technologies an automation solution must support is file transfer between systems. Along with this, it must also support elaboration, interpretation, and transformation of file content to create new levels of automation integration for enterprise IT.

Experience with multiple customers shows that replacing legacy file transfer applications with state-of-the-art APIs is sometimes simply too time-consuming and costly. However, it is crucial that these legacy system capabilities are provided for in an automated and integrated IT landscape. Strangely enough, therefore, being able to address, process, interpret, and transfer files according to the demands and challenges of an automated IT environment is still a must-have criterion for an enterprise automation solution.

Support of multiple different file transfer protocols

FTP (file transfer protocol – see a list of variants here[1]) does not simply equal FTP: the following are the most common file transfer protocols still in use, all of which must be supported by the selected automation solution (a small usage sketch follows the list):

  • FTP: This is the standard protocol definition for transferring files in an insecure manner. When operating behind a firewall, using FTP for transporting files is convenient and needs to be an integrated feature of your enterprise automation solution.
  • FTPS: adds support for the “Transport Layer Security” (TLS) and “Secure Sockets Layer” (SSL) encryption protocols to FTP. Many enterprises rely on this type of protocol for security reasons, especially when moving beyond the internal network.
  • FTPES: differs from FTPS only in the timing of the encryption and the transfer of login information. It adds an additional safety control to FTPS-based file transfers.
  • SFTP: was added to the Secure Shell protocol (SSH) by the Internet Engineering Task Force (IETF)[2] in order to allow access, transfer, and management of files through any reliable (SSH) data stream.
In addition to supporting all of the above protocols, an automation solution can enhance file transfer integration in automation scenarios by offering direct endpoint-to-endpoint file transfer based on a proprietary protocol. Providing such a protocol removes the need to implement a central management engine solely to transport files from one system to another.

Standard FT protocols

The most convenient way to connect FTP-capable remote systems based on the protocols listed above is through a graphical UI that allows defining the transfer much the way it is done with standard FTP clients. The actual transfer itself is normally executed by means of a dedicated adapter, only initiated by the centrally managed and executed automation workflows. To comply with security requirements limiting login information to top-level administrators only, sensitive information such as usernames, passwords, or certificates is stored in separate objects. At the same time, file transfers are integrated into automation flows by specialists who do not have access to the detailed login information but can still make use of the prepared security objects.

Endpoint-to-endpoint File Transfer

In addition to supporting the standard FTP protocols, the automation solution’s ecosystem should offer a direct secure file transfer between two endpoints within the IT landscape.

In this case the automation solution initiates the establishment of a direct, encrypted connection between the affected endpoints – normally using a proprietary internal protocol. This mechanism eliminates the need for additional tools and significantly increases file transfer performance.

Other features the solution should support include:

  • Multiple code translations (e.g. from ASCII to EBCDIC)
  • Data compression
  • Wildcard transfers
  • A regular checkpoint log (in order to resume aborted transfers from the last recorded checkpoint)
  • Checksum verification based on hashing algorithms (e.g. MD5; see the sketch after this list)
  • Industry-standard transfer encryption (e.g. AES-128, AES-192, or AES-256)

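As a quick illustration of the checksum item above (file name and expected value are placeholders), the receiving side can hash the transferred file in chunks so that even very large payloads fit in memory:

```python
# Sketch: verify a transferred file against the sender's MD5 checksum.
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        # Hash in 1 MiB chunks instead of loading the whole file at once.
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(path: str, expected_checksum: str) -> bool:
    return md5_of(path) == expected_checksum
```
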
Process integration

Finally, the key to offering enterprise-ready integrated file transfer through any protocol is to allow seamless integration into existing automation workflows while leveraging all the automation functionality without additional re-coding or re-interpretation of transferred files. This includes:

  • Using file transfer results and file content in other automation objects.
  • Including file transfer invocation, execution, and result processing in the scripting environment.
  • Using files within pre- or post-conditions of action or workflow execution, or augmenting pre-/post-conditions by making use of results from transferred files.
  • Bundling file transfers to groups of endpoints executing similar – but not necessarily identical – file transfer processes.

This allows the integration of legacy file transfers into innovative business processes without losing transparency and flexibility.


Automation Adaptability & Extensibility

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Today’s automation solutions are normally ready-built enterprise products (or a consumable service) offering out-of-the-box functionality for multiple automation, orchestration, and integration scenarios. On top of this, ease of installation, implementation, and use is of importance.

However, in fewer than 20% of cases does the automation platform remain unchanged for the first six months. This is why it’s crucial that, from the beginning, the selected solution is able to extend and adapt in order to serve business needs. The architecture should be able to leverage technologies for hot-plugging integrations and industry-standard interfaces while augmenting standard functionality through dynamic data exchange.

Hot plug-in of new integrations

Once basic automation workflows for an IT landscape are implemented, avoiding downtime is critical. While some automation solutions may offer fast and flexible production updates, the expectation on top of that is to be able to integrate new system and application adapters on the fly.

The first step to this level of integration can be achieved by rigorously complying with the SOLID object-orientation principles discussed in the previous chapter. Integrating adapters to new system or application endpoints, infrastructure management layers (like hypervisors), or middleware components is then a matter of adding new objects to the automation library. Existing workloads can be seamlessly delegated to these new entities, avoiding the need to stop any runtime entities or to update or upgrade any of the system’s components.

Hot-plugging, however, isn’t the only factor in assessing an automation solution’s enterprise readiness. In addition to being able to plug new adapters into the landscape, a truly expandable automation solution must also enable building new adapters. The automation solution should offer a framework that enables system architects and software developers to create their own integration solutions based on the patterns the automation solution encompasses.

Such a framework allows for the creation of integration logic based on existing objects and interfaces, plus self-defined user interface elements specific to the solution’s out-of-the-box templates. Extensions to such a framework include a package manager enabling third-party solutions to be deployed in a user-friendly way, taking into account dependencies and solution versions, and a framework IDE enabling developers to leverage the development environment to which they are accustomed (e.g. Eclipse and Eclipse plugins).

In this way, plugging new integrations into an automation solution can expand the automation platform by leveraging a community-based ecosystem of 3rd-party extensions.

Leverage industry standards

Hot-plugging all-new automation integration packages to adapt and expand your automation platform might not always be the strategy of choice.

In a service-based IT architecture, many applications and management layers can be integrated using APIs. This means that an automation solution needs to leverage standards to interface with external resources before forcing you into development effort for add-ons.

The automation layer needs to have the ability to integrate remote functionality through common shell-script extensions like PowerShell (for Microsoft-based landscapes), JMS (Java Message Service API for middleware integration), REST (based on standard data formats like XML and JSON), and maybe (with decreasing importance) SOAP.

Dynamic data validation & exchange

Part of the adaptability and extensibility requirement is that the product of choice be able to process and integrate the results of the previously discussed resources into existing object instances (as dynamic input data to existing workflows) without having to develop customized wrappers or additional interpreters.

This can be achieved either through variable objects – their values changeable through integrations like DB queries or Web Service results – or through prompts that allow setting variable values via a user input query. A small sketch of the former follows.

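Here is a minimal sketch of such a variable object (the table, the query, and sqlite3 – standing in for any supported database – are assumptions for illustration):

```python
# Sketch: a variable object whose value is refreshed via a DB query.
import sqlite3

class DbVariable:
    """Variable object resolved at runtime through a database query."""

    def __init__(self, db_path: str, query: str):
        self.db_path = db_path
        self.query = query

    @property
    def value(self):
        # Re-evaluated on every access, so workflows always see fresh data
        # without any customized wrapper or interpreter in between.
        with sqlite3.connect(self.db_path) as conn:
            row = conn.execute(self.query).fetchone()
        return row[0] if row else None

# A workflow would consume this like any other variable:
batch_window = DbVariable("ops.db", "SELECT window_end FROM batch_config")
```
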
To be truly extensible and adaptable, an automation solution should not only offer manual prompts but should also be able to automatically integrate and present those prompts within external systems. The solution should be responsible for automating and orchestrating IT resources while other systems – a service catalogue or request management application – handle IT resource demand and approval.

Together, all of the above forms the framework of an automation solution that can be extended and adapted specifically to the needs of the business leveraging it.


Automation and Orchestration for Innovative IT-aaS Architectures

This blog post kicks off a series of connected publications about automation and orchestration solution architecture patterns. The series commences with multiple chapters discussing key criteria for automation solutions and subsequently continues by outlining typical patterns and requirements for "Orchestrators". Posts in this series are tagged with "Automation-Orchestration" and will ultimately together compose a whitepaper on the subject matter.

Introduction & Summary

Recent customer projects in the field of architectural cloud computing frameworks repeatedly revealed one clear fact: automation is the key to success – not only technically, for the cloud solution to be performant, scalable, and highly available, but also for the business using the cloudified IT, in order to remain ahead of the competition and stay on the leading edge of innovation.

On top of the automation building block within a cloud framework, an Orchestration solution ensures that atomic automation “black boxes” together form a platform for successful business execution.

As traditional IT landscapes take their leap into adopting (hybrid) cloud solutions for IaaS or maybe PaaS, automation and orchestration – in the same way – have to move from job scheduling or workload automation to more sophisticated IT Ops or DevOps tasks such as:

  • Provisioning of infrastructure and applications
  • Orchestration and deployment of services
  • Data consolidation
  • Information collection and reporting
  • Systematic forecasting and planning

In a time of constrained budgets, IT must always look to manage resources as efficiently as possible. One way to accomplish that goal is an IT solution that automates mundane tasks, orchestrates them into larger solution blocks, and eventually frees up enough IT resources to focus on driving business success.

This blog post is the first of a series of posts targeted at

  • explaining key criteria for a resilient, secure and scalable automation solution fit for the cloud
  • clearly identifying the separation between “automation” and “orchestration”
  • providing IT decision makers with a set of criteria for selecting the right solutions for their innovation needs

Together, this blog post series will comprise a complete whitepaper on “Automation and Orchestration for Innovative IT-aaS Architectures”, supporting every IT organization in its striving to succeed with the move to (hybrid) cloud adoption.

The first section of the paper will list key criteria for automation solutions and explain their relevance for cloud frameworks as well as innovative IT landscapes in general.

The second section deals with Orchestration. It will differentiate system orchestration from service orchestration, explain key features and provide decision support for choosing an appropriate solution.

Target audience

Who should continue reading this blog series:

  • Technical decision makers
  • Cloud and solution architects in the field of innovative IT environments
  • IT-oriented pre- and post-sales consultants

If you consider yourself to belong to one of these groups, subscribe to the Smile-IT blog in order to get notified right away whenever a new chapter of this blog series and whitepaper gets published.

Finally, to conclude the introduction, let me give you the main findings that this paper will discuss in detail in the upcoming chapters:

Key findings

  • Traditional “old-style” integration capabilities – such as file transfer, object orientation, or audit readiness – remain key criteria even for a cloud-ready automation platform.
  • In an era where cloud has become a commodity, just like the internet itself, service-centered IT landscapes demand maximum scalability and adaptability as well as multi-tenancy in order to create a service-oriented ecosystem for the advancement of the businesses using it.
  • Security, maximum availability, and centralized management and control are fundamental necessities for transforming an IT environment into an integrated service center supporting business expansion, transformation, and growth.
  • Service orchestration might be the ultimate goal for an IT landscape, but system orchestration is a first step towards creating an abstraction layer between basic IT systems and business-oriented IT services.

So, with these findings in mind, let us start diving into the key capabilities of a cloud- and innovation-ready automation solution.

 


#IoT in perspective

This morning, a Twitter post by EMC caught my attention – not the article, but the image used to promote it (it’s now the feature image of this post): “30 billion devices expected to be connected and used in 2020”.

Gartner – in a news release from about a year ago – had put it more reservedly: 25B devices by 2020 (maybe they’ve changed their estimates already).

Anyway: what strikes me is the permanent rise of this figure. It seems that each month the prognosis for the number of connected devices is increased by 10% (which means that by the end of March we’ll face a claim of 40B by 2020). BTW: claiming figures for 2020 neglects the fact that the devices claimed to be connected by 2015 would already produce an estimated 7.3 Zettabytes of data. If a little, barely visible spot represented your 1 Terabyte hard disc, then the image below would mimic 1 Exabyte. Take this field of 1 TB hard discs and multiply it by 1,024 once more, and you’ve got 1(!) Zettabyte. 7.3 times that amount is the estimated data production of connected things in this very year you’re reading this.

1 Exabyte – a field of 1 TB hard discs

I do not think that many can handle even this amount of data. But let us just put the number of things connected into

Perspective

If each of those 30 billion devices is estimated to have a width of 1 cm and we line them up one by one in a row, we’d have a line of 300,000 (three-hundred-thousand) kilometers in length – 7.5 times around the earth. Or:

  • nearly all roads of the UK (400k km) covered with devices
  • 3 times the road network of Austria (100k km)
  • 5% of the road network of the US (6,341,421 km according to OECD figures)

Ridiculous, isn’t it? Let’s try something else:

The current estimate of the world population (according to Wikipedia) is 7.3 billion (nice coincidence: 1 TB of data per living person at the moment) with an annual estimated growth rate of 1.1%. That makes a world population of 7.71 billion in 2020, or (taking EMC’s figure from the beginning) 3.89 devices per person.

Assuming that Africa and maybe 50% of Asia and South America might not be that strongly equipped with devices by then, we can estimate that some 4 billion people will be handling those 30 billion devices (don’t blame the assumptions, just feel the numbers). 30 billion devices across the estimated Internet-of-Things population computes to around 7 devices per capita. Realistic?

Well – just to be sure: an average car – today – has some 20 sensors (fuel level, engine temperature, speedometer, throttle position, tire pressure, blind spot detection, … just to name a few). To assume that those will not be connected devices in the future would be pretty naive. 3 inhabitants per car? I think this number is overestimated – so no worries: the figure is accurate and realistic (maybe even too low).

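For anyone who wants to check the back-of-the-envelope math above, a few lines suffice (the earth circumference of 40,075 km is the only figure added here):

```python
# Back-of-the-envelope check of the figures used in this post.
DEVICES = 30e9                     # EMC's estimate for 2020
line_km = DEVICES * 0.01 / 1000    # 1 cm per device, converted to kilometers
print(line_km)                     # 300000.0 km
print(line_km / 40075)             # ~7.5 times around the earth (40,075 km)

population_2020 = 7.3e9 * 1.011 ** 5   # 1.1% annual growth over 5 years
print(population_2020 / 1e9)           # ~7.71 billion people
print(DEVICES / population_2020)       # ~3.89 devices per person
print(DEVICES / 4e9)                   # 7.5 devices per IoT-connected capita
```
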
However,

The point here is:

“Connected device” means that those little gadgets are constantly talking to something. This “something” is software; software that must be built to connect and integrate those devices with a larger IT ecosystem (see my “Digital” whitepaper for a rough IoT reference architecture). Who has really thought about the amount of communication (not data – communication) happening from these devices? Constantly. At a high pace. Expected to be resilient at all times. And real-time.

And even if the number of devices and their communication were spread over multiple device integration solutions, which integration layer solution would be capable of connecting multiple stakeholder systems across enterprises to make the assumed amount of data usable for businesses?

Who has built – or is building – an integration layer offering these vast capabilities? If you know one, let me know, please … and do post its capability figures in the comments!

 


Skype-for-Business (Lync) on 1und1

Some time ago I promised to add a post about how to configure all the Lync DNS records on 1und1 (in case you are hosting your domain there – the information might, however, apply to other DNS providers as well).

Well … as time flies, 1und1 has obviously improved their domain management portal, hence all that “fancy stuff” we originally did in order to make it work is void now, and the best and pretty straightforward explanation of how to do it right can be found on the Office365 support pages.

Be happy and enjoy (I am and did – and it works perfectly for us).

 
