
Integrated File Transfer still required for Automation

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

There is a wide range of sophistication when it comes to the IT systems that businesses operate. Some came online around 2000, while others have been in use for much longer. Some organizations have built their systems to maintain high-quality services for years to come; still others constantly adapt their systems to take advantage of the latest technology.

Because of this wide disparity, an automation solution must be able to handle current and future innovations in integration, orchestration and performance. It must also be backwards compatible so it can support legacy technologies.

That said, one of the technologies an automation solution must support is file transfer between systems. Along with this, it must also support the elaboration, interpretation, and transformation of file content to create new levels of automation integration for enterprise IT.

Experience with multiple customers shows that replacing legacy file transfer applications with state-of-the-art APIs is sometimes simply too time-consuming and costly. However, it is crucial that these legacy capabilities are provided for in an automated and integrated IT landscape. Therefore, perhaps surprisingly, being able to address, process, interpret, and transfer files in line with the demands and challenges of an automated IT environment is still a must-have criterion for an enterprise automation solution.

Support of multiple different file transfer protocols

FTP does not simply equal FTP: there are several file transfer protocols in use (see a list here[1]). The following are the most common variants still in use, which must be supported by the selected automation solution (a minimal client sketch follows the list below):

  • FTP: This is the standard protocol definition for transferring files in an insecure manner. When operating behind a firewall, using FTP for transporting files is convenient and needs to be an integrated feature of your enterprise automation solution.
  • FTPS: adds support for “Transport Layer Security” (TLS) and its predecessor, the “Secure Sockets Layer” (SSL) encryption protocol, to FTP. Many enterprises rely on this type of protocol for security reasons, especially when moving files beyond their own network.
  • FTPES: differs from FTPS only in the timing of encryption and the transfer of login information. It adds an additional safety control to FTPS-based file transfers.
  • SFTP: was added to the Secure Shell protocol (SSH) by the Internet Engineering Task Force (IETF)[2] in order to allow access, transfer, and management of files over any reliable (SSH) data stream.
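
The following is a minimal sketch of such a transfer over FTPS, using only Python's standard library; the hostname, credentials, and paths are placeholders, not part of any specific automation product.

```python
from ftplib import FTP_TLS

# Hypothetical endpoint and credentials -- placeholders only.
HOST = "ftps.example.com"
USER = "automation_svc"
PASSWORD = "secret"

def upload_via_ftps(local_path: str, remote_name: str) -> None:
    """Push a file to an FTPS endpoint over an encrypted data channel."""
    ftps = FTP_TLS(HOST)
    ftps.login(USER, PASSWORD)   # TLS on the control channel
    ftps.prot_p()                # protect the data channel as well
    with open(local_path, "rb") as fh:
        ftps.storbinary(f"STOR {remote_name}", fh)
    ftps.quit()

if __name__ == "__main__":
    upload_via_ftps("report.csv", "incoming/report.csv")
```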

In addition to supporting all of the above protocols, an automation solution can enhance file transfer integration in automation scenarios by offering direct endpoint-to-endpoint file transfer based on a proprietary protocol. Providing this protocol removes the need to route files through a central management engine solely to transport them from one system to another.

Standard FT protocols

The most convenient way to connect FTP-capable remote systems based on the protocols listed above is through a graphical UI that allows the transfer to be defined much the way it is done with standard FTP clients. The actual transfer itself is normally executed by a dedicated adapter that is merely initiated by the centrally managed and executed automation workflows. To comply with security requirements that restrict login information to top-level administrators, sensitive information such as usernames, passwords, or certificates is stored in separate objects. At the same time, file transfers are integrated into automation flows by specialists who do not have access to the detailed login information but can still make use of the prepared security objects.

Endpoint-to-endpoint File Transfer

In addition to supporting the standard FTP protocols, the automation solution’s ecosystem should offer a direct secure file transfer between two endpoints within the IT landscape.

In this case the automation solution initiates a direct, encrypted connection between the affected endpoints – normally using a proprietary internal protocol. This type of mechanism eliminates the need for additional tools and significantly increases the performance of file transfers.

Other features the solution should support include:

  • Character code translation between code pages (e.g. from ASCII to EBCDIC)
  • Data compression
  • Wildcard transfer
  • Regular checkpoint log (in order to re-issue aborted transfers from the last checkpoint recorded)
  • Checksum verification based on hashing algorithms (e.g. MD5; see the sketch after this list)
  • Industry standard transfer encryption (e.g. AES-128, AES-192 or AES-256)
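
Below is a small, hedged sketch of two of these features – checksum verification and checkpoint-based resumption – using plain local files as stand-ins for the two transfer endpoints; paths and block size are illustrative.

```python
import hashlib
import os

CHUNK = 64 * 1024  # 64 KiB blocks

def file_checksum(path: str, algo: str = "md5") -> str:
    """Hash a file in chunks so large transfers do not exhaust memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(CHUNK), b""):
            h.update(block)
    return h.hexdigest()

def resume_copy(src: str, dst: str) -> None:
    """Copy src to dst, resuming from the bytes already written (the 'checkpoint')."""
    done = os.path.getsize(dst) if os.path.exists(dst) else 0
    with open(src, "rb") as fin, open(dst, "ab") as fout:
        fin.seek(done)
        for block in iter(lambda: fin.read(CHUNK), b""):
            fout.write(block)

if __name__ == "__main__":
    resume_copy("payload.bin", "payload.copy")
    assert file_checksum("payload.bin") == file_checksum("payload.copy")
```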

Process integration

Finally, the key to offering enterprise ready integrated file transfer through any protocol is to allow seamless integration into existing automation workflows while leveraging all the automation functionality without additional re-coding or re-interpretation of transferred files. This includes:

  • Using file transfer results and file content in other automation objects.
  • Including file transfer invocation, execution, and result processing in the scripting environment.
  • Using files within pre- or post-conditions of action or workflow execution, or augmenting pre/post conditions by making use of results from transferred files (a small post-condition sketch follows this list).
  • Bundling file transfers to groups of endpoints executing similar – but not necessarily identical – file transfer processes.
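
To illustrate the pre-/post-condition item above, here is a minimal sketch of a post-condition that inspects the content of a transferred result file before the workflow continues; the file name and JSON structure are invented for illustration.

```python
import json
from pathlib import Path

def post_condition_transfer_ok(result_file: str) -> bool:
    """Post-condition: continue only if the transferred result file
    exists and reports zero errors."""
    path = Path(result_file)
    if not path.exists():
        return False
    payload = json.loads(path.read_text())
    return payload.get("errors", 1) == 0

def run_followup_step() -> None:
    print("transfer verified - starting follow-up processing")

if __name__ == "__main__":
    if post_condition_transfer_ok("transfer_result.json"):
        run_followup_step()
```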

This allows the integration of legacy file transfers into innovative business processes without losing transparency and flexibility.


Automation Adaptability & Extensibility

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Today’s automation solutions are normally ready-built enterprise products (or consumable services) offering out-of-the-box functionality for multiple automation, orchestration, and integration scenarios. On top of this, ease of installation, implementation, and use is equally important.

However, in fewer than 20% of cases does the automation platform remain unchanged for its first six months. This is why it is crucial that, from the beginning, the selected solution has the ability to extend and adapt in order to serve business needs. The architecture should be able to leverage technologies for hot-plugging integrations and industry-standard interfaces while augmenting standard functionality through dynamic data exchange.

Hot plug-in of new integrations

Once basic automation workflows for an IT landscape are implemented, avoiding downtime is critical. While some automation solutions may offer fast and flexible production updates, the expectation on top of that is to be able to integrate new system and application adapters on the fly.

The first step to this level of integration can be achieved by rigorously complying with the SOLID object orientation principles discussed in the last chapter. Integrating adapters to new system or application endpoints, infrastructure management layers (such as hypervisors), or middleware components is then a matter of adding new objects to the automation library. Existing workloads can be seamlessly delegated to these new entities without stopping any runtime entities or updating or upgrading any of the system’s components.

Hot-plugging, however, isn’t the main factor in assessing an automation solution’s enterprise readiness. In addition to being able to plug new adapters into the landscape, a truly expandable automation solution must be able to build new adapters as well. The automation solution should offer a framework, which enables system architects and software developers to create their own integration solutions based on the patterns the automation solution encompasses.

Such a framework allows for the creation of integration logic based on existing objects and interfaces as well as self-defined user interface elements in addition to the solution’s out-of-the-box templates. Extensions to such a framework include a package manager that enables third-party solutions to be deployed in a user-friendly way while taking dependencies and solution versions into account, and a framework IDE that lets developers work in the development environment they are accustomed to (e.g. Eclipse and Eclipse plugins).
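
As a rough illustration of such a framework contract, the sketch below defines an adapter interface and a registry into which new integrations can be plugged at runtime; the class and method names are assumptions made for illustration, not any vendor’s actual API.

```python
from abc import ABC, abstractmethod

class EndpointAdapter(ABC):
    """Contract every pluggable integration has to fulfil (illustrative only)."""

    name: str = "base"

    @abstractmethod
    def connect(self, endpoint: str, credentials: dict) -> None: ...

    @abstractmethod
    def execute(self, task: dict) -> dict: ...

ADAPTERS: dict = {}

def register(adapter: EndpointAdapter) -> None:
    """Hot-plug a new adapter into the running registry -- no restart needed."""
    ADAPTERS[adapter.name] = adapter

class RestAdapter(EndpointAdapter):
    """Example third-party extension that fulfils the contract."""
    name = "rest"

    def connect(self, endpoint: str, credentials: dict) -> None:
        self.endpoint = endpoint

    def execute(self, task: dict) -> dict:
        # a real adapter would issue the HTTP call here
        return {"status": "ok", "task": task}

register(RestAdapter())
```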

In this way, plugging new integrations into an automation solution can expand the automation platform by leveraging a community-based ecosystem of third-party extensions.

Leverage industry standards

Hot plugging all-new automation integration packages to adapt and expand your automation platform might not always be the strategy of choice.

In a service-based IT-architecture, many applications and management layers can be integrated using APIs. This means that an automation solution needs to leverage standards to interface with external resources prior to forcing you into development effort for add-ons.

The automation layer needs to have the ability to integrate remote functionality through common shell-script extensions like PowerShell (for Microsoft-based landscapes), JMS (Java Message Service API for middleware integration), REST (based on standard data formats like XML and JSON), and maybe (with decreasing importance) SOAP.
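
The following is a minimal sketch of such a standards-based integration – a REST call using only Python’s standard library; the URL and payload are placeholders.

```python
import json
import urllib.request

def call_rest_endpoint(url: str, payload: dict) -> dict:
    """POST a JSON payload to an external service and return its JSON answer."""
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Placeholder URL -- replace with the real service endpoint.
    print(call_rest_endpoint("https://example.com/api/v1/jobs", {"action": "start"}))
```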

Dynamic data validation & exchange

Part of the adaptability and extensibility requirement is that the product of choice can process the results of the resources previously discussed and integrate them into existing object instances (as dynamic input data to existing workflows) without having to develop customized wrappers or additional interpreters.

This can be achieved either through variable objects – their values being changeable through integrations such as DB queries or Web Service results – or through prompts that allow variable values to be set via a user input query.
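
A minimal sketch of such a variable object, resolved from a database query at runtime, might look as follows; the table name and keys are hypothetical.

```python
import sqlite3

def resolve_variable(db_path: str, key: str) -> str:
    """Fill an automation variable from a database query at instantiation time."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT value FROM workflow_variables WHERE key = ?", (key,)
        ).fetchone()
    if row is None:
        raise KeyError(f"no value defined for variable '{key}'")
    return row[0]

if __name__ == "__main__":
    # Hypothetical variables table; in practice the query target would be
    # whatever system of record holds the runtime parameters.
    target_host = resolve_variable("automation.db", "TARGET_HOST")
    print(f"deploying to {target_host}")
```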

To be truly extensible and adaptable, an automation solution should not only offer manual prompts but it should be able to automatically integrate and present those prompts within external systems. The solution should be responsible for automating and orchestrating IT resources while other systems – a service catalogue or request management application – handles IT resource demand and approval.

Together, all of the above forms the framework of an automation solution that can be extended and adapted specifically to the needs of the business leveraging it.


Object Orientation supports Efficiency in Automation Solutions

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

A software architecture pattern for an IT-oriented automation solution? How do the two fit together? Let’s have a closer look:

SOLID object orientation

Software guru Robert C. Martin identified “the first five principles” of object-oriented design in the early 2000s. Michael Feathers introduced the acronym SOLID as an easy way to remember these five basics that developers and architects should follow to ensure they are creating systems that are easy to maintain and to extend over time.[1] A short code example follows the list below.

  • Single responsibility: any given object shall have exactly one responsibility and thus only one reason to change.
  • Open-closed: any given object shall be closed for modification but open for extension.
  • Liskov substitution principle: any live instance of a given object shall be replaceable with instances of the object’s subtypes without altering the correctness of the program.
  • Interface segregation: every interface shall have a clear and encapsulated purpose; interface consumers must be able to focus on the interface they need and not be forced to implement interfaces they do not need.
  • Dependency inversion: depend on an object’s abstraction, not on its concrete implementation.
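
As a compact illustration of how these principles translate to automation objects, the Python sketch below applies single responsibility, dependency inversion, and Liskov substitution; the class names are invented for illustration.

```python
from abc import ABC, abstractmethod

class Target(ABC):
    """Abstraction a workflow step depends on (dependency inversion)."""

    @abstractmethod
    def run(self, command: str) -> int: ...

class SshTarget(Target):
    """One concrete target type; it does exactly one thing (single responsibility)."""

    def __init__(self, host: str) -> None:
        self.host = host

    def run(self, command: str) -> int:
        print(f"[ssh {self.host}] {command}")
        return 0

class WorkflowStep:
    """Depends only on the Target abstraction, so any subtype can be substituted
    without altering the step's correctness (Liskov substitution)."""

    def __init__(self, target: Target, command: str) -> None:
        self.target = target
        self.command = command

    def execute(self) -> int:
        return self.target.run(self.command)

WorkflowStep(SshTarget("db01.example.com"), "backup --full").execute()
```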

The SOLID principles and other object-oriented patterns are discussed further in this article: Object-Oriented-Design-Principles

“SOLID” IT automation

Many of today’s enterprise workload automation solutions were developed with architectural patterns in mind that date back well before the advent of object orientation. In some cases those patterns were not modernized in order to avoid the risk of an immature solution. At the same time, the demand for rapid change, target-dependent concretion, and re-usability of implementations has been increasing. An object-oriented approach can now be used as a means to support these new requirements.

Object orientation, therefore, should be one of the key architectural patterns of an innovative enterprise automation solution. Such a solution encapsulates data and logic within automation objects and thereby represents what could be called an “automation blueprint.” The object presents a well-defined “input interface” through which a runtime instance of the automation object can be augmented according to the requirements of the specific scenario.

Through the interaction of automation objects, an object-oriented workflow can be defined, itself presenting an aggregated automation object. By employing the patterns of object-oriented architecture and design, developers ensure that the implementation of automation scenarios evolves through object interaction, re-usability, and specialization of abstracted out-of-the-box use-cases, and resolves specific business problems in a dedicated and efficient way.

Enterprise-grade, innovative automation solutions define all automation instructions as different types of objects within an object repository – similar to traditional object-oriented programming languages. Basic definitions of automation tasks are represented as “automation blueprints.” Through instantiation, aggregation, and/or specialization, the static object library becomes the dedicated, business-process-oriented solution for executing business automation.

Automation object orientation

Object orientation in automation architectures

The figure above shows an example of how an object can be instantiated as a dedicated task within different workflows.

IT-aaS example

The following IT-aaS example illustrates an object-oriented automation pattern. The chosen scenario assumes that an IT provider intends to offer an IT service to request, approve, and automatically provision a new, compliant infrastructure service such as a web application server.

  • Object definition: A set of aggregated objects – automation blueprints – define how to provision a new server within a certain environment. The object definition would not – however – bind its capability to a specific hypervisor or public cloud provider.
  • Object instantiation: Request logic – realized for example in a service desk application – would implement blueprint instantiation, including the querying (e.g. by user input or by retrieval from a CMDB) and forwarding of parameters to the object’s instance (a minimal sketch of this split follows below).
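
The sketch below illustrates the blueprint/instantiation split described above; the parameters and provider names are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProvisionServerBlueprint:
    """Abstract definition: how to provision a server, not where."""
    os_image: str
    cpu_cores: int
    memory_gb: int

    def instantiate(self, provider: str, environment: str) -> dict:
        """Bind the blueprint to concrete execution parameters at request time."""
        return {
            "blueprint": "provision-server",
            "provider": provider,        # e.g. an on-premise hypervisor or a public cloud
            "environment": environment,  # e.g. filled from user input or a CMDB lookup
            "os_image": self.os_image,
            "cpu_cores": self.cpu_cores,
            "memory_gb": self.memory_gb,
        }

web_app_server = ProvisionServerBlueprint("ubuntu-22.04", cpu_cores=4, memory_gb=16)
print(web_app_server.instantiate(provider="on-prem-hypervisor", environment="test"))
```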

This not only automates the service provisioning but also addresses burst-out scenarios required by modern IT service delivery through integrated automation.

Patterns benefitting from object reusability

The concept of object orientation allows the reuse of automation objects eliminating the need to duplicate information. It also allows the creation of automation blueprints describing the automation logic that is processed at runtime. An automation blueprint can behave differently once it gets instantiated during runtime because of features like integrated scripting, variables, pre- and post conditional logic, and logical programming elements such as conditions and loop constructs.

Inheritance

Automation objects’ relationships and dependencies as well as input/output data are set at the time they are defined. Run-time instantiated objects can inherit their input parameters from parent objects. They can also pass parameters from one runtime instance to another as well as to their parent containers. This enables fully flexible multi-processing of business workflows without e.g. being forced to clean-up variables in static containers.

Abstraction versus Specialization

Automation blueprint definitions form the basic abstraction of scenarios within an enterprise-grade automation platform. Abstract automation objects get their concrete execution information when instantiated. Object oriented platforms provide the means to augment the instantiated objects at runtime via patterns like prompt sets, database variables or condition sets – in the best case to be modeled graphically; this supports flexibility and dynamic execution.

Maintainability

As the concept of object orientation eliminates the need to duplicate automation logic, maintaining automation workflow definitions becomes a minor effort. Typical modifications such as changes of technical user IDs, paths, or binary names can be performed in the centrally defined object and are applied wherever the object (blueprint) is used.

 


High Availability and Robustness in Automation Solutions

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

High Availability and Robustness are critical elements of any automation solution. Since an automation solution forms the backbone of an entire IT infrastructure, any type of downtime must be avoided. In the rare case of a system failure, rapid recovery to resume operations must be ensured.

Several architectural patterns can help achieve those goals.

Multi-Server / Multi-Process architecture

Assuming that the automation solution of choice operates a centralized management layer as its core component, the intelligence of the automation implementation is located in a central backbone which distributes execution to “dumb” endpoints. Advantages of such an architecture will be discussed in subsequent chapters. The current chapter focuses on the internal architecture of a “central engine” in more detail:

Robustness and near-100% availability are normally achieved through redundancy, which comes at the cost of a larger infrastructure footprint and higher resource consumption. It is therefore crucial to base the central automation management layer on multiple servers and multiple processes. Not all of these processes necessarily act in a redundant manner, as would be the case with round-robin load balancing setups where multiple processes can all act on the same execution. However, in order to mitigate the risk of a complete failure of one particular function, the different processes distributed over various physical or virtual nodes need to be able to take over operation of any running process.

This architectural approach also enables scalability which has been addressed previously. Most of all, however, this type of setup best supports the goal of 100% availability. At any given time, workload can be spread across multiple physical/virtual servers as well as split into different processes within the same (or different) servers.

Transactions

Another aspect of a multi-process architecture that helps achieve zero-downtime, non-stop operations is that it ensures an execution point can be restored at any given time. Assuming that all transactional data is stored in a persistent queue, a transaction is automatically recovered by the remaining processes on the same or a different node in case of failure of a physical server, a single execution, or a process node. This prevents any interruption, loss of data, or impact to end users.
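
The toy sketch below illustrates the persistent-queue idea using SQLite as the durable store; a production engine would use its own transactional store and proper locking, so treat this purely as an illustration of the recovery pattern.

```python
import sqlite3

DB = "workqueue.db"

def setup() -> None:
    with sqlite3.connect(DB) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS queue "
            "(id INTEGER PRIMARY KEY, payload TEXT, state TEXT DEFAULT 'pending')"
        )

def claim_next(worker: str):
    """Claim the next pending item; the claim itself is persisted, so it
    survives a crash of the claiming process. (A real engine would need
    row-level locking to make the claim fully race-free.)"""
    with sqlite3.connect(DB) as conn:
        row = conn.execute(
            "SELECT id, payload FROM queue WHERE state = 'pending' LIMIT 1"
        ).fetchone()
        if row:
            conn.execute(
                "UPDATE queue SET state = ? WHERE id = ?", (f"claimed:{worker}", row[0])
            )
        return row

def recover(dead_worker: str) -> None:
    """After a node failure, surviving processes return its items to 'pending'."""
    with sqlite3.connect(DB) as conn:
        conn.execute(
            "UPDATE queue SET state = 'pending' WHERE state = ?",
            (f"claimed:{dead_worker}",),
        )
```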

Automation scalability and availability

Scalable and highly available automation architecture

The figure above is intentionally the same as in the “Scalability” chapter, showing how a multi-server / multi-process automation architecture can support both scalability and high availability.

Endpoint Failover

In a multi-server/multi-process architecture, recovery can’t occur if the endpoint adapters aren’t capable of instantly reconnecting to a changed server setup. To avoid downtime, all decentralized components such as control and management interfaces, UI, and adapters must support automatic failover. In the case of a server or process outage these interfaces must immediately reconnect to the remaining processes.

In a reliable failover architecture, endpoints need to be in tune with the core engine setup at all times and must receive regular updates about available processes and their current load. This ensures that endpoints connect to central components based on load data, thereby actively supporting the execution of load balancing. This data can also be used to balance the re-connection load efficiently in case of failure/restart.
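
A minimal sketch of such load-aware reconnection is shown below; the process addresses, load figures, and back-off parameters are invented for illustration.

```python
import random
import time

# Hypothetical view of the central engine's processes and their current load,
# as periodically pushed to every endpoint.
PROCESS_LOAD = {"cp01:2217": 0.35, "cp02:2217": 0.80, "cp03:2217": 0.10}

def pick_process(load_map: dict) -> str:
    """Prefer the least-loaded communication process for (re)connection."""
    return min(load_map, key=load_map.get)

def reconnect_with_backoff(load_map: dict, attempts: int = 5) -> str:
    """On failover, retry with jittered back-off so thousands of endpoints
    do not hammer the surviving processes at the same instant."""
    for attempt in range(attempts):
        target = pick_process(load_map)
        print(f"attempt {attempt + 1}: connecting to {target}")
        # a real adapter would open its protocol connection here and return on success
        if random.random() > 0.3:
            return target
        time.sleep((2 ** attempt) * random.uniform(0.5, 1.5))
    raise ConnectionError("no central process reachable")

reconnect_with_backoff(PROCESS_LOAD)
```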

Disaster Recovery

Even in an optimum availability setup, the potential for an IT disaster still exists. For this reason, organizations should be prepared with a comprehensive business continuity plan. In automation architectures, this does not necessarily mean a rapid reboot of central components – although this could be achieved depending on how lightweight the central component architecture of the automation solution is. More important, however, is the ability of rebooted server components to allow for swift re-connection of remote automation components such as endpoint adapters and/or management UIs. These concepts were discussed in depth in the scalability chapter.

Key Performance Indicators

Lightweight, robust architectures as described above have two main advantages beyond scalability and availability:

  • They allow for small scale setups at a low total cost of ownership (TCO).
  • They allow for large scale setups without entirely changing existing systems.

One of the first features of an automation system as described above is a low TCO for a small scale implementation.

Additional indicators to look for include:

  • Number of customers served at the same time without change of the automation architecture
  • Number of tasks per day the automation platform is capable of executing
  • Number of endpoints connected to the central platform

In particular, the number of endpoints provides clarity about strengths of a high-grade, enterprise-ready, robust automation solution.


Scalability in Automation

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Scalability criteria are often reduced to “load scalability,” meaning the ability of a solution to automatically adapt to increasing and decreasing consumption or execution demands. While the need for this capability is obviously true for an automation solution, there is surely more that belongs to the field of scalability. Hence, the following aspects will be discussed within this chapter:

  • load scalability
  • administrative scalability
  • functional scalability
  • throughput

Load scalability

Back at a time when cloud computing was still striving for the one and only definition of itself, one key cloud criterion became clear very quickly: cloud should be defined mainly by virtually infinite resources to be consumed by whatever means. In other words, cloud computing promised to transform the IT environment into a landscape of endlessly scalable services.

Today, whenever IT executives and decision makers consider the deployment of any new IT service, scalability is one of the major requirements. Scalability offers the ability for a distributed system to easily expand and contract its resource pool to accommodate heavier or lighter loads or number of inputs. It also determines the ease with which a system or component can be modified, added, or removed to accommodate changing load.

Most IT decisions today are based on the question of whether a solution can meet this definition. This is why when deciding on an automation solution – especially when considering it to be the automation, orchestration and deployment layer of your future (hybrid) IT framework – scalability should be a high priority requirement. In order to determine a solution’s load scalability capabilities one must examine the system’s architecture and how it handles the following key functions:

Expanding system resources

A scalable automation solution architecture allows adding and withdrawing resources seamlessly, on demand, and with no downtime of the core system.

The importance of a central management architecture will be discussed later in this paper; for now it is sufficient to understand that centralized engine architecture develops its main load scalability characteristics through its technical process architecture.

As an inherent characteristic, such an architecture comprises thoroughly separated worker processes, each having the same basic capabilities but – depending on the system’s functionality – acting differently (i.e. each serving a different execution function). A worker process could – depending on technical automation requirements – be assigned to some of the following tasks (see the figure below for an overview of such an architecture):

  • worker process management (a “worker control process” capability that needs to be built out redundantly in order to allow for seamless handover in case of failure)
  • access point for log-on requests to the central engine (one of “Worker 1..n” below)
  • requests from a user interface instance (“UI worker process”)
  • workload automation synchronization (“Worker 1..n”)
  • Reporting (“Worker 1..n”)
  • Integrated authentication with remote identity directories (“Worker 1..n”)

At the same time, command and communication handling should be technically separated from automation execution handling and be spread across multiple “Command Processes” as well – all acting on and providing the same capabilities. This will keep the core system responsive and scalable in case of additional load.
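
A toy multiprocessing sketch of this worker/command separation is shown below; the queue names, roles, and shutdown convention are invented purely for illustration.

```python
from multiprocessing import Process, Queue

def command_process(inbound: Queue, work: Queue) -> None:
    """Handles communication only: accepts requests and hands them to workers."""
    while True:
        request = inbound.get()
        work.put(request)          # forward work (or the shutdown signal)
        if request is None:
            break

def worker_process(work: Queue) -> None:
    """Handles execution only: runs whatever workload it is handed."""
    while True:
        task = work.get()
        if task is None:
            break
        print(f"executing {task}")

if __name__ == "__main__":
    inbound, work = Queue(), Queue()
    procs = [Process(target=command_process, args=(inbound, work)),
             Process(target=worker_process, args=(work,))]
    for p in procs:
        p.start()
    for job in ("backup-db", "rotate-logs", None):   # None = shutdown signal
        inbound.put(job)
    for p in procs:
        p.join()
```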

Automation scalability

Scalable and highly available automation architecture

The architecture described in the figure above delivers its core differentiators in the following load change scenarios:

Dynamically adding physical/virtual servers to alter CPU, memory and disk space or increasing storage for workload peaks without downtime

With the above architecture, changing load simply means adding or withdrawing physical (virtualized/cloud-based) resources to or from the infrastructure running the core automation system. With processes acting redundantly on the “separation of concerns” principle, it is possible either to provide more resources for running jobs or to add jobs to the core engine (even when it physically runs on a distributed infrastructure).

This should take place without downtime of the core automation system, ensuring not only rapid reaction to load changes but also high resource availability (to be discussed in a later chapter).

Change the number of concurrently connected endpoints to one instance of the system

At any time during system uptime it might become necessary to handle load requirements by increasing the number of system automation endpoints (such as agents) connected to one specific job instance. This is possible only if the concurrently acting processes are made aware of changing endpoint connections and are able to re-distribute load among other running jobs seamlessly and without downtime. The architecture described above allows for such scenarios, whereas a separated, less integrated core engine would demand reconfiguration when endpoints are added beyond a certain number.

Endpoint reconnection following an outage

Even if the solution meets the criteria of maximum availability, outages may occur. A load scalable architecture is a key consideration when it comes to disaster recovery. This involves the concurrent boot-up of a significant number of remote systems including their respective automation endpoints. The automation solution therefore must allow for concurrent re-connection of several thousand automation endpoints within minutes of an outage in order to resume normal operations.

Administrative scalability

While load scalability is the most commonly discussed topic when it comes to key IT decisions, there are other scalability criteria to be considered as differentiating criteria in deciding on an appropriate automation solution. One is “Administrative Scalability” defined as the ability for an increasing number of organizations or users to easily share a single distributed system.[1]

Organizational unit support

A system is considered administratively scalable when it is capable of:

  • Logically separating organizational units within a single system. This capability is generally understood as “multi-client” or “multi-tenancy”.
  • Providing one central administration interface (UI + API) for system maintenance and onboarding of new organizations and/or users.

Endpoint connection from different network segments

Another aspect of administrative scalability is the ability of an automation solution to seamlessly connect endpoints from various organizational sources.

In large enterprises, multiple clients (customers) might be organizationally and network-wise separated. Organizational units are well hidden from each other or are routed through gateways when needing to connect. However, the automation solution is normally part of the core IT system serving multiple or all of these clients. Hence, it must allow for connectivity between processes and endpoints across the aforementioned separated sources. The established secure client network delineation must be kept in place, of course. One approach for the automation solution is to provide special dedicated routing (end)points capable of bridging the network separation via standard gateways and routers but only supporting the automation solution’s connectivity and protocol needs.

Seamless automation expansion for newly added resources

While the previously mentioned selection criteria for automation systems are based on “segregation,” another key decision criterion is based on harmonization and standardization.

An automation system can be considered administratively scalable when it is capable of executing the same, one-time-defined automation process on different endpoints within segregated environments.

The solution must be able to:

  • Add an organization and its users and systems seamlessly from any segregated network source.
  • Provide a dedicated management UI including those capabilities, securely accessible by organization admin users only.

and at the same time

  • Define the core basic automation process only once and distribute it to new endpoints based on (organization-based) filter criteria (see the sketch below).
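
A minimal sketch of this “define once, distribute by organization filter” idea follows; the endpoint attributes and names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    organization: str

# One centrally defined process, many segregated targets.
ENDPOINTS = [
    Endpoint("srv-a1", "org-a"),
    Endpoint("srv-a2", "org-a"),
    Endpoint("srv-b1", "org-b"),
]

def distribute(process_name: str, org_filter: str) -> list:
    """Send the single process definition only to endpoints of one organization."""
    targets = [ep.name for ep in ENDPOINTS if ep.organization == org_filter]
    for name in targets:
        print(f"deploying '{process_name}' to {name}")
    return targets

distribute("nightly-backup", org_filter="org-a")
```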

The architecture thereby allows for unified process execution (implement once, serve all), administrative scalability and efficient automation.

Functional scalability

Functional scalability, defined as the ability to enhance the system by adding new functionality at minimal effort[2], is another scalability characteristic that should be included in the decision-making process.

The following are key components of functional scalability:

Enhance functionality through customized solutions

Once the basic processes are automated, IT operations staff can add significant value to the business by incorporating other dedicated IT systems into the automation landscape. Solution architects are faced with a multitude of different applications, services, interfaces, and integration demands that can benefit from automation.

A functionally scalable automation solution supports these scenarios out-of-the-box with the ability to:

  • Introduce new business logic to existing automation workflows or activities by means of simple and easy-to-adopt mechanisms without impacting existing automation or target system functions.
  • Allow for creation of interfaces to core automation workflows (through use of well-structured APIs) in order to ease integration with external applications.
  • Add and use parameters and/or conditional programming/scripting to adapt the behavior of existing base automation functions without changing the function itself (see the sketch after this list).
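
A small sketch of adapting a base automation function purely through parameters, as mentioned in the last item above; the function and paths are illustrative.

```python
def cleanup_logs(path: str, older_than_days: int = 30, dry_run: bool = True) -> None:
    """Base automation function: its behaviour is adapted purely via parameters."""
    action = "would delete" if dry_run else "deleting"
    print(f"{action} logs in {path} older than {older_than_days} days")

# The same, unchanged function serves different automation scenarios:
cleanup_logs("/var/log/app", older_than_days=7)                  # cautious test run
cleanup_logs("/var/log/app", older_than_days=90, dry_run=False)  # production behaviour
```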

Template-based implementation and template transfer

A functionally scalable architecture also enables the use of templates for implementing functionality and sharing/distributing it accordingly.

Once templates have been established, the automation solution should provide for a way to transfer these templates between systems or clients. This could either be supported through scripting or solution packaging. Additional value-add (if available): Share readily-tested implementations within an automation community.

Typical use-cases include but are not limited to:

  • Development-test-production deployment of automation packages.
  • Multi-client scenarios with well-segregated client areas with similar baseline functionality.

Customizable template libraries with graphical modeler

In today’s object and service based IT landscapes, products that rely on scripting are simply not considered functionally scalable. When using parameterization, conditional programming, and/or object reuse through scripting only, scaling to augment the automation implementation would become time-consuming, complex, and unsustainable. Today’s functionally scalable solutions use graphical modelers to create the object instances, workflows, templates, and packages that enable business efficiency and rapid adaptation to changing business requirements.

Throughput

Finally, consider the following question as a decision basis for selecting a highly innovative, cloud-ready, scalable automation solution:

What is the minimum and maximum workflow load the solution can execute without architectural change? If the answer is anything from 100 up to 4-5 million jobs per day without a change of the principal setup or architecture – the upper end corresponds to roughly 50 jobs per second sustained – one is good to go.

In other words: scalable automation architectures not only support the aforementioned key scalability criteria but are also able to handle a relatively small number of concurrent flows just as well as a very large one. The infrastructure footprint needed for the deployed automation solution must obviously adapt accordingly.

 

 


Automation and Orchestration for Innovative IT-aaS Architectures

This blog post kicks off a series of connected publications about automation and orchestration solution architecture patterns. The series commences with multiple chapters discussing key criteria for automation solutions and will subsequently continue into outlining typical patterns and requirements for "Orchestrators". Posts in this series are tagged with "Automation-Orchestration" and will ultimately together compose a whitepaper about the subject matter.

Introduction & Summary

Recent customer projects in the field of architectural cloud computing frameworks repeatedly revealed one clear fact: automation is the key to success – not only technically, for the cloud solution to be performant, scalable, and highly available, but also for the business using the cloudified IT, in order to stay ahead of the competition and remain on the leading edge of innovation.

On top of the automation building block within a cloud framework, an Orchestration solution ensures that atomic automation “black boxes” together form a platform for successful business execution.

As traditional IT landscapes take their leap into adopting (hybrid) cloud solutions for IaaS or maybe PaaS, automation and orchestration – in the same way – have to move from job scheduling or workload automation to more sophisticated IT Ops or DevOps tasks such as:

  • Provisioning of infrastructure and applications
  • Orchestration and deployment of services
  • Data consolidation
  • Information collection and reporting
  • Systematic forecasting and planning

In a time of constrained budgets, IT must always look to manage resources as efficiently as possible. One of the ways to accomplish that goal is through use of an IT solution that automates mundane tasks, orchestrates the same to larger solution blocks and eventually frees up enough IT resources to focus on driving business success.

This blog post is the first of a series of posts targeted at

  • explaining key criteria for a resilient, secure and scalable automation solution fit for the cloud
  • clearly identifying the separation between “automation” and “orchestration”
  • providing IT decision makers with a set of criteria for selecting the right solutions for their innovation needs

Together, this blog post series will comprise a complete whitepaper on “Automation and Orchestration for Innovative IT-aaS Architectures,” supporting every IT organization in its drive to succeed with the move to (hybrid) cloud adoption.

The first section of the paper will list key criteria for automation solutions and explain their relevance for cloud frameworks as well as innovative IT landscapes in general.

The second section deals with Orchestration. It will differentiate system orchestration from service orchestration, explain key features and provide decision support for choosing an appropriate solution.

Target audience

Who should continue reading this blog series:

  • Technical decision makers
  • Cloud and solution architects in the field of innovative IT environments
  • IT-oriented pre- and post-sales consultants

If you consider yourself to belong to one of these groups, subscribe to the Smile-IT blog in order to get notified right away whenever a new chapter of this blog series and whitepaper gets published.

Finally, to conclude the introduction, let me give you the main findings that this paper will discuss in detail in the upcoming chapters:

Key findings

  • Traditional “old style” integration capabilities – such as: file transfer, object orientation or audit readiness – remain key criteria even for a cloud-ready automation platform.
  • In an era where cloud has become a commodity, just like the internet itself, service-centered IT landscapes demand a maximum of scalability and adaptability as well as multi-tenancy in order to create a service-oriented ecosystem for the advancement of the businesses using it.
  • Security, maximum availability, and centralized management and control are fundamental necessities for transforming an IT environment into an integrated service center supporting business expansion, transformation, and growth.
  • Service orchestration might be the ultimate goal to achieve for an IT landscape, but system orchestration is a first step towards creating an abstraction layer between basic IT systems and business-oriented IT-services.

So, with these findings in mind, let us start diving into the key capabilities for a cloud- and innovation-ready automation solution.

 


Scaling: Where to?

Pushed by some friends – and at a fortunately discounted price – I finally gave in and made it to an AWS exam; so call me certified now. Just so.

Solutions-Architect-Associate

However, this has nothing to do with the matter in discussion here: while awaiting the exam, I talked to some fellow cloud geeks and was surprised again by them confusing up and down with out and in – as experienced so many times before. So: this is about “Scaling”!

And it’s really simple – here’s the boring piece:

The Principle

Both scaling patterns (up/down and out/in) in essence serve the same purpose and act by the same principle: upon explicit demand or implicit recognition, the amount of compute/memory resources is increased or decreased. Whether this is done

  • proactively or reactively (see guidelines above)
  • automatically
  • or manually through a portal by authorized personnel

is subject to the framework implementation possibilities and respective architecture decisions.

What’s up and down?

For the scale-up/-down pattern, the hypervisor (or IaaS service layer) managing cloud compute resources has to provide the ability to dynamically (expectedly without outage; but that’s not granted) increase the compute and memory resources of a single machine instance.

The trigger for scaling execution can either be implemented within a dedicated cloud automation engine, within the Scalability building block or as part of a self-service portal command, depending on the intended flexibility.

What’s in and out?

The same principles apply as for the scale-up/-down pattern; however, on scaling out an additional instance is created for the same service. This may involve one of the following alternatives:

  • Create an instance and re-configure the service to route requests to the new instance additionally
  • Create an instance, re-configure the service to route requests to the newly created instance only and de-provision an existing instance with lower capacity accordingly

Both cases possibly demand (automated) load balancer reconfiguration and require the application to be able to deal with changing server resources.

Conversely, scale-in means de-provisioning instances once load parameters have sufficiently decreased. The application on top has to be able to deal with dynamically de-provisioned server instances. In a scenario where the application’s data layer is involved in the scaling process (i.e. a DBMS is part of the server to be de-provisioned), the application has to take measures to reliably persist data before the respective resource is shut down.
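
As a toy illustration of such a reactive scale-out/in trigger, consider the sketch below; the thresholds and instance limits are placeholders, and the actual provisioning and load balancer calls would be whatever the cloud or automation layer offers.

```python
# Hypothetical thresholds -- tune to the real workload.
SCALE_OUT_AT = 0.80   # average utilisation above which an instance is added
SCALE_IN_AT = 0.30    # average utilisation below which an instance is removed
MIN_INSTANCES, MAX_INSTANCES = 2, 10

def decide(avg_utilisation: float, instances: int) -> int:
    """Return the desired instance count for the current load sample."""
    if avg_utilisation > SCALE_OUT_AT and instances < MAX_INSTANCES:
        return instances + 1   # scale out: add an instance, reconfigure the load balancer
    if avg_utilisation < SCALE_IN_AT and instances > MIN_INSTANCES:
        return instances - 1   # scale in: drain and de-provision an instance
    return instances           # stay put

print(decide(avg_utilisation=0.92, instances=3))  # -> 4
print(decide(avg_utilisation=0.12, instances=3))  # -> 2
```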

And now – for the more funny part

It occurred to me that two silly graphics could make the difference easier to remember, hence I invite you all to think of a memory IC as a little bug climbing up a ladder and, in turn, of computers bursting out of a data center. Does that make it easier to distinguish the two patterns?

scale-up

Scaling up and down

scale-out

Scaling out and in

 

You think you don’t need to care?

Well – as a SaaS consumer you’re right: as long as your tenant scales and responds accurately to any performance demand – no worries. But as soon as things deviate from this, you’re in trouble finding the right application architecture if it is unclear whether you need to scale resources or instances. So – remember the bug and the fleeing servers 🙂

 


The “Next Big Thing” series wrap-up: How to rule them all?

What is it that remains for the 8th and last issue of the “Next Big Thing” blog post series: To “rule them all” (all the forces, disruptive challenges and game changing innovations) and keep services connected, operating, integrated, … to deliver value to the business.

A while ago, I came upon Jonathan Murray’s concept of the Composable Enterprise – a paradigm which essentially preaches fully decoupled infrastructure and applications as services for company IT. Whether the Composable Enterprise is an entirely new approach or just a pin-pointed translation of what is essential for businesses mastering digital transformation challenges does not really matter.

The importance lies with the core concepts of what Jonathan’s paradigm preaches. These are to

  • decouple the infrastructure
  • make data a service
  • decompose applications
  • and automate everything

Decouple the Infrastructure.

Rewind to my own application development and delivery times during the 1990s and 2000s: when we were ready to launch a new business application, we would – as part of the rollout process – inform IT of the resources (servers, databases, connections, interface configurations) needed to run the thing. Today, large IT ecosystems sometimes still function that way, making them a slow and heavyweight inhibitor of business agility. The change to incorporate here is two-fold: on the one hand, infrastructure owners must understand that they need to deliver on the scale, time, and demand of their business customers (which includes more uniform, more agile, and more flexible – in terms of sourcing – delivery mechanisms). On the other hand, application architects need to understand that it is no longer their architecture that defines IT needs; rather, their architecture needs to adapt to and adopt agile IT infrastructure resources from wherever they may be sourced. By following that pattern, CIOs will enable their IT landscapes not only to leverage more cloud-like infrastructure sourcing on-premise (thereby enabling private clouds) but also to become capable of using ubiquitous resources following hybrid sourcing models.

Make Data a Service.

This isn’t about BigData-like services, really. It might be (in the long run). But this is essentially about where the properties and information of IT – of applications and services – really reside. Rewind again, this time only one or two years. The second-to-last delivery framework that my team of gorgeous cloud aficionados and I created was still built around a central source of information – essentially a master data database. This simply was the logical framework architecture approach back then. Even a few months ago – when admittedly my then team (another awesome one) and I already knew that information needs to lie within the service – it was still less complex (hence: quicker) to construct our framework around such a central source of (service) wisdom. What the Composable Enterprise rightly preaches, though, is a complete shift of where information resides. Every single service which offers its capabilities to the IT world around it needs to provide a well-defined, easy-to-consume, transparently reachable interface to query and store any information relevant to the consumption of the service. Applications or other services using that service simply engage via that interface – not only to leverage the service’s capabilities but even more to store and retrieve data and information relevant to the service and the interaction with it. And there is no central database. In essence, there is no database at all; there is no need for any. When services inherently know what they manage, need, and provide, all db-centric architecture for the sole benefit of the db as such becomes void.

Decompose Applications.

The aforementioned leads one way into the decomposition pattern. More important, however, is to think more thoroughly about what a single business-related activity – a business process – really needs in terms of application support, and in turn what the applications providing this support to the business precisely need to be capable of. Decomposing applications means identifying useful service entities which follow the above patterns and offer certain functionality in an atomic kind of way via well-defined interfaces (APIs) to the outside world, thereby creating an application landscape which delivers on scale, time, and demand just by being composed through service orchestration in the right – the needed – way. This is the end of huge monolithic ERP systems which claim to offer everything a business needs (you just need to customize them rightly). This is the beginning of light-weight services which rapidly adapt to changing underlying infrastructures and can be consumed not only for the benefit of the business owning them but – through orchestration – form whole new business process support systems for cross-company integration along new digitalized business models.

Automate Everything.

So, eventually we’ve ended at the heart of how to breath life into an IT which supports businesses in their digital transformation challenge.

Let me talk you into one final example emphasizing the importance of facing all these disruptive challenges openly: An Austrian bank of high reputation (and respectful success in the market) gave a talk at the Pioneers about how they discovered that they are actually not a good bank anymore, how they discovered that – in some years’ time – they’d not be able to live up to the market challenges and customers’ demands anymore. What they discovered was simply, that within some years they would lose customers just because of their inability to offer a user experience integrated with the mobile and social demands of today’s generations. What they did in turn was to found a development hub within their IT unit, solely focussing on creating a new app-based ecosystem around their offerings in order to deliver an innovative, modern, digital experience to their bank account holders.

Some time prior to the Pioneers, I had received a text that “my” bank (yes, I am one of their customers) now offers a currency exchange app through which I can simply order the amount of currency needed and would receive a confirmation once it’s ready to be handed to me in the nearest branch office. And some days past the Pioneers I received an eMail that a new “virtual bank servant” would be ready as an app in the net to serve all my account-related needs. Needless to say that a few moments later I was in and that the experience was just perfect even though they follow an “early validation” policy with their new developments, accepting possible errors and flaws for the benefit of reduced time to market and more accurate customer feedback.

Now, for a moment imagine just a few of the important patterns behind this approach:

  • System maintenance and keeping-the-lights-on IT management
  • Flexible scaling of infrastructures
  • Core banking applications and services delivering the relevant information to the customer facing apps
  • App deployment on a regular – maybe a daily – basis
  • Integration of third-party service information
  • Data and information collection and aggregation for the benefit of enhanced customer behaviour insight
  • Provision of information to social platforms (to influence customer decisions)
  • Monitoring and dashboards (customer-facing as well as internally to business and IT leaders)
  • Risk mitigation
  • … (I could probably go on for hours)

All of the above capabilities can – and shall – be automated to a certain, a great extent. And this is precisely what the “automate everything” pattern is about.

Conclusion

There is a huge business shift going on. Software, back in the ’80s and ’90s, was a driver for growth, had its downturn during and after the dot-com age, and now enters an era of being ubiquitously demanded.

Through the innovative possibilities by combining existing mobile, social and data technologies, through the merge of physical and digital worlds and through the tremendously rapid invention of new thing-based daily-life support, businesses of all kind will face the need for software – even if they had not felt that need so far.

The Composable Enterprise – or whatever one wants to call a paradigm of loosely coupled services being orchestrated through well-defined, transparently consumable interfaces – is a way for businesses to accommodate this challenge more rapidly. Automating daily routine – such as the aforementioned tasks – will be key for enterprises which want to stay on the edge of innovation within these fast-changing times.

Most important, though, is to stay focused within the blurring worlds of things, humans, and businesses – to keep the focus on innovation not for the benefit of innovation as such but for the benefit of growing the business behind it.

Innovation Architects will be the business angels of tomorrow – navigating their stakeholders through an ongoing revolution and supporting or driving the right decisions for implementing and orchestrating services in a business-focussed way.

 

{the feature image of this last “The Next Big Thing” series post shows a design by New Jersey and New York-based architects and designers Patricia Sabater, Christopher Booth and Aditya Chauan: The Sky Cloud Skyscraper – found on evolo.us/architecture}


The “Next Big Thing” series: Discussing The Thing

Opening chapter No. 6 of the “Next Big Thing” blog post series by discussing the Internet of Things!

What’s essentially so entirely new about the Internet of Things? Things are not. Connectivity and protocols might be – but not entirely. Mastering the data produced by things – well: we’ve discussed that bit in one of the earlier posts of this series.

What’s really entirely new is the speed of adoption that the topic now reaches; and this speed is based on the possibilities that technology innovation have begun to offer to new things-based ecosystems.

In order to understand that, one has to judge the elements of a things-based architecture. While the article “Understanding the IoT Landscape” offers a quite comprehensive simplified IoT architecture, I would tend to be a little more detailed in assessing the elements of a functioning IoT ecosystem:

Figure: Architecture building blocks in an IoT Ecosystem

  1. The Thing itself. The variety of what a “Thing” may be is vast: medical human wearable monitoring devices (e.g. heart monitor), medical measurement devices (such as a diabetes meter), sensors, transmitters and alert devices in automated cars, smart home automation sensors, fitness wearables or simply watches that take over several of the above capabilities at once, … and many more things that aren’t even invented or thought of yet (the aforementioned article gives a decent list of what Things could possibly be). Discussing implementation architectures for the Thing itself would exceed the scope of this article by far, obviously.
  2. The Thing’s UI: This is already an interestingly ambiguous architecture element. Do Things have UIs? Yes, they do – sometimes comprising only one or a few LEDs or the like. They could, of course, also have none at all if interfacing with the Thing’s user is delegated to either a mobile phone or a gateway which the Thing is connected to.
  3. Thing Connectivity and Communication Layer: The purpose of which is solely to bridge the gap between the Thing itself and the first connectivity point capable of transferring data through well-established protocols. Thing Connectivity may sometimes be reached through WiFi but often also by just using Bluetooth or any other wireless near field communication protocols.
  4. Thing Gateway: Rarely will Things directly feed data into any backend analytics or application layer, simply because it is too costly and complicated to accomplish a high-performance, secure, and reliable data connection based on proprietary protocols over long connectivity routes. Hence, we will often see some kind of gateway being introduced with the Thing, which in simple cases could just be a mobile phone.
  5. Data Store: By whatever way Things might be leveraged by the business’s backend IT, we will always see a data collection and storage layer introduced to directly capture and provide Thing data for further compute, analysis and application integration.
  6. Application Integration: One essential topic to consider when introducing Things into business models is to envision an application landscape around the Thing in order to offer app-based Thing interaction to the end consumer as well as information from Things and their usage to the Thing’s business plus to third-party consumers in order to drive cross-business integration. New cross-enterprise business models will evolve anyway – the better Things-centered businesses allow for integration and orchestration, the better they will be able to leverage and let others leverage their disruptive innovations.
  7. Analytics: No Thing-based business – no Things introduction – will make any sense without creating the ability to leverage the information produced for analysis and even more for prediction or even prescription. We will see more of that in the next section of the article.

The impact to IT when discussing the change through IoT cannot be overestimated. Just by assessing the layers described above it does become obvious that we will see quite a lot of new architectural approaches evolve which in turn need to be integrated with existing IT landscapes. Also – as with all the more recent disruptions in enterprise IT – the orchestration of different services maturing in the “Things” space will be key for IT organizations to offer utmost business value when leveraging the Internet of Things.

 

{No. 7 of this blog post series will cover “Digitalization” and “Digital Transformation” – and what it really means to any business}

{feature image borrowed from the IoT wikipedia article}

 


The “Next Big Thing” series: #Mobile Everywhere

{this is No. 4 of the “Next Big Thing” blog post series, which discusses the revolution to come through ongoing innovation in IT and the challenges involved with’em}

 

I would be interested to know how many readers of this series still know a person who does not own a smartphone. (I do, by the way ;))

Even though I have written several times about the danger of it and how important it is to consider behaviour for a healthy adoption of “Mobile Everywhere” (e.g. in "Switch Off" or "3 importances for a self-aware social networker"), I am still a strong believer in the advantages that elaborate mobile technology brings into day-to-day life.

Not only do mobile phone technology and mobile app ecosystems add significant value to the other two forces (data and social), but they have meanwhile also “learned” to make vast use of them. The bond of disruptive technologies discussed in this series could actually be described as a stacked model in which

  • data is the back-end business layer of the future
  • social platforms are the middleware to bring together information offers and information needs
  • and mobile technology is the front end to support information and data consumption in both ways

The image below turns the “Nexus”-model from the beginning of this series into a stack appearance:

 

Nexus of Forces (stacked)


 

This – essentially – closes the loop on why we see a bond not only of the technologies in mobility, social media, and data and analytics but even more of the visions, strategies, and concepts behind these three. Needless to say, therefore, businesses that have a strong strategy and vision around the Nexus of Forces and are – at the same time – backed by a strong Service Orchestration roadmap will be the winners of the “race of embrace” of this bond.

Now, thinking of the Pioneers, which I started this blog series with, I recall that one could see all forms of leveraging the aforementioned concepts in the ideas of the startups presenting there. And – unsurprisingly to me – not for a single moment during those two festival days back in October this year was “Cloud” even mentioned, let alone discussed. It is no topic anymore. Period.

However, there’s more: the Nexus of Forces as such is only the beginning of a path leading into the next industrial revolution, and we are already well under way. Hence, this blog series will continue discussing concepts and challenges which build upon the Nexus of Forces and take it to the next level, with change to come for each and every enterprise – software-based, hardware-based, or not even technology-based at all.

 

{No. 5 of this blog post series takes the first step into “Industry 4.0” and related disruptive topics}
