The Smile-IT Blog » Blog Archives


Automation and Orchestration – a Conclusion

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Automation and Orchestration are core capabilities in any IT landscape.

Traditionally, there would be classical on-premise IT, comprised of multiple enterprise applications and (partly) based on old-style architecture patterns such as file exchange, asynchronous time-boxed export/import scenarios, and historic file formats.

At the same time, the era of Cloud hype has come to an end in the sense that Cloud is now ubiquitous; it is as present as the Internet itself has been for years, and the descendants of Cloud – mobile, social, IoT – form the nexus of the new era of Digital Business.

For enterprises, this means an ever-increasing pace of innovation and a constant advance of business models and business processes. As this paper has outlined, automation and orchestration solutions form the core for IT landscapes to efficiently support businesses in their striving for constant innovation.

Let’s once again repeat the key findings of this paper:

  • Traditional “old style” integration capabilities – such as file transfer, object orientation or audit readiness – remain key criteria even for a cloud-ready automation platform.
  • In an era where cloud has become a commodity, just like the internet itself, service-centered IT landscapes demand maximum scalability and adaptability as well as multi-tenancy, in order to create a service-oriented ecosystem for the advancement of the businesses using them.
  • Security, maximum availability, and centralized management and control are fundamental necessities for transforming an IT environment into an integrated service center supporting business expansion, transformation, and growth.
  • Service orchestration might be the ultimate goal to achieve for an IT landscape, but system orchestration is a first step towards creating an abstraction layer between basic IT systems and business-oriented IT-services.

Therefore, for IT leaders, choosing the right automation and orchestration solution to support the business efficiently may be the single most crucial decision in either becoming a differentiator and true innovation leader or (just) remaining the head of a solid – yet commodity – enterprise IT.

The CIO of the future is a Chief Innovation (rather than “Information”) Officer – and Automation and Orchestration together build the core basis for innovation. Outlining what to look at in reaching the right make-or-buy decision was the main objective of this paper.

 


System Orchestrator Architecture

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

This final chapter addresses the architecture components of typical system orchestrators; comparing these with the blueprints for high-grade, innovative automation solutions presented previously in this paper reveals the close resemblance between automation and system orchestration patterns.

The first figure below is a system deployment architecture diagram describing the main physical components of a system orchestrator:

System Orchestration Deployment Architecture

Note that:

  1. The database normally needs to be set up in clustered mode for high availability; most orchestrator solutions rely fully on the database (at least at design time).
  2. The Management Server’s deployment architecture depends on the availability requirements for management and control.
  3. The Runtime Server nodes should be highly distributed (ideally geographically dispersed); the better the product architecture supports this, the more reliably orchestration will support IT operations.
  4. The Web Service deployment depends on availability and web service API needs (product- and requirement-dependent).

Logical architecture

The logical architecture builds on the previous description of the deployment architecture and outlines the different building blocks of the orchestration solution. The logical architecture diagram is depicted in the following figure:

System Orchestration Logical Architecture

Notes to “logical architecture” figure:

  1. The Orchestrator DB holds runtime and design time orchestration flows, action packs, activities, plugins, logs, …
  2. Management Server controls access to orchestration artefacts
  3. Runtime Server provides execution environment for orchestration flows
  4. Orchestration designer (backend) provides environment for creation of orchestration flows using artefacts from the database (depending on specific product architecture the designer and management components could be integrated)
  5. Web Service exposes the Orchestrator’s functionality to external consumers (ideally via REST)
  6. Action packs or plugins are introduced through installation at the design time (normally integrated into the DB)
  7. The Orchestrator’s admin console is ideally implemented as web service, hence accessible via browser
  8. The Design Client UI could either be web-based or a dedicated client application to be installed locally and using a specific protocol for backend communication

Of course, these building blocks can vary from product to product. However, what remains crucial to successful orchestration operations (much as with automation) is to have lightweight, scalable runtime components capable of supporting a small-scale, low-footprint deployment as efficiently as a large-scale, multi-site, highly distributed orchestration solution.

 


System versus Service Orchestration

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

One of the most well-known blueprints for service orchestration is the representation as seen from the perspective of service oriented architecture (SOA).

The following figure describes this viewpoint in principle:

Service Orchestration, defined

Operational components – such as commercial off-the-shelf (COTS) or custom applications, possibly with a high level of automated functionality (see previous chapters) – are orchestrated into simple application services (service components), which in turn are aggregated into atomic or composite IT services that subsequently support the execution of business processes. The latter are presented to consumers of various kinds without directly disclosing any of the underlying services or applications. In well-established service orchestration, functionality is often defined top-down: business processes are modelled and their requirements defined first, and the necessary services are then leveraged or composed to fulfil the processes’ needs.
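
The top-down composition described above can be sketched in a few lines (all service and function names here are hypothetical, invented purely for illustration):

```python
# Sketch of top-down service orchestration: a business process is
# modelled first, then composed from service components that wrap
# operational applications. All names are hypothetical.

def check_credit(customer):
    """Service component wrapping, e.g., a COTS finance application."""
    return {"customer": customer, "credit_ok": True}

def reserve_stock(order):
    """Service component wrapping, e.g., a custom warehouse application."""
    return {**order, "stock_reserved": True}

def composite_order_service(customer, order):
    """Composite IT service aggregated from service components."""
    credit = check_credit(customer)
    if not credit["credit_ok"]:
        raise RuntimeError("credit check failed")
    return reserve_stock(order)

def order_to_cash_process(customer, order):
    """Business process exposed to consumers; underlying apps stay hidden."""
    return composite_order_service(customer, order)

result = order_to_cash_process("ACME", {"item": "widget", "qty": 3})
```

The consumer only ever calls the business process; neither the COTS nor the custom application is disclosed directly.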

A different approach is derived from typical definitions in cloud frameworks; the following figure shows this approach:

System Orchestration: Context

Here, the emphasis lies on the automation layer forming the core aggregation. The orchestration layer on top creates the system and application services needed by the framework to execute its functional and operational processes.

The latter approach could be seen as a subset of the former, which will become clearer when discussing the essential differences between system and service orchestration.

Differences between system and service orchestration

System Orchestration

  • could in essence be comprised of an advanced automation engine
  • leverages atomic automation blocks
  • eases the task of automating (complex) application and service operation
  • often directly supports OS scripting
  • supports application interfaces (APIs) through a set of plugins
  • may itself offer a REST-based API for integration and SOA

Service Orchestration

  • uses SOA patterns
  • is mostly message oriented (focuses on the exchange of messages between services)
  • supports message topics and queues
  • leverages a message broker and (enterprise) service bus
  • can leverage and provide API
  • composes low level services to higher level business process oriented services
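
The message orientation listed above can be illustrated with the standard-library `queue` module standing in for a broker queue or topic (the services and message fields are hypothetical):

```python
import queue

# Sketch of message-oriented service orchestration: services exchange
# messages over a queue rather than calling each other directly, in the
# style of a message broker / (enterprise) service bus. Hypothetical names.

order_queue = queue.Queue()   # stands in for a broker queue/topic

def billing_service():
    """Consumes order messages and produces invoice records."""
    invoices = []
    while not order_queue.empty():
        msg = order_queue.get()
        invoices.append({"invoice_for": msg["order_id"], "amount": msg["total"]})
        order_queue.task_done()
    return invoices

# A producer service publishes messages instead of invoking billing directly;
# producer and consumer are loosely coupled through the queue.
order_queue.put({"order_id": 1, "total": 40.0})
order_queue.put({"order_id": 2, "total": 15.5})
invoices = billing_service()
```

In a real ESB the queue would be durable and distributed, but the decoupling principle is the same.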

Vendor examples of the former are vRealize Orchestrator, HP Operations Orchestration, Automic ONE Automation, BMC Atrium, System Center Orchestrator, and ServiceNow (unsurprisingly, some of these products also have an essential say in the field of automation).

Service orchestration examples would be vendors or products like TIBCO, MuleSoft, WSO2, Microsoft BizTalk, OpenText Cordys or Oracle Fusion.

System orchestration key features

System orchestrators are mainly expected to support a huge variety of underlying applications and IT services in a highly flexible and scalable way:

  • OS neutral installation (depending on specific infrastructure operations requirements)
  • Clustering or node setup possible for scalability and availability reasons
  • Ease of use; low entry threshold for orchestration/automation developers
  • Support quality; support ecosystem (community, online support access, etc.)
  • Database dependency kept to a minimum; major databases supported equally
  • Built-in business continuity support (backup/restore without major effort)
  • Northbound integration: REST API
  • Southbound integration and extensibility: either built-in, by leveraging APIs, or by means of a plugin ecosystem
  • Plugin SDK for vendor external plugin development support
  • Scripting possible but not necessarily needed
  • Ease of orchestrating vendor-external services (as vendor neutral as possible, depending on landscape to be orchestrated/integrated)
  • Self-orchestration possible
  • Cloud orchestration: seamless integration with major public cloud vendors

Main requirements for a service orchestrator

In contrast to the above, service orchestration solutions focus mainly on message handling and integration, as their main purpose is to aggregate lower-level application services into higher-level composite services that support business process execution. Typical demands on such a product would therefore involve:

  • Support for major web service protocol standards (SOAP, REST)
  • Support for “old-style” enterprise integration technologies (RMI, CORBA, (S)FTP, EDI, …) to integrate legacy applications
  • A central service registry
  • Resilient message handling (mediation, completion, message persistence, …)
  • Flexible, easy-to-integrate data mapping based on modelling and XSLT
  • Message routing and distribution through topics, queues, etc.
  • An integrated API management solution
  • An integrated business process modelling (BPM) solution
  • An integrated business activity monitoring (BAM) solution
  • Extensibility through low-threshold, commonly accepted software development technologies

As a rule of thumb to delineate the two effectively from each other, one can say that it is – to a certain extent – possible to create service orchestration by means of a system orchestrator, but it is (mostly) impossible to do system orchestration with only a service orchestrator at hand.

For this reason, we will continue with a focus on system orchestration as a way to leverage basic IT automation for the benefit of higher level IT services, and will address vanilla architectures for typical system orchestrator deployments.

 


What Is Orchestration?

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Before diving into the architectural patterns for a robust enterprise orchestration solution, a few definitions need to be clarified. As this paper has talked a lot about Automation in the first place, we shall begin by precisely outlining the delineation between Automation and Orchestration:

  • In any IT architecture framework, the automation layer creates the components necessary to provide atomic entities for service orchestration. Automation uses technologies to control, manage and run atomic tasks within machine instances, operating systems and applications on the one hand, and provides automation capabilities for the larger ecosystem in order to automate processes (e.g. onboarding a new employee) on the other.
  • Orchestration – in turn – uses processes, workflows and integration to construct the representation of a service from atomic components. A service could e.g. consist of operations within various different systems such as the creation of a machine instance in a virtual infrastructure, the installation of a webservice in another instance and the alteration of a permission matrix in an IAM system. “Orchestration” would provide means to aggregate these atomic actions into a service bundle which can then be provisioned to a customer or user for consumption. The Orchestration layer makes use of single automated service components.
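
The aggregation of atomic automated actions into a service bundle, as described above, can be sketched as follows (every function is a hypothetical stub, not a real infrastructure call):

```python
# Sketch of the automation/orchestration split: atomic automation tasks
# (infrastructure, application, IAM) are aggregated by an orchestration
# layer into one consumable service. All functions are hypothetical stubs.

def create_vm_instance(name):
    """Automation: create a machine instance in a virtual infrastructure."""
    return f"vm:{name}"

def install_webservice(vm):
    """Automation: install a webservice in that instance."""
    return f"webservice on {vm}"

def grant_permissions(user, vm):
    """Automation: alter a permission matrix in an IAM system."""
    return f"{user} may manage {vm}"

def provision_webshop_service(user):
    """Orchestration: bundles the atomic actions into one provisionable service."""
    vm = create_vm_instance("shop01")
    svc = install_webservice(vm)
    acl = grant_permissions(user, vm)
    return {"vm": vm, "service": svc, "access": acl}
```

The consumer requests only `provision_webshop_service`; the orchestration layer is what makes the three automated components appear as a single service.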

While the above definitions mainly address the system context in IT architectures, it is valid to say that there is another, slightly different context – the service context – which demands another definition of the term “orchestration”:

  • Service orchestration is the coordination and arrangement of multiple services exposed as a single aggregate service. It is used to automate business processes through loose coupling of different services and applications, thereby creating composite services. Service orchestration combines service interactions to create business process models consumable as services.

In order to illustrate the danger of confusion when discussing orchestration, here are a few references for orchestration definitions:

  • “Orchestration describes the automated arrangement, coordination, and management of complex computer systems, middleware and services.” (wikipedia: https://en.wikipedia.org/wiki/Orchestration_(computing) )
  • “Complex Behavior Interaction (Logic/Business Process Level): a complex interaction among different systems. In the context of service-oriented architecture, this is often described as choreography and orchestration” (Carnegie Mellon University Research Showcase 12-2013: “Understanding Patterns for System-of-Systems Integration”, Rick Kazman, Klaus Nielsen, Klaus Schmid)
  • “Service orchestration in an ESB allows service requesters to call service providers without the need to know where the service provider is or even the data scheme required in the service” (InfoTech Research Group Inc. “Select and Implement an ESB Solution”, August 2015)
  • “Orchestration automates simple or complex multi-system tasks on remote servers that are normally done manually” (ServiceNow Product Documentation: http://wiki.servicenow.com/index.php?title=Orchestration#gsc.tab=0 )
  • “The main difference, then, between a workflow “automation” and an “orchestration” is that workflows are processed and completed as processes within a single domain for automation purposes, whereas orchestration includes a workflow and provides a directed action towards larger goals and objectives” (Cloud Computing: Concepts, Technology & Architecture”, Thomas Erl, Prentice Hall, October 2014)
  • “Orchestration is the automated coordination and management of computer resources and services. Orchestration provides for deployment and execution of interdependent workflows completely on external resources” (ORG: http://cloudpatterns.org/mechanisms/orchestration_engine )

Even though these definitions seemingly leave a lot of room for interpreting what system and service orchestration really cover, clarity can be gained by looking at a few architectural principles as well as requirements for different orchestration goals, which the last few chapters of this paper will focus on.


From Automation to Orchestration

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

In some cases, choosing an appropriate IT automation solution comes down to a matter of business dependency, migration efficiency, or simply vendor relationship and trust. Many times, these criteria alone might provide enough information to make a solid buying decision.

However, to make the best decision for an organization, it pays to examine prospective automation solutions more closely. A key consideration is the solutions’ architectural patterns, which should be assessed through well-planned proof-of-concept testing or a detailed evaluation. Some key questions to raise are listed below, for convenience and possible inclusion in an evaluation questionnaire:

  • Does the solution have sufficient scalability to react to on-demand load changes and at the same time allow for the growth of IT automation and orchestration as your business grows?
  • Does the solution provide object orientation which allows for representation of real-world IT challenges through well-structured re-usable automation objects and templates?
  • Can the solution quickly and easily integrate and adapt to business process changes?
  • Does the solution guarantee 24/7, close-to-100% availability?
  • Is the solution able to interface with traditional legacy system files and applications through integrated file transfer capability?
  • Can the solution provide a full audit trail from the automation backbone, and does it support compliance standards and regulations out-of-the-box?
  • Does the solution offer multi-client support and the ability to segregate organizational and customer IT segments?
  • Does the solution include the latest security features supporting confidentiality, integrity and authenticity?
  • Can the solution handle the integration needs in a homogeneous way from a central automation layer that enables IT admins to come up to speed quickly with minimal training?
  • Does the solution dynamically control the execution of processes at all times?

Obviously, not all automation solutions on the market can answer “yes” to each of these questions; as always, the goal is to find the platform that bundles all of the described criteria in the best possible compromise.

A supportive element to this decision could be a solution’s capability not only to automate mundane operations tasks in an IT landscape but to bundle these tasks into larger system orchestration scenarios.

What are the essentials of such scenarios? What should one look at? And what about the delineation between system and service orchestration? These questions are the subject of the following chapters.


Dynamic Processing Control of Automation Flows

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

The final compelling element to consider in an enterprise-grade automation solution is its ability to dynamically control – in real-time – how automation changes the behaviour of your IT landscape. This includes:

  • On-demand business process changes requested by stakeholders, mandated by regulatory requirements, or directly affected by events outside of the enterprise’s core business model
  • Risk of service level (SLA) penalties caused by an application failure or the system’s inability to handle a change of load demand
  • The ability of the system to support a rapid introduction of a new product line or service to meet changing business needs

When assessing such capabilities, the following architecture patterns within the proposed automation solution are of importance:

Dynamic just-in-time execution

As described earlier, object orientation forms the basis for aggregating artefacts built into the automation solution (such as actions, tasks, workflows, file processing, and reports). This capability should be provided in a way that keeps automation operations sufficiently granular while at the same time allowing the solution to act as a large automation ecosystem. More importantly, the solution must retain the ability to dynamically re-aggregate executions on demand as required.

If the automation platform handles each artifact as an object, then object interaction, object instantiation parameters, or object execution scheduling can be redefined in a matter of minutes. All that’s left is to define the object model of the actual automation implementation for the specific IT landscape – a one-time task.

The best automation solutions include a library of IT process automation actions that can be aggregated throughout automation workflows. These “IT process automation” actions are ideally delivered as part of the solution as a whole or specifically targeted to address particular automation challenges within enterprise IT landscapes.

Examples are:

  • If IT SLA measures reveal that a particular IT housekeeping task is at risk due to an increase in processing time, dynamic adaptation of the specific workflows would involve assigning a different scheduler object or calendar to the task, or re-aggregating the workflow to execute the process in smaller chunks. This assumes end-to-end object orientation and a proper object model definition.
  • If a particular monthly batch data processing workflow exceeds a particular transfer size boundary, the workflow can remain completely unchanged while the chunk size definition is altered by changing the input parameters. These input parameters would themselves be derived from IT configuration management, so dynamic automation adaptation would still remain zero-interactive.
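
The second example can be sketched directly: the workflow logic stays fixed while the chunk size is injected as a parameter from configuration (the config keys and workflow are hypothetical):

```python
# Sketch of zero-interactive dynamic adaptation: the workflow itself is
# never edited; only the chunk-size parameter, sourced from (hypothetical)
# IT configuration management, changes.

CONFIG = {"batch_chunk_size": 3}   # would be derived from configuration mgmt

def chunked(records, chunk_size):
    """Split a record list into fixed-size chunks (last one may be smaller)."""
    return [records[i:i + chunk_size]
            for i in range(0, len(records), chunk_size)]

def monthly_batch_workflow(records, config=CONFIG):
    """Processes records in chunks; only the parameter varies, not the flow."""
    return [f"processed {len(chunk)} records"
            for chunk in chunked(records, config["batch_chunk_size"])]
```

Reducing the chunk size when a transfer boundary is hit means changing one configuration value, with no change to the workflow definition itself.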

Pre/post processing of automation tasks

Dynamic execution not only requires maximum object orientation within the implementation and operation of the automation solution; the solution must also provide the ability to:

  • Change the behavior of an object by preprocessing actions.
  • Process the output of an object for further use in subsequent automation tasks/workflows.

Pre- or post-execution logic is best added as an object property – an inherent part of the object itself – rather than implemented as a separate object within the model for the same logic, which rarely makes sense for pre- or post-processing. These tasks thus become part of the concrete instance of an abstract automation object.

Examples for applying this pattern in automation are:

  • System alive or connectivity check
  • Data validation
  • Parameter augmentation through additional input
  • Data source query
  • Dynamic report augmentation

Automation solutions can offer this capability either through graphical modelling of pre- and post-conditions or, in the case of more complex requirements, through script language elements.
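
The hook-as-property pattern can be sketched as follows (the `AutomationTask` class and its hooks are hypothetical, not any product's object model):

```python
# Sketch of pre/post processing as properties of the task instance
# itself, rather than as separate objects in the model. Hypothetical names.

class AutomationTask:
    def __init__(self, action, pre=None, post=None):
        self.action, self.pre, self.post = action, pre, post

    def run(self, data):
        # Preprocessing hook: e.g. system alive check or data validation.
        if self.pre and not self.pre(data):
            raise ValueError("precondition failed")
        result = self.action(data)
        # Postprocessing hook: e.g. dynamic report augmentation.
        return self.post(result) if self.post else result

task = AutomationTask(
    action=lambda d: {"rows": len(d)},
    pre=lambda d: isinstance(d, list),        # data validation
    post=lambda r: {**r, "status": "ok"},     # report augmentation
)
```

Because the hooks live on the concrete instance, two instances of the same abstract task can validate or augment differently without duplicating the core action.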

Easy-to-use extensible scripting

The scripting language offered by an automation solution is key to implementing enterprise-grade automation scenarios. While scripting within automation and orchestration tends to evolve towards supporting mainly standard scripting languages such as Python, Perl, JavaScript or VBScript, a solution that offers both standard and proprietary scripting is still optimal.

An automation system’s proprietary scripting language addresses the system’s own object model most efficiently while, through extension capabilities, enabling seamless inclusion of target-system-specific operations. The combination of both is the best way to ensure a flexible, dynamic, and high-performing end-to-end automation solution.

 


Homogeneous end-to-end Automation Integration

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Whether looking to extend existing IT service capabilities through innovative service orchestration and delivery, or trying to increase the level of automation within the environment, one would always want to examine the following core features of a prospective automation solution:

  • Automation
  • Orchestration
  • Provisioning
  • Service definition and catalogue
  • Onboarding and subscription
  • Monitoring and metering
  • Reporting and billing

Though not all of these might be utilized at once, the automation solution will definitely play a major role in aggregating them to support the business processes of an entire enterprise.

Either way, homogeneity represents a key element when it comes to determining the right solution with the right approach and the right capabilities.

Homogeneous UX for all integrations

First, the automation platform one chooses must offer a unified user experience (UX) across all targeted applications. This doesn’t mean that the exact same user interface needs to be presented for every component in the system; it’s more important that there is a unified pattern across all components. This should start with the central management elements of the solution and extend to both internal and external resources, such as the Automation Framework IDE for 3rd-party solutions discussed previously.

In addition, the core automation components must also match the same UX. Introducing an automation system with standard user interfaces and integration concepts ensures rapid implementation, since SMEs can focus on automating system processes rather than being bogged down with training on the automation solution itself.

A single platform for all automation

The more products make up the automation solution, the greater the effort required to integrate them into an IT landscape. Software and system architecture has never, throughout its history, proposed one single technology, standard, or design guideline for all functional capabilities, non-functional requirements, or interface definitions. Therefore, a bundled system comprised of multiple products will, in 95% of cases, come with a variety of inter-component interfaces that need to be configured separately from the centralized parameterization of the overall solution.

Project implementation experience shows that the learning curve associated with a solution is directly proportional to the number of different components contained in the automation suite (see figure below):

Automation: Adoption time by number of platform components

Functional integration with target systems

Finally, the solution’s core functionality should integrate with target systems using industry standards such as common OS scripting languages, REST, or JMS, or quasi-standards with target applications such as RPC and EJB. At minimum, the solution should support 90% of these industry standards out-of-the-box.

In addition, an enterprise-grade automation solution should provide:

  • Multiple action/workflow templates (either bundled with the core solution or available for purchase)
  • Ease of integration implementation with target systems’ core functionality at a very detailed level – such as administrative scripting control from within the automation core through scripting integration (to be discussed in the following chapter)

 


Automation Security

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

An obvious key point to consider when choosing an automation solution is security. We’ve discussed Audit & Compliance separately from security because audit trails and compliance need architectural support from the solution but are both less technical in themselves than security.

Considering security issues for an automation solution means focusing on the following areas:

  • Confidentiality: How does the solution manage authorized access?
  • Integrity: How does the solution ensure that stored objects and data are consistently traceable at any point in time?
  • Availability: How does the solution guarantee availability as defined, communicated, and agreed upon?
  • Authenticity: How does the solution ensure the authenticity of identities used for the communication of partners (components, objects, users)?
  • Liability: How does the solution support responsibility and accountability of the organization and its managers?

None of these areas relies on one particular architectural structure. Rather, they have to be assessed by reviewing the particular solution’s overall architecture and how it relates to security.

User security

Authentication

Any reputable automation solution will offer industry-standard authentication mechanisms such as password encryption, a strong password policy, and login protection upon failure. Integrating with common identity directories such as LDAP or AD provides a higher level of security for authenticating users’ access: the “bind request” is forwarded to the specific directory, thereby leveraging the directory’s technologies not only to protect passwords and users but also to provide audit trail data for login attempts. Going a step further, an authentication system provided through an external, integrated LDAP might offer stronger authentication – such as MFA – out-of-the-box, without the need to augment the solution to gain greater security.

In addition, the solution should provide a customizable interface (e.g. through an “exit/callback” mechanism) for customers to integrate any authentication mechanism not yet supported by the product out-of-the-box.

Personnel database

Most organizations use one core personnel database within their master data management (MDM) process. For example, new employees are onboarded through an HR-triggered process which, in line with organizational policies, ensures the creation of access permissions for the systems employees use every day. For an automation system’s architecture, such an approach means offering interfaces and synchronization methods for users – either as objects or links. The automation workflow supporting the HR onboarding process would subsequently leverage these interfaces to create the necessary authentication and authorization artifacts.
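
A minimal sketch of such a synchronization interface, with entirely hypothetical HR records, role names, and permission sets:

```python
# Sketch of an HR-triggered onboarding sync: the personnel database stays
# the master; the automation layer mirrors it into user objects with
# role-derived permissions. All data and names are hypothetical.

HR_MASTER = [
    {"id": 101, "name": "Jane Doe", "role": "operator"},
    {"id": 102, "name": "John Roe", "role": "developer"},
]

ROLE_PERMISSIONS = {
    "operator":  {"run_flows"},
    "developer": {"edit_flows", "run_flows"},
}

def sync_users(hr_records):
    """Creates automation-side user objects from the HR master records."""
    return {rec["id"]: {"name": rec["name"],
                        "permissions": ROLE_PERMISSIONS[rec["role"]]}
            for rec in hr_records}

users = sync_users(HR_MASTER)
```

In practice the sync would run as part of the onboarding workflow itself, so authentication and authorization artifacts are never created by hand.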

Authorization & Access

Enterprise-grade automation solutions should offer a variety of access controls for managed objects. In addition to the core capabilities already discussed, IT operations should expect the solution to support securing its various layers and the objects within them. This involves:

  • Function level authorization: The ability to grant/revoke permission for certain functions of the solution.
  • Object level authorization: The ability to create access control lists (ACLs) at the single object level if necessary.
  • ACL aggregation: The ability to group object level ACLs together through intelligent filter criteria in order to reduce effort for security maintenance.
  • User grouping: The ability to aggregate users into groups for easy management.

In addition, a secure solution should protect user and group management from unauthorized manipulation through use of permission sets within the authorization system.
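
The combination of user grouping and filter-based ACL aggregation can be sketched with the standard-library `fnmatch` module; the group names, object paths, and rule format below are hypothetical:

```python
import fnmatch

# Sketch of layered authorization: users are grouped, and object-level
# ACLs are aggregated through a glob-style filter so one rule can cover
# many objects. All names and the rule format are hypothetical.

GROUPS = {"operators": {"alice", "bob"}}

# ACL aggregation: one pattern-based rule instead of per-object ACLs.
ACL_RULES = [
    {"pattern": "jobs/backup/*", "group": "operators", "actions": {"execute"}},
]

def allowed(user, obj, action):
    """True if any aggregated ACL rule grants the user this action on obj."""
    for rule in ACL_RULES:
        if (fnmatch.fnmatch(obj, rule["pattern"])
                and user in GROUPS[rule["group"]]
                and action in rule["actions"]):
            return True
    return False
```

A single filter rule here secures every backup job at once, which is exactly the maintenance-effort reduction the ACL aggregation bullet describes.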

API

Automation solutions that do not include APIs are rarely enterprise-ready. While compatible APIs (e.g. based on Java libraries) can inherently leverage the previously discussed security features, Web Service APIs need to offer additional authentication technologies along commonly accepted standards. Within REST, we mainly see three different authentication methods:

  1. Basic authentication is the lowest-security option, as it involves simply exchanging a base64-encoded username/password pair. This not only requires additional security measures for storing, transporting, and processing login information, but also provides no way to authenticate the calling application itself against the API. It also opens external access for any authorized user through a password alone.
  2. OAuth 1.0a provides the highest level of security since sensitive data is never transmitted. However, implementation of authentication validation can be complex requiring significant effort to set up specific hash algorithms to be applied with a series of strict steps.
  3. OAuth 2.0 is a simpler implementation, but still considered a sufficiently secure industry standard for API authentication. It eliminates the use of signatures and handles all encryption through transport layer security (TLS), which simplifies integration.

Basic authentication might be acceptable for an automation solution's APIs operated solely within the boundaries of the organization. This is becoming less common as more IT operations evolve toward service-oriented, orchestrated delivery of business processes in hybrid environments. Operating in such a landscape requires interfaces for external integration, in which case your automation solution must provide a minimum of OAuth 2.0 security.
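The practical difference between options 1 and 3 is visible in the request headers alone. A minimal sketch using only the Python standard library (no real endpoint is contacted; the token in the OAuth 2.0 case is assumed to have been obtained earlier from an authorization server over TLS):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Basic authentication: the raw credentials travel base64-encoded on
    every single request, so TLS and careful credential storage are mandatory."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def bearer_auth_header(access_token: str) -> dict:
    """OAuth 2.0: a short-lived access token replaces the raw credentials;
    encryption is delegated entirely to the TLS transport."""
    return {"Authorization": f"Bearer {access_token}"}
```

Because base64 is an encoding, not encryption, anyone who captures a Basic header can recover the password – which is exactly why the text requires at least OAuth 2.0 for externally reachable interfaces.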

Object level security

The levels of authorization previously mentioned set the stage for defining a detailed authorization matrix within the automation solution's object management layer. An object represents an execution endpoint within a potentially critical target system of automated IT operations. Access to the object grants the automation solution permission to directly impact the target system's behavior. Therefore, an automation system must provide sufficiently detailed ACL configuration methods to control access to:

  • Endpoint adapters/agents
  • Execution artifacts such as processes and workflows
  • Other objects like statistics, reports, and catalogues
  • Logical tenants/clients

The list could be extended even further. However, the more detailed the authorization system, the greater the need for feasible aggregation and grouping mechanisms to keep complexity manageable. At the same time, the more possibilities there are for controlling and managing authorization, the better the automation solution's manageability.

Separation of concerns

Finally, to allow for a role model implementation that supports a typical IT organizational structure, execution must be separated from design and implementation. Object usage must not automatically imply permission for object definition. For example, an automation specialist can reference a credential object when constructing workflows without the underlying credentials ever being revealed.
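This design/execution split can be made concrete with a credential object whose secret can be used but never read at design time. A minimal sketch under the assumption of a two-role model (the class and role names are illustrative):

```python
class CredentialObject:
    """A login object: workflow designers may reference it, but only the
    execution engine may resolve the stored secret."""

    def __init__(self, name: str, secret: str):
        self.name = name        # visible at design time, may be referenced
        self._secret = secret   # resolved only at execution time

    def use(self, caller_role: str) -> str:
        # Object usage (referencing) does not imply object definition
        # (reading/changing the secret): only execution may resolve it.
        if caller_role != "execution-engine":
            raise PermissionError("secret is not exposed at design time")
        return self._secret
```

A designer wiring `CredentialObject("db-login", ...)` into a workflow never sees the secret; the engine resolves it only when the workflow actually runs.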

Communication Security

Securing the communication between systems, objects, and endpoints is the final security issue to consider when assessing an automation solution. This includes:

  • Encryption
  • Remote endpoint authentication – the ability to configure how target endpoints authenticate when interacting with the core automation management engine

For communication between components, encryption must be able to leverage standard algorithms. The solution should also allow configuration of the desired encryption method. At minimum, it should support AES-256.

Endpoint authentication provides a view of security from the opposite side of automation. To this point, we've discussed how the solution should support security implementation. When a solution is rolled out, however, endpoints need to interact with the automation core automatically and securely. Ideally, the automation solution generates a certificate or key deployable as a package to endpoint installations, delivered via a separate, secure connection. This configuration gives each endpoint a unique fingerprint and prevents untrusted endpoints from intruding into the automation infrastructure.
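The fingerprint idea can be sketched with a hash over the deployed key material: the core records the fingerprint at enrollment time and later accepts only endpoints presenting matching material. This is an assumption-laden illustration (real products would typically use X.509 certificates and mutual TLS rather than a bare hash registry):

```python
import hashlib

def fingerprint(cert_bytes: bytes) -> str:
    """SHA-256 fingerprint of an endpoint's deployed certificate/key."""
    return hashlib.sha256(cert_bytes).hexdigest()

class EndpointRegistry:
    """Hypothetical registry on the automation core: an endpoint is only
    accepted if the fingerprint of the key it presents matches the one
    recorded when its installation package was generated."""

    def __init__(self):
        self._known = {}  # endpoint name -> expected fingerprint

    def enroll(self, name: str, cert_bytes: bytes) -> None:
        # Called once, over the separate secure deployment channel.
        self._known[name] = fingerprint(cert_bytes)

    def verify(self, name: str, cert_bytes: bytes) -> bool:
        # Called on every connection attempt from the endpoint.
        return self._known.get(name) == fingerprint(cert_bytes)
```

An unknown endpoint, or a known endpoint presenting different key material, fails verification and is kept out of the automation infrastructure.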


Managing Tenants in Automation

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Another decision to make during the selection process is whether the automation platform needs to support multi-tenant and/or multi-client capabilities. How you choose can have a significant financial impact.

Multi-tenancy versus multi-client

Multi-tenancy is closely related to the Cloud. Although not strictly a cloud pattern, multi-tenancy has become one of the most discussed topics in application and service architecture through the rise of Cloud Computing, because it ultimately enables essential cloud characteristics: virtually endless scaling and resource pooling.

Multi-tenancy

Multi-tenancy partitions an application into virtual units. Each virtual unit serves one customer while being executed within an environment shared with the other tenants' units. It neither interacts nor conflicts with other virtual units, nor can a single virtual unit exhaust the shared environment's resources in case of malfunction (resource pooling, resource sharing).

Multi-client

In contrast, a multi-client system is able to split an application into logical environments by separating functionality, management, object storage, and permission layers. This enables setting up a server that allows logons by different users with each user having their separate working environment while sharing common resources – file system, CPU, memory. However, in this environment, there remains the possibility of users impacting each other’s work.

Importance of Multi-tenancy and Multi-client

These concepts are critical because of the need to provide separate working environments, object stores, and automation flows for different customers or business lines. One should therefore look for an automation solution that supports this capability out of the box. In certain circumstances you may not require strict customer segregation or the ability to pool and share resources out of one single environment; this differentiation can become a cost-influencing factor.

Independent units within one system

Whether your automation solution needs to be multi-tenant depends on the business case and usage scenario. In enterprise environments with major systems running on-premises, multi-tenancy is normally not a major requirement. Experience shows that even when automation systems are shared between multiple organizational units, or automate multiple customers' IT landscapes in an outsourcing scenario, multi-tenancy isn't required, since all units and customers are managed through the central administration and architecture.

Multi-client capabilities, though, are indeed a necessity in an enterprise-ready automation solution, as users from multiple different organizations need to work within the automation environment.

Multi-client capabilities would include the ability to:

  • Split a single automation solution instance into as many as 1,000 or more different logical units (clients)
  • Add clients on demand without downtime or changes to the underlying infrastructure
  • Segregate object permission by client and enable user assignment to clients
  • Segregate automation objects and enable assignment to specific clients
  • Delegate execution of centrally implemented automation workflows by simply assigning them to specific clients (assuming the necessary permissions have been set)
  • Re-use automation artifacts between clients (including clear and easy to use permission management)
  • Share use of resources across clients (but not necessarily for secure and scalable resource pooling across clients; see differentiation above)
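Several of the capabilities above – per-client object segregation plus explicit re-use of artifacts between clients – can be captured in a small sketch. The repository model and its method names are invented for illustration, not taken from any product:

```python
class MultiClientRepository:
    """Sketch of client (logical-unit) segregation: every object belongs to
    exactly one client, and re-use across clients requires an explicit share."""

    def __init__(self):
        self._objects = {}    # (client, object name) -> definition
        self._shared = set()  # (owner client, object name, target client)

    def add_client_object(self, client: str, name: str, definition) -> None:
        # Objects are segregated by client at storage time.
        self._objects[(client, name)] = definition

    def share(self, owner: str, name: str, target: str) -> None:
        # Re-use between clients is an explicit, auditable permission grant.
        self._shared.add((owner, name, target))

    def resolve(self, client: str, name: str, owner: str = None):
        if owner is None:
            # A client only ever sees its own objects by default.
            return self._objects.get((client, name))
        if (owner, name, client) in self._shared:
            return self._objects.get((owner, name))
        raise PermissionError("object not shared with this client")
```

The default lookup path never crosses client boundaries; cross-client access only works along a recorded share grant, which is what makes the permission management "clear and easy to use" in practice.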

Segregation of duties

Having multiple clients within one automation solution instance enables servicing of multiple external as well as internal customers, allowing quick adaptation to changing business needs. Each client can define separate automation templates, security regulations, and access to surrounding infrastructure. A simple transport/delegation mechanism between clients also allows implementing a multi-staging concept for the automation solution.


Audit & Compliance for Automation Platforms

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Audit and Compliance has assumed greater importance in recent years. Following the Global Financial Crisis of 2007–08 – one of the most treacherous crises of our industrial age (Wikipedia cross-references various sources on the matter) – audit and standardization organizations as well as governmental institutions invested heavily in strengthening compliance laws, regulations, and enforcement.

This required enterprises in all industries to make significant investments to comply with these new regulations. Standards have evolved that define necessary policies and controls to be applied as well as requirements and procedures to audit, check, and enhance processes.

Typically, these policies encompass both business and IT related activities such as authentication, authorization, and access to systems. Emphasis is placed on tracking modifications to any IT systems or components through timestamps and other verification methods, with particular focus on processes and communications that involve financial transactions.

Therefore, supporting the enforcement and reporting of these requirements, policies and regulations must be a core function of the automation solution. Following are the key factors to consider when it comes to automation and its impact on audit and compliance.

Traceability

The most important feature of an automation solution with regard to compliance standards is traceability. The solution must provide logging capabilities that track user activity within the system. It must record all modifications to the system's repository, including the user's name, the date and time of the change, and a copy of the data before and after the change. Such a feature ensures system integrity and compliance with regulatory statutes.
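The shape of such a traceability record is simple to sketch: an append-only log entry carrying who, what, when, and the before/after state. A minimal illustration (the class name and field layout are assumptions of the sketch):

```python
import datetime

class AuditLog:
    """Append-only change log: who changed what, when, and the object
    state before and after the change."""

    def __init__(self):
        self.entries = []  # never updated in place, only appended to

    def record(self, user: str, obj_name: str, before, after) -> None:
        self.entries.append({
            "user": user,
            "object": obj_name,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "before": before,   # copy of the data prior to the change
            "after": after,     # copy of the data after the change
        })
```

Keeping both snapshots, rather than just a diff or the new value, is what lets an auditor reconstruct the exact repository state at any point in the investigation.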

Statistics

Statistical records ensure that every step performed – whether by an actual user or initiated through an external interface (API) – is recorded. Such records should be stored in a hierarchy within the system's backend database, allowing follow-up checks of who performed which action at what time. Additionally, the system should allow comments on single or multiple statistical records, supporting complete traceability of automation activities by documenting additional operator actions.

Version Management

Some automation solutions offer integrated version management as an option. Once enabled, the solution keeps track of all changes made to tasks and blueprint definitions as well as to objects like calendars and time zones. Every change creates a new version of the specific object, which remains accessible at any time for follow-up investigation. Objects carry additional information such as version numbers, change dates, and user identification. In some cases, the system also allows restoring an older version of a specific object.
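A versioned object store of this kind can be sketched in a few lines. One design detail worth noting: restoring an old version should itself create a new version, so the history stays complete (all names below are illustrative):

```python
class VersionedStore:
    """Sketch of integrated version management: every change appends a new
    version carrying the author; older versions stay retrievable."""

    def __init__(self):
        self._versions = {}  # object name -> list of (version no, user, definition)

    def save(self, name: str, user: str, definition) -> None:
        history = self._versions.setdefault(name, [])
        history.append((len(history) + 1, user, definition))

    def latest(self, name: str):
        return self._versions[name][-1]

    def restore(self, name: str, version: int, user: str) -> None:
        # Restoring never rewrites history: it appends the old definition
        # as a brand-new version attributed to the restoring user.
        _, _, definition = self._versions[name][version - 1]
        self.save(name, user, definition)
```

This append-only behavior is what keeps version management compatible with the audit-trail requirements of the previous sections.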

Monitoring

All of the above features handle, process, and record the design-time activity of an automation system, ensuring that stored data and data changes are documented to comply with audit needs. During execution, an automation system should additionally be able to monitor the behavior of every instantiated blueprint. Monitoring records need to track the instance itself as well as every input/output and any changes performed to or by the instance (e.g. manually putting a certain task on hold).

Full Audit Trails

All of the above features contribute to a complete audit trail that complies with the reporting requirements as defined by the various standards. Ultimately an automation system must be able to easily produce an audit trail of all system activity from the central database in order to document specific actions being investigated by the auditor. An additional level of security that also enables compliance with law and regulations is the system’s ability to restrict access to this data on a user/group basis.

Compliance Through Standardization

Finally, to ease compliance adherence, the automation solution must follow common industry standards. While proprietary approaches within a system's architecture are applicable and necessary (e.g. scripting language – see chapter “Dynamic Processing Control”), the automation solution itself must strictly follow encryption methods, communication protocols, and authentication technologies widely considered industry best practice. Any other approach in these areas would significantly complicate the efforts of IT Operations to prove compliance with audit and regulatory standards. In certain cases, it could even shorten the audit cycle to less than a year, depending on the financial and IT control standard being followed.
