
Monthly Archives: March 2016

Dynamic Processing Control of Automation Flows

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

The final compelling element to consider in an enterprise-grade automation solution is its ability to dynamically control – in real time – how automation changes the behavior of your IT landscape. This includes:

  • On-demand business process changes requested by stakeholders, mandated by regulatory requirements, or directly affected by events outside of the enterprise’s core business model
  • Risk of service level agreement (SLA) penalties caused by an application failure or the system’s inability to handle a change in load demand
  • The ability of the system to support a rapid introduction of a new product line or service to meet changing business needs

When assessing such capabilities, the following architecture patterns within the proposed automation solution are of importance:

Dynamic just-in-time execution

As described earlier, object orientation forms the basis for aggregating artifacts built into the automation solution (such as actions, tasks, workflows, file processing, reports). This capability must be provided in a way that keeps automation operations sufficiently granular while allowing the solution to act as one large automation ecosystem. More importantly, the solution must retain the ability to dynamically re-aggregate executions on demand.

If the automation platform handles each artifact as an object, then object interaction, object instantiation parameters, or object execution scheduling can be redefined in a matter of minutes. All that’s left is to define the object model of the actual automation implementation for the specific IT landscape – a one-time task.

The best automation solutions include a library of IT process automation actions that can be aggregated into automation workflows. These actions are ideally delivered as part of the solution as a whole, or specifically targeted at particular automation challenges within enterprise IT landscapes.

Examples are:

  • If IT SLA measures reveal a particular IT housekeeping task at risk due to an increase in processing time, dynamic adaptation of the specific workflows would involve assigning a different scheduler object or calendar to the task or re-aggregating the workflow to execute the process in smaller chunks. This assumes end-to-end object orientation and proper object model definition.
  • If a particular monthly batch data processing workflow is exceeding a particular transfer size boundary, the workflow can remain completely unchanged while chunk size definition is altered by changing the input parameters. These input parameters would themselves be derived from IT configuration management so dynamic automation adaptation would still remain zero-interactive.
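The second example can be made concrete with a few lines of code. The following is a minimal, hypothetical sketch (none of the names refer to a specific product) of a batch workflow whose chunk size is an input parameter rather than part of the workflow definition:

```python
# Minimal sketch: a batch workflow whose chunk size is supplied at run time.
# The workflow definition itself never changes; only the input parameter does.
# All names (process_chunk, run_batch) are illustrative, not a product API.

def process_chunk(records):
    """Stand-in for the actual per-chunk batch logic."""
    print(f"processing {len(records)} records")

def run_batch(records, chunk_size):
    """Execute the unchanged workflow in chunks of a configurable size."""
    for start in range(0, len(records), chunk_size):
        process_chunk(records[start:start + chunk_size])

# chunk_size would be derived from IT configuration management, so adapting
# the automation requires no interactive change at all.
run_batch(records=list(range(10_000)), chunk_size=500)
```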

Pre/post processing of automation tasks

Dynamic execution not only requires maximum object orientation within the implementation and operation of the automation solution; the solution must also provide the ability to:

  • Change the behavior of an object by preprocessing actions.
  • Process the output of an object for further use in subsequent automation tasks/workflows.

Adding pre- or post-execution logic as an object property – an inherent part of the object itself rather than a separate object within the model – avoids implementing additional objects for logic that rarely recurs elsewhere, which is typically the case with pre- and post-processing. These tasks thus become part of the concrete instance of an abstract automation object.

Examples for applying this pattern in automation are:

  • System alive or connectivity check
  • Data validation
  • Parameter augmentation through additional input
  • Data source query
  • Dynamic report augmentation

Automation solutions can offer this capability either through graphical modelling of pre- and post-conditions or, in the case of more complex requirements, through scripting language elements.
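As an illustration of this pattern, the following minimal sketch (all names are hypothetical, not a specific product's API) attaches pre- and post-processing as properties of an automation object:

```python
# Sketch: pre/post logic attached as properties of the automation object
# itself rather than as separate objects in the model. Names are illustrative.

class AutomationTask:
    def __init__(self, action, pre=None, post=None):
        self.action = action          # the core automation logic
        self.pre = pre                # e.g. connectivity check, data validation
        self.post = post              # e.g. result processing, report augmentation

    def execute(self, context):
        if self.pre and not self.pre(context):
            raise RuntimeError("precondition failed; task not executed")
        result = self.action(context)
        return self.post(result) if self.post else result

task = AutomationTask(
    action=lambda ctx: {"rows": ctx["rows"] * 2},
    pre=lambda ctx: ctx.get("host_alive", False),   # system alive check
    post=lambda res: {**res, "validated": True},    # output post-processing
)
print(task.execute({"host_alive": True, "rows": 21}))
```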

Easy-to-use extensible scripting

The scripting language offered by the automation solution is key to implementing enterprise-grade automation scenarios. While scripting within automation and orchestration tends to evolve towards supporting mainly standard scripting languages such as Python, Perl, JavaScript, or VBScript, a solution that offers both standard and proprietary scripting is still the optimum.

An automation system’s proprietary scripting language addresses the system’s own object model most efficiently while at the same time – through extension capabilities – enabling seamless inclusion of target-system-specific operations. The combination of both is the best way to ensure a flexible, dynamic, and high-performing end-to-end automation solution.
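To illustrate the extension idea (a toy sketch only, not any particular product's scripting language), here is an interpreter stub that exposes registered, target-system-specific operations to proprietary script code:

```python
# Sketch of the extension idea: a proprietary script interpreter that exposes
# the solution's object model while allowing registered, target-specific
# functions (here plain Python callables) to be invoked seamlessly.
# Everything here is illustrative, not a real product API.

registry = {}

def script_function(name):
    """Register a target-system-specific operation under a script name."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@script_function("restart_service")
def restart_service(host, service):
    print(f"restarting {service} on {host}")  # real impl would call the target OS

def run_script(line):
    """Tiny stand-in for the proprietary interpreter: NAME arg1 arg2 ..."""
    name, *args = line.split()
    registry[name](*args)

run_script("restart_service app01 nginx")
```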

 


Homogeneous end-to-end Automation Integration

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Whether looking to extend existing IT service capabilities through innovative service orchestration and delivery, or trying to increase the level of automation within the environment, one would always want to examine the following core features of a prospective automation solution:

  • Automation
  • Orchestration
  • Provisioning
  • Service definition and catalogue
  • Onboarding and subscription
  • Monitoring and metering
  • Reporting and billing

Though not all of those might be utilized at once, the automation solution will definitely play a major role in aggregating them to support the business processes of an entire enterprise.

Either way, homogeneity is a key element when it comes to determining the right solution, with the right approach and the right capabilities.

Homogeneous UX for all integrations

First, the automation platform one chooses must have a unified user experience (UX) for all targeted applications. This doesn’t mean that every component in the system needs to present exactly the same user interface. It’s more important that there is a unified pattern across all components. This should start with the central management elements of the solution and extend to both internal and external resources, such as the Automation Framework IDE for 3rd-party solutions discussed previously.

In addition, the core automation components must match the same UX. Introducing an automation system with standard user interfaces and integration concepts ensures rapid implementation, since SMEs can focus on automating system processes rather than being bogged down with training on the automation solution itself.

A single platform for all automation

The more products that make up the automation solution, the greater the effort required to integrate them all into an IT landscape. Software and system architecture has, throughout history, never proposed one single technology, standard, or design guideline for all functional capabilities, non-functional requirements, or interface definitions. Therefore, a bundled system composed of multiple products will in 95% of cases come with a variety of inter-component interfaces that need to be configured separately from the centralized parameterization of the overall solution.

Project implementation experience shows that the learning curve associated with the solution grows in direct proportion to the number of different components contained in the automation suite (see figure below):

[Figure: Automation – adoption time by number of platform components]

Functional integration with target systems

Finally, the solution’s core functionality should be able to integrate with target systems using industry standards such as common OS scripting languages, REST, JMS, or quasi-standards for target applications like RPC and EJB. At minimum, the solution should include 90% of these industry standards out-of-the-box.
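As a concrete illustration of the REST option, the following minimal sketch (endpoint, payload, and function names are assumptions, not a known product API) triggers a job on a target system from an automation workflow, using the Python requests library:

```python
# Sketch: integrating a target system through a plain REST call, one of the
# industry standards named above. URL and payload are hypothetical.
import requests

def trigger_target_job(base_url, job_name, token):
    resp = requests.post(
        f"{base_url}/api/jobs/{job_name}/run",
        headers={"Authorization": f"Bearer {token}"},
        json={"requested_by": "automation-core"},
        timeout=30,
    )
    resp.raise_for_status()   # surface HTTP errors to the calling workflow
    return resp.json()

result = trigger_target_job("https://erp.example.com", "nightly-sync", "token123")
```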

In addition, an enterprise-grade automation solution should provide:

  • Multiple action/workflow templates (either bundled with the core solution or available for purchase)
  • Ease of integration implementation with target systems’ core functionality at a very detailed level – such as administrative scripting control from within the automation core through scripting integration (to be discussed in the following chapter)

 


Automation Security

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

An obvious key point to consider when choosing an automation solution is security. We discuss Audit & Compliance separately from security, since audit trails and compliance need architectural support from the solution but are both less technical in themselves than security.

Considering security issues for an automation solution means focusing on the following areas:

  • Confidentiality: How does the solution manage authorized access?
  • Integrity: How does the solution ensure that stored objects and data are consistently traceable at any point in time?
  • Availability: How does the solution guarantee availability as defined, communicated, and agreed upon?
  • Authenticity: How does the solution ensure the authenticity of identities used in the communication of partners (components, objects, users)?
  • Liability: How does the solution support responsibility and accountability of the organization and its managers?

None of these areas rely on one particular architectural structure. Rather they have to be assessed by reviewing the particular solution’s overall architecture and how it relates to security.

User security

Authentication

Any reputable automation solution will offer industry-standard authentication mechanisms such as password encryption, strong password policies, and login protection after failed attempts. Integrating with common identity directories such as LDAP or AD provides a higher level of security for authenticating users’ access. This allows the “bind request” to be forwarded to the specific directory, thereby leveraging the directory’s technologies not only to protect passwords and users but also to provide audit trail data for login attempts. Going a step further, an authentication system provided through an external, integrated LDAP might offer stronger authentication – such as MFA – out-of-the-box, without the need to augment the solution to gain greater security.
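As an illustration of the forwarded bind request, here is a minimal sketch using the Python ldap3 library; the server address and DN pattern are assumptions:

```python
# Sketch of forwarding the bind request to a directory (LDAP/AD) so the
# directory enforces password policy and records the login attempt.
from ldap3 import Server, Connection, ALL

def authenticate(username, password):
    server = Server("ldaps://directory.example.com", get_info=ALL)
    user_dn = f"uid={username},ou=people,dc=example,dc=com"  # assumed DN layout
    conn = Connection(server, user=user_dn, password=password)
    return conn.bind()   # True on success; the directory logs the attempt

if authenticate("jdoe", "s3cret"):
    print("access granted")
```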

In addition, the solution should provide a customized interface (e.g. provided through an “exit – callback” mechanism) for customers to integrate any authentication mechanism that is not yet supported by the product out-of-the-box.

Personnel database

Most organizations use one core personnel database within their master data management (MDM) process. For example, new employees are onboarded through an HR-triggered process which, in addition to organizational policies, ensures creation of access permissions to systems that employees use every day. As part of an automation system’s architecture, such an approach involves the need to offer automatically available interfaces and synchronization methods for users – either as objects or links. The automation workflow itself, which supports the HR onboarding process, would subsequently leverage these interfaces to create necessary authentication and authorization artifacts.

Authorization & Access

Enterprise-grade automation solutions should offer a variety of access controls for managed objects. In addition to the core capabilities already discussed, IT operations should expect the solution to support securing the various layers and objects within it. This involves:

  • Function level authorization: The ability to grant/revoke permission for certain functions of the solution.
  • Object level authorization: The ability to create access control lists (ACLs) at the single object level if necessary.
  • ACL aggregation: The ability to group object level ACLs together through intelligent filter criteria in order to reduce effort for security maintenance.
  • User grouping: The ability to aggregate users into groups for easy management.
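A minimal sketch of how these four capabilities could fit together (all structures are illustrative, not a specific product's data model):

```python
# Sketch: function-level permissions, object-level ACLs, filter-based ACL
# aggregation (wildcard patterns), and user groups.
from fnmatch import fnmatch

groups = {"operators": {"alice", "bob"}, "admins": {"carol"}}

acl_rules = [
    # (object-name pattern, group, allowed functions); patterns aggregate
    # many object-level ACLs into one maintainable rule.
    ("JOBS.PAYROLL.*", "operators", {"execute"}),
    ("JOBS.PAYROLL.*", "admins",    {"execute", "modify"}),
]

def allowed(user, obj_name, function):
    for pattern, group, functions in acl_rules:
        if fnmatch(obj_name, pattern) and user in groups.get(group, ()) \
                and function in functions:
            return True
    return False

print(allowed("alice", "JOBS.PAYROLL.MONTHLY", "execute"))  # True
print(allowed("alice", "JOBS.PAYROLL.MONTHLY", "modify"))   # False
```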

In addition, a secure solution should protect user and group management from unauthorized manipulation through use of permission sets within the authorization system.

API

Automation solutions that do not include APIs are rarely enterprise-ready. While compatible APIs (e.g. based on Java libraries) inherently leverage the previously discussed security features, Web Service APIs need to offer additional authentication technologies along commonly accepted standards. Within REST, we mainly see three different authentication methods:

  1. Basic authentication is the lowest security option as it involves simply exchanging a base64-encoded username/password. This not only requires additional security measures for storing, transporting, and processing login information, but it also fails to support authenticating against the API. It also opens external access for any authorized user through passwords only.
  2. OAuth 1.0a provides the highest level of security since sensitive data is never transmitted. However, implementation of authentication validation can be complex requiring significant effort to set up specific hash algorithms to be applied with a series of strict steps.
  3. OAuth 2.0 is a simpler implementation, but still considered a sufficiently secure industry standard for API authentication. It eliminates use of signatures and handles all encryption through transport level security (TLS) which simplifies integration.

Basic authentication might be acceptable for an automation solution’s APIs being operated solely within the boundaries of the organization. This is becoming less common as more IT operations evolve into service oriented, orchestrated delivery of business processes operating in a hybrid environment. Operating in such a landscape requires using interfaces for external integration, in which case your automation solution must provide a minimum of OAuth 2.0 security.
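For illustration, a minimal sketch of the OAuth 2.0 client-credentials flow; URLs and client identifiers are hypothetical:

```python
# Sketch: OAuth 2.0 client-credentials flow over TLS, the minimum the text
# recommends for externally reachable automation APIs.
import requests

def get_access_token(token_url, client_id, client_secret):
    resp = requests.post(
        token_url,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),   # sent only over HTTPS/TLS
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

token = get_access_token("https://automation.example.com/oauth/token",
                         "workflow-client", "change-me")
requests.get("https://automation.example.com/api/v1/jobs",
             headers={"Authorization": f"Bearer {token}"}, timeout=30)
```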

Object level security

The levels of authorization previously mentioned set the stage for defining a detailed authorization matrix within the automation solution’s object management layer. An object represents an execution endpoint within a highly critical target system of automated IT operations. Accessing the object representing the endpoint grants the automation solution permission to directly impact the target system’s behavior. Therefore, an automation system must provide sufficiently detailed ACL configuration methods to control access to:

  • Endpoint adapters/agents
  • Execution artifacts such as processes and workflows
  • Other objects like statistics, reports, and catalogues
  • Logical tenants/clients

The list could be extended even further. However, the more detailed the authorization system, the greater the need for feasible aggregation and grouping mechanisms to ease complexity. At the same time, the more possibilities there are for controlling and managing authorization, the better the automation solution’s manageability.

Separation of concern

Finally, to allow for a role model implementation that supports a typical IT organizational structure, execution must be separated from design and implementation. Object usage must not automatically imply permission for object definition: an automation specialist can then access the system and construct workflows with such objects – a stored login, for instance – without the underlying credentials being revealed.

Communication Security

Securing the communication between systems, objects, and endpoints is the final security issue to consider when assessing an automation solution. This includes:

  • Encryption
  • Remote endpoint authentication – the ability to configure authentication of target endpoints when they interact with the core automation management engine

For communication between components, encryption must be able to leverage standard algorithms. The solution should also allow configuration of the desired encryption method. At minimum, it should support AES-256.

Endpoint authentication provides a view of security from the opposite side of automation. To this point, we’ve discussed how the solution should support security implementation. When a solution is rolled out, however, endpoints need to interact with the automation core automatically and securely. Ideally, the automation solution generates a certification key deployable as a package to endpoint installations, distributed via a separate, secure connection. This configuration gives each endpoint a unique fingerprint and prevents intrusion of untrusted endpoints into the automation infrastructure.
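A minimal sketch of the fingerprint idea (illustrative only; a production solution would rather rely on full mutual TLS):

```python
# Sketch: the core records a hash of each endpoint's deployed certificate
# and rejects connections whose fingerprint is unknown.
import hashlib

trusted_fingerprints = set()

def register_endpoint(cert_pem: bytes) -> str:
    """Called when the key package is deployed over the separate secure channel."""
    fp = hashlib.sha256(cert_pem).hexdigest()
    trusted_fingerprints.add(fp)
    return fp

def accept_connection(cert_pem: bytes) -> bool:
    """Reject any endpoint whose certificate fingerprint was never registered."""
    return hashlib.sha256(cert_pem).hexdigest() in trusted_fingerprints
```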


Managing Tenants in Automation

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Another decision one needs to make during the selection process is whether the automation platform needs to support multi-tenant and/or multi-client capability. How you choose can have a significant financial impact.

Multi-tenancy versus multi-client

Multi-tenancy is closely related to the cloud. Although not strictly a cloud pattern, multi-tenancy has become one of the most discussed topics in application and service architectures due to the rise of cloud computing. That is because multi-tenancy ultimately enables the essential cloud characteristics of virtually endless scaling and resource pooling.

Multi-tenancy

Multi-tenancy partitions an application into virtual units. Each virtual unit serves one customer while being executed within a shared environment together with the other tenants’ units. A virtual unit neither interacts nor conflicts with other virtual units, nor can a single virtual unit drain all resources from the shared environment in case of malfunction (resource pooling, resource sharing).

Multi-client

In contrast, a multi-client system is able to split an application into logical environments by separating functionality, management, object storage, and permission layers. This enables setting up a server that allows logons by different users with each user having their separate working environment while sharing common resources – file system, CPU, memory. However, in this environment, there remains the possibility of users impacting each other’s work.

Importance of Multi-tenancy and Multi-client

These concepts are critical because of the need to provide separated working environments, object stores, and automation flows for different customers or business lines. One should therefore look for an automation solution that supports this capability out-of-the-box. In certain circumstances, you may not require strict customer segregation or the ability to offer pooling and sharing of resources out of one single environment – a clear differentiation that can become a cost-influencing factor.

Independent units within one system

Whether your automation solution needs to be multi-tenant or not depends on the business case and usage scenario. Normally, in enterprise environments with major systems running on-premises, multi-tenancy is not a major requirement for an automation solution. Experience shows that even when automation systems are shared between multiple organizational units or automate multiple customers’ IT landscapes in an outsourcing scenario, multi-tenancy isn’t required, since management of all units and customers is controlled through the central administration and architecture.

Multi-client capabilities, though, are indeed a necessity in an enterprise-ready automation solution, as users from multiple different organizations want to work within the automation environment.

Multi-client capabilities would include the ability to:

  • Split a single automation solution instance into a large number (1,000+) of different logical units (clients)
  • Add clients on demand without downtime or without changing underlying infrastructure
  • Segregate object permission by client and enable user assignment to clients
  • Segregate automation objects and enable assignment to specific clients
  • Allow for automation execution delegation of centrally implemented automation workflows by simply assigning them to the specific clients (assuming the specific permissions have been set)
  • Re-use automation artifacts between clients (including clear and easy to use permission management)
  • Share use of resources across clients (but not necessarily for secure and scalable resource pooling across clients; see differentiation above)
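A minimal sketch of what such client segregation could look like structurally (illustrative, not a specific product's data model):

```python
# Sketch: one automation instance, logically separated clients, per-client
# user assignment, and workflow re-use by delegation between clients.

class AutomationInstance:
    def __init__(self):
        self.clients = {}                    # client id -> logical unit

    def add_client(self, client_id):
        """Adding a client is pure metadata: no downtime, no new infrastructure."""
        self.clients[client_id] = {"objects": {}, "users": set()}

    def delegate(self, workflow_name, source, target):
        """Re-use a centrally implemented workflow in another client
        (assuming the required permissions have been set)."""
        wf = self.clients[source]["objects"][workflow_name]
        self.clients[target]["objects"][workflow_name] = wf

inst = AutomationInstance()
inst.add_client("CLIENT_0100")
inst.add_client("CLIENT_0200")
inst.clients["CLIENT_0100"]["objects"]["DAILY_BACKUP"] = "workflow-definition"
inst.delegate("DAILY_BACKUP", "CLIENT_0100", "CLIENT_0200")
```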

Segregation of duties

Having multiple clients within one automation solution instance enables servicing of multiple external as well as internal customers and allows for quick adaptation to changing business needs. Each client can define separate automation templates, security regulations, and access to surrounding infrastructure. A simple transport/delegation mechanism between clients also makes it possible to implement a multi-staging concept for the automation solution.


Audit & Compliance for Automation Platforms

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Audit and compliance have assumed greater importance in recent years. Following the Global Financial Crisis of 2007–08 – one of the most treacherous crises of our industrial age (Wikipedia cross-references various sources on the matter) – audit and standardization organizations as well as governmental institutions invested heavily in strengthening compliance laws, regulations, and enforcement.

This required enterprises in all industries to make significant investments to comply with these new regulations. Standards have evolved that define necessary policies and controls to be applied as well as requirements and procedures to audit, check, and enhance processes.

Typically, these policies encompass both business and IT related activities such as authentication, authorization, and access to systems. Emphasis is placed on tracking modifications to any IT systems or components through timestamps and other verification methods – particularly focused on processes and communications that involve financial transactions.

Therefore, supporting the enforcement and reporting of these requirements, policies and regulations must be a core function of the automation solution. Following are the key factors to consider when it comes to automation and its impact on audit and compliance.

Traceability

The most important feature of an automation solution for meeting compliance standards is traceability. The solution must provide logging capabilities that track user activity within the system. It must track all modifications to the system’s repository and include the user’s name, the date and time of the change, and a copy of the data before and after the change was made. Such a feature ensures system integrity and compliance with regulatory statutes.
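As an illustration, a minimal sketch of such a traceability record (field names are assumptions):

```python
# Sketch: every repository modification is logged with user, timestamp,
# and before/after images of the data.
import datetime, json

audit_trail = []

def record_change(user, obj_name, before, after):
    audit_trail.append({
        "user": user,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "object": obj_name,
        "before": before,   # copy of the data prior to the change
        "after": after,     # copy of the data after the change
    })

record_change("jdoe", "CALENDAR.EOM", {"days": [28]}, {"days": [30, 31]})
print(json.dumps(audit_trail[-1], indent=2))
```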

Statistics

Statistical records are a feature that ensures recording of any step performed either by an actual user or one initiated by an external interface (API). Such records should be stored in a hierarchy within the system’s backend database allowing follow up checking as to who performed what action at what specific time. Additionally, the system should allow for comments on single or multiple statistical records, thereby supporting complete traceability of automation activities by documenting additional operator actions.

Version Management

Some automation solutions offer integrated version management as an option. Once enabled, the solution keeps track of all changes made to tasks and blueprint definitions as well as to objects like calendars, time zones, etc. Every change creates a new version of the specific object, which remains accessible at any time for follow-up investigation. Objects include additional information like version numbers, change dates, and user identification. In some cases, the system allows restoring an older version of a specific object.
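A minimal sketch of this versioning behavior (illustrative names only):

```python
# Sketch: each change appends a new, retrievable version carrying version
# number, change date, and user identification; older versions can be restored.
import datetime

class VersionedObject:
    def __init__(self, name):
        self.name, self.versions = name, []

    def save(self, definition, user):
        self.versions.append({
            "version": len(self.versions) + 1,
            "changed_at": datetime.datetime.now(datetime.timezone.utc),
            "changed_by": user,
            "definition": definition,
        })

    def restore(self, version):
        """Make an older version the newest one again."""
        old = self.versions[version - 1]
        self.save(old["definition"], user="restore")

cal = VersionedObject("CALENDAR.EOM")
cal.save({"days": [28]}, "jdoe")
cal.save({"days": [30, 31]}, "jdoe")
cal.restore(1)   # version 3 now carries the version-1 definition
```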

Monitoring

All of the above handle, process, and record design-time activity of an automation system, ensuring stored data and data changes are documented to comply with audit needs. During execution, an automation system should also be able to monitor the behavior of every instantiated blueprint. Monitoring records need to track the instance itself as well as every input/output and every change performed to or by this instance (e.g. putting a certain task on hold manually).

Full Audit Trails

All of the above features contribute to a complete audit trail that complies with the reporting requirements as defined by the various standards. Ultimately an automation system must be able to easily produce an audit trail of all system activity from the central database in order to document specific actions being investigated by the auditor. An additional level of security that also enables compliance with law and regulations is the system’s ability to restrict access to this data on a user/group basis.

Compliance Through Standardization

Finally, to ease compliance adherence, the automation solution must follow common industry standards. While proprietary approaches within a system’s architecture are applicable and necessary (e.g. scripting language – see chapter “Dynamic Processing Control”), the automation solution itself must strictly follow encryption methods, communication protocols, and authentication technologies that are widely considered common industry best practice. Any other approach in these areas would significantly complicate the efforts of IT operations to prove compliance with audit and regulatory standards. In certain cases, it could even shorten the audit cycle to less than a year, depending on the financial and IT control standard being followed.


5 Insights on Digitalization in Austria

After a long while, once again a German-language piece on the topic of “digitalization” on my blog …

Yesterday saw the event “Digitalisierung von Produktionsbetrieben” (digitalization of manufacturing companies), hosted by the Wirtschaftsagentur Wien together with the “IoT-Austria” network. My motivation for attending arose less from a concrete project context than from curiosity about what manufacturing companies in Austria had to say on the subject (and who would have anything to say at all).

First impression

A colorful mix of companies of different sizes and with completely different product portfolios (the range stretched from control devices by Tele Haase, via the locks of EVVA, to the AIT with UX and LieberLieber with model-based software development) – hence a colorful mix of perspectives on the topic, and an enormous level of interest in the audience. The latter could of course be dismissed as buzzword and hype interest; honestly, though, I prefer the positive interpretation that business and industry in Austria are embracing “IoT” and “digitalization” on a grand scale …

What stuck?

First and foremost, the thoroughly good quality of the talks. All of them quite practice-oriented, with many concrete examples from the everyday life of the respective company; hardly ever off-topic, and only once, as far as I could tell, the “whitewashing” of a company’s own portfolio with the IoT label (it would be a miracle if what we saw everywhere with “cloud” for several years did not happen here as well).

The real eye-openers, however, were:

1: A new picture of the user

Sebastian Egger of the AIT made the audience aware of where the users of the digitalization wave actually are: in all links of the value chain (and not at its end, with the consumer of the resulting product). Processes are in part changing so radically that every single employee in the supplier industry, production, logistics, end sales, etc. will encounter something radically new and thus be subject to a transition. How many will be up to it? And which of the coming digitalization solutions can adapt efficiently to these different user groups?

2: Age, oh the age!

Manufacturing companies – such as Tele Haase – have to master innovation on top of industrial assets that are in part decades old and still perfectly serviceable. Industrial production machines have half-lives of several decades. They are not swapped out as easily and quickly as we nowadays swap consumer electronics, IT equipment, or “things”. This automatically entails difficult integration into an innovative digital platform. Even the will to radical innovation – to the re-creation of the digital factory – can founder on “banalities” such as the impossibility of integrating a 20-year-old industrial machine. Successful IoT and Industrie 4.0 platforms will have to accommodate this in some form.

3: SMEs in Austria have a “partnership problem”

Granted – that may be a radical rendering of Peter Lieber’s remark that he has next to no customers in Austria. His analysis, in any case, was that Austrian manufacturing companies (mostly classic SMEs) have a problem working with companies of fewer than 20 employees. Yet it is precisely there, owing to the flexibility and dynamism of such companies, that the necessary innovative potential is to be found. The individual – the one-person company – often reacts far more nimbly and quickly to trends and trend reversals than larger companies with their usually more cumbersome structures. Fear of losing a partner because the company disappears? Well – a perfectly valid argument; against it, however, one would have to weigh one’s own stagnation in the current era of radical change in industrial companies …

4: 1 lock – 16 million configurations

The absolute highlight of the morning, though, was the talk by Johann Notbauer of EVVA. In a way, his talk tied together everything covered that morning: a traditional Austrian company that holds a distinct position on the world market with mechanical locking solutions proven over many years, that earns its main revenue with a business model based on follow-up business (Notbauer used the printer model as a comparison for his locking systems: a cheap printer followed by expensive cartridges), and whose four-blade magnetic key system MCS puts keys on the market that cannot be copied by 3D printing, is suddenly confronted with

  • Software
  • Firmware
  • Software security
  • RFID
  • NFC

and many other challenges never before seen in the company. On top of that: the business model cited above is, in principle, cannibalized by modern locking systems, which entail far less frequent system or key changes. No other talk that day made it so unmistakably and plainly clear what digitalization actually means for a company and how radically one has to adapt in order to succeed.

And the 16 million configurations in the headline really are accurate – just no longer for digital locking systems …

5: Austria has an IoT network

Last but not least, one thing above all stuck that day: Austria already has a quite active community of people who have come together of their own accord to help digitalization and Industrie 4.0 in this country find their feet. The “IoT-Austria” network co-organized the morning, opened it with an (admittedly almost classic, well-trodden) keynote, and presents itself as a solid mix of technical experts and strategic thinkers.

It is not yet entirely clear how much support this network will be able to provide to interested industry and business, but yesterday’s event showed that Austrian companies and, above all, “IoT-Austria” are quite obviously well on their way to giving digitalization in this country a leg up.

 


Integrated File Transfer still required for Automation

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

There is a wide range of sophistication when it comes to the IT systems that businesses operate. Some came online around 2000, while others have been in use for much longer. Some businesses have constructed systems to maintain high-quality services for years to come; still others are constantly adapting their systems to take advantage of the latest technology.

Because of this wide disparity, an automation solution must be able to handle current and future innovations in integration, orchestration and performance. It must also be backwards compatible so it can support legacy technologies.

That said, one of the technologies an automation solution must support is file transfer between systems. Along with this, it must also support elaboration, interpretation, and transformation of file content to create new levels of automation integration for enterprise IT.

Experience with multiple customers shows that replacing legacy file transfer applications with state-of-the-art APIs is sometimes simply too time-consuming and costly. However, it is crucial that these legacy system capabilities are provided for in an automated and integrated IT landscape. Perhaps surprisingly, therefore, being able to address, process, interpret, and transfer files in line with the demands and challenges of an automated IT environment is still a must-have criterion for an enterprise automation solution.

Support of multiple different file transfer protocols

FTP does not simply equal FTP: there is a whole family of file transfer protocols (see a list here[1]). Following are the most common ones still in use, which must be supported by the selected automation solution:

  • FTP: This is the standard protocol definition for transferring files in an insecure manner. When operating behind a firewall, using FTP for transporting files is convenient and needs to be an integrated feature of your enterprise automation solution.
  • FTPS: adds support for “Transport Layer Security” (TLS) and the “Secure Socket Layer” (SSL) encryption protocols to FTP. Many enterprises rely on this type of protocol for security reasons, especially when it comes to moving beyond the network.
  • FTPES: This differs from FTPS only in the timing of the encryption and the transfer of login information. It adds an additional safety control to FTPS-based file transfers.
  • SFTP: This was added to the Secure Shell protocol (SSH) by the Internet Engineering Task Force (IETF)[2] to allow access, transfer, and management of files through any reliable (SSH) data stream.

In addition to supporting all of the above protocols, an automation solution can enhance file transfer integration in automation scenarios by offering a direct endpoint-to-endpoint file transfer based on a proprietary protocol. Providing such a protocol avoids having to involve the central management engine solely to transport files from one system to another.
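As an illustration of an SFTP transfer as a dedicated adapter might execute it, here is a minimal sketch using the Python paramiko library; host, credentials, and paths are hypothetical:

```python
# Sketch: an SFTP transfer (the SSH-based protocol above), executed the way
# a dedicated adapter might run it on behalf of a central workflow.
import paramiko

def sftp_transfer(host, username, password, local_path, remote_path):
    ssh = paramiko.SSHClient()
    ssh.load_system_host_keys()
    ssh.set_missing_host_key_policy(paramiko.RejectPolicy())  # known hosts only
    ssh.connect(host, username=username, password=password)
    try:
        sftp = ssh.open_sftp()
        sftp.put(local_path, remote_path)   # transfer over the SSH data stream
        sftp.close()
    finally:
        ssh.close()

sftp_transfer("files.example.com", "batch", "s3cret",
              "/data/out/report.csv", "/inbox/report.csv")
```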

Standard FT protocols

The most convenient way to connect FTP-capable remote systems based on the protocols listed above is through a graphical UI that allows defining the transfer much the way it is done with standard FTP clients. The actual transfer itself is normally executed by a dedicated adapter that is only initiated by the centrally managed and executed automation workflows. To comply with security requirements limiting login information to top-level administrators, sensitive information such as usernames, passwords, or certificates is stored in separate objects. File transfers can then be integrated into automation flows by specialists who do not have access to the detailed login information but can still make use of the prepared security objects.

Endpoint-to-endpoint File Transfer

In addition to supporting the standard FTP protocols, the automation solution’s ecosystem should offer a direct secure file transfer between two endpoints within the IT landscape.

In this case the automation solution issues the establishment of a direct, encrypted connection between the affected endpoints – normally using a proprietary internal protocol. This type of mechanism eliminates the need for additional tools and increases the performance of file transfers significantly.

Other features the solution should support include:

  • Multiple code translation (e.g. from ASCII to EBCDIC)
  • Data compression
  • Wildcard transfer
  • Regular checkpoint log (in order to re-issue aborted transfers from the last checkpoint recorded)
  • Checksum verification based on hashing algorithms (e.g. MD5)
  • Industry standard transfer encryption (e.g. AES-128, AES-192 or AES-256)
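Two of these features – MD5 checksum verification and checkpoint-based restart – can be sketched in a few lines (illustrative only):

```python
# Sketch: checksum verification (MD5, as named in the list above, for
# integrity rather than security) and restarting an aborted transfer from
# the last byte already written instead of from zero.
import hashlib, os

def md5sum(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def resume_offset(partial_target):
    """Offset at which to re-issue an aborted transfer."""
    return os.path.getsize(partial_target) if os.path.exists(partial_target) else 0

# After a completed transfer, source and target checksums must match:
# assert md5sum("source.bin") == md5sum("target.bin")
```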

Process integration

Finally, the key to offering enterprise-ready integrated file transfer through any protocol is seamless integration into existing automation workflows, leveraging all the automation functionality without additional re-coding or re-interpretation of transferred files. This includes:

  • Using file transfer results and file content in other automation objects.
  • Including file transfer invocation, execution, and result processing in the scripting environment.
  • Using files within pre or post conditions of action or workflow execution or augmenting pre/post conditions by making use of results from transferred files.
  • Bundling file transfers to groups of endpoints executing similar – but not necessarily identical – file transfer processes.

This allows the integration of legacy file transfers into innovative business processes without losing transparency and flexibility.
