Category Archives: SmileIT

Posts in this category focus on IT with a little smile here and there – hopefully. Other categories might be more the other way around. Overall, posts herein are about IT, digitalization, the “Nexus of Forces” (#cloud, #mobile, #social, #bigdata) and some other technology stuff ;)

Test Before You Commit Forever!

Chello UPC prides itself on fast internet. Hyper-fast internet. Unfortunately – at least in Vienna’s inner districts – that mostly remains a myth! And since in exactly these latitudes alternatives are rare for lack of “blizznet” et al., and LTE does not deliver better results either owing to the building density, one is at the mercy of the quasi-monopolist’s miserable service quality.

Or are you?

No, you are not. A guaranteed bandwidth simply has to be delivered; if it is not, the customer has a warranty claim according to the VKI (see the derstandard.at article of May 25 of this year).

The Silberschneider Script on the Mac in 4 Steps

The article mentioned above comes with a “speed test” script that periodically checks the internet speed. Ideally, you configure the script and cron job on a permanently running Linux server (it was tailor-made for that). With a few adaptations, however, it also works on a Mac. Here is how:

1. Download and Install

The long-runner speedtest_cron is available on GitLab! It makes use of a speedtest-cli script by “Sivel” (github download). Download both and put them into a new, dedicated folder under ~/Library (~ is the user directory, e.g. /<main-hd>/Users/<my-name>/). The speedtest-cli files go into the prepared subdirectory “speedtest_cli”. (Note: speedtest-cli is released under the Apache license; speedtest_cron is completely free to use – without any warranty.)
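A minimal sketch of the folder setup (the folder name “speedtest” is my assumption – any name works, as long as the paths used in the later steps match):

mkdir -p ~/Library/speedtest/speedtest_cli    # create both folders in one go
# put speedtest_cron (from GitLab) into ~/Library/speedtest/
# put the speedtest-cli files (from GitHub) into ~/Library/speedtest/speedtest_cli/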

2. Adjust the Paths

Thanks to its README instructions, speedtest_cron is perfectly prepared for adaptation; essentially, the only thing you have to do is adjust the paths to the actual situation on your own machine – in the script, these are all the occurrences of /path/to/this/folder.
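If you prefer the command line over an editor, the replacement can be done in one go – a hedged sketch, assuming the folder layout from step 1 (note that the macOS sed requires the empty '' after -i):

sed -i '' 's|/path/to/this/folder|'"$HOME"'/Library/speedtest|g' ~/Library/speedtest/speedtest_cron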

3. Adapt the Network Interface

Under Linux, network interfaces are numbered eth0..n. Under Mac OS X they are called en0..n! Since the cron script tries to take the source of the speed test (the source IP address) into account, this part needs to be adapted. To do so, change the following line in the speedtest_cron file:

/<my-path-to-speedtest>/speedtest_cli/speedtest_cli.py --share --server 5351 --simple --source `/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'` > /<my-path-to-speedtest>/speedtests/$DATE.log

The relevant part starts at “/sbin/ifconfig …”. ifconfig returns the network configuration of all interfaces – on OS X, too. eth0 does not exist, so the command fails. With en0 there is a result, but it is formatted differently than under Linux; hence, the subsequent extraction of the IP address works differently as well. The adapted command looks like this:

/<my-path-to-speedtest>/speedtest_cli/speedtest_cli.py --share --server 5351 --simple --source `/sbin/ifconfig en0 | grep 'inet' | cut -d: -f2 | awk '{ print $2}'` > /<my-path-to-speedtest>/speedtests/$DATE.log
  • ifconfig en0 returns the data of the machine’s first network interface (any other one is fine too, if the test is to run over it)
  • grep 'inet' extracts from the entire ifconfig output the part that contains the IP address
  • cut -d: -f2 cuts away everything before a colon and returns only the second field of the line (could actually be omitted on OS X)
  • awk '{ print $2}' returns the second field of the “inet” line – the IP address

And this address is then fed to the speedtest script as the source.
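For illustration, this is roughly what the pipeline sees on a Mac (the address is, of course, invented); using grep 'inet ' with a trailing blank would additionally exclude a potential IPv6 (“inet6”) line:

$ /sbin/ifconfig en0 | grep 'inet '
	inet 192.168.1.23 netmask 0xffffff00 broadcast 192.168.1.255
$ /sbin/ifconfig en0 | grep 'inet ' | cut -d: -f2 | awk '{ print $2}'
192.168.1.23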

4. Create the Cron Job

Admittedly, this is a bit of a nuisance on the Mac. crontab is not recommended; instead, all scheduled jobs under OS X run via launchd. launchd’s time parameters, however, do not allow a syntax like “run every 10 minutes between X and Y o’clock”. Unfortunately, this has to be specified by means of several identical parameter lines:

<dict><key>Hour</key><integer>8</integer><key>Minute</key><integer>30</integer></dict>

The line above essentially says: start the job at 8:30; and a line like this now goes into the launchd configuration file as many times – with as many times of day – as you want speedtest_cron to run. Somewhat tedious, but well … if that is too annoying for you, simply use the LaunchControl UI (download here).

So – launchd setup step by step:

  • Create a …plist file with a name of your choice
  • Place it in the directory ~/Library/LaunchAgents (under OS X, all user-defined launchd job configurations live here)
  • Label (arbitrary): <key>Label</key><string>local.speedtest</string>
  • Program to execute: <key>Program</key><string>/<my-path-to-speedtest>/speedtest_cron</string>
  • Define the start times with the key <key>StartCalendarInterval</key>
  • Insert the line quoted above as many times as desired (a complete sample plist follows below)
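Putting the pieces together, a complete sample plist could look like the following sketch (label, file name and the two run times are arbitrary assumptions):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key><string>local.speedtest</string>
    <key>Program</key><string>/<my-path-to-speedtest>/speedtest_cron</string>
    <key>StartCalendarInterval</key>
    <array>
        <!-- one <dict> per desired run time -->
        <dict><key>Hour</key><integer>8</integer><key>Minute</key><integer>30</integer></dict>
        <dict><key>Hour</key><integer>8</integer><key>Minute</key><integer>40</integer></dict>
    </array>
</dict>
</plist>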

A complete and very good launchd guide is available here: http://launchd.info/

4a. Start Without a Reboot

launchd jobs start at boot or at login; alternatively, the job can be started directly and manually with the command

launchctl load ~/Library/LaunchAgents/<my-plist-name>.plist

From then on, the speed test runs according to the configured schedule and drops one file per run into the subdirectory ~/Library/<my-path-to-speedtest>/speedtests.

If desired, all of these files can then be merged into one CSV file with the bundled speedcsv script.

And that CSV can then be cheerfully presented to UPC as evidence of their poor service quality, in order to pay at least somewhat less – in the hope that a great many such proofs will finally tempt the provider into sustainably improving its service in Vienna’s inner districts.

 


Vicious Circle into the Past

We are on the edge of an – as businessinsider.com recently called it – exploding era: the IoT era. An interesting infographic presents stunning figures of a bright future (at least when it comes to investment and sales; see the full picture further below or in the article).

The infographic in fact stresses the usual numbers (billions of devices, trillions of dollars of ROI) and draws the following simple explanation of the ecosystem:

A simple explanation of IoT and BigData analysis (infographic clip)

Devices receive requests to send data, in return they send data, and the data gets analyzed. Period.

Of course, this falls short of any system integration or business strategy aspect of the IoT evolution. But there is more of a problem with this (and other similar) views of IoT. In order to understand it, let us take a bullet-point look at the domains mentioned and their relation to IoT (second part of the graphic; I am intentionally omitting all numbers):

  • Manufacturing: use of smart sensors increases
  • Transportation: connected cars on the advance
  • Defense: more drones used
  • Agriculture: more soil sensors for measurements
  • Infrastructure, City: spending on IoT systems increases
  • Retail: more beacons used
  • Logistics: tracking chips usage increases
  • Banking: more teller-assist ATMs
  • Mining: IoT systems increase on extraction sites
  • Insurance (the worst assessment): IoT systems will disrupt insurances (surprise me!)
  • Home: more homes will be connected to the internet
  • Food Services: majority of IoT systems will be digital signs
  • Utilities: more smart meter installations
  • Hospitality: room control, connected TVs, beacons
  • Healthcare: this paragraph even contents itself with stating what devices can do (collect data, automate processes, get hacked?)
  • Smart Buildings: IoT devices will affect how buildings are run (no! really?)

All of these assessments lack any qualification of which data is being produced, collected, and processed – and for what purpose.

And then – at the very beginning – the infographic lists four barriers to IoT market adoption:

  • Security concerns
  • Privacy concerns
  • Implementation problems
  • Technological fragmentation

BusinessInsider, with this you have become part of the problem (as so many others already have): just like in the early days of the cloud, the most-discussed topics are security and privacy – because they are easy to grasp, yet it is difficult to explain what the real threat might actually be.

Let us do ourselves a favour and stop stressing the mere fact that devices will provide data for processing and analysis (as well as for more sophisticated integration into backend ERP systems, by the way). That is a no-brainer.

Let us start talking about “which”, “what for” and “how to show”! That way, security and privacy will become an advantage for IoT and the digital transformation. Transparency remains the only way of dealing with this challenge, because – just as with the cloud – those concerns will ultimately not hinder adoption anyway!

 

The IoT Era will explode (BusinessInsider infographic)

{feature image from www.thedigitallife.com}


Automation and Orchestration – a Conclusion

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Automation and Orchestration are core capabilities in any IT landscape.

Traditionally, there would be classical on-premise IT, comprising multiple enterprise applications (partly) based on old-style architecture patterns such as file exchange, asynchronous time-boxed export/import scenarios, and historic file formats.

At the same time, the era of the cloud hype has come to an end in the sense that cloud is now ubiquitous; it is as pervasive as the Internet itself has been for years, and the descendants of cloud – mobile, social, IoT – form the nexus for the new era of digital business.

For enterprises, this means an ever-increasing pace of innovation and a constant advance of business models and business processes. As this paper has outlined, automation and orchestration solutions form the core for IT landscapes to efficiently support businesses in their striving for constant innovation.

Let’s once again repeat the key findings of this paper:

  • Traditional “old style” integration capabilities – such as file transfer, object orientation, or audit readiness – remain key criteria even for a cloud-ready automation platform.
  • In an era where the cloud has become a commodity, just like the internet itself, service-centered IT landscapes demand a maximum of scalability and adaptability as well as multi-tenancy, in order to create a service-oriented ecosystem for the advancement of the businesses using it.
  • Security, maximum availability, and centralized management and control are fundamental necessities for transforming an IT environment into an integrated service center supporting business expansion, transformation, and growth.
  • Service orchestration might be the ultimate goal to achieve for an IT landscape, but system orchestration is a first step towards creating an abstraction layer between basic IT systems and business-oriented IT-services.

Therefore, for IT leaders, choosing the right automation and orchestration solution to support the business efficiently may well be the most crucial decision in either becoming a differentiator and true innovation leader or (just) remaining the head of a solid – yet commodity – enterprise IT.

The CIO of the future is a Chief Innovation (rather than “Information”) Officer – and automation and orchestration together form the core basis for innovation. What to look at in arriving at the right make-or-buy decision was the main subject of this paper.

 


How to StartUp inside an Enterprise

I have been following Ruxit for quite some time now. In 2014, I first considered them for the cloud delivery framework we were about to create. Later – during another project – I elaborated on a comparison between Ruxit and New Relic; I was convinced by their “need to know” approach to monitoring large, diverse application landscapes.

Recently, they added Docker monitoring to their portfolio and expanded support for highly dynamic infrastructures; there is a great webinar on that (be sure to watch the live demos closely – compelling).

But let us – for once – leave aside the technical masterpieces in their development and take a look at their strategic progression:

Dynatrace – the mothership – has been a well-known player in the monitoring field for years. I work with quite a few customers who leverage Dynatrace’s capabilities. I would not hesitate to call them a well-established enterprise. Especially in the field of cloud, well-established enterprises tend to lack a certain elasticity needed to get their X-aaS initiatives to really lift off; examples are manifold: Canopy eventually failed (my 2 cents; some may see that differently), IBM took a long time to differentiate their cloud from the core business, … and some others still market their cloud endeavours alongside their core business – not for the better.

And then – last week – I received Ruxit’s email announcing “Ruxit grows up… announcing Dynatrace Ruxit!“, officially sent by “Bernd Greifeneder | Founder and CTO”. I had been expecting that email; in the webinar mentioned above, the slides were already branded “Dynatrace Ruxit”, and the question I raised about this was answered as expected: having run as a successful startup-like endeavour, they would now commence their move back into the parent company.

Comprehensible.

Because that is precisely what a disruptive endeavour inside a well-established company should look like: Greifeneder was obviously given the trust and the money to ramp up a totally new kind of business alongside Dynatrace’s core capabilities. I have long lost any doubts that Ruxit created a new way of doing things in monitoring, technologically and methodically: in a container-based elastic cloud environment, there is no need anymore to know about each and every entity; all that matters is to keep things right for end users – and when that is not the case, to let admins find the problem quickly, and nothing else.

What really baffled me, though, was the rigorous way they pushed their technology into the market: I used to run a test account for a few tests now and then in my projects. Whenever I logged in, something new had been deployed. Releases happened on an amazingly regular basis – 100% DevOps style. There is no way of doing this within established development processes and traditional on-premise release management. One may be able to derive traditional releases from DevOps-like continuous delivery – but not vice versa.

Bottom line: Greifeneder obviously had the opportunity, the ability, and the right people to do things in a totally different way from the mothership’s processes. I, of course, have no insight into how things were really set up within Dynatrace – but last week they took their baby back into “mother’s bosom”, and in the cloud business – I’d argue – that does not happen unless the baby is ready to live on its own.

Respect!

Enterprise cloud and digitalisation endeavours can take their learnings from Dynatrace Ruxit. Wishing you a sunny future, Dynatrace Monitoring Cloud!

 


System Orchestrator Architecture

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

This final chapter addresses the architecture components of typical system orchestrators; comparing them with the blueprints for high-grade, innovative automation solutions presented earlier in this paper reveals the close resemblance between automation and system orchestration patterns in a very obvious way.

The first figure below is a system deployment architecture diagram describing the main physical components of a system orchestrator:

System Orchestration Deployment Architecture

Note that:

  1. The database normally needs to be set up in clustered mode for high availability; most orchestrator solutions rely fully on the database (at least at design time).
  2. The Management Server’s deployment architecture depends on the availability requirements for management and control.
  3. The Runtime Server nodes should be highly distributed (ideally geographically dispersed); the better the product architecture supports this, the more reliably orchestration will support IT operations.
  4. The Web Service deployment depends on availability and web service API needs (product- and requirement-dependent).

Logical architecture

The logical architecture builds on the preceding description of the deployment architecture and outlines the different building blocks of the orchestration solution, as depicted in the following figure:

System Orchestration Logical Architecture

Notes on the “logical architecture” figure:

  1. The Orchestrator DB holds runtime and design-time orchestration flows, action packs, activities, plugins, logs, …
  2. The Management Server controls access to orchestration artefacts.
  3. The Runtime Server provides the execution environment for orchestration flows.
  4. The Orchestration Designer (backend) provides the environment for creating orchestration flows from the artefacts in the database (depending on the specific product architecture, the designer and management components may be integrated).
  5. The Web Service exposes the orchestrator’s functionality to external consumers (ideally via REST).
  6. Action packs or plugins are introduced through installation at design time (and are normally integrated into the DB).
  7. The orchestrator’s admin console is ideally implemented as a web service, hence accessible via browser.
  8. The Design Client UI can either be web-based or a dedicated client application installed locally, using a specific protocol for backend communication.

Of course, these building blocks can vary from product to product. What remains crucial to successful orchestration operations, however (much as with automation), is to have lightweight, scalable runtime components capable of supporting a small-scale, low-footprint deployment just as efficiently as a large-scale, multi-site, highly distributed orchestration solution.

 


System versus Service Orchestration

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

One of the best-known blueprints for service orchestration is the representation as seen from the perspective of service-oriented architecture (SOA).

The following figure describes this viewpoint in principle:

Service Orchestration, defined

Operational components – such as “commercial off the shelf” (COTS) or custom applications, possibly with a high level of automated functionality (see previous chapters) – are orchestrated into simple application services (service components), which in turn are aggregated into atomic or composite IT services, which subsequently support the execution of business processes. The latter are presented to consumers of various kinds without directly disclosing any of the underlying services or applications. In well-established service orchestration, functionality is often defined top-down: business processes are modelled and their requirements defined first, and the necessary services are then leveraged or composed to fulfil the process’s needs.

A different approach derives from typical definitions in cloud frameworks; the following figure shows this approach:

System Orchestration: Context

Here, the emphasis lies on the automation layer forming the core aggregation. The orchestration layer on top creates the system and application services needed by the framework to execute its functional and operational processes.

The latter approach can be seen as a subset of the former, which will become clearer when discussing the essential differences between system and service orchestration.

Differences between system and service orchestration

System Orchestration

  • can in essence be comprised of an advanced automation engine
  • leverages atomic automation blocks
  • eases the task of automating (complex) application and service operation
  • often directly supports OS scripting
  • supports application interfaces (APIs) through a set of plugins
  • may itself offer a REST-based API for integration and SOA

Service Orchestration

  • uses SOA patterns
  • is mostly message oriented (focuses on the exchange of messages between services)
  • supports message topics and queues
  • leverages a message broker and (enterprise) service bus
  • can leverage and provide API
  • composes low level services to higher level business process oriented services

Vendor examples of the former are vRealize Orchestrator, HP Operations Orchestration, Automic ONE Automation, BMC Atrium, System Center Orchestrator, and ServiceNow (unsurprisingly, some of these products have an essential say in the field of automation as well).

Service orchestration examples would be vendors or products like TIBCO, MuleSoft, WSO2, Microsoft BizTalk, OpenText Cordys or Oracle Fusion.

System orchestration key features

System orchestrators are mainly expected to support a huge variety of underlying applications and IT services in a highly flexible and scalable way:

  • OS neutral installation (depending on specific infrastructure operations requirements)
  • Clustering or node setup possible for scalability and availability reasons
  • Ease of use; low entry threshold for orchestration/automation developers
  • Support quality; support ecosystem (community, online support access, etc.)
  • Database dependency to minimum extent; major databases to be supported equally
  • Built-in business continuity support (backup/restore without major effort)
  • Northbound integrability: REST API (see the sketch after this list)
  • Southbound integrability and extensibility: either built-in, by leveraging APIs, or by means of a plugin ecosystem
  • Plugin SDK for vendor external plugin development support
  • Scripting possible but not necessarily needed
  • Ease of orchestrating vendor-external services (as vendor neutral as possible, depending on landscape to be orchestrated/integrated)
  • Self-orchestration possible
  • Cloud orchestration: seamless integration with major public cloud vendors
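To illustrate the northbound REST requirement, a hypothetical interaction with a system orchestrator could look like the sketch below; endpoint paths, payload, and token handling are invented for illustration and will differ per product:

# trigger an orchestration flow via the (assumed) northbound REST API
curl -X POST https://orchestrator.example.com/api/v1/flows/provision-vm/executions \
     -H "Authorization: Bearer $API_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"parameters": {"template": "web-standard", "datacenter": "vie-01"}}'

# poll the execution status (again, a hypothetical endpoint)
curl -H "Authorization: Bearer $API_TOKEN" \
     https://orchestrator.example.com/api/v1/executions/4711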

Main requirements for a service orchestrator

In contrast to the above, service orchestration solutions focus mainly on message handling and integration, as their main purpose is to aggregate lower-level application services into higher-level composite services that support business process execution. Typical demands on such a product would therefore include:

  • Support for the major web service protocol standards (SOAP, REST)
  • Support for “old-style” enterprise integration technologies (RMI, CORBA, (S)FTP, EDI, …) for the integration of legacy applications
  • A central service registry
  • Resilient message handling (mediation, completion, message persistence, …)
  • Flexible and easy-to-integrate data mapping based on modelling and XSLT
  • Message routing and distribution through topics, queues, etc.
  • An integrated API management solution
  • An integrated business process modelling (BPM) solution
  • An integrated business activity monitoring (BAM) solution
  • Extensibility through low-threshold, commonly accepted software development technologies

As a rule of thumb for delineating the two effectively, one can say that it is – to a certain extent – possible to create service orchestration by means of a system orchestrator, but it is (mostly) impossible to do system orchestration with only a service orchestrator at hand.

For this reason, we will continue with a focus on system orchestration as a way to leverage basic IT automation for the benefit of higher level IT services, and will address vanilla architectures for typical system orchestrator deployments.

 


What Is Orchestration?

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Before diving into the architectural patterns of a robust enterprise orchestration solution, a few definitions need to be clarified. And since this paper has talked a lot about automation so far, we shall begin by outlining precisely the delineation between automation and orchestration:

  • In any IT architecture framework, the automation layer creates the components necessary to provide atomic entities for service orchestration. Automation uses technologies to control, manage, and run atomic tasks within machine instances, operating systems, and applications on the one hand, and on the other provides automation capabilities to the larger ecosystem in order to automate processes (e.g. onboarding a new employee).
  • Orchestration – in turn – uses processes, workflows, and integration to construct the representation of a service from atomic components. A service could, for example, consist of operations within various different systems, such as the creation of a machine instance in a virtual infrastructure, the installation of a web service in another instance, and the alteration of a permission matrix in an IAM system. Orchestration provides the means to aggregate these atomic actions into a service bundle which can then be provisioned to a customer or user for consumption (see the sketch after this list). The orchestration layer makes use of single automated service components.
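As a minimal illustration of this aggregation, a hedged sketch of such an orchestration flow – with an entirely invented automation CLI standing in for whatever atomic automation interface is actually available – could look like this:

#!/bin/sh
# hypothetical "provision web service" flow aggregating three atomic automation tasks;
# the "automation" CLI and all of its subcommands are invented for illustration
set -e  # abort the whole flow if any atomic step fails

VM_ID=$(automation vm create --template web-standard)        # atomic task 1: machine instance
automation app deploy --target "$VM_ID" --package shop-api   # atomic task 2: web service installation
automation iam grant --resource "$VM_ID" --role webadmin --user "$REQUESTING_USER"  # atomic task 3: permission matrix

echo "service bundle ready for consumption: $VM_ID"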

While the above definitions mainly address the system context in IT architectures, it is fair to say that there is another, slightly different context – the service context – which demands a further definition of the term “orchestration”:

  • Service orchestration is the coordination and arrangement of multiple services exposed as a single aggregate service. It is used to automate business processes through loose coupling of different services and applications, thereby creating composite services. Service orchestration combines service interactions to create business process models consumable as services.

To underline the danger of confusion when discussing orchestration, here are a few references for orchestration definitions:

  • “Orchestration describes the automated arrangement, coordination, and management of complex computer systems, middleware and services.” (wikipedia: https://en.wikipedia.org/wiki/Orchestration_(computing) )
  • “Complex Behavior Interaction (Logic/Business Process Level): a complex interaction among different systems. In the context of service-oriented architecture, this is often described as choreography and orchestration” (Carnegie Mellon University Research Showcase 12-2013: “Understanding Patterns for System-of-Systems Integration”, Rick Kazman, Klaus Nielsen, Klaus Schmid)
  • “Service orchestration in an ESB allows service requesters to call service providers without the need to know where the service provider is or even the data scheme required in the service” (InfoTech Research Group Inc. “Select and Implement an ESB Solution”, August 2015)
  • “Orchestration automates simple or complex multi-system tasks on remote servers that are normally done manually” (ServiceNow Product Documentation: http://wiki.servicenow.com/index.php?title=Orchestration#gsc.tab=0 )
  • “The main difference, then, between a workflow “automation” and an “orchestration” is that workflows are processed and completed as processes within a single domain for automation purposes, whereas orchestration includes a workflow and provides a directed action towards larger goals and objectives” (Cloud Computing: Concepts, Technology & Architecture”, Thomas Erl, Prentice Hall, October 2014)
  • “Orchestration is the automated coordination and management of computer resources and services. Orchestration provides for deployment and execution of interdependent workflows completely on external resources” (ORG: http://cloudpatterns.org/mechanisms/orchestration_engine )

Even though these definitions seemingly leave a lot of room for interpretation as to what system and service orchestration really cover, clarity can be gained by looking at a few architectural principles as well as the requirements for different orchestration goals, which the last few chapters of this paper will focus on.


From Automation to Orchestration

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

In some cases, choosing an appropriate IT automation solution comes down to a matter of business dependency, migration efficiency, or simply vendor relationship and trust. Many times, these criteria alone might provide enough information to make a solid buying decision.

However, to make the best decision for an organization, it pays to examine prospective automation solutions more closely. A key consideration is the solutions’ architectural patterns, which should be assessed through well-planned proof-of-concept testing or a detailed evaluation. Some key questions to raise are listed below, for convenience and possible inclusion in an evaluation questionnaire:

  • Does the solution have sufficient scalability to react to on-demand load changes and at the same time allow for the growth of IT automation and orchestration as your business grows?
  • Does the solution provide object orientation which allows for representation of real-world IT challenges through well-structured re-usable automation objects and templates?
  • Can the solution quickly and easily integrate and adapt to business process changes?
  • Does the solution guarantee 24/7, close-to-100% availability?
  • Is the solution able to interface with traditional legacy system files and applications through integrated file transfer capability?
  • Can the solution provide a full audit trail from the automation backbone, and does it support compliance standards and regulations out of the box?
  • Does the solution offer multi-client support and the ability to segregate organizational and customer IT segments?
  • Does the solution include the latest security features supporting confidentiality, integrity and authenticity?
  • Can the solution handle the integration needs in a homogeneous way from a central automation layer that enables IT admins to come up to speed quickly with minimal training?
  • Does the solution dynamically control the execution of processes at all times?

Obviously, not all automation solutions on the market will be able to answer “yes” to each of these questions; and as always, the goal is to find the platform that bundles all of the described criteria in the best possible compromise.

A supportive element in this decision could be a solution’s capability not only to automate mundane operations tasks in an IT landscape but also to bundle these tasks into larger system orchestration scenarios.

What are the essentials of such scenarios? What should one look at? And where is the delineation between system and service orchestration? This is the subject of the following chapters.


Dynamic Processing Control of Automation Flows

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

The final compelling element to consider in an enterprise-grade automation solution is its ability to dynamically control – in real time – how automation changes the behaviour of your IT landscape. This includes:

  • On-demand business process changes requested by stakeholders, mandated by regulatory requirements, or directly affected by events outside of the enterprise’s core business model
  • Risk of service level (SLA) penalties caused by an application failure or the system’s inability to handle a change of load demand
  • The ability of the system to support a rapid introduction of a new product line or service to meet changing business needs

When assessing such capabilities, the following architecture patterns within the proposed automation solution are of importance:

Dynamic just-in-time execution

As described earlier, object orientation forms the basis for aggregating the artefacts built into the automation solution (such as actions, tasks, workflows, file processing, and reports). This capability must be provided in a way that keeps automation operations sufficiently granular while at the same time allowing the solution to act as one large automation ecosystem. More importantly, the solution must retain the ability to dynamically re-aggregate executions on demand as required.

If the automation platform handles each artefact as an object, then object interaction, object instantiation parameters, or object execution scheduling can be redefined in a matter of minutes. All that is left to do is define the object model of the actual automation implementation for the specific IT landscape – a one-time task.

The best automation solutions include a library of IT process automation actions that can be aggregated throughout automation workflows. These “IT process automation” actions are ideally delivered as part of the solution as a whole or specifically targeted to address particular automation challenges within enterprise IT landscapes.

Examples are:

  • If IT SLA measurements reveal that a particular IT housekeeping task is at risk due to an increase in processing time, dynamically adapting the specific workflows would involve assigning a different scheduler object or calendar to the task, or re-aggregating the workflow to execute the process in smaller chunks. This assumes end-to-end object orientation and a proper object model definition.
  • If a particular monthly batch data processing workflow is exceeding a particular transfer size boundary, the workflow can remain completely unchanged while chunk size definition is altered by changing the input parameters. These input parameters would themselves be derived from IT configuration management so dynamic automation adaptation would still remain zero-interactive.

Pre/post processing of automation tasks

Dynamic execution not only requires maximum object orientation in the implementation and operation of the automation solution; the solution must also provide the ability to:

  • Change the behavior of an object by preprocessing actions.
  • Process the output of an object for further use in subsequent automation tasks/workflows.

Adding pre- or post-execution logic this way, instead of implementing additional objects for the same logic, makes it an object property – an inherent part of the object itself rather than a separate object within the model (which pre- or post-processing logic rarely warrants). These tasks thus become part of the concrete instance of an abstract automation object.

Examples for applying this pattern in automation are:

  • System alive or connectivity check
  • Data validation
  • Parameter augmentation through additional input
  • Data source query
  • Dynamic report augmentation

Automation solutions can offer this capability either through graphical modelling of pre- and post-conditions or, for more complex requirements, through scripting language elements.
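To make the pattern concrete, here is a minimal, hedged sketch of a pre/post-processed automation task, written as a shell wrapper; task path, host name, and output handling are invented for illustration, and in a real automation platform these hooks would be properties of the task object rather than a wrapper script:

#!/bin/sh
# hypothetical wrapper illustrating pre/post processing around an automation task

pre_check() {                        # preprocessing: system alive / connectivity check
  ping -c 1 target-host >/dev/null 2>&1 || { echo "target unreachable" >&2; exit 1; }
}

post_process() {                     # postprocessing: prepare the output for subsequent tasks
  grep -c 'ERROR' "$1" > error_count.txt   # e.g. hand an error count to a follow-up workflow
}

pre_check
/path/to/automation-task > task_output.log   # the actual (unchanged) automation object
post_process task_output.log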

Easy-to-use extensible scripting

The scripting language offered by an automation solution is key to implementing enterprise-grade automation scenarios. While scripting within automation and orchestration tends to evolve towards supporting mainly standard scripting languages such as Python, Perl, JavaScript, or VBScript, a solution that offers both standard and proprietary scripting is still the optimum.

An automation system’s proprietary scripting language addresses the system’s own object model most efficiently while at the same time – through extension capabilities – enabling the seamless inclusion of target-system-specific operations. The combination of both is the best way to ensure a flexible, dynamic, and high-performing end-to-end automation solution.

 


Homogeneous end-to-end Automation Integration

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Whether looking to extend existing IT service capabilities through innovative service orchestration and delivery, or trying to increase the level of automation within the environment, one should always examine the following core features of a prospective automation solution:

  • Automation
  • Orchestration
  • Provisioning
  • Service definition and catalogue
  • Onboarding and subscription
  • Monitoring and metering
  • Reporting and billing

Though not all of these might be utilized at once, the automation solution will definitely play a major role in aggregating them to support the business processes of an entire enterprise.

Either way, homogeneity is a key element when it comes to determining the right solution, with the right approach and the right capabilities.

Homogeneous UX for all integrations

First, the automation platform one chooses must offer a unified user experience (UX) for all targeted applications. This does not mean that the exact same user interface needs to be presented for every component in the system; what matters more is a unified pattern across all components. This should start with the central management elements of the solution and extend to both internal and external resources, such as an Automation Framework IDE for 3rd-party solutions, as discussed previously.

In addition, the core automation components must match the same UX. Introducing an automation system with standard user interfaces and integration concepts ensures rapid implementation, since SMEs can focus on automating system processes rather than being bogged down with training on the automation solution itself.

A single platform for all automation

The more products make up the automation solution, the greater the effort required to integrate them all into an IT landscape. Software and system architecture has, throughout history, never proposed one single technology, standard, or design guideline for all functional capabilities, non-functional requirements, or interface definitions. A bundled system comprised of multiple products will therefore, in 95% of cases, come with a variety of inter-component interfaces that need to be configured separately from the centralized parameterization of the overall solution.

Project implementation experience shows that the learning curve associated with a solution is directly proportional to the number of different components contained in the automation suite (see figure below):

Automation: adoption time by number of platform components

Functional integration with target systems

Finally, the solution’s core functionality should be able to integrate with target systems using industry standards such as common OS scripting languages, REST, or JMS, or quasi-standards of target applications like RPC and EJB. At minimum, the solution should support 90% of these industry standards out of the box.

In addition, an enterprise-grade automation solution should provide:

  • Multiple action/workflow templates (either bundled with the core solution or available for purchase)
  • Ease of implementing integration with target systems’ core functionality at a very detailed level – such as administrative scripting control from within the automation core through scripting integration (to be discussed in the following chapter)

 
