The Smile-IT Blog » February 2016


Synology SSL Certificate – a guideline

Ever struggled with securing access to your (Synology) NAS box through use of SSL and web server certificates? Well – here’s why you struggled; and maybe also how not to struggle anymore … have fun 🙂

A few things to know first

As always: The boring things go first! There’s some knowledge captured in them, but if you’re the impatient one (like me ;)) -> get over it!

What is:

  • SSL/TLS: Secure Sockets Layer – now more correctly referred to as “Transport Layer Security” – is a cryptographic protocol which on the one hand ensures that the identity of the web server being accessed is securely confirmed, and on the other hand enables encryption of the connection between a client (browser) and a web server.
  • a cert (certificate): (for the purpose of this article let’s keep it with it being) a file that holds identity information about the web server it comes from and carries the public key used to establish encryption
  • a CA: “certificate authority” – a company or entity capable of signing a certificate to confirm its validity; this can be a Root-CA – in that case being the last in a -> “certificate chain” – or an Intermediate-CA, which can sign certs but itself needs to be signed by a Root-CA; CAs are represented by certs themselves (so, you’ll see a whole bunch’o’ different cert files handled here)
  • a private key: the private – confidential, not to be distributed – part of a pair of keys for asymmetric encryption; one party encrypts content using the public key and sends that content to the other party, which is capable of decrypting it using its private key. Obviously the sending party must use the public key that corresponds to the private key
  • a certificate chain (cert chain): a chain of certificates, the first in the chain being used for the actual purpose (e.g. encrypting traffic, confirming identity), the last one representing and confirming the ultimately trusted party. Each cert in a chain confirms the correctness of its respective predecessor
  • a SAN cert: a “Subject Alternative Name” certificate; this is a certificate confirming a web server’s identity by more than one specific name (URL), e.g. yourbankaccount.com; login.yourbankaccount.com; mobile.yourbankaccount.com; yourbankshortname.com; … whatsoever … (note: whether you need SANs in your server’s cert depends only on how you’ve configured DNS and IP addresses to access the server; if there’s only one URL for your server, you won’t need this)

What happens when surfing https in your browser?

The “s” in https says: This is an SSL connection.

  • When accessing a web server via https, your browser expects a cert being presented by the server.
  • This cert is checked in 2 ways:
    • (1) can I trust this cert to be what it claims to be, and
    • (2) does it tell me the true identity of the server I am accessing?

So, if e.g. you’re going to https://www.yourbankaccount.com and the cert holds “www.yourbankaccount.com” and your browser can determine from the cert chain that it can trust this cert, all will be fine and you can safely access that server.
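These two checks are exactly what any TLS library performs under the hood. As a rough illustration – a sketch using Python’s standard ssl module, nothing Synology-specific – a default client context has both checks switched on:

```python
import ssl

# A default client context performs both checks a browser does:
ctx = ssl.create_default_context()

# (1) "can I trust this cert?" -> chain validation against trusted CAs
assert ctx.verify_mode == ssl.CERT_REQUIRED

# (2) "is it really the server I'm accessing?" -> hostname vs. cert (incl. SANs)
assert ctx.check_hostname is True

# Connecting would then look like this (not executed here):
# with socket.create_connection(("www.yourbankaccount.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="www.yourbankaccount.com") as tls:
#         print(tls.getpeercert()["subject"])
```

If either check fails, `wrap_socket` raises an error – which is the programmatic equivalent of the browser’s security warning.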

How can trust be established?

As explained above, trust is confirmed through a chain of certificates. The last cert in the chain (say: the Root-CA cert) says “yes, that cert you’re showing me is OK”; the next one (say: the intermediary) says “yes, that next one is OK, too” … and so on until eventually the first cert – the one having been presented to your browser – is confirmed to be OK.

In other words: At least the last cert must be one about which the client (your browser) can say: “I can trust you!”

And how can it do that? Because that last cert in the chain belongs to a commonly trusted, public CA; commonly trusted authorities are e.g. Symantec (which purchased VeriSign), Comodo, GoDaddy, DigiCert, … (Wikipedia has a list plus some references to studies and surveys).
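The chain walk described above can be sketched in a few lines of Python (a toy model only – all certificate names are made up, and real validation of course also checks cryptographic signatures, validity periods and more):

```python
# Toy model of chain validation (all names invented for illustration):
# each cert records who issued it; a root is self-issued.
chain = [
    {"subject": "www.yourbankaccount.com", "issuer": "Example Intermediate CA"},
    {"subject": "Example Intermediate CA", "issuer": "Example Root CA"},
    {"subject": "Example Root CA",         "issuer": "Example Root CA"},  # self-signed root
]

def chain_is_trusted(chain, trusted_roots):
    # each cert must be issued by the next one in the chain ...
    for cert, issuer_cert in zip(chain, chain[1:]):
        if cert["issuer"] != issuer_cert["subject"]:
            return False
    # ... and the last cert must be a root the client already trusts
    return chain[-1]["subject"] in trusted_roots

print(chain_is_trusted(chain, trusted_roots={"Example Root CA"}))  # True
```

Swap the trusted-roots set for something that doesn’t contain “Example Root CA” and the function returns False – the self-signed-cert situation discussed below.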

And finally: How can YOU get hold of a cert for YOUR web server, that is confirmed by one of those trusted certificate authorities? By paying them money to sign your cert. As simple as that.

Do you wanna do that just for your home disk station? No – of course not.


And this is where self-signed certificates kick in …

Now: What does Synology do?

Your disk station (whatever vendor, most probably) offers browser access for administration and so on … plus a lot of other services that can (and should) be accessed via SSL; hence: you need a cert. Synology has certificate management in the control panel (rightmost tab in “Security”).

The following screenshot shows the options that are available when clicking “Create certificate”:

Synology: Create SSL Certificate – Options

Create certificate options briefly explained (with steps that happen when executing them):

  1. Create a self signed certificate:
    1. In the first step, enter data for the root certificate (see screenshot – mind the headline!)
    2. Second step: Enter data for the server certificate itself (here’s a screenshot also for this; note, that you can even use IP addresses in the SAN field at the end – more on this a little later or – sic! – further above)
    3. When hitting apply, the disc station generates 2 certificates + their corresponding private keys and
    4. uses the first one to sign the second one
    5. Eventually, both are applied to the disc station’s web server and the web server is restarted (all in one single go; expect to be disconnected briefly)
    6. Once you’re back online in the control panel, the “Export Certificate” button allows download of both certificates and their corresponding private keys (keep the private keys really, really – REALLY! – secluded!)
  2. Create certificate signing request (CSR) – this is a file to be sent to a certificate authority for signing and creation of a corresponding certificate:
    1. The only step to be done is to provide data for a server certificate (same information as in step 2 of (1) above)
    2. When done, click next and the disc station generates the CSR file + its corresponding private key
    3. Eventually the results are provided for download
    4. Download the files; in theory, you could now send the CSR (never the private key!) to a trusted CA. All CAs have slightly different procedures and charge slightly different rates for doing this
  3. Renew certificate:
    1. When clicking this option, a new CSR + private key of the existing certificate (the one already loaded) is generated and
    2. offered for download
    3. Same as step 4 in (2) above applies for the downloaded files
  4. Sign certificate signing request (CSR) – this allows you to use the root certificate, which you could e.g. have created in (1) above, to sign another certificate, for which you could e.g. have created a CSR in (2) above:
    1. The only step to execute is to select the CSR file,
    2. enter a validity period and
    3. specify alternate names for the target server (if need be)
    4. Again – upon completion – download of the resulting files is offered

That last step would e.g. allow you to generate only 1 root certificate (by using option (1) above) and use it for signing another server certificate for a second disc station. Note that the respective CSR file needs to be generated on that second disc station. As a result in this case (after having downloaded everything) you should have:

  • 1 ca.crt file + corresponding ca.key file
  • 2 server.crt files + corresponding server.key files

The second server.crt + server.key file would be the one to be uploaded into your second disc station by using the “Import certificate” button right next to “Create certificate” in the control panel.

What did I do in fact?

In order to secure my 2 disc stations I executed the following steps:

  • “Create certificate” – (1) above – on the first disc station
  • Download all files
  • “Create CSR” – (2) above – on the second disc station
  • Download all files
  • Execute “Sign cert” – (4) above – on the first disc station with the results of the “Create CSR” step just before
  • Again, download everything
  • “Import cert” on the second disc station with the results from the signing operation just before

Now both my disc stations have valid certificates + corresponding private keys installed and working.

Does that suffice?

Well, not quite. How would any browser in the world know that my certificates can be trusted? Mind, we’re using self-signed certs here; the CA is my own disc station, which is not mentioned in any trusted cert store of any computer (hopefully ;)).

What I can do next is distribute the cert files to clients that I want to provide with secure access to my disc station.

On an OS X client:

  • Open the “Keychain Access” app
  • Import the .crt files from before (NOT the .key files – these are highly private to your disc station – never distribute those!)
  • For the certificate entry resulting from the ca.crt file
    • Ctrl+click on the file
    • From the menu choose “Get Info”
    • Expand the “Trust” node
    • Set “When using this certificate” to “Always Trust” (as you can safely trust your own CA cert)

The 2 other cert files should now read “The certificate is valid” in the information on top.

On a Windows client:

  • Open the Management Console (click “Start” -> type “MMC”)
  • Insert the “Certificate” snap-in for “Computer Account” -> “Local Computer”
  • Expand the “Trusted Root Certification Authorities” node until you can see all the certificates in the middle pane
  • Right click the “Certificates” node
  • Go to “All tasks” -> “Import…” and
  • Import the ca.crt file
  • Expand the “Trusted Sites” node until you can see all the certificates in the middle pane
  • Right click the “Certificates” node
  • Go to “All tasks” -> “Import…” and
  • Import both the server.crt files
  • Close MMC w/o saving; you’re done here

With that, the respective client now trusts your disc station servers; browsers on your client would not issue a security warning anymore, when accessing your disc station.

Does that solve everything?

Well … uhm … no!

Because I’ve used SAN certificates with my disc stations and because Google Chrome does not support SAN certificates correctly, Chrome still issues a security warning – a pretty weird one telling me that the cert was issued for server abc.def.com while the server presents itself as abc.def.com; well, Google, you better work on that 🙂

But Apple Safari and Microsoft Edge securely route access to my DSs – and that’s pretty neat 🙂

And is this the ultimate solution?

Well … I guess … not quite, because I cannot really easily distribute all my cert files to anyone who’d possibly need to access my disc stations. This is where commonly trusted, public CAs come into play. Money makes the world go ’round, doesn’t it?

 


Automation Adaptability & Extensibility

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Today’s automation solutions are normally ready-built enterprise products (or a consumable service) offering out-of-the-box functionality for multiple automation, orchestration, and integration scenarios. On top of this, ease of installation, implementation, and use is of importance.

However, in less than 20% of cases, the automation platform remains unchanged for the first six months. This is why it’s crucial that from the beginning, the selected solution has the ability to extend and adapt in order to serve business needs. The architecture should be able to leverage technologies for hot plugging integrations and industry-standard interfaces while augmenting standard functionality through dynamic data exchange.

Hot plug-in of new integrations

Once basic automation workflows for an IT landscape are implemented, avoiding downtime is critical. While some automation solutions may offer fast and flexible production updates, the expectation on top of that is to be able to integrate new system and application adapters on the fly.

The first step to this level of integration can be achieved by rigorously complying with the SOLID object orientation principles discussed in the last chapter. Integrating adapters to new system or application endpoints, infrastructure management layers (like hypervisors), or middleware components is then a matter of adding new objects to the automation library. Existing workloads can be seamlessly delegated to these new entities, avoiding the need to stop any runtime entities or to update or upgrade any of the system’s components.
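As a rough sketch of this pattern (Python, with all names being my own invention rather than any vendor’s API): new adapters register themselves with the automation library at runtime, and existing workflows delegate through the registry, so nothing needs to be stopped or upgraded when an adapter is added:

```python
from typing import Callable, Dict

# A runtime registry standing in for the "automation library":
ADAPTERS: Dict[str, Callable[[str], str]] = {}

def register_adapter(endpoint_type: str):
    """Plug a new adapter into the running system."""
    def decorator(func: Callable[[str], str]):
        ADAPTERS[endpoint_type] = func
        return func
    return decorator

def run_workload(endpoint_type: str, task: str) -> str:
    # Existing workflows only know the registry, not concrete adapters,
    # so newly plugged adapters are picked up without any restart.
    return ADAPTERS[endpoint_type](task)

@register_adapter("hypervisor")
def hypervisor_adapter(task: str) -> str:
    return f"hypervisor handled: {task}"

# Later, a brand-new adapter is hot-plugged the same way:
@register_adapter("middleware")
def middleware_adapter(task: str) -> str:
    return f"middleware handled: {task}"

print(run_workload("middleware", "deploy queue"))  # middleware handled: deploy queue
```

The workflow code (`run_workload`) never changes; only new objects are added – which is the delegation idea described above in miniature.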

Hot-plugging, however, isn’t the main factor in assessing an automation solution’s enterprise readiness. In addition to being able to plug new adapters into the landscape, a truly expandable automation solution must be able to build new adapters as well. The automation solution should offer a framework, which enables system architects and software developers to create their own integration solutions based on the patterns the automation solution encompasses.

Such a framework allows for the creation of integration logic based on existing objects and interfaces, plus self-defined user interface elements built on the solution’s out-of-the-box templates. Extensions to such a framework include a package manager enabling third-party solutions to be deployed in a user-friendly way, taking into account dependencies and solution versions, and a framework IDE enabling developers to leverage the development environment they are accustomed to (e.g. Eclipse and Eclipse plugins).

In this way, plugging new integrations into an automation solution can expand the automation platform by leveraging a community-based ecosystem of 3rd-party extensions.

Leverage industry standards

Hot plugging all-new automation integration packages to adapt and expand your automation platform might not always be the strategy of choice.

In a service-based IT-architecture, many applications and management layers can be integrated using APIs. This means that an automation solution needs to leverage standards to interface with external resources prior to forcing you into development effort for add-ons.

The automation layer needs to have the ability to integrate remote functionality through common shell-script extensions like PowerShell (for Microsoft-based landscapes), JMS (Java Message Service API for middleware integration), REST (based on standard data formats like XML and JSON), and maybe (with decreasing importance) SOAP.
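For the REST route, such an integration boils down to standard HTTP with JSON payloads. A minimal sketch using Python’s standard library – the endpoint URL and payload are invented for illustration:

```python
import json
import urllib.request

# Invented endpoint - stands in for any management layer's REST API:
url = "https://automation.example.com/api/v1/jobs"

# JSON payload describing the work to be triggered (also invented):
payload = json.dumps({"workflow": "nightly-backup", "target": "nas-02"}).encode()

req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would actually submit the job; not executed here.
print(req.get_method(), req.full_url)  # POST https://automation.example.com/api/v1/jobs
```

The point is that no add-on development is needed as long as the external resource speaks a standard like REST/JSON.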

Dynamic data validation & exchange

Part of the adaptability and extensibility requirement is for the product of choice to be able to process and integrate the results of the resources previously discussed into existing object instances (as dynamic input data to existing workflows) without having to develop customized wrappers or additional interpreters.

This can be achieved either through variable objects – their values being changeable through integrations like DB queries or Web Service results – or through prompts that allow setting variable values via a user input query.
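A toy sketch of the variable-object idea (Python; class and function names are my own invention): the workflow reads a value that is resolved through an integration – here a stubbed DB query – at the moment of use:

```python
from typing import Any, Callable

class VariableObject:
    """Toy 'variable object': its value is resolved through an
    integration (DB query, web service, user prompt) at read time."""
    def __init__(self, resolver: Callable[[], Any]):
        self._resolver = resolver

    @property
    def value(self) -> Any:
        return self._resolver()

# Stand-in for a DB query or web-service result:
def fake_db_query():
    return "production"

target_env = VariableObject(fake_db_query)

def deploy_workflow(env: VariableObject) -> str:
    # the workflow consumes the dynamic value without a custom wrapper
    return f"deploying to {env.value}"

print(deploy_workflow(target_env))  # deploying to production
```

Swapping the resolver – e.g. for a prompt handled by an external request-management system – changes the data source without touching the workflow itself.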

To be truly extensible and adaptable, an automation solution should not only offer manual prompts but it should be able to automatically integrate and present those prompts within external systems. The solution should be responsible for automating and orchestrating IT resources while other systems – a service catalogue or request management application – handles IT resource demand and approval.

Together, all of the above forms the framework of an automation solution that can be extended and adapted specifically to the needs of the business leveraging it.


Private Cloud Storage: 1 + 1 = 27 backup solutions

Why would a convinced “pro-cloudian” invest into a geo-redundant backup and restore solution for private (cloud) storage? The reasons for this were fairly simple:

  1. I store quite a bit of music (imagery and audio all-the-same); storing that solely in the cloud is (a) expensive and (b) slow when streamed (Austrian downstream is not yet really that fast)
  2. In addition to that, I meanwhile store quite a lot of important project data (in different public clouds, of course, but also on my office NAS); at one point I needed a second location to further secure this data
  3. I wanted a home media streaming solution close by my hifi

My current NAS used to be a Synology DS411 (4 2TB discs, Synology Hybrid Raid – SHR – which essentially is RAID5). My new one is now a DS416 (same configuration; I just upgraded discs so that both NASs now run 2 2TB and 2 3TB discs – mainly disc-lifetime considerations led to this, plus the fact that I didn’t wanna throw away still-good harddiscs). If you’re interested in the upgrade process, just post a quick comment and I’ll gladly come back to that – but with Synology it’s pretty straightforward.

Bored already and not keen on learning all the nitty-gritty details? You can jump to the end, if you really need to 😉

More than 1 backup

Of course, it’s not 27 options – as in the headline – but it’s a fair lot of possibilities to move data between two essentially identical NASs for the benefit of data resilience. Besides that, a few additional constraints come into play when setting things up for geo-redundancy:

  • Is one of the 2 passively taking backup data only or are both actively offering services? (in my case: the latter, as one of the 2 would be the projects’ storage residing in the office and the other would be storage for media mainly – but not only – used at home)
  • How much upstream/downstream can I get for which amount of data to be synced? (ease of thought for me: both locations are identical in that respect, so it boiled down to data volume considerations)
  • Which of the data is really needed actively and where?
  • Which of the data is actively accessed but not changed? (I do have quite a few archive folder trees stored on my NAS which I only infrequently need)

Conclusion: For some of the data incremental geo-backup suffices fully; other data needs to be replicated to the respective other location but kept read-only; for some data I wanted to have readable replications on both locations.

First things first: Options

Synology Backup related Packages

The above screenshot shows available backup packages that can be installed on any Synology disc station:

  • Time Backup is a Synology owned solution that offers incremental/differential backup; I recently heard of incompatibilities with certain disc stations and/or harddiscs, hence this wasn’t my first option (whoever has experiences with this, please leave a comment; thanx)
  • Of all the public cloud backup clients (ElephantDrive, HiDrive, Symform and Glacier) AWS Glacier seemed the most attractive as I’m constantly working within AWS anyway and I wasn’t keen on diving into extended analysis of the others. However, Glacier costs for an estimate of 3 TB would be $36 in Frankfurt and $21 in the US. Per month. Still quite a bit when already running 2 disc stations anyway which both are far from being over-consumed – yet.
  • Symform offers an interesting concept: In return for contributing to a peer-to-peer network one gets ever more free cloud storage for backup; still I was keener on finding an alternative without ongoing effort and cost
BTW: Overall CAPEX for the new NAS was around EUR 800,- (or less than 2 years of AWS Glacier storage costs for not even the full capacity of the new NAS). Not an option, if elasticity and flexibility aren't that important ...

The NAS-to-NAS way of backup and restore

For the benefit of completeness:

  • Synology “Cloud Sync” (see screenshot above) isn’t really backup: It’s a way of replicating files and folders from your NAS to some public cloud file service like GoogleDrive or Dropbox. I can confirm it works flawlessly, but it is no more than a bit of a playground if one intends to have some files available publicly – for whatever reason (I use it to easily mirror and share my collection of papers with others without granting them access to my NAS).
  • Synology Cloud Station – mind(!) – is IMHO one of the best tools that Synology did so far (besides DSM itself). It’s pretty reliable – in my case – and even offers NAS-2-NAS synchronization of files and folders; hence, we’ll get back to this piece a little later.
  • Finally – and that’s key for what’s to come – there’s the DSM built-in “Backup & Replication” options to be found in the App Launcher. And this is mainly what I bothered with in the first few days of running two of these beasts.
Synology Backup and Replication AppLauncher

“Backup and Replication” offers:

  • The activation and configuration of a backup server
  • Backup and Restore (either iSCSI LUN backup, if used, or data backup, the latter with either a multi-version data volume or “readable” option)
  • Shared Folder Sync (the utter Synology anachronism – see a bit further below)

So, eventually, there’s

  • 4 Cloud backup apps
  • 4 Synology owned backup options (Time Backup, iSCSI LUN backup, data volume backup and “readable” data backup) and
  • 3 Synology sync options (Cloud Sync, Cloud Station and Shared Folder Sync)

Not 27, but still enough to struggle hard to find the right one …

So what’s wrong with syncing?

Nothing. Actually.

Cloud Station is one of the best private cloud file synchronization solutions I have ever experienced; Dropbox has a comparable user experience (and is still the service caring least about data privacy). So – anyway, I could just have set up the two NASs to sync using Cloud Station: make one station the master and connect all my devices to it, and make the other the backup station and connect it to the master as well.

However, the thought of awaiting the initial sync for that amount of data – especially as quite a bit of it is largely static – made me disregard this option in the first place.

Shared Folder Sync sounded like a convenient idea to try. Its configuration is pretty straightforward.

1: Enable Backup Services

The destination station needs to have the backup service running; so that is the first thing to go for. Launching the backup service essentially kicks off an rsync server which can accept rsync requests from any source (this would even enable your disc station to accept workstation backups from a PC, notebook, etc., as long as they’re capable of running rsync).
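For illustration, a workstation-side rsync call against such a backup service could be assembled like this (a Python sketch; port 2222 stands in for whatever non-standard SSH port you configured – see the note below on changing port 22):

```python
def rsync_command(src: str, user: str, host: str, dest: str, ssh_port: int = 2222):
    """Build the rsync call a workstation could use against the
    disc station's backup service (2222 is a made-up example for
    a non-standard SSH port)."""
    return [
        "rsync", "-az",              # archive mode, compressed transfer
        "-e", f"ssh -p {ssh_port}",  # tunnel through the customized port
        src, f"{user}@{host}:{dest}",
    ]

print(" ".join(rsync_command("/home/me/docs/", "admin", "diskstation.local", "NetBackup/docs")))
# rsync -az -e ssh -p 2222 /home/me/docs/ admin@diskstation.local:NetBackup/docs
```

Passing the list to `subprocess.run` would execute the transfer; the point here is just that the “backup service” speaks plain rsync over SSH.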

To configure the backup service, one needs to launch the “Backup and Replication” App and go to “Backup Service”:

Synology Backup Service Configuration

NOTE: I always consider changing the standard ports (22 in this case) to something unfamiliar - for security reasons (see this post: that habit saved me once)!

Other than that, one just enables the service and decides on possible data transfer speed limits (which can even be scheduled). The “Time Backup” tab allows enabling the service to accept time backups; (update) a third tab makes volume backups possible by just ticking a checkbox. But that’s essentially it.

2: Shared Folder Sync Server

Synology Shared Folder Sync Server

In order to accept sync client linkage, the target disc station needs to have the shared folder sync server enabled, in addition to the backup service. As the screenshot suggests, this is no big deal, really. Mind, though, that it is also here where you check and release any linked shared folders (a button appears under server status, where this can be done).

Once “Apply” is hit, the disc station is ready to accept shared folder sync requests.

3: Initiate “Shared Folder Sync”

This is where it gets weird for the first time:

  • In the source station, go to the same page as shown above, but stay at the “Client” tab
  • Launch the wizard with a click on “Create”
  • It asks for a name
  • And next it asks to select the folders to sync
  • In this very page it says: “I understand that if the selected destination contains folders with identical names as source folders, the folders at destination will be renamed. If they don’t exist at destination they will be created.” – You can’t proceed without explicitly accepting this by checking the box.
  • The next page asks for the server connection (mind: it uses the same port as specified in your destination’s backup service configuration set up previously – see (1) above)
  • Finally, a confirmation page allows verification – or, by going back, correction – of settings and when “Apply” is hit, the service commences its work.

Now, what’s it doing?

Shared Folder Sync essentially copies contents of selected shared folders to shared folders on the destination disc station. As mentioned above, it initially needs to explicitly create its link folder on the destination, so don’t create any folders in advance when using this service.

When investigating the destination in-depth, though, things instantly collapse into agony:

  1. All destination shared folders created by shared folder sync have no user/group rights set except for “read-only” for administrators
    2. Consequently, the attempt to create or push a file to any of the destination shared folders fails
  3. And altering shared folder permissions on one of these folders results in a disturbing message
Synology Permission change on Shared Folder Sync folders

“Changing its privilege settings may cause sync errors.”  – WTF! Any IT guy knows, that “may” in essence means “will”. So, hands off!

Further:

  • It did not allow me to create more than two different sync tasks
  • I randomly experienced failures being reported during execution which I couldn’t track down to their root cause via the log. It just said “sync failed”.

Eventually, a closer look into Synology’s online documentation reveals: “Shared Folder Sync is a one way sync solution, meaning that the files at the source will be synced to the destination, but not the other way around. If you are looking for a 2-way sync solution, please use Cloud Station.” – Synology! Hey! Something like this isn’t called “synchronization”, that’s a copy!
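Functionally, such a one-way “sync” is little more than the following sketch (standard-library Python only; real rsync additionally handles deltas, deletions, permissions, etc.):

```python
import os
import shutil

def one_way_mirror(src: str, dst: str) -> int:
    """Copy files from src to dst if missing or older there.
    Changes made at dst are never propagated back - exactly the
    'one way sync' behaviour described above. Returns the number
    of files copied."""
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # copy only new or newer files (a crude incremental check)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves timestamps
                copied += 1
    return copied
```

A second run over unchanged data copies nothing – incremental, yes, but still just a copy, not a synchronization.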

While writing these lines, I still cannot think of any real advantage of this over

  • Cloud Station (2-way sync)
  • Data backup (readable 1-way copy)
  • Volume backup (non-readable, incremental, 1-way “copy”)

As of the moment, I’ve given up on that piece … (can anyone tell me where I would really use this?)

BUR: Backup & Restore

The essential objective of a successful BUR strategy is to get back to life with sufficiently recent data (RPO – recovery point objective) in sufficiently quick time (RTO – recovery time objective). For the small scale of a private storage solution, Synology already offers quite compelling data security by its RAID implementation. When adding geo redundancy, the backup options in the “Backup & Replication” App would be a logical thing to try …

1: Destination first

As was previously mentioned, the destination station needs to have the backup service running; this also creates a new – administrable, in this case – shared folder “NetBackup” which could (but doesn’t need to) be the target for all backups.

Targets (called “Backup Destination” here), which are to be used for backups, still must be configured at the source station in addition to that. This is done in the “Backup & Replication” App at “Backup Destination”:

Even at this place – besides “Local” (which would e.g. be another volume or some USB-attached harddisc) and “Network” – it is still possible to push backups to AWS S3 or other public cloud services by choosing “Public Cloud Backup Destination” (see following screenshots for S3).

Synology Cloud Backup: Selecting the cloud provider

 

Synology Cloud Backup: Configuring AWS S3

NOTE, that the wizard even allows for bucket selection in China (pretty useless outside China, but obviously they sell there and do not differentiate anywhere else in the system ;))

As we’re still keen on getting data replicated between two privately owned NASs, let’s now skip that option and go for the Network Backup Destination:

  • Firstly, choose and enter the settings for the “Synology server” target station (mind: use the customized SSH port from above – Backup Service Configuration)
Synology Network Backup Destination Selection

  • Secondly, decide on which target backup data format to use. The screenshot below is self-explanatory: Either go for a multi-version solution or a readable one (there we go!). All backup sets relying on this very destination configuration will produce target backup data according to this selection.
Synology Network Backup Destination Target Format

2: And now: For the backup set

Unsurprisingly, backup sets are created in the section “Backup” of the “Backup and Replication” App:

  • First choice – prior to the wizard even starting – is either to create a “Data Backup Task” or an iSCSI “LUN Backup Task” (details on iSCSI LUNs can be found in the online documentation; however, if your Storage App isn’t mentioning any LUNs in use, forget about that option – it obviously wouldn’t have anything to back up)
  • Next, choose the backup destination (ideally configured beforehand)
Synology Backup Task – Select Destination

  • After that, all shared folders are presented and the ones to be included in the backup can be checkmarked
  • In addition, the wizard allows including app data in the backup (Surveillance Station is the only example I had running)
Synology Backup Task – Selecting Apps

  • Finally some pretty important detail settings can be done:
Synology Backup Task – Details Settings

  • Encryption, compression and/or block-level backup
  • Preserve files on destination, even when source is deleted (note the ambiguous wording here!)
  • Backup metadata of files as well as adjacent thumbnails (obviously more target storage consumed)
  • Enable backup of configurations along with this task
  • Schedule the backup task to run regularly
  • And last but not least: bandwidth limitations! It is highly recommended to consider these carefully. While testing the whole stuff, I ran into a serious bandwidth decrease within my local area network, as both disc stations were running locally for the tests. So, a running backup task does indeed consume quite a bit of performance!

Once the settings are applied, the task is created and stored in the App – waiting to be triggered by a scheduler event or a click on “Backup Now”.

So, what is this one doing?

It shovels data from (a) to (b). Period. When having selected “readable” at the beginning, you can even watch folders and files being created or updated step by step in the destination directory. One nice advantage (especially for first-time backups) is that the execution visibly shows its progress in the App:

Synology Backup Task – Progression

Also, when done, it pushes a notification (by eMail, too, if configured) to inform about successful completion (or any failure that happened).

Synology Backup Completion Notification

The screenshot below eventually shows what folders look like at the destination:

Synology Backup Destination Directory Structure

And when a new or updated file appears in the source, the next run would update it on the destination in the same folder (tested and confirmed, whatever others claim)!

So, in essence this method is pretty useable and useful for bringing data across to another location, plus: maintaining it readable there. However, there’s still some disadvantages which I’ll discuss in a moment …

So, what about Cloud Station?

Well, I’ve been using Cloud Station for years now. Without any ado; without any serious fault; with

  • around 100,000 pictures
  • several thousand business data files, various sizes, types, formats, …
  • a nice collection of MP3 music – around 10,000 files
  • and some really large music recording folders (some with uncut raw recordings in WAV format)

Cloud Station works flawlessly under these conditions. With credit to Mr. Adam Armstrong of storagereview.com, I’ve skipped a detailed explanation of Cloud Station and will just refer to his – IMHO – very good article!

Why did I look into this at all, though, when data backup (explained before) did a pretty good job? Well – one major disadvantage of backup sets in Synology is that even if you choose “readable” as the destination format, there is still no way of producing destination results that closely resemble the source: with backup tasks, the backed-up data goes into a subdirectory within the backup destination folder – thereby making permission management on the destination data an utter nightmare (no useful permission inheritance from the source shared folder, different permissions intended on different sub-sub-folders, etc.).

Cloud Station solves this, but in turn has the disadvantage that initial sync runs are always tremendously tedious and consume loads of transfer resources (though, when using Cloud Station between 2 NASs this disadvantage is more or less reduced to a significantly higher CPU and network usage during the sync process). So, actually we’d be best to go with Cloud Station and just Cloud-sync the two NASs.

BUT: There’s one more thing with this – and any other sync – solution: files are kept in line on both endpoints, meaning: when a file is deleted on one side, its mirror on the other side is deleted, too. This risk can be mitigated by enabling the recycle bin for shared folders and versioning for Cloud Station, but it is still no real backup solution suitable for full disaster recovery.
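That deletion-propagation behavior is easy to demonstrate in principle. The toy sketch below (purely illustrative, not Cloud Station code) shows how a sync removes files missing from the source, and how a versioning store can keep a copy as a safety net:

```python
def sync(side_a: dict, side_b: dict, versions: list) -> None:
    """Toy one-way mirror: make side_b match side_a.
    Deletions propagate, but versioning keeps a copy of removed files."""
    for name in list(side_b):
        if name not in side_a:
            versions.append((name, side_b[name]))  # keep a version first
            del side_b[name]                       # then propagate the delete
    side_b.update(side_a)                          # copy new/changed files
```

Versioning softens the blow, but as the text says: a mirror is not a backup, because the delete still propagates.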

What the hell did I do then?

None of the options tested was fully perfect for me, so: I took all of them (well, not quite all in the end; as said, I can’t get my head around that Shared Folder Sync, so at the moment I am going without it).

Let’s once more have a quick glance at the key capabilities of each of the discussed options:

Synology: Backup Options

  • Shared Folder Sync is no backup; and it leaves the target essentially unusable. Further: a file deleted in the source would – by the sync process – instantly be deleted in the destination as well.
  • Data Backup (if chosen “readable”) just shifts data 1:1 into the destination – into a sub-folder structure; the multi-version volume option would create a backup package. IMHO great to use if you don’t need instant access to data managed equally to the source.
  • Cloud Station: Tedious initial sync but after that the perfect way of keeping two folder trees (shared folders plus sub-items) in sync; mind: “in sync” means, that destroying a file destroys it at both locations (can be mitigated to a certain extent by using versioning).

I did it my way:

  1. Business projects are “Cloud Station” synced from the office NAS (source and master) to the home NAS; all devices using business projects connect to the office NAS folders of that category.
  2. Media files (photos, videos, MP3 and other music, recordings, …) have been 1:1 replicated to the new NAS by a one-time data backup task. At the moment, Cloud Station is building up its database for these shared folders and will maybe become the final solution for these categories. Master and source is the home NAS (also serving UPnP, of course); the office NAS (for syncing) and all devices, which want to stream media or manage photos, connect to this one.
  3. Archive shared folders (with rare data change) have been replicated to the new NAS and are not synced at the moment. I may go back to a pure incremental backup solution or even set some of these folders to read-only by permission and just leave them as they are.

Will that be final? Probably not … we’ll see.

Do you have a better plan? Please share … I’m curious!

 


Object Orientation supports Efficiency in Automation Solutions

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

A software architecture pattern for an IT oriented automation solution? How would this work together? Let’s have a closer look:

SOLID object orientation

Software guru Robert C. Martin identified “the first five principles” of object-oriented design in the early 2000s. Michael Feathers introduced the acronym SOLID as an easy way to remember these five basics that developers and architects should follow to ensure they are creating systems that are easy to maintain and to extend over time.[1]

  • Single responsibility: any given object shall have exactly one responsibility or one reason to change.
  • Open-closed: any given object shall be closed for modification but open for extension.
  • Liskov substitution principle: any given live instance of a given object shall be replaceable with instances of the object’s subtypes without altering the correctness of the program.
  • Interface segregation: every interface shall have a clear and encapsulated purpose; interface consumers must be able to focus on the interface needed and not be forced to implement interfaces they do not need.
  • Dependency inversion: depend on an object’s abstraction, not on its concrete implementation.

The SOLID principles and other object-oriented patterns are discussed further in this article: Object-Oriented-Design-Principles
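Transferred to the automation domain, dependency inversion and Liskov substitution could look like the following Python sketch (class and method names are my own, purely illustrative):

```python
from abc import ABC, abstractmethod

class Target(ABC):
    """Abstraction an automation task depends on (dependency inversion)."""
    @abstractmethod
    def run(self, command: str) -> str: ...

class SshTarget(Target):
    def run(self, command: str) -> str:
        return f"ssh: {command}"    # placeholder for a real SSH call

class LocalTarget(Target):
    def run(self, command: str) -> str:
        return f"local: {command}"  # placeholder for a local shell call

def deploy(target: Target) -> str:
    # The task depends only on the Target abstraction, never on a concrete
    # transport -- so substituting implementations (Liskov) is safe.
    return target.run("install app")
```

Any `Target` subtype can be handed to `deploy` without changing the task itself.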

“SOLID” IT automation

Many of today’s enterprise workload automation solutions were developed with architectural patterns in mind which date back well before the advent of object orientation. In some cases, these patterns were never modernized in order to avoid the risk of an immature solution. At the same time, the demand for rapid change, target-dependent concretion, and re-usability of implementations has been increasing. An object-oriented approach can now be used as a means to support these new requirements.

Object orientation, therefore, should be one of the key architectural patterns of an innovative enterprise automation solution. Such a solution encapsulates data and logic within automation objects and thereby represents what could be called an “automation blueprint.” The object presents a well-defined “input interface” through which a runtime instance of the automation object can be augmented according to the requirements of the specific scenario.

Through interaction with automation objects, an object-oriented workflow can be defined, thus presenting an aggregated automation object. By employing the patterns of object-oriented architecture and design, developers ensure that the implementation of automation scenarios evolves into object interaction, re-usability, specialization of abstracted out-of-the-box use-cases, and resolves specific business problems in a dedicated and efficient way.

Enterprise-grade, innovative automation solutions define all automation instructions as different types of objects within an object repository – similar to traditional object-oriented programming languages. The basic definition of automation tasks is represented as “automation blueprints”. Through instantiation, aggregation and/or specialization, the static object library becomes a dedicated, business-process-oriented solution to execute and perform business automation.
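The blueprint idea can be sketched in a few lines of Python (names and structure are illustrative assumptions, not any vendor's API): a static definition holds the automation logic, and instantiation augments it with runtime parameters through its input interface:

```python
import copy

class Blueprint:
    """Static automation definition; instances are augmented at runtime."""
    def __init__(self, name, steps, defaults=None):
        self.name, self.steps = name, steps
        self.defaults = defaults or {}

    def instantiate(self, **params):
        """Create a runtime instance; params augment the blueprint's
        defaults through its 'input interface'."""
        merged = {**copy.deepcopy(self.defaults), **params}
        return [step.format(**merged) for step in self.steps]

provision = Blueprint(
    "provision-server",
    ["create vm size={size}", "install {stack}"],
    defaults={"size": "small"},
)
```

The same blueprint yields different concrete task lists depending on the parameters supplied at instantiation time – which is the reuse pattern the text describes.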

Automation object orientation

Object orientation in automation architectures

The figure above shows an example of how an object can be instantiated as a dedicated task within different workflows.

IT-aaS example

The following IT-aaS example illustrates an object-oriented automation pattern. The chosen scenario assumes that an IT provider intends to offer an IT service to request, approve, and automatically provision a new, compliant infrastructure service such as a web application server.

  • Object definition: A set of aggregated objects – automation blueprints – define how to provision a new server within a certain environment. The object definition would not – however – bind its capability to a specific hypervisor or public cloud provider.
  • Object instantiation: Request logic – realized for example in a service desk application – would implement blueprint instantiation including the querying (e.g. by user input or by retrieval from a CMDB) and forwarding of the parameters to the object’s instance.

This not only automates the service provisioning but also addresses burst-out scenarios required by modern IT service delivery through integrated automation.

Patterns benefitting from object reusability

The concept of object orientation allows the reuse of automation objects, eliminating the need to duplicate information. It also allows the creation of automation blueprints describing the automation logic that is processed at runtime. An automation blueprint can behave differently once it is instantiated at runtime because of features like integrated scripting, variables, pre- and post-conditional logic, and logical programming elements such as conditions and loop constructs.

Inheritance

Automation objects’ relationships and dependencies as well as input/output data are set at the time they are defined. Run-time instantiated objects can inherit their input parameters from parent objects. They can also pass parameters from one runtime instance to another as well as to their parent containers. This enables fully flexible multi-processing of business workflows without e.g. being forced to clean-up variables in static containers.
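Python's `ChainMap` lends itself to a compact sketch of this parameter inheritance (an illustration of the concept, not a real product's mechanism): a runtime instance resolves parameters locally first and falls back to its parent:

```python
from collections import ChainMap

class RuntimeInstance:
    """Runtime object whose parameter lookups fall back to its parent's
    (inheritance), while its own values stay local to the instance."""
    def __init__(self, name, parent=None, **params):
        self.name = name
        parent_params = parent.params if parent else {}
        self.params = ChainMap(params, parent_params)

# A workflow passes parameters down; the task overrides only what it needs.
workflow = RuntimeInstance("nightly-batch", env="prod", retries=3)
task = RuntimeInstance("load-data", parent=workflow, retries=5)
```

The task's override never touches the parent, so multiple instances can run concurrently without the variable clean-up issue the text mentions for static containers.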

Abstraction versus Specialization

Automation blueprint definitions form the basic abstraction of scenarios within an enterprise-grade automation platform. Abstract automation objects get their concrete execution information when instantiated. Object oriented platforms provide the means to augment the instantiated objects at runtime via patterns like prompt sets, database variables or condition sets – in the best case to be modeled graphically; this supports flexibility and dynamic execution.

Maintainability

As the concept of object orientation eliminates the need to duplicate automation logic, maintaining automation workflow definitions becomes a minor effort. Typical modifications such as changes of technical user IDs, paths/binary names etc. can be performed in one centrally defined object and are applied wherever the object (blueprint) is used.

 


High Availability and Robustness in Automation Solutions

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

High Availability and Robustness are critical elements of any automation solution. Since an automation solution forms the backbone of an entire IT infrastructure, one needs to avoid any type of downtime. In the rare case of a system failure, rapid recovery to resume operations shall be ensured.

Several architectural patterns can help achieve those goals.

Multi-Server / Multi-Process architecture

Assuming that the automation solution of choice operates a centralized management layer as its core component, the intelligence of the automation implementation is located in a central backbone which distributes execution to “dumb” endpoints. Advantages of such an architecture will be discussed in subsequent chapters. The current chapter focuses on the internal architecture of a “central engine” in more detail:

Robustness and near-100% availability are normally achieved through redundancy, which comes at the cost of a larger infrastructure footprint and resource consumption. It is therefore crucial to base the central automation management layer on multiple servers and multiple processes. Not all of these processes necessarily act on a redundant basis, as would be the case with round-robin load balancing setups where multiple processes can all act on the same execution. However, in order to mitigate the risk of a complete failure of one particular function, the different processes distributed over various physical or virtual nodes need to be able to take over the operation of any running process.

This architectural approach also enables scalability which has been addressed previously. Most of all, however, this type of setup best supports the goal of 100% availability. At any given time, workload can be spread across multiple physical/virtual servers as well as split into different processes within the same (or different) servers.

Transactions

Another aspect of a multi-process architecture that helps achieve zero-downtime, non-stop operations is that it ensures restoration of an execution point at any given time. Assuming that all transactional data is stored in a persistent queue, a transaction is automatically recovered by the remaining processes on the same or different node(s) in case of failure of a physical server, a single execution, or a process node. This prevents any interruption, loss of data, or impact to end users.
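The recovery idea can be illustrated with a toy persistent queue (a conceptual sketch; a real engine would persist to disk or a database): a transaction stays queued until it is acknowledged, so surviving processes can pick up whatever a failed worker left behind:

```python
class PersistentQueue:
    """Toy persistent transaction queue: items stay queued until a worker
    acknowledges completion, so a crashed worker's item can be recovered."""
    def __init__(self):
        self.pending = {}   # txn_id -> payload (stands in for durable storage)
        self.next_id = 0

    def enqueue(self, payload):
        self.next_id += 1
        self.pending[self.next_id] = payload
        return self.next_id

    def ack(self, txn_id):
        # Only a successful execution removes the transaction
        del self.pending[txn_id]

    def recover(self):
        # After a process failure, surviving workers pick up what's pending
        return list(self.pending.values())
```

Because nothing is removed before acknowledgement, a crash between enqueue and ack loses no work – the essence of the restoration guarantee described above.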

Automation scalability and availability

Scalable and highly available automation architecture

Purposely, the figure above is the same as in the “Scalability” chapter, thereby showing how a multi-server / multi-process automation architecture can support both scalability and high availability.

Endpoint Failover

In a multi-server/multi-process architecture, recovery can’t occur if the endpoint adapters aren’t capable of instantly reconnecting to a changed server setup. To avoid downtime, all decentralized components such as control and management interfaces, UI, and adapters must support automatic failover. In the case of a server or process outage these interfaces must immediately reconnect to the remaining processes.

In a reliable failover architecture, endpoints need to be in tune with the core engine setup at all times and must receive regular updates about available processes and their current load. This ensures that endpoints connect to central components based on load data, thereby actively supporting the execution of load balancing. This data can also be used to balance the re-connection load efficiently in case of failure/restart.
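The endpoint side of this can be sketched as a simple selection rule (illustrative only): given regular load updates, a reconnecting endpoint picks the least-loaded process that is still alive:

```python
def choose_process(processes):
    """Reconnect-target selection: endpoints receive regular load updates
    and pick the least-loaded central process that is still alive."""
    alive = [p for p in processes if p["alive"]]
    if not alive:
        raise RuntimeError("no central process available")
    return min(alive, key=lambda p: p["load"])["name"]
```

Spreading reconnections by reported load also prevents a thundering-herd effect after a failure or restart, as the text notes.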

Disaster Recovery

Even in an optimum availability setup, the potential for an IT disaster still exists. For this reason, organizations should be prepared with a comprehensive business continuity plan. In automation architectures, this does not necessarily mean rapid reboot of central components – although this could be achieved depending on how lightweight the automation solution’s central component architecture is. More important, however, is the ability of rebooted server components to allow for swift re-connection of remote automation components such as endpoint adapters and/or management UIs. These concepts were discussed in depth in the scalability chapter.

Key Performance Indicators

Lightweight, robust architectures as described above have two main advantages beyond scalability and availability:

  • They allow for small scale setups at a low total cost of ownership (TCO).
  • They allow for large scale setups without entirely changing existing systems.

One of the first features of an automation system as described above is a low TCO for a small scale implementation.

Additional indicators to look for include:

  • Number of customers served at the same time without change of the automation architecture
  • Number of tasks per day the automation platform is capable of executing
  • Number of endpoints connected to the central platform

In particular, the number of endpoints provides clarity about strengths of a high-grade, enterprise-ready, robust automation solution.


What is Social Media still good for?

I’m pretty pissed by the recent rumours (let’s call it that) about the social media platform “twitter” introducing an algorithmic timeline (wanna know more about the matter? Either follow the #RIPtwitter hashtag or read this very insightful article by @setlinger to learn about the possible impact).

So why am I annoyed? – Here’s to share a little

personal history:

When I joined twitter and facebook in 2009, things in both networks were pretty straightforward: your feed filled with updates from the people you followed; you could watch things you liked more closely and just skim over other boring stuff quickly. Step by step, facebook started to tailor my feed. It sort of commenced when I noticed that they were constantly changing my feed setting to (I don’t remember the exact wording) “trending stuff first” and I had to manually set it back to “chronological” over and over again. At some point that setting vanished totally and my feed remained tailored to – well – what, actually?

Did I back out then? No! Because by that time, I had discovered the advertisement possibilities of facebook. Today, I run about 6 different pages (sometimes I add some, such as the recent “I AM ELEVEN – Austrian Premiere” page, to promote causes I am committed to; these go offline again some time later). I am co-administrator of a page that has more than 37,000 followers (CISV International) and it is totally interesting to observe the effects you achieve with one or the other post, comment, engagement, … whatever. Beautiful things happen from time to time. Personally, in my own feed, I mainly share things randomly (you wouldn’t know me if you just knew my feed); sometimes it just feels like fun to share an update. Honestly, I’ve fully given up thinking that any real engagement is possible through these kinds of online encounters – it’s just fun.

Twitter is a bit different: I like getting in touch with people whom I do not really know. Funny, interesting, insightful exchanges of information happen within 140 characters. And it gives me food for thought job-wise equally as cause-wise (#CISV, #PeaceOneDay, … and more). I came upon the recently introduced “While you were away” section on my mobile, shook my head about it and constantly skipped it, not really bothering about where to switch it off (my answer to the subsequent twitter question “Did you like this?” – always: “NO”).

And then there was the “algorithmic timeline” announcement!

So, why is this utter bullshit?

I’ll give you three simple answers from my facebook experience:

  • Some weeks back – in November, right after the Paris attacks – I was responsible for posting an update to our CISV International facebook followers. A tough thing, to find the right words. Apparently I didn’t get it too wrong, as the reported “reach” was around 150k users in the end. Think about that: a page with some 37k followers reaches some 150k with one post. I was happy that it was that much, but thinkin’ twice about it: how can I really know the real impact of that? In truth, that counter tells me simply nothing.
facebook post on “CISV International” reaching nearly 150k users

  • Some days ago, I spent a few bucks to push a post from the “I AM ELEVEN – Austria” page. In the end it reported a reach of 1.8k! “Likes” – however – came mostly from users who – according to facebook – don’t even live in Vienna, though I tailored the ad to “Vienna+20km”. One may argue that even the best algorithm cannot control friends-of-friends engagement – and I do value that argument; but what’s the boosting worth, then, if I do not get one single person more into the cinema to see the film?
facebook I AM ELEVEN boosted post

  • I am recently flooded with constant appearances of “Secret Escape” ads. I’ve never clicked it (and won’t add a link here – I don’t wanna add to their view count); I’m not interested in it; facebook still keeps showing me which of my friends like it and adds the ad to my feed more than once every day. Annoying. And to stop it I’d have to interact with the ad – which I do not want to do. However, I don’t have a simple choice of opting out of it …

Thinking of all that – and more – what would I personally gain from an algorithmic timeline on twitter, when facebook hasn’t really helped me in my endeavours lately? Nothing, I think. I simply don’t have the amount of money to feed the tentacles of those guys having such ideas, so that their ideas would by any means become worthwhile for my business or causes. Period.

But as those tentacles rarely listen to users like me but rather to potent advertisers (like “Secret Escape” e.g.), the only alternative will probably again be, to opt out:

Twitter: NO to “best tweets”

 

Having recently read “The Circle” that’s a more and more useful alternative, anyway …

 


Scalability in Automation

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Scalability criteria are often referred to as “load scalability”, actually meaning the ability of a solution to automatically adapt to increasing and decreasing consumption or execution demands. While the need for this capability obviously holds true for an automation solution, there’s surely more that belongs to the field of scalability. Hence, the following aspects will be discussed within this chapter:

  • load scalability
  • administrative scalability
  • functional scalability
  • throughput

Load scalability

Back at a time when cloud computing was still striving for the one and only definition of itself, one key cloud criterion became clear very quickly: cloud should be defined mainly by virtually infinite resources to be consumed by whatever means. In other words, cloud computing promised to transform the IT environment into a landscape of endlessly scalable services.

Today, whenever IT executives and decision makers consider the deployment of any new IT service, scalability is one of the major requirements. Scalability offers the ability for a distributed system to easily expand and contract its resource pool to accommodate heavier or lighter loads or number of inputs. It also determines the ease with which a system or component can be modified, added, or removed to accommodate changing load.

Most IT decisions today are based on the question of whether a solution can meet this definition. This is why, when deciding on an automation solution – especially when considering it to be the automation, orchestration and deployment layer of your future (hybrid) IT framework – scalability should be a high-priority requirement. In order to determine a solution’s load scalability capabilities, one must examine the system’s architecture and how it handles the following key functions:

Expanding system resources

A scalable automation solution architecture allows adding and withdrawing resources seamlessly, on demand, and with no downtime of the core system.

The importance of a central management architecture will be discussed later in this paper; for now it is sufficient to understand that centralized engine architecture develops its main load scalability characteristics through its technical process architecture.

As an inherent characteristic, such an architecture is comprised of thoroughly separated worker processes, each having the same basic capabilities but – depending on the system’s functionality – acting differently (i.e. each serving a different execution function). A worker process could – depending on technical automation requirements – be assigned to some of the following tasks (see the figure below for an overview of such an architecture):

  • worker process management (this would be kind-of a “worker control process” capability needed to be built out redundantly in order to allow for seamless handover in case of failure)
  • access point for log-on requests to the central engine (one of “Worker 1..n” below)
  • requests from a user interface instance (“UI worker process”)
  • workload automation synchronization (“Worker 1..n”)
  • Reporting (“Worker 1..n”)
  • Integrated authentication with remote identity directories (“Worker 1..n”)

At the same time, command and communication handling should be technically separated from automation execution handling and be spread across multiple “Command Processes” as well – all acting on and providing the same capabilities. This will keep the core system responsive and scalable in case of additional load.
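A much-simplified sketch of such takeover behavior (roles and names are invented for illustration): when one process fails, a remaining process adopts its role so that no function of the core engine is lost:

```python
class Engine:
    """Sketch of a multi-process core: command handling is separated from
    execution workers, and any surviving process can take over a peer's role."""
    def __init__(self):
        self.processes = {}  # process name -> assigned role

    def start(self, name, role):
        self.processes[name] = role

    def handle_failure(self, failed):
        role = self.processes.pop(failed)
        # Any remaining process adopts the orphaned role (takeover)
        takeover = next(iter(self.processes))
        return f"{takeover} takes over {role}"
```

In a real engine, takeover would of course be load-aware and coordinated; the point here is only that roles are not bound to nodes.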

Automation scalability

Scalable and highly available automation architecture

The architecture described in the figure above represents these core differentiators when it comes to one of the following load change scenarios:

Dynamically adding physical/virtual servers to alter CPU, memory and disk space or increasing storage for workload peaks without downtime

With the above architecture, changing load means simply adding or withdrawing physical (virtualized/cloud-based) resources to the infrastructure running the core automation system. With processes acting redundantly on the “separation of concern” principle, it is either possible to provide more resources to run other jobs or add jobs to the core engine (even when physically running on a distributed infrastructure).

This should take place without downtime of the core automation system, ensuring not only rapid reaction to load changes but also high resource availability (to be discussed in a later chapter).

Change the number of concurrently connected endpoints to one instance of the system

At any time during system uptime it might become necessary to handle load requirements by increasing the number of system automation endpoints (such as agents) connected to one specific instance of the system. This is possible only if concurrently acting processes are made aware of changing endpoint connections and are able to re-distribute load among the running processes seamlessly, without downtime. The architecture described above allows for such scenarios, whereas a separated, less integrated core engine would demand reconfiguration once the number of endpoints exceeds a certain threshold.

Endpoint reconnection following an outage

Even if the solution meets the criteria of maximum availability, outages may occur. A load scalable architecture is a key consideration when it comes to disaster recovery. This involves the concurrent boot-up of a significant number of remote systems including their respective automation endpoints. The automation solution therefore must allow for concurrent re-connection of several thousand automation endpoints within minutes of an outage in order to resume normal operations.

Administrative scalability

While load scalability is the most commonly discussed topic when it comes to key IT decisions, there are other scalability criteria to be considered as differentiating criteria in deciding on an appropriate automation solution. One is “Administrative Scalability” defined as the ability for an increasing number of organizations or users to easily share a single distributed system.[1]

Organizational unit support

A system is considered administratively scalable when it is capable of:

  • Logically separating organizational units within a single system. This capability is generally understood as “multi-client” or “multi-tenancy”.
  • Providing one central administration interface (UI + API) for system maintenance and onboarding of new organizations and/or users.

Endpoint connection from different network segments

Another aspect of administrative scalability is the ability of an automation solution to seamlessly connect endpoints from various organizational sources.

In large enterprises, multiple clients (customers) might be separated both organizationally and network-wise. Organizational units are well hidden from each other or are routed through gateways when needing to connect. However, the automation solution is normally part of the core IT system serving multiple or all of these clients. Hence, it must allow for connectivity between processes and endpoints across the aforementioned separated sources. The established secure client network delineation must be kept in place, of course. One approach for the automation solution is to provide special dedicated routing (end)points capable of bridging the network separation via standard gateways and routers while supporting only the automation solution’s connectivity and protocol needs.

Seamless automation expansion for newly added resources

While the previously mentioned selection criteria for automation systems are based on “segregation,” another key decision criterion is based on harmonization and standardization.

An automation system can be considered administratively scalable when it is capable of executing the same, one-time-defined automation process on different endpoints within segregated environments.

The solution must be able to:

  • Add an organization and its users and systems seamlessly from any segregated network source.
  • Provide a dedicated management UI including those capabilities, securely accessible by organization admin users only.

and at the same time

  • Define the core basic automation process only once and distribute it to new endpoints based on (organization-based) filter criteria.

The architecture thereby allows for unified process execution (implement once, serve all), administrative scalability and efficient automation.
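The “implement once, serve all” idea boils down to filtering at distribution time; a minimal sketch (field names are my own assumptions):

```python
def distribute(blueprint, endpoints, org):
    """Define once, distribute by filter: the same blueprint is rolled out
    only to endpoints matching the organization-based filter criteria."""
    return [f"{blueprint}@{ep['host']}" for ep in endpoints if ep["org"] == org]
```

The blueprint itself is defined a single time; only the filter decides which segregated endpoints receive it.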

Functional scalability

Functional Scalability, defined as the ability to enhance the system by adding new functionality at minimal effort[2], is another type of scalability characteristics that shall be included in the decision-making process.

The following are key components of functional scalability:

Enhance functionality through customized solutions

Once the basic processes are automated, IT operations staff can add significant value to the business by incorporating other dedicated IT systems into the automation landscape. Solution architects are faced with a multitude of different applications, services, interfaces, and integration demands that can benefit from automation.

A functionally scalable automation solution supports these scenarios out-of-the-box with the ability to:

  • Introduce new business logic to existing automation workflows or activities by means of simple and easy-to-adopt mechanisms without impacting existing automation or target system functions.
  • Allow for creation of interfaces to core automation workflows (through use of well-structured APIs) in order to ease integration with external applications.
  • Add and use parameters and/or conditional programming/scripting to adapt the behavior of existing base automation functions without changing the function itself.

Template-based implementation and template transfer

A functionally scalable architecture also enables the use of templates for implementing functionality and sharing/distributing it accordingly.

Once templates have been established, the automation solution should provide for a way to transfer these templates between systems or clients. This could either be supported through scripting or solution packaging. Additional value-add (if available): Share readily-tested implementations within an automation community.

Typical use-cases include but are not limited to:

  • Development-test-production deployment of automation packages.
  • Multi-client scenarios with well-segregated client areas with similar baseline functionality.
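Both use cases above can be sketched in a few lines of Python. This is an illustration under assumed names (the package format and functions are invented for this sketch, not a real product API): a workflow template is serialized into a transferable package on one system and imported into a segregated client area on another:

```python
import json

# Hypothetical sketch of template transfer, e.g. development -> production,
# or distributing the same baseline functionality to segregated client areas.

def export_template(template: dict) -> str:
    """Serialize a workflow template into a transferable package."""
    return json.dumps({"format": "automation-package-v1", "template": template})

def import_template(package: str, client: str) -> dict:
    """Install a packaged template into a (possibly different) client area."""
    data = json.loads(package)
    if data["format"] != "automation-package-v1":
        raise ValueError("unsupported package format")
    installed = dict(data["template"])
    installed["client"] = client  # same functionality, segregated per client
    return installed

template = {"name": "DailyBackup", "steps": ["snapshot", "verify", "report"]}
package = export_template(template)
prod_copy = import_template(package, client="PROD")
```

A readily-tested template is implemented once, then transferred unchanged, which is exactly what makes template-based implementation functionally scalable.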

Customizable template libraries with graphical modeler

In today’s object- and service-based IT landscapes, products that rely solely on scripting are simply not considered functionally scalable. When parameterization, conditional programming, and object reuse are available through scripting only, scaling out the automation implementation becomes time-consuming, complex, and unsustainable. Today’s functionally scalable solutions use graphical modelers to create the object instances, workflows, templates, and packages that enable business efficiency and rapid adaptation to changing business requirements.

Throughput

Finally, consider the following question as a decision basis for selecting a highly innovative, cloud-ready, scalable automation solution:

What minimum and maximum workflow load can the solution execute without architectural change? If the answer is: from 100 up to 4-5 million concurrent jobs per day without changing the principal setup or architecture, one is good to go.

In other words: scalable automation architectures not only support the aforementioned key scalability criteria but also handle a relatively small number of concurrent flows just as well as an extremely large one. The infrastructure footprint of the deployed automation solution must obviously adapt accordingly.
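To put that range into perspective, a quick back-of-the-envelope calculation using the numbers from the question above shows the sustained execution rates such an engine must cover (assuming, for simplicity, an even distribution over 24 hours):

```python
# Back-of-the-envelope: convert the quoted daily job volumes into sustained
# jobs per second, assuming an even spread across the day.

SECONDS_PER_DAY = 24 * 60 * 60  # 86400

def sustained_rate(jobs_per_day: float) -> float:
    """Average jobs per second for a given daily volume."""
    return jobs_per_day / SECONDS_PER_DAY

low = sustained_rate(100)          # roughly 0.001 jobs/s
high = sustained_rate(4_500_000)   # roughly 52 jobs/s
```

Real workloads peak rather than spread evenly, so the actual burst rate will be higher still; the point is that the same architecture must serve both ends of this spectrum.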

 

 


Automation and Orchestration for Innovative IT-aaS Architectures

This blog post kicks off a series of connected publications about automation and orchestration solution architecture patterns. The series commences with several chapters discussing key criteria for automation solutions and then continues by outlining typical patterns and requirements for "Orchestrators". Posts in this series are tagged with "Automation-Orchestration" and will ultimately compose a whitepaper on the subject matter.

Introduction & Summary

Recent customer projects in the field of architectural cloud computing frameworks repeatedly revealed one clear fact: automation is the key to success – not only technically, so that the cloud solution is performant, scalable, and highly available, but also for the business using the cloudified IT, in order to stay ahead of the competition and remain at the leading edge of innovation.

On top of the automation building block within a cloud framework, an Orchestration solution ensures that atomic automation “black boxes” together form a platform for successful business execution.

As traditional IT landscapes take their leap into adopting (hybrid) cloud solutions for IaaS or maybe PaaS, automation and orchestration have to move, in the same way, from job scheduling or workload automation to more sophisticated IT Ops or DevOps tasks such as:

  • Provisioning of infrastructure and applications
  • Orchestration and deployment of services
  • Data consolidation
  • Information collection and reporting
  • Systematic forecasting and planning

In a time of constrained budgets, IT must always look to manage resources as efficiently as possible. One way to accomplish that goal is an IT solution that automates mundane tasks, orchestrates these into larger solution blocks, and eventually frees up enough IT resources to focus on driving business success.

This blog post is the first of a series of posts targeted at

  • explaining key criteria for a resilient, secure and scalable automation solution fit for the cloud
  • clearly identifying the separation between “automation” and “orchestration”
  • providing IT decision makers with a set of criteria for selecting the right solutions for their need for innovation

Together, this blog post series will comprise a complete whitepaper on “Automation and Orchestration for Innovative IT-aaS Architectures”, supporting every IT organization in its drive to succeed with the move to (hybrid) cloud adoption.

The first section of the paper will list key criteria for automation solutions and explain their relevance for cloud frameworks as well as innovative IT landscapes in general.

The second section deals with Orchestration. It will differentiate system orchestration from service orchestration, explain key features and provide decision support for choosing an appropriate solution.

Target audience

Who should continue reading this blog series:

  • Technical decision makers
  • Cloud and solution architects in the field of innovative IT environments
  • IT-oriented pre- and post-sales consultants

If you consider yourself to belong to one of these groups, subscribe to the Smile-IT blog in order to get notified right away whenever a new chapter of this blog series and whitepaper gets published.

Finally, to conclude the introduction, here are the main findings that this paper will discuss in detail in the upcoming chapters:

Key findings

  • Traditional “old style” integration capabilities – such as: file transfer, object orientation or audit readiness – remain key criteria even for a cloud-ready automation platform.
  • In an era where cloud has become a commodity, just like the internet itself, service-centered IT landscapes demand a maximum of scalability, adaptability, and multi-tenancy in order to create a service-oriented ecosystem for the advancement of the businesses using it.
  • Security, maximum availability, and centralized management and control are fundamental necessities for transforming an IT environment into an integrated service center supporting business expansion, transformation, and growth.
  • Service orchestration might be the ultimate goal to achieve for an IT landscape, but system orchestration is a first step towards creating an abstraction layer between basic IT systems and business-oriented IT-services.

So, with these findings in mind, let us start diving into the key capabilities for a cloud- and innovation-ready automation solution.

 


Read “The Circle” and opt out!

Is it acceptable – for a committed social media aficionado – to call for an opt-out of it all? It is, once you’ve read “The Circle”, the 2013 novel by Dave Eggers.

Eggers portrays a powerful internet company making money through advertising (any resemblance to Google or Facebook is purely accidental, of course). Mae Holland is a tech worker who, in her second job after graduating, is given an opportunity at The Circle – an opportunity most tech workers these days desperately seek. Mae got support from her college roommate Annie, who had already made it into the group of the 40 most senior managers in the company, reporting directly to the founders, the “Three Wise Men”: Tom Stenton, Eamon Bailey and Ty Gospodinov. While the first two actively involve themselves in the company’s endeavours, Ty works on new developments, mostly secluded in the background.

Mae starts in Customer Experience and works her way up the chain by overcommitting to objectives and seemingly easily (but in truth with great personal effort and sacrifice) meeting the increasingly demanding expectations, not only in her work duties but also in all virtual and physical social interaction with fellow colleagues. She not-quite-falls-in-love with a nerdy Circler she has sex with, whom she somehow admires for his technological development of a system protecting children from violence; and she begins to desperately long for encounters with another Circler, who becomes increasingly mysterious as the company pushes itself further and further towards total transparency.

Eggers, the author, does not keep the reader long from his message: One of the first major announcements of one of the Wise, Eamon Bailey, is a development called “SeeChange” – an extremely low-cost, top-quality A/V camera, capable of running on battery for about 2 years and streaming its crystal clear 4k images via satellite onto the SeeChange platform. Anyone can install cameras anywhere, they are barely noticed and everybody can logon to SeeChange with their unique – very personal and real – identity, their “TruYou”.

Rings a bell? Well, this is only the starting point into a rollercoaster of more awesomely cool technology tools, all aggregated through “TruYou” and made available to everyone anytime.

Dave Eggers brilliantly creates a striking balance between technological blessings and their benefits for employees, communities, and people as a whole on the one hand, and the increasing sacrifices individuals may be asked to make in order to leverage that technological advance on the other. This is – in short – the unsettling common thread running through the whole book, from the very first page to the closing line.

Of course, “The Circle” addresses the time we spend in social media, the way we communicate with each other (personally and virtually), and the blessings and threats that a modern, technology-based life bears. While reading, I was constantly torn between appreciating the sketched development (note: this isn’t science fiction, this is just the next step in a logical advance we’re facing) and detesting the commitment it would demand from those making real use of it. About two thirds in, swallowing the book’s lines in nightly sessions, my only remaining question was this: Would Eggers eventually manage to destroy my firm belief in the two things most important to a modern, social-media-involved life and communication:

  • Utter transparency: I want to always know – or: be able to know – who does what with my data
  • And utter free will: I want to always be allowed to opt out, if I want to

I will not disclose the answer – that would be spoiling. BUT – if you haven’t done so already, I recommend: read “The Circle”. And then consider carefully where and what to opt in to or out of. It remains important.


P.S.: There’ll be a movie coming this year, starring Tom Hanks as Eamon Bailey. Don’t read the articles about it, as they all contain spoilers on one important turn of the story!

 
