
Tag Archives: cloud

How to StartUp inside an Enterprise

I’ve been following Ruxit for quite some time now. In 2014, I first considered them for the Cloud delivery framework we were to create. Later – during another project – I elaborated on a comparison between Ruxit and newRelic; I was convinced by their “need to know” approach to monitoring large, diverse application landscapes.

Recently they added Docker Monitoring to their portfolio and expanded support for highly dynamic infrastructures; here’s a great webinar on that (be sure to watch the live demos closely – compelling).

But let’s – for once – leave aside the technical masterpieces in their development and have a look at their strategic progression:

Dynatrace – the mothership – has been a well-known player in the monitoring field for years. I work with quite a few customers who leverage Dynatrace’s capabilities, and I would not hesitate to call them a well-established enterprise. Especially in the field of cloud, well-established enterprises tend to lack a certain elasticity needed to get their X-aaS initiatives to really lift off; examples are manifold: Canopy failed eventually (my 2 cents; some may see that differently), IBM took a long time to differentiate their cloud from the core business, … and some others still market their cloud endeavours alongside their core business – not for the better.

And then – last week – I received Ruxit’s eMail announcing “Ruxit grows up… announcing Dynatrace Ruxit!“, officially sent by “Bernd Greifeneder | Founder and CTO”. I was expecting that eMail; in the webinar mentioned before, slides were already branded “Dynatrace Ruxit”, and the question I raised about this was answered as expected: from a successful startup-like endeavour, they would now commence their move back into the parent company.

Comprehensible.

Because that is precisely what a disruptive endeavour inside a well-established company should look like: Greifeneder was obviously given the trust and money to ramp up a totally new kind of business alongside Dynatrace’s core capabilities. I have long since lost any doubt that Ruxit created a new way of doing things in monitoring, technologically and methodically: In a container-based elastic cloud environment, there’s no need anymore to know about each and every entity; the only thing that matters is to keep things working for end users – and when that is not the case, to let admins quickly find the problem, and nothing else.

What – though – really baffled me was the rigorous way of pushing their technology into the market: I used to run a test account for a few tests now and then for my projects. Whenever I logged in, something new had been deployed. Releases happened on an amazingly regular basis – DevOps style, 100%. There is no way of doing this within established development processes and traditional on-premise release management. One may be able to derive traditional releases from DevOps-like continuous delivery – but not vice versa.

Bottom line: Greifeneder obviously had the possibility, the ability and the right people to do things in a totally different way from the mothership’s processes. I, of course, have no insight into how things were really set up within Dynatrace – but last week they took their baby back into “mother’s bosom”, and in the cloud business – I’d argue – that does not happen when the baby isn’t ready to live on its own.

Respect!

Enterprise cloud and digitalisation endeavours can take their lessons from Dynatrace Ruxit. Wishing you a sunny future, Dynatrace Monitoring Cloud!

 


Private Cloud Storage: 1 + 1 = 27 backup solutions

Why would a convinced “pro-cloudian” invest in a geo-redundant backup and restore solution for private (cloud) storage? The reasons were fairly simple:

  1. I store quite a bit of music (imagery and audio alike); storing that solely in the cloud is (a) expensive and (b) slow when streamed (Austrian downstream is not yet really that fast)
  2. In addition, I meanwhile store quite a lot of important project data (in different public clouds, of course, but also on my office NAS); at one point I needed a second location to further secure this data
  3. I wanted a home media streaming solution close to my hi-fi

My previous NAS was a Synology DS411 (4 × 2 TB discs, Synology Hybrid Raid – SHR – which essentially is RAID5). My new one is a DS416 (same configuration; I just shuffled discs so that both NASs now run 2 × 2 TB and 2 × 3 TB discs – mainly disc lifetime considerations led to this, plus the fact that I didn’t want to throw away still-good harddiscs). If you’re interested in the upgrade process, just post a quick comment and I’ll gladly come back to that – but with Synology it’s pretty straightforward.

Bored already and not keen on all the nitty-gritty details? You can jump to the end if you really need to 😉

More than 1 backup

Of course, it’s not 27 options – as in the headline – but it’s a fair lot of possibilities to move data between two essentially identical NASs for the benefit of data resilience. Besides that, a few additional constraints come into play when setting things up for geo-redundancy:

  • Is one of the 2 passively taking backup data only or are both actively offering services? (in my case: the latter, as one of the 2 would be the projects’ storage residing in the office and the other would be storage for media mainly – but not only – used at home)
  • How much upstream/downstream can I get for which amount of data to be synced? (ease of thought for me: both locations are identical in that respect, so it boiled down to data volume considerations)
  • Which of the data is really needed actively and where?
  • Which of the data is actively accessed but not changed? (I do have quite a few archive folder trees stored on my NAS which I infrequently need)

Conclusion: For some of the data incremental geo-backup suffices fully; other data needs to be replicated to the respective other location but kept read-only; for some data I wanted to have readable replications on both locations.

First things first: Options

Synology Backup related Packages

The above screenshot shows available backup packages that can be installed on any Synology disc station:

  • Time Backup is a Synology-owned solution that offers incremental/differential backup; I recently heard of incompatibilities with certain disc stations and/or harddiscs, hence this wasn’t my first option (whoever has experience with this, please leave a comment; thanx)
  • Of all the public cloud backup clients (ElephantDrive, HiDrive, Symform and Glacier), AWS Glacier seemed the most attractive, as I’m constantly working within AWS anyway and wasn’t keen on diving into an extended analysis of the others. However, Glacier costs for an estimated 3 TB would be $36 in Frankfurt and $21 in the US. Per month (see the quick check below). Still quite a bit when already running 2 disc stations anyway, both of which are far from being over-consumed – yet.
  • Symform offers an interesting concept: In return for contributing storage to a peer-to-peer network one gets ever more free cloud storage for backup; still, I was more keen on finding an alternative without ongoing effort and cost
BTW: Overall CAPEX for the new NAS was around EUR 800,- (or less than 2 years of AWS Glacier storage costs for not even the full capacity of the new NAS). Hardly an option if elasticity and flexibility aren't that much of a priority ...
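A quick back-of-the-envelope check of the Glacier figures above – assuming the per-GB prices quoted at the time (roughly $0.012/GB-month in Frankfurt, $0.007/GB-month in the US; current AWS pricing may differ):

# Rough monthly Glacier cost for ~3 TB at the then-current per-GB prices
awk 'BEGIN {
  gb = 3 * 1024                                       # ~3 TB in GB
  printf "Frankfurt: ~%.2f USD/month\n", gb * 0.012   # roughly the quoted $36
  printf "US:        ~%.2f USD/month\n", gb * 0.007   # roughly the quoted $21
}'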

The NAS-to-NAS way of backup and restore

For the sake of completeness:

  • Synology “Cloud Sync” (see screen shot above) isn’t really backup: It’s a way of replicating files and folders from your NAS to some public cloud file service like GoogleDrive or Dropbox. I can confirm it works flawlessly, but it is no more than a bit of a playground if one intends to have some files available publicly – for whatever reason (I use it to easily mirror and share my collection of papers with others without granting them access to my NAS).
  • Synology Cloud Station – mind(!) – is IMHO one of the best tools Synology has built so far (besides DSM itself). It’s pretty reliable – in my case – and even offers NAS-2-NAS synchronization of files and folders; hence, we’ll get back to this piece a little later.
  • Finally – and that’s key for what’s to come – there are the DSM built-in “Backup & Replication” options to be found in the App Launcher. And this is mainly what I busied myself with in the first few days of running two of these beasts.
Synology Backup and Replication AppLauncher

“Backup and Replication” offers:

  • The activation and configuration of a backup server
  • Backup and Restore (either iSCSI LUN backup, if used, or data backup, the latter with either a multi-version data volume or “readable” option)
  • Shared Folder Sync (the utter Synology anachronism – see a bit further below)

So, eventually, there’s

  • 4 Cloud backup apps
  • 4 Synology owned backup options (Time Backup, iSCSI LUN backup, data volume backup and “readable” data backup) and
  • 3 Synology sync options (Cloud Sync, Cloud Station and Shared Folder Sync)

Not 27, but still enough to struggle hard to find the right one …

So what’s wrong with syncing?

Nothing. Actually.

Cloud Station is one of the best private cloud file synchronization solutions I have ever experienced; dropbox has a comparable user experience (and is still the service caring least about data privacy). So – anyway, I could just have set up the two NASs to sync using Cloud Station: make one station the master and connect all my devices to it, and make the other the backup station and connect it to the master as well.

However, the thought of awaiting the initial sync for that amount of data – especially as quite a bit of it was vastly static – made me disregard this option in the first place.

Shared Folder Sync sounded like a convenient idea to try. Its configuration is pretty straightforward.

1: Enable Backup Services

The destination station needs to have the backup service running, so that is the first thing to go for. Launching the backup service essentially kicks off an rsync server which can accept rsync requests from any source (this would even enable your disc station to accept workstation backups from PCs, notebooks, etc., if they’re capable of running rsync).

To configure the backup service, one needs to launch the “Backup and Replication” App and go to “Backup Service”:

Synology Backup Service Configuration

NOTE: I always consider changing the standard ports (22 in this case) to something unfamiliar - for security reasons (see this post: that habit saved me once)!

Other than that, one just enables the service and decides on possible data transfer speed limits (which can even be scheduled). The “Time Backup” tab allows enabling the service for accepting time backups; (update) and a third tab makes volume backups possible by just ticking a checkbox. But that’s essentially it.
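Since the backup service is essentially an rsync endpoint (as mentioned above), any rsync-capable machine can push data to it. Just as a hedged sketch – host, user, port and paths below are placeholders, and the NetBackup share is the default target the backup service creates:

# Push a workstation folder to the disc station's backup service (rsync over SSH);
# replace 2222 with whatever custom port you configured for the backup service
rsync -avz --delete \
      -e "ssh -p 2222" \
      /home/alex/Documents/ \
      backupuser@diskstation.example.com:/volume1/NetBackup/workstation-docs/

Mind the --delete flag: it mirrors deletions as well, so leave it out if the destination should keep files that were removed at the source.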

2: Shared Folder Sync Server

Synology Shared Folder Sync Server

In order to accept sync client linkage, the target disc station needs to have the shared folder sync server enabled, in addition to the backup service. As the screenshot suggests, this is no big deal, really. Mind, though, that it is also here where you check and release any linked shared folders (a button appears under server status where this can be done).

Once “Apply” is hit, the disc station is ready to accept shared folder sync requests.

3: Initiate “Shared Folder Sync”

This is where it gets weird for the first time:

  • In the source station, go to the same page as shown above, but stay at the “Client” tab
  • Launch the wizard with a click on “Create”
  • It asks for a name
  • And next it asks to select the folders to sync
  • In this very page it says: “I understand that if the selected destination contains folders with identical names as source folders, the folders at destination will be renamed. If they don’t exist at destination they will be created.” – You can’t proceed without explicitly accepting this by checking the box.
  • The next page asks for the server connection (mind: it uses the same port as specified in your destination’s backup service configuration set up previously – see (1) above)
  • Finally, a confirmation page allows verification – or, by going back, correction – of settings and when “Apply” is hit, the service commences its work.

Now, what’s it doing?

Shared Folder Sync essentially copies contents of selected shared folders to shared folders on the destination disc station. As mentioned above, it initially needs to explicitly create its link folder on the destination, so don’t create any folders in advance when using this service.

When investigating the destination in-depth, though, things instantly collapse into agony:

  1. All destination shared folders created by shared folder sync have no user/group rights set except for “read-only” for administrators
  2. Consequentially, the attempt to create or push a file to any of the destination shared folders goes void
  3. And altering shared folder permissions on one of these folders results in a disturbing message
Synology Permission change on Shared Folder Sync folders

“Changing its privilege settings may cause sync errors.”  – WTF! Any IT guy knows that “may” in essence means “will”. So, hands off!

Further:

  • It did not allow me to create more than two different sync tasks
  • I randomly experienced failures being reported during execution which I couldn’t track down to their root cause via the log. It just said “sync failed”.

Eventually, a closer look into Synology’s online documentation reveals: “Shared Folder Sync is a one way sync solution, meaning that the files at the source will be synced to the destination, but not the other way around. If you are looking for a 2-way sync solution, please use Cloud Station.” – Synology! Hey! Something like this isn’t called “synchronization”, that’s a copy!

While writing these lines, I still cannot think of any real advantage of this over

  • Cloud Station (2-way sync)
  • Data backup (readable 1-way copy)
  • Volume backup (non-readable, incremental, 1-way “copy”)

As of the moment, I’ve given up with that piece … (can anyone tell me where I would really use this?)

BUR: Backup & Restore

The essential objective of a successful BUR strategy is to get back to life with sufficiently recent data (RPO – recovery point objective) in sufficiently quick time (RTO – recovery time objective). For the small scale of a private storage solution, Synology already offers quite compelling data security by its RAID implementation. When adding geo redundancy, the backup options in the “Backup & Replication” App would be a logical thing to try …

1: Destination first

As was previously mentioned, the destination station needs to have the backup service running; this also creates a new – administrable, in this case – shared folder “NetBackup” which could (but doesn’t need to) be the target for all backups.

In addition to that, targets (called “Backup Destination” here) which are to be used for backups must still be configured at the source station. This is done in the “Backup & Replication” App at “Backup Destination”:

Even at this place – besides “Local” (which would e.g. be another volume or some USB-attached harddisc) and “Network” – it is still possible to push backups to AWS S3 or other public cloud services by choosing “Public Cloud Backup Destination” (see the following screenshots for S3).

Synology Cloud Backup: Selecting the cloud provider

 

Synology Cloud Backup: Configuring AWS S3

NOTE that the wizard even allows for bucket selection in China (pretty useless outside China, but obviously they sell there and do not differentiate anywhere else in the system ;))

As we’re still keen on getting data replicated between two privately owned NASs, let’s now skip that option and go for the Network Backup Destination:

  • Firstly, choose and enter the settings for the “Synology server” target station (mind: use the customized SSH port from above – Backup Service Configuration)
Synology Network Backup Destination Selection

  • Secondly, decide which kind of target backup data format to use. The screenshot below is self-explanatory: Either go for a multi-version solution or a readable one (there we go!). All backup sets relying on this very destination configuration will produce target backup data according to this selection.
Synology Network Backup Destination Target Format

2: And now: For the backup set

Unsurprisingly, backup sets are created in the section “Backup” of the “Backup and Replication” App:

  • The first choice – prior to the wizard even starting – is either to create a “Data Backup Task” or an iSCSI “LUN Backup Task” (details on iSCSI LUNs can be found in the online documentation; however, if your Storage App isn’t mentioning any LUNs in use, forget about that option – it obviously wouldn’t have anything to back up)
  • Next, choose the backup destination (ideally configured beforehand)
Synology Backup Task – Select Destination

  • After that, all shared folders are presented and the ones to be included in the backup can be checkmarked
  • In addition, the wizard allows including app data in the backup (Surveillance Station is the only example I had running)
Synology Backup Task – Selecting Apps

  • Finally some pretty important detail settings can be done:
Synology Backup Task – Details Settings

  • Encryption, compression and/or block-level backup
  • Preserve files on destination, even when source is deleted (note the ambiguous wording here!)
  • Backup metadata of files as well as adjacent thumbnails (obviously more target storage consumed)
  • Enable backup of configurations along with this task
  • Schedule the backup task to run regularly
  • And last but not least: bandwidth limitations! It is highly recommended to consider these carefully. While testing, I ran into a serious bandwidth decrease within my local area network as both disc stations were running locally for the tests. So, a running backup task does indeed consume quite a bit of performance!

Once the settings are applied, the task is created and stored in the App – waiting to be triggered by a scheduler event or a click on “Backup Now”.

So, what is this one doing?

It shovels data from (a) to (b). Period. If you selected “readable” at the beginning, you can even see folders and files being created or updated step by step in the destination directory. One nice advantage (especially for first-time backups) is that the execution visibly shows its progress in the App:

Synology Backup Task – Progression

Also, when done, it pushes a notification (by eMail, too, if configured) to inform about successful completion (or any failure that happened).

Synology Backup Completion Notification

The screenshot below shows what the folders eventually look like at the destination:

Synology Backup Destination Directory Structure

And when a new or updated file appears in the source, the next run would update it on the destination in the same folder (tested and confirmed, whatever others claim)!

So, in essence this method is pretty usable and useful for bringing data across to another location, plus: keeping it readable there. However, there are still some disadvantages which I’ll discuss in a moment …

So, what about Cloud Station?

Well, I’ve been using Cloud Station for years now. Without any ado; without any serious fault; with

  • around 100,000 pictures
  • several thousand business data files, various sizes, types, formats, …
  • a nice collection of MP3 music – around 10,000 files
  • and some really large music recording folders (some with uncut raw recordings in WAV format)

Cloud Station works flawlessly under these conditions. Thanks to Mr. Adam Armstrong of storagereview.com, I’ll skip a detailed explanation of Cloud Station and just refer to his – IMHO – very good article!

Why did I look into that, even though data backup (explained before) did a pretty good job? Well – one major disadvantage of Synology backup sets is that even if you choose “readable” as the destination format, there is no real way of producing destination results that closely resemble the source: with backup tasks, the backed-up data goes into some subdirectory within the backup destination folder – thereby making permission management on destination data an utter nightmare (no useful permission inheritance from the source shared folder, different permissions intended on different sub-sub-folders of the data, etc.).

Cloud Station solves this, but in turn has the disadvantage that initial sync runs are always tremendously tedious and consume loads of transfer resources (though, when using Cloud Station between 2 NASs, this disadvantage is more or less reduced to significantly higher CPU and network usage during the sync process). So, actually, we’d best go with Cloud Station and just Cloud-sync the two NASs.

BUT: There’s one more thing with this – and any other sync – solution: Files are kept in line on both endpoints, meaning: When a file is deleted on one, its mirror on the other side is deleted, too. This risk can be mitigated by setting up the recycle bin function for shared folders and versioning for Cloud Station, but still it’s no real backup solution suitable for full disaster recovery.

What the hell did I do then?

None of the options tested was fully perfect for me, so: I took all of them (well: not quite all in the end; as said, I can’t get my head around that shared folder sync, so at the moment I am going without it).

Let’s once more have a quick glance at the key capabilities of each of the discussed options:

Synology: Backup Options

  • Shared Folder Sync is no sync; and it leaves the target essentially unusable. Further: A file deleted in the source would – by the sync process – instantly be deleted in the destination as well.
  • Data Backup (if “readable” is chosen) just shifts data 1:1 into the destination – into a sub-folder structure; the multi-version volume option would create a backup package instead. IMHO great to use if you don’t need instant access to data managed the same way as at the source.
  • Cloud Station: Tedious initial sync but after that the perfect way of keeping two folder trees (shared folders plus sub-items) in sync; mind: “in sync” means, that destroying a file destroys it at both locations (can be mitigated to a certain extent by using versioning).

I did it my way:

  1. Business projects are “Cloud Station” synced from the office NAS (source and master) to the home NAS; all devices using business projects connect to the office NAS folders of that category.
  2. Media files (photos, videos, MP3 and other music, recordings, …) have been 1:1 replicated to the new NAS by a one-time data backup task. At the moment, Cloud Station is building up its database for these shared folders and will maybe become the final solution for these categories. Master and source is the home NAS (also serving UPnP, of course); the office NAS (for syncing) and all devices, which want to stream media or manage photos, connect to this one.
  3. Archive shared folders (with rare data change) have been replicated to the new NAS and are not synced at the moment. I may go back to a pure incremental backup solution or even set some of these folders to read-only by permission and just leave them as they are.

Will that be final? Probably not … we’ll see.

Do you have a better plan? Please share … I’m curious!

 


Evaluation Report – Monitoring Comparison: newRelic vs. Ruxit

By now I’ve worked on cloud computing frameworks with a couple of companies. DevOps-like processes are always an issue in these cooperations – even more so when it comes to monitoring and how to approach the matter innovatively.

As an example, I am time and again emphasizing Netflix’s approach in these conversations: I very much like Netflix’s philosophy of how to deploy, operate and continuously change environment and services. Netflix’s different component teams have no clue about the activities of other component teams; their policy is that every team is itself responsible for ensuring its changes do not break anything in the overall system. Also, no one really knows in detail which servers, instances and services are up and running to serve requests. Servers and services are constantly and automatically re-instantiated, rebooted, added, removed, etc. Such is a philosophy to make DevOps real.

Clearly, when monitoring such a landscape, traditional (SLA-fulfilment oriented) methods must fail. It simply isn’t sufficient for a Cloud-aware, continuous-delivery-oriented monitoring system to just integrate traditional on-premise monitoring solutions like e.g. Nagios with e.g. AWS’ CloudWatch. Well, we know that this works fine, but it does not yet ease the cumbersome work of NOCs or Application Operators to quickly identify

  1. the impact of a certain alert, hence its priority for ongoing operations and
  2. the root cause for a possible error

After discussing these facts for the umpteenth time and (again) being confronted with the same old arguments about the importance of ubiquitous information on every single event within a system (for the sake of proving SLA compliance), I thought I’d give it a try and dig deeper myself to find out whether these arguments are valid (and I am therefore wrong) or whether there is a possibility to substantially reduce event occurrence and let IT personnel only follow up on the really important stuff. Efficiently.

At this stage, it is time for a little

DISCLAIMER: I am not a monitoring or APM expert; neither am I a .NET programming expert. Both skill areas are fairly familiar to me, but in this case I intentionally approached the matter from a business perspective – as non-technically as possible.

The Preps

In autumn last year I had the chance to get a little insight into 2 pure-SaaS monitoring products: Ruxit and newRelic. Ruxit back then was – well – a baby: Early beta, no real functionality, but a well-received glimpse of what the guys were up to. newRelic was already pretty strong and I very much liked their light and quick way of getting started.

As that project got stuck back then and I ended my evaluations in the middle of gaining insight, I thought getting back to that could be a good starting point (especially as I wasn’t able to find any other monitoring product going the SaaS path that radically, i.e. not even thinking of offering an on-premise option; and as a cloud “aficionado” I was very keen on seeing a full-stack SaaS approach). So the product scope was pretty clearly set.

The investigative scope, this time, should answer questions a bit more in a structured way:

  1. How easy is it to kick off monitoring within one system?
  2. How easy is it to combine multiple systems (on-premise and cloud) within one easy-to-digest overview?
  3. What’s alerted and why?
  4. What steps are needed in order to add APM to a system already monitored?
  5. Correlation of events and its appearance?
  6. The “need to know” principle: Impact versus alert appearance?

The setup I used was fairly simple (and reduced – as I didn’t want to bother our customer’s workloads in any of their datacenters): I had an old t1.micro instance still lurking around in my AWS account; that is 1 vCPU with 613 MB RAM – far too small to really perform the stuff I wanted it to do. I intentionally decided to use that one for my tests. Later, the following was added to the overall setup:

  • An RDS SQL Server database (which I used for the application I wanted to add to the environment at a later stage)
  • IIS 6 (as available within the Server image that my EC2 instance is using)
  • .NET framework 4
  • Some .NET sample application (some “Contoso” app; deployed directly from within Visual Studio – no changes to the defaults)

Immediate Observations

2 things caught my eye only hours (if not minutes) after commencing my activities in newRelic and Ruxit, but let’s first start with the basics.

Setting up accounts is easy and straightforward in both systems. They both truly follow the cloud-affine “on-demand” characteristic. newRelic creates a free “Pro” trial account which is converted into a lifetime free account when not upgraded to “paid” after 14 days. Ruxit sets up a free account for their product but takes a totally different approach – more closely resembling consumption-based pricing: you get 1000 hours of APM and 50k user visits for free.

Both systems follow pretty much the same path after an account has been created:

  • In the best case, access your account from within the system you want to monitor (or deploy the downloaded installer package – see below – to the target system manually)
  • Download the appropriate monitoring agent and run the installer. Done.

Both agents started to collect data immediately and the browser-based dashboards produced the first overview of my system within some minutes.

As a second step, I also installed the agents on my local client machine as I wanted to know how the dashboards display multiple systems – and here’s a bummer with Ruxit: My antivirus scanner alerted me with a Win32.Evo-Gen suspicion:

Avast virus alert upon Ruxit agent install

It wasn’t really a problem for the agent to install and operate properly and produce data; it was just a little confusing. In essence, the reason for this is fairly obvious: The agent is using a technique which is comparable to typical virus intrusion patterns, i.e. sticking its fingers deep into the system.

The second observation was newRelic’s approach to implementing web browser remote checks, called “Synthetics”. It was indeed astonishingly easy to add a URL to the system and let newRelic do their thing – seemingly from within the AWS datacenters around the world. And especially with this, newRelic has a very compelling way of displaying the respective information on their Synthetics dashboard. Easy to digest and pretty comprehensive.

At the time when I started off with my evaluation, Ruxit didn’t offer that. Meanwhile they have added a beta for “Web Checks” to my account. Equally easy to set up but lacking some of the richer UI features with respect to displaying the information. I am fairly sure that this will be added soon. Hopefully. My take is that combining system monitoring or APM with insights displaying real user usage patterns is an essential part of efficiently correlating events.

Security

I always give security questions a second thought, hence I contemplated Ruxit’s way of making sure that an agent really connects to the right tenant when being installed. With newRelic you’re confronted with an extra step upon installation: They ask you to copy+paste a security key from your account page during their install procedure.

newRelic security key example

Ruxit doesn’t do that. However, they’re not really less secure; it’s just that they pre-embed this key into the installer package that is downloaded, so they’re just a little more convenient. The following shows the msiexec command executed upon installation as well as its parameters taken from the installer log (you can easily find that information after the .exe package unpacks into the system’s temp folder):

@msiexec /i "%i_msi_dir%\%i_msi%" /L*v %install_log_file% SERVER="%i_server%" PROCESSHOOKING="%i_hooking%" TENANT="%i_tenant%" TENANT_TOKEN="%i_token%" %1 %2 %3 %4 %5 %6 %7 %8 %9 >con:
MSI (c) (5C:74) [13:35:21:458]: Command Line: SERVER=https://qvp18043.live.ruxit.com:443 PROCESSHOOKING=1 TENANT=qvp18043 TENANT_TOKEN=ABCdefGHI4JKLM5n CURRENTDIRECTORY=C:\Users\thome\Downloads CLIENTUILEVEL=0 CLIENTPROCESSID=43100

Alerting

After having applied the packages to my Windows Server on EC2, things popped up quickly within the dashboards (note that both dashboard screenshots are from a later evaluation stage; however, the basic layout was the very same at the beginning – I didn’t change anything visually down the road).

newRelic server monitoring dashboard showing the limits of my too-small instance 🙂

The Ruxit dashboard on the same server; with a clear hint on a memory problem 🙂

What instantly struck me here was the simplicity of Ruxit’s server monitoring information. It seemed sort-of “thin” on information (if you want a whole lot of info right from the start, you’ll probably prefer newRelic’s dashboard). Things, though, changed when my server went into memory saturation (which it constantly does right away when accessed via RDP). At that stage, newRelic started firing eMails alerting me of the problem. Also, the dashboard went red. Ruxit in turn did nothing, really. Well, of course, it displayed the problem once I logged into the dashboard again and had a look at my server’s monitoring data; but no alert was triggered, no eMail, no red flag. Nothing.

If you’re into SLA fulfilment, then that is precisely the moment to become concerned. On second thought, however, I figured that actually no one was really bothered by the problem. There was no real user interaction going on in that server instance. I hadn’t even added an app really. Hence: why bother?

So, the next step was to figure out why newRelic went so crazy with that. It turned out that with newRelic every newly added server gets assigned to a default server policy.

newRelic’s monitoring policy configuration

I could turn off that policy easily (editing also seems straightforward; I didn’t try). However, the thought that for every server I add I’d first have to figure out which alerts are important because they might impact someone or something seemed less “need to know” than I intended to have.

After having switched off the policy, newRelic went silent.

BTW, alerting via eMail is not set up by default in Ruxit; within the tenant’s settings area, this can be added as a so-called “Integration” point.

AWS Monitoring

As said above, I was keen to know how both systems integrate multiple monitoring sources into their overviews. My idea was to add my AWS tenant to be monitored (this resulted from the previously mentioned customer conversations I had had earlier; that customer’s utmost concern was to add AWS to their monitoring overview – which in their case was Nagios, as said).

A nice thing with Ruxit is that they fill their dashboard with those little demo tiles, which easily lead you into their capabilities without you having set up anything yet (the example below shows the database demo tile).

This is one of the demo tiles in Ruxit’s dashboard – leading to DB monitoring in this case

I found an AWS demo tile (similar to the example above), clicked, and ended up with a light explanation of how to add an AWS environment to my monitoring ecosystem (https://help.ruxit.com/pages/viewpage.action?pageId=9994248). They offer key-based or role-based access to your AWS tenant. Basically, what they need you to do is these 3 steps (a scripted sketch of the first two follows below):

  1. Create either a role or a user (for use of access key based connection)
  2. Apply the respective AWS policy to that role/user
  3. Create a new cloud monitoring instance within Ruxit and connect it to that newly created AWS resource from step 1
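For the key-based variant, the first two steps can also be scripted. A hedged sketch using the AWS CLI – the user name is arbitrary, and ReadOnlyAccess is merely a broad stand-in for the narrower policy Ruxit’s documentation actually lists:

# 1. Create a dedicated IAM user for the monitoring integration
aws iam create-user --user-name ruxit-monitoring

# 2. Attach a policy granting read access to CloudWatch, EC2, RDS, etc.
#    (ReadOnlyAccess is a coarse stand-in; use the policy from the Ruxit docs in a real setup)
aws iam attach-user-policy --user-name ruxit-monitoring \
    --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

# 3. Generate the access key pair to enter in Ruxit's cloud monitoring settings
aws iam create-access-key --user-name ruxit-monitoring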

Right after having executed the steps, the aforementioned demo tile changed to displaying real data and my AWS resources showed up (note that the example below already contains RDS, which I added at a later stage; the cool thing here was that it was added fully unattended as soon as I had created it in AWS).

Ruxit AWS monitoring overview

Ruxit essentially monitors everything within AWS which you can put a CloudWatch metric on – which is a fair lot, indeed.
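To get a feeling for what that means in practice, here is one such metric pulled manually with the AWS CLI (instance ID and time range are placeholders) – the same kind of data point Ruxit collects automatically through the CloudWatch API:

# Average CPU utilisation of one EC2 instance, in 5-minute buckets
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2016-05-01T00:00:00Z \
    --end-time 2016-05-01T06:00:00Z \
    --period 300 \
    --statistics Average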

So, the next step clearly was to seek the same capability within newRelic. As far as I could work out, newRelic’s approach here is to offer plugins – and newRelic’s plugin ecosystem is vast. That may mean there’s a whole lot of possibilities for integrating monitoring into the respective IT landscape (whatever it may be); however, one may consider the process of adding plugin after plugin (until the whole landscape is covered) a bit cumbersome. Here’s a list of AWS plugins with newRelic:

newRelic plugins for AWS

Add APM

Adding APM to my monitoring ecosystem was probably the most interesting experience in this whole test: As preparation for the intended result (i.e. analysing data about a web application’s performance under real user interaction) I added an IIS to my server and an RDS database to my AWS account (as mentioned before).

The more interesting fact, though, was that after having finalized the IIS installation, Ruxit instantly showed the IIS services in their “Smartscape” view (more on that a little later). I didn’t have to change anything in my Ruxit environment.

newRelic’s approach is a little different here. The below screenshot shows their APM start page with .NET selected.

newRelic APM start page with .NET selected

After having confirmed each selection which popped up step by step, I was presented with a download link for another agent package which I had to apply to my server.

The interesting thing, though, was that still nothing showed up. No services or additional information on any accessible apps. That is logical in a way, as I did not yet have anything published on that server which really resembled an application. The only thing accessible from the outside was the IIS default web (just showing that IIS logo).

So, essentially the difference here is that with newRelic you get system monitoring with a system monitoring agent, and by means of an application monitoring agent you can add monitoring of precisely the type of application the agent is intended for.

I haven’t dug further yet (that may be the subject of another article), but it seems that with Ruxit I can have monitoring for anything going on on a server by means of just one install package (maybe one more explanation for the aforementioned virus scan alert).

However, after having published my .NET application, everything was fine again in both systems – and the dashboards went red instantly as the server went into CPU saturation due to its weakness (as intended ;)).

Smartscape – Overview

So, the final question to answer was: What do the dashboards show and how do they ease (root cause) analysis?

As soon as the app was up and running and web requests started to roll in, newRelic displayed everything there is to know about the application’s performance. Particularly nice is the out-of-the-box combination of APM data with browser request data within the first and the second menu item (either switch between the 2 by clicking the menu or use the links within the diagrams displayed).

newRelic APM dashboard

The difficulty with newRelic was to discover the essence of the web application’s problem. Transaction and front-end code performance was displayed in every detail, but I knew (from my configuration) that the problem of slow page loads – as displayed – lay in the general weakness of my web server.

And that is basically where Ruxit’s smartscape tile in their dashboard made the essential difference. The below screenshot shows a problem within my web application as initially displayed in Ruxit’s smartscape view:

Ruxit’s smartscape view showing a problem in my application

From this view, it was obvious that the problem was either within the application itself or within the server as such. A click on the server not only reveals the path to the dependent web application but also other possibly impacted services (obviously without end-user impact, as otherwise there would be an alert on them, too).

Ruxit smartscape with dependencies between servers, services, apps

And digging into the server’s details revealed the problem (CPU saturation, unsurprisingly).

Ruxit revealing CPU saturation as a root cause

Still, the number of dashboard alerts was pretty small. While I got 6 eMails from newRelic telling me about the problem on that server, I got only 2 from Ruxit: 1 telling me about the web app’s weak response and another about CPU saturation.

The next step, hence, would be to scale up the server (in my environment), or to scale out or implement an enhanced application architecture (in a realistic production scenario). But that’s another story …

Bottom line

Event correlation and alerting on a “need to know” basis – at least for me – remains the right way to go.

This little test was done with just one server, one database, one web application (and a few other services). While newRelic’s comprehensive approach to showing information is really compelling and perfectly serves the objective of complete SLA compliance reporting, Ruxit’s “need to know” principle much better meets what I would expect from innovative cloud monitoring.

Considering Netflix’s philosophy from the beginning of this article, innovative cloud monitoring basically translates into: Every extra step is a burden. Every extra information on events without impact means extra OPS effort. And every extra-click to correlate different events to a probable common root-cause critically lengthens MTTR.

A “need to know” monitoring approach while at the same time offering full stack visibility of correlated events is – for me – one step closer to comprehensive Cloud-ready monitoring and DevOps.

And Ruxit really seems to be “spot on” in that respect!

 


DevOps style performance monitoring for .NET

 

{{ this article was originally published on DevOps.com }}

 

Recently I began looking for an application performance management solution for .NET. My requirements are code level visibility, end to end request tracing, and infrastructure monitoring in a DevOps production setup.

DotTrace is clearly the most well-known tool for code level visibility in development setups, but it can’t be used in a 24×7 production setup. DotTrace also doesn’t do typical Ops monitoring.

Unfortunately a Google search didn’t return much in terms of a tool comparison for .NET production monitoring. So I decided to do some research on my own. Following is a short list of well-known tools in the APM space that support .NET. My focus is on finding an end-to-end solution and profiler-like visibility into transactions.

New Relic was the first to do APM SaaS, focused squarely on production with a complete offering. New Relic offers web request monitoring for .NET, Java, and more. It automatically shows a component-based breakdown of the most important requests. The breakdown is fairly intuitive to use and goes down to the SQL level. Code level visibility, at least for .NET, is achieved by manually starting and stopping sampling. This is fine for analyzing currently running applications, but makes analysis of past problems a challenge. New Relic’s main advantage is its ease of use, intuitive UI, and a feature set that can help you quickly identify simple issues. Depth is the main weakness of New Relic. As soon as you try to dig deeper into the data, you’re stuck. This might be a minor point, but if you’re used to working with a profiler, you’ll miss CPU breakdown as New Relic only shows response times.


Dynatrace is the vendor that started the APM revolution and is definitely the strongest horse in this race. Its feature set in terms of .NET is the most complete, offering code level monitoring (including CPU and wait times), end to end tracing, and user experience monitoring. As far as I can determine, it’s the only tool with a memory profiler for .NET and it also features IIS web request insight. It supports the entire application life cycle from development environments, to load testing, to production. As such it’s nearly perfect for DevOps. Due to its pricing structure and architecture it’s targeted more at the mid to enterprise markets. In terms of ease of use it’s catching up to competition with a new Web UI. It’s rather light on infrastructure monitoring on its own, but shows additional strength with optional Dynatrace synthetic and network monitoring components.


Ruxit is a new SaaS solution built by Dynatrace. It’s unique in that it unites application performance management and real user monitoring with infrastructure, cloud, and network monitoring into a single product. It is by far the easiest to install, literally taking 2 minutes. It features full end to end tracing, code level visibility down to the method level, SQL visibility, and RUM for .NET, Java, and other languages, with insight into IIS and Apache. Apart from this it has an analytics engine that delivers both technical and user experience insights. Its main advantages are its ease of use, web UI, fully automated root cause analysis, and frankly, amazing breadth. Its flexible consumption-based pricing scales from startups, cloud natives, and mid markets up to large web-scale deployments of tens of thousands of servers.


AppNeta’s TraceView takes a different approach to application performance management. It does support tracing across most major languages, including database statements and of course .NET. It visualizes things in charts and scatter plots. Even traces across multiple layers and applications are visualized in graphs. This has its advantages but takes some getting used to. Unfortunately, while TraceView does support .NET, it does not yet have code level visibility for it. This makes sense for AppNeta, which as a whole is more focused on large scale monitoring and has more of a network-centric background. For DevOps in .NET environments, however, it’s a bit lacking.


Foglight, originally owned by Quest and now owned by Dell, is a well-known application performance management solution. It is clearly meant for operations monitoring and tracks all web requests. It integrates infrastructure and application monitoring, end to end tracing, and code level visibility on .NET, among other things. It has the required depth, but it’s rather complex to set up and, as far as I could tell from my experience, generates alert storms. It takes a while to configure and get the data you need. Once properly set up though, you get a lot of insight into your .NET application. In a fast moving DevOps scenario, however, it might take too long to manually adapt to infrastructure changes.


AppDynamics is well known in the APM space. Its offering is quite complete and it features .NET monitoring, quite nice transaction flow tracing, user experience, and code level profiling capabilities. It is production capable, though code level visibility may be limited here to reduce overhead. Apart from these features though, AppDynamics has some weaknesses, mainly the lack of IIS request visibility and the fact that it only features wall clock time with no CPU breakdown. Its flash-based web UI and rather cumbersome agent configuration can also be counted as negatives. Compared to others it’s also lacking in terms of infrastructure monitoring. Its pricing structure definitely targets the mid market.


Manage Engine has traditionally focused on IT monitoring, but in recent years they added end user and application performance monitoring to their portfolio, called APM Insight. Manage Engine does give you metric level insight into .NET applications and transaction trace snapshots which give you code level stack traces and database interactions. However, it’s apparent that Manage Engine is a monitoring tool, and APM Insight doesn’t provide the level of depth one might be accustomed to from other APM tools and profilers.


JenniferSoft is a monitoring solution that provides nice real-time dashboarding and gives an overview of the topology of your environment. It enables users to see deviations in the speed of transactions with real time scatter charts and analysis of transactions. It provides “profiling” for IIS/.NET transactions, but only on single tiers and has no transaction tracing. Their strong suit is clearly cool dashboarding but not necessarily analytics. For example, they are the only vendor that features 3D animated dashboards.


Conclusion: There’s more buzz in the APM space than a Google search would reveal at first sight, and I did actually discover some cool vendors to target my needs; however, the field thins out pretty quickly when you dig for end-2-end visibility from code down to infrastructure, including RUM, any web service requests and deep SQL insights. And if you want to pair that with a nice, fluent, easy-to-use web UI and efficient analytics, there are actually not many left …


The “Next Big Thing” series wrap-up: How to rule them all?

What is it that remains for the 8th and last issue of the “Next Big Thing” blog post series: To “rule them all” (all the forces, disruptive challenges and game changing innovations) and keep services connected, operating, integrated, … to deliver value to the business.

A while ago, I came upon Jonathan Murray’s concept of the Composable Enterprise – a paradigm which essentially preaches fully decoupled infrastructure and applications as services for company IT. Whether the Composable Enterprise is an entirely new approach or just a pin-pointed translation of what is essential to businesses mastering digital transformation challenges doesn’t really matter.

What matters are the core concepts that Jonathan’s paradigm preaches. These are to

  • decouple the infrastructure
  • make data a service
  • decompose applications
  • and automate everything

Decouple the Infrastructure.

Rewind to my own application development and delivery times during the 1990s and 2000s: When we were ready to launch a new business application we would – as part of the rollout process – inform IT of the resources (servers, databases, connections, interface configurations) needed to run the thing. Today, large IT ecosystems sometimes still function that way, making them a slow and heavy-weight inhibitor of business agility. The change to incorporate here is two-fold: On the one hand, those responsible for infrastructure must understand that they need to deliver at the scale, time and demand of their business customers (which includes more uniform, more agile and more flexible – in terms of sourcing – delivery mechanisms). And on the other hand, application architects need to understand that it is no longer their architecture that defines IT needs, but that in turn their architecture needs to adapt to and adopt agile IT infrastructure resources from wherever they may be sourced. By following that pattern, CIOs will enable their IT landscapes to leverage not only more cloud-like infrastructure sourcing on-premise (thereby enabling private clouds) but they will also become capable of ubiquitously using ubiquitous resources following hybrid sourcing models.

Make Data a Service.

This isn’t about BigData-like services, really. It might be (in the long run). But this is essentially about where the properties and information of IT – of applications and services – really are located. Rewind again. This time only by 1 or 2 years. The second-to-last delivery framework that I and my team of gorgeous cloud aficionados created was still built around a central source of information – essentially a master data database. This simply was the logical framework architecture approach back then. Even only a few months back – when admittedly my then team (another awesome one) and I already knew that information needs to lie within the service – it was still less complex (hence: quicker) to construct our framework around such a central source of (service) wisdom. What the Composable Enterprise, though, rightly preaches is a complete shift of where information resides. Every single service which offers its capabilities to the IT world around it needs to provide a well-defined, easy to consume, transparently reachable interface to query and store any information relevant to the consumption of the service. Applications or other services using that service simply engage via that interface – not only to leverage the service’s capabilities but even more to store and retrieve data and information relevant to the service and the interaction with it. And there is no central database. In essence, there is no database at all. There is no need for any. When services inherently know what they manage, need and provide, all db-centric architecture for the sole benefit of the db as such becomes void.
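To make that a little less abstract: a service owning its own data would expose it roughly like this – a purely hypothetical endpoint and payload, sketched with curl:

# Query the service for everything it knows about one of its entities ...
curl -s https://billing.example.com/api/v1/invoices/4711

# ... and store information back through the very same interface --
# no central database is touched on either side.
curl -s -X PUT https://billing.example.com/api/v1/invoices/4711 \
     -H "Content-Type: application/json" \
     -d '{"status": "paid"}'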

Decompose Applications.

The aforementioned leads one way into the decomposition pattern. More important, however, is to think more thoroughly about what a single business-related activity – a business process – really needs in terms of application support. And in turn, what the applications providing this support to the business precisely need to be capable of. Decomposing applications means identifying useful service entities which follow the above patterns and offer certain functionality in an atomic kind of way via well-defined interfaces (APIs) to the outside world, thereby creating an application landscape which delivers at scale, time, demand, … just by being composed through service orchestration in the right – the needed – way. This is the end of huge monolithic ERP systems which claim to offer everything a business needs (you just need to customize them rightly). This is the beginning of light-weight services which rapidly adapt to changing underlying infrastructures and can be consumed not only for the benefit of the business owning them but – through orchestration – form whole new business process support systems for cross-company integration along new digitalized business models.

Automate Everything.

So, eventually we’ve arrived at the heart of how to breathe life into an IT which supports businesses in their digital transformation challenge.

Let me walk you through one final example emphasizing the importance of facing all these disruptive challenges openly: An Austrian bank of high reputation (and respectable success in the market) gave a talk at the Pioneers about how they discovered that they are actually not a good bank anymore, how they discovered that – in some years’ time – they’d not be able to live up to the market challenges and customers’ demands anymore. What they discovered was simply that within some years they would lose customers just because of their inability to offer a user experience integrated with the mobile and social demands of today’s generations. What they did in turn was to found a development hub within their IT unit, solely focussing on creating a new app-based ecosystem around their offerings in order to deliver an innovative, modern, digital experience to their bank account holders.

Some time prior to the Pioneers, I had received a text that “my” bank (yes, I am one of their customers) now offers a currency exchange app through which I can simply order the amount of currency needed and receive a confirmation once it’s ready to be handed to me in the nearest branch office. And some days after the Pioneers I received an eMail that a new “virtual bank servant” would be ready as an app on the net to serve all my account-related needs. Needless to say that a few moments later I was in and that the experience was just perfect, even though they follow an “early validation” policy with their new developments, accepting possible errors and flaws for the benefit of reduced time to market and more accurate customer feedback.

Now, for a moment imagine just a few of the important patterns behind this approach:

  • System maintenance and keeping-the-lights-on IT management
  • Flexible scaling of infrastructures
  • Core banking applications and services delivering the relevant information to the customer facing apps
  • App deployment on a regular – maybe a daily – basis
  • Integration of third-party service information
  • Data and information collection and aggregation for the benefit of enhanced customer behaviour insight
  • Provision of information to social platforms (to influence customer decisions)
  • Monitoring and dashboards (customer-facing as well as internally to business and IT leaders)
  • Risk mitigation
  • … (I could probably go on for hours)

All of the above capabilities can – and shall – be automated to a great extent. And this is precisely what the “automate everything” pattern is about.
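As a small illustration of what “automate everything” can mean for just one item on the list above – flexible scaling – here is a hedged sketch of a keeping-the-lights-on task expressed as code rather than a manual runbook; the metric source and scaling endpoint are hypothetical placeholders.

    # Sketch: one routine operations task (scaling on load) fully automated.
    # The monitoring and platform endpoints are hypothetical.
    import time
    import requests

    METRICS_URL = "http://monitoring.local/api/metrics/frontend/cpu"
    SCALE_URL = "http://platform.local/api/services/frontend/scale"

    def autoscale_once(threshold: float = 0.75, step: int = 2) -> None:
        cpu = requests.get(METRICS_URL, timeout=5).json()["value"]
        if cpu > threshold:
            requests.post(SCALE_URL, json={"add_instances": step}, timeout=5)

    if __name__ == "__main__":
        while True:              # in practice a scheduled job, not an endless loop
            autoscale_once()
            time.sleep(60)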

Conclusion

There is a huge business shift going on. Software was a driver for growth back in the ’80s and ’90s, had its downturn during and after the dot-com age, and now enters an era of being ubiquitously demanded.

Through the innovative possibilities of combining existing mobile, social and data technologies, through the merging of physical and digital worlds and through the tremendously rapid invention of new thing-based daily-life support, businesses of all kinds will face the need for software – even if they have not felt that need so far.

The Composable Enterprise – or whatever one wants to call a paradigm of loosely coupled services being orchestrated through well-defined, transparently consumable interfaces – is a way for businesses to accommodate this challenge more rapidly. Automating daily routine – such as the aforementioned tasks – will be key for enterprises which want to stay on the edge of innovation in these fast-changing times.

Most important, though, is to stay focussed within the blurring worlds of things, humans and businesses – to keep the focus on innovation not for the benefit of innovation as such but for the benefit of growing the business behind it.

Innovation Architects will be the business angels of tomorrow – navigating their stakeholders through an ongoing revolution and supporting or driving the right decisions for implementing and orchestrating services in a business-focussed way.

 

{the feature image of this last “The Next Big Thing” series post shows a design by New Jersey and New York-based architects and designers Patricia Sabater, Christopher Booth and Aditya Chauan: The Sky Cloud Skyscraper – found on evolo.us/architecture}


The “Next Big Thing” series: From Social Network to #Social #Revolution

{this is No. 3 of the “Next Big Thing” blog post series, which discusses the revolution to come through ongoing innovation in IT and the challenges involved with it}

 

Along with Cloud patterns, the delivery of large engagement platforms – essentially web applications architected specifically to serve a vast number of simultaneous accesses and a huge stream of information – became possible.

If one takes a look back into the history of social media, these platforms evolved step by step from pure public-chat and tweet apps into full-blown areas for (group) communication, gaming, advertising and (sometimes) simply storing information – not by what they were originally intended to be (facebook’s core goal was – and still is, if you trust Zuckerberg – to connect everyone) but by how consumers (private or business ones) developed themselves within them and how they matured their usage patterns.

However, there is a “meta level” beyond the obvious: observing youth and their approach to using the technology surrounding them might lead one to think that those guys have completely forgotten about communication and engagement. I trust the opposite is the case. When I talk to my kids, I learn that they read everything, absorb everything, pick up news and information much faster and consume many different channels. The only thing is: they do not react if it doesn’t touch them. And that pattern applies not only to advertisement-backed social media feeds but also – and maybe foremost – to direct 1:1 or group conversations. And this is why I believe that the social aspect within the Nexus of Forces will have a much stronger impact than we currently notice.

I tend to claim that a social revolution is approaching, because – together with the other forces – social media will become the integrative middleware between what we want to consume, what businesses want to drive us to consume, and how we consume it. No advertising phone calls anymore, no spamming in our mailboxes (hurray!), but a social feed of information which is far better suited to create the impression of personal engagement while in truth being just an efficient aggregation and combination of data that we have all produced ourselves earlier.

Are businesses ready for that revolution? Can they adapt their marketing strategies to leverage those vast new possibilities? Orchestrating services and data in order to feed social platforms with what is considered relevant to a certain enterprise’s customers will become a core IT capability for anyone who wants to be a player of relevance in the social revolution.

 

{No. 4 of this blog post series talks about the challenges of the “mobile everywhere” culture – soon to come – stay tuned}

feature image found at AFAO talks (http://afaotalks.blogspot.com.au/2012/07/going-social_20.html)


The Next Big Thing (the even looonger End of the Cloud)

With the Pioneers still in the back of my mind, with all the startups’ ideas presented there, with predictions of some 40 billion connected “things” by 2020 (source: Gartner) and all the many buzzwords around in these areas, I am even more convinced that “The Cloud” as a discussable topic – as a matter that needs any kind of consideration whatsoever – is really at its end.

In case you read the writeup of one of my keynotes, you may recall the common thread running through it: the early concepts of Cloud Computing have come to a mere end, because those concepts have matured so much and entered businesses and the internet so deeply that we can safely claim Cloud to be ubiquitous. It is just there – just as the Internet has been for years now.

So, what’s next? BigData? Social Revolution? Mobile Everywhere? All of that and any combination?

Here comes a series of posts discussing these topics and beyond. It will offer some clarifying definitions and delineations.

The first parts will cover what to expect from the bond of data and analytics, mobility and social media. The second half will discuss the huge transformation challenges involved in the digitalization of business. The concluding part is about how IT has to change in order to support businesses rightly in these challenging and ever-changing times.

 

So let’s begin with

The Nexus of Forces

{images: “Nexus of Forces” and “The Nexus of Forces from another perspective”}

 

I like this paradigm, originally postulated by Gartner some time ago (I first read about it in the “Hype Cycle for Emerging Technologies 2014”). It describes the bonding of Cloud Computing with BigData, Social and Mobile.

Personally – unsurprisingly – I would disagree with Gartner on seeing “Cloud” as one of the 4 forces; rather, my claim would be that Cloud Computing is the underlying basis for everything else. Why? Because any ecosystem supporting the other 3 forces (mobile, social, data) is inherently built along the 5 essential characteristics of Cloud, which still define whether a particular service falls within or outside the definition:

  • On demand self-service: make things available when they’re needed
  • Broad network access: ensure delivery of the service through ubiquitous networking on high bandwidth
  • Resource pooling: Manage resources consumed by the service efficiently for sharing between service tenants
  • Rapid elasticity: Enable the service to scale up and down based on the demand of consumers
  • Measured: Offer utmost transparency about what service consumers have been using over time and match it clearly and transparently against what they’re charged (a small metering sketch follows this list).
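The sketch below illustrates that last point: per-tenant usage is metered and can be matched transparently against charges. The record structure and the assumed prices are purely illustrative.

    # Sketch of the "measured" characteristic: record per-tenant usage and
    # produce a transparent statement. Prices and record fields are assumptions.
    from collections import defaultdict
    from datetime import datetime, timezone

    PRICE_PER_UNIT = {"cpu_hours": 0.05, "gb_stored": 0.02}  # assumed rates

    class UsageMeter:
        def __init__(self):
            self._records = defaultdict(list)

        def record(self, tenant: str, metric: str, quantity: float) -> None:
            self._records[tenant].append({
                "metric": metric,
                "quantity": quantity,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            })

        def statement(self, tenant: str) -> dict:
            """Every metered record plus the resulting charge - full transparency."""
            records = self._records[tenant]
            total = sum(PRICE_PER_UNIT.get(r["metric"], 0.0) * r["quantity"]
                        for r in records)
            return {"tenant": tenant, "records": records, "total_charge": round(total, 2)}

    meter = UsageMeter()
    meter.record("tenant-a", "cpu_hours", 12.0)
    meter.record("tenant-a", "gb_stored", 100.0)
    print(meter.statement("tenant-a"))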

Hence, when continuing to discuss the Nexus of Forces, I will stick with the three of them and will not question The Cloud’s role in it (sic! “It’s the End of the Cloud as we know it” ;))

 

{No. 2 of this series discusses definition and challenges related to data and analytics}

 

Update: feature image added (found at http://forcelinkpr.net/?p=9612)


I wasn’t cheating (It’s still the End of the Cloud)

I swear, I hadn’t known any of what I’m going to share with you in this post when I wrote “The End of the Cloud (as we know it)“. If I had, I probably wouldn’t have written it.

Anyway, this is about Jonathan Murray, CTO of Warner Music Group (@Adamalthus), defining something that isn’t even mentioned yet by Gartner and the like (google it for proof; even bing it; the result remains the same). And – hey! – he did that back in April 2013.

What he’s essentially saying is that in order to enable businesses to compete in an ever more demanding, more dynamic, more agile era of interacting services and capabilities – an era where things are connected to the net, to systems, to businesses, to humans, an era where everything is going to be digitalized – IT needs to disruptively change not only the way it is delivered to businesses but the way it is built.

What Jonathan calls for (and, according to his own account, is realizing within Warner) are “IT Factories”. No silos anymore. No monolithic application engineering and delivery. A service-oriented component architecture for everything IT needs to deliver and connect with. And the term “Cloud” isn’t even mentioned – because we’ve landed beyond even discussing it. The models are clear. The time to adopt them for the benefit of a new – an agile – IT is now!

What Jonathan calls for is “The Composable Enterprise” – essentially consisting of 4 core patterns (a minimal sketch of the first one follows the list):

  • Decouple the Infrastructure
  • Make Data a Service (awesome, disruptive quote from Jonathan’s September 2013 “CloudFoundry Conference” talk: “Stored procedures need to be strangled at birth!”)
  • Decompose Applications
  • and finally, of course: Automate Everything
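To give that first pattern a concrete shape, here is a hedged sketch of infrastructure decoupling: the service binds to abstract endpoints resolved from its environment, so the same artefact runs unchanged on any infrastructure. The variable names are illustrative assumptions, not part of Jonathan’s concept.

    # Sketch of "Decouple the Infrastructure": infrastructure specifics arrive
    # as configuration, never as code. Variable names are illustrative.
    import os

    def load_bindings() -> dict:
        """Resolve all infrastructure bindings from the environment."""
        return {
            "data_api": os.environ.get("DATA_API_URL", "http://localhost:8080/data"),
            "queue": os.environ.get("QUEUE_URL", "amqp://localhost:5672/"),
            "log_sink": os.environ.get("LOG_SINK_URL", "http://localhost:9200/logs"),
        }

    if __name__ == "__main__":
        bindings = load_bindings()
        print(f"service starting against: {bindings}")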

Read the concept in his blog post on adamalthus.com.

And here are two recordings of talks given by Jonathan – both really worth watching:

And then, let’s go build it …!

 


Is it self-serviced? Is it APIed?

With VMware and Microsoft now really entering the Private (and Hybrid) Cloud market, we’ll see an increased drive towards software-controlled DCs and IT-aaS. No surprise, hence, that many legacy DC providers and outsourcers seek their place on that battlefield with their own offerings.

Their way of doing so is to embrace this new technology “Cloud”, which is brought onto a more comprehensive, digestible level with vCloud, SystemCenter and the like. The next steps on that path normally look like this:

  • Establish partnerships with those vendors – mainly to get their urgently needed discounts
  • Create a reference architecture which looks surprisingly similar to the diagrams provided by the above vendors
  • Re-organize their network, storage and server teams to jointly create the new data center
  • Create a GTM strategy which uses “Cloud” at least once on every slide
{image: Private Cloud Reference Architecture according to Microsoft}

Now, what’s their offering?

Ultimately it is a virtualized datacenter infrastructure, supported by the resource pooling models that the mentioned technologies provide. Who’s using this new datacenter infrastructure? The provider himself. This is by no means a wrong approach – but it is not a full approach.

Let’s simply mirror it against the essential characteristics of cloud computing *):

  • broad network access: “broad” does not only mean that the (Cloud) DC is accessible from anywhere, but that the service is accessible from anywhere, with any device, in any function. Apart from the fact that connectivity into the DC is seldom changed from the provider’s earlier offerings, the newly established “Cloud” service is hardly ever accessible from anything other than a PC.
  • resource pooling: yes, that’s what vCloud and SystemCenter (and maybe others as well) really know how to do – if there are enough resources to pool.
  • rapid elasticity: yes, by embracing an infrastructure management framework like the above, rapid elasticity may be provided in the sense that pooled resources can be provisioned to consumers in an instant – again, if there are enough of them. The question is: Who’s provisioning them? And by which process? We’ll come back to that shortly.
  • measured: The point of a “measured” cloud service is not that the provider monitors his own performance. The point is that the consumer is offered full transparency about the provider’s compliance with the agreed SLA in terms of performance, availability, reliability and cost. Key here: “transparency” of metered, measured and monitored services and components.
  • on-demand self-service: BOOM! Full stop here. Most of the offerings which are called “Cloud” and are provided by legacy DC providers offer the cloud capabilities internally, within the provider’s DC. I tend to believe the slides about reference architectures, building blocks and service capabilities. But this functionality is not brought to the customer. The contracting is not changed. Provisioning takes place in a ticket-based manner through service desk personnel. Measurement is hidden. Rapidity is reduced. Elasticity and pooling are controlled by the provider. Guys, you’re not cloud. Period.

Why is this a problem?

Because the world has long moved on. Cloud consumers are not asking for a cloud-based delivery of services out of a legacy DC. Cloud consumers want to have it in their own hands. Cloud consumers expect to gain control over their resources without(!) paying for resource guarantees upfront (btw: guaranteeing a certain amount of resources to cloud consumers is rarely a way to offer higher service quality to the consumer, but rather a way for the provider to gain a higher level of control over resources).

What Cloud consumers will ask, as of now, is:

  • Can I self-service my IT when I run it with you?
  • Does it expose an external API that I can use to integrate?

Alex Williams predicts a hard time for “Cloud Washers” in this post on techcrunch. But what he predicts even more is consumers’ expectation of a new way of production: moving apps rather than VMs, a loosely coupled mesh of services, and the expectation to self-service all of this on demand. Overall, this goes far beyond self-service and a featured API as such. Eventually it aims at process-centric service (deployment) automation.
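In code, the two questions above boil down to something like the following hedged sketch: the consumer provisions capacity programmatically through the provider’s API, with no ticket and no service desk in between. The endpoint, authentication and payload are hypothetical.

    # Sketch: on-demand self-service through an exposed API.
    # Endpoint, token handling and payload are hypothetical assumptions.
    import requests

    PROVIDER_API = "https://provider.example.com/api/v1"

    def provision_environment(api_token: str, name: str, instances: int) -> str:
        response = requests.post(
            f"{PROVIDER_API}/environments",
            headers={"Authorization": f"Bearer {api_token}"},
            json={"name": name, "instances": instances, "size": "medium"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["environment_id"]  # ready to integrate - no ticket raised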

“Look out cloudwashers”, he says, “it’s just going to get worse. This shake up is happening faster than anyone realized.”

I’d like to add a wish: Cloudwashers, when embracing an inherently great technology for building your cloud offering, build it fully. Embrace and offer all of the essential characteristics!

 

*) Cloud Computing essential characteristics according to NIST Special Publication 800-145 „The NIST Definition of Cloud Computing“, Recommendations of the National Institute of Standards and Technology


3rd Annual Cloud Survey Results of NorthBridge+GigaOM

The following slides, shared on slideshare, provide the results of the “2013 Future of Cloud Computing Survey” by Northbridge and GigaOM.

Why resharing here?

Because I believe this is one of the best condensed presentations of a trend survey in a long time – easy to take in and free of boring prose (just a few context-fitting statements).

Recommended read!

We are still at the beginning!

http://de.slideshare.net/mjskok/2013-future-of-cloud-computing-3rd-annual-survey-results
