The Smile-IT Blog » Blog Archives

Tag Archives: analytics

What is Social Media still worth?

I'm pretty pissed off by the recent rumours (let's call them that) about the social media platform "twitter" introducing an algorithmic timeline. (Wanna know more about the matter? Either follow the #RIPtwitter hashtag or read this great and insightful article by @setlinger to learn about the possible impact.)

So why am I annoyed? Here's a little personal history:

When I joined twitter and facebook back in 2009, things in both networks were pretty straightforward: your feed filled with updates from the people you followed, you could watch the things you liked more closely and just skim quickly over the boring stuff. Step by step, facebook started to tailor my feed. It sort of commenced when I noticed that they kept changing my feed setting to (I don't remember the exact wording) "trending stuff first" and I had to manually set it back to "chronological" over and over again. At some point that setting vanished entirely and my feed remained tailored to – well – what, actually?

Did I back out then? No! Because by that time I had discovered the advertising possibilities of facebook. Today I run about 6 different pages (sometimes I add one, such as the recent "I AM ELEVEN – Austrian Premiere" page, to promote a cause I am committed to; these go offline again some time later). I am co-administrator of a page with more than 37,000 followers (CISV International), and it is totally interesting to observe the effects you achieve with one or the other post, comment, engagement, … whatever. Beautiful things happen from time to time. Personally, in my own feed, I mainly share things randomly (you wouldn't know me if you only knew my feed); sometimes it just feels like fun to share an update. Honestly, I've fully given up thinking that any real engagement is possible through this kind of online encounter – it's just fun.

Twitter is a bit different: I like getting in touch with people I don't really know. Funny, interesting, insightful exchanges of information happen within 140 characters. And it gives me food for thought job-wise as much as cause-wise (#CISV, #PeaceOneDay, … and more). I came upon the recently introduced "While you were away" section on my mobile, shook my head about it and constantly skipped it, never really bothering to find out where to switch it off (and my answer to twitter's recurring question "Did you like this?" was always: "NO").

And then there was the “algorithmic timeline” announcement!

So, why is this utter bullshit?

I’ll give you three simple answers from my facebook experience:

  • Some weeks back – in November, right after the Paris attacks – I was responsible for posting an update to our CISV International facebook followers. A tough thing, to find the right words. Obviously I didn't get it too wrong, as the reported "reach" was around 150k users in the end. Think about that: a page with some 37k followers reaches some 150k users with one post. I was happy that it was that many, but thinking twice about it: how can I really know the real impact of that? In truth, that counter tells me simply nothing.
facebook post on "CISV International" reaching nearly 150k users

  • Some days ago I spent a few bucks to boost a post from the "I AM ELEVEN – Austria" page. In the end it reported a reach of 1.8k! "Likes", however, came mostly from users who – according to facebook – don't even live in Vienna, even though I had tailored the ad to "Vienna+20km". One may argue that even the best algorithm cannot control friends-of-friends engagement – and I do value that argument; but what's the boosting worth, then, if I do not get a single additional person into the cinema to see the film?
facebook I AM ELEVEN boosted post

  • I have recently been flooded with constant appearances of "Secret Escape" ads. I've never clicked one (and won't add a link here – I don't wanna add to their view count); I'm not interested in it; facebook still keeps showing me which of my friends like it and adds the ad to my feed more than once every day. Annoying. And to stop it I'd have to interact with the ad – which I do not want to. There simply is no easy way of opting out of it …

Thinking of all that – and more – what would I personally gain from an algorithmic timeline on twitter, if facebook hasn't really helped me in my endeavours anymore recently? Nothing, I think. I simply don't have the kind of money to feed the tentacles of the guys having such ideas, so that their ideas would ever become worthwhile for my business or causes. Period.

But as those tentacles rarely listen to users like me, but rather to potent advertisers (like "Secret Escape", e.g.), the only alternative will probably again be to opt out:

Twitter: NO to "best tweets"

 

Having recently read "The Circle", opting out seems like an increasingly sensible alternative anyway …

 


Evaluation Report – Monitoring Comparison: newRelic vs. Ruxit

I've meanwhile worked on cloud computing frameworks with a couple of companies. DevOps-like processes are always a topic in these cooperations – even more so when it comes to monitoring and how to approach the matter in an innovative way.

As an example, I keep emphasizing Netflix's approach in these conversations: I very much like Netflix's philosophy of how to deploy, operate and continuously change environments and services. Netflix's different component teams have no clue about the activities of other component teams; their policy is that every team is itself responsible for ensuring its changes do not break anything in the overall system. Also, no one really knows in detail which servers, instances and services are up and running to serve requests. Servers and services are constantly and automatically re-instantiated, rebooted, added, removed, etc. That is a philosophy which makes DevOps real.

Clearly, traditional (SLA-fulfilment oriented) methods must fail when monitoring such a landscape. It simply isn't sufficient for a cloud-aware, continuous-delivery oriented monitoring system to just integrate a traditional on-premise monitoring solution like Nagios with AWS CloudWatch. We know that this works fine (a sketch of such a bridge follows the list below), but it does not ease the cumbersome work of NOCs or application operators to quickly identify

  1. the impact of a certain alert, hence its priority for ongoing operations and
  2. the root cause for a possible error
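To make that point concrete, here is a minimal sketch (in Python with boto3; the instance id, region and thresholds are purely illustrative) of what such a Nagios/CloudWatch bridge typically boils down to: a check that dutifully turns a CloudWatch metric into an OK/WARNING/CRITICAL exit code, yet says nothing about impact or root cause.

# Minimal sketch of a Nagios-style check fed from CloudWatch.
# Assumptions: boto3 is configured with valid credentials; the instance id,
# region and thresholds are illustrative; exit codes 0/1/2/3 follow the
# usual Nagios plugin convention.
import sys
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

def check_cpu(instance_id, warn=70.0, crit=90.0):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=datetime.utcnow() - timedelta(minutes=10),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    datapoints = stats["Datapoints"]
    if not datapoints:
        print("UNKNOWN - no CloudWatch datapoints")
        return 3
    avg = max(dp["Average"] for dp in datapoints)
    if avg >= crit:
        print(f"CRITICAL - CPU at {avg:.1f}%")
        return 2
    if avg >= warn:
        print(f"WARNING - CPU at {avg:.1f}%")
        return 1
    print(f"OK - CPU at {avg:.1f}%")
    return 0

if __name__ == "__main__":
    sys.exit(check_cpu("i-0123456789abcdef0"))  # illustrative instance id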

After discussing these facts for the umpteenth time and (again) being confronted with the same old arguments about the importance of ubiquitous information on every single event within a system (for the sake of proving SLA compliance), I thought I'd give it a try and dig deeper myself to find out whether these arguments are valid (and I am therefore wrong) or whether there is a possibility to substantially reduce event occurrence and let IT personnel follow up only on the really important stuff. Efficiently.

At this stage, it is time for a little

DISCLAIMER: I am not a monitoring or APM expert; neither am I a .NET programming expert. Both skill areas are fairly familiar to me, but in this case I intentionally approached the matter from a business perspective – as non-technically as possible.

The Preps

In autumn last year I had the chance to get a little insight into two pure-SaaS monitoring products: Ruxit and newRelic. Ruxit back then was – well – a baby: early beta, no real functionality, but a well-received glimpse of what the guys were up to. newRelic was already pretty strong and I very much liked their light and quick way of getting started.

As that project got stuck back then and I ended my evaluations in the middle of gaining insight, I thought getting back to it could be a good starting point (especially as I wasn't able to find any other monitoring product going the SaaS path that radically, i.e. not even thinking of offering an on-premise option; and as a cloud "aficionado" I was very keen on seeing a full-stack SaaS approach). So the product scope was set pretty straight.

This time, the investigation should answer questions in a somewhat more structured way:

  1. How easy is it to kick off monitoring within one system?
  2. How easy is it to combine multiple systems (on-premise and cloud) within one easy-to-digest overview?
  3. What’s alerted and why?
  4. What steps are needed in order to add APM to a system already monitored?
  5. How are events correlated and how is that correlation displayed?
  6. The "need to know" principle: how does alert appearance relate to actual impact?

The setup I used was fairly simple (and reduced – as I didn't want to bother our customer's workloads in any of their datacenters): I had an old t1.micro instance still lurking around in my AWS account; that is 1 vCPU with 613MB RAM – far too small to really perform the stuff I wanted it to do. I intentionally decided to use that one for my tests. Later, the following was added to the overall setup:

  • An RDS SQL Server database (which I used for the application I wanted to add to the environment at a later stage)
  • IIS 6 (as available within the Server image that my EC2 instance is using)
  • .NET framework 4
  • Some .NET sample application (some “Contoso” app; deployed directly from within Visual Studio – no changes to the defaults)

Immediate Observations

Two things caught my eye only hours (if not minutes) after commencing my activities in newRelic and Ruxit, but let's start with the basics first.

Setting up accounts is easy and straightforward in both systems; both truly follow the cloud-affine "on-demand" characteristic. newRelic creates a free "Pro" trial account which is converted into a lifetime free account if not upgraded to "paid" after 14 days. Ruxit sets up a free account for their product but takes a totally different approach, more closely resembling consumption-based pricing: you get 1,000 hours of APM and 50k user visits for free.

Both systems follow pretty much the same path after an account has been created:

  • In the best case, access your account from within the system you want to monitor (or deploy the downloaded installer package – see below – to the target system manually)
  • Download the appropriate monitoring agent and run the installer. Done.

Both agents started to collect data immediately and the browser-based dashboards produced the first overview of my system within some minutes.

As a second step, I also installed the agents on my local client machine as I wanted to see how the dashboards display multiple systems – and here's a bummer with Ruxit: my antivirus scanner alerted me with a Win32.Evo-Gen suspicion:

Avast virus alert upon Ruxit agent install

This wasn't really a problem for the agent – it installed, operated properly and produced data; it was just a little confusing. In essence, the reason for this is fairly obvious: the agent uses a technique which is comparable to typical virus intrusion patterns, i.e. sticking its fingers deep into the system.

The second observation was newRelic's approach to implementing web browser remote checks, called "Synthetics". It was indeed astonishingly easy to add a URL to the system and let newRelic do their thing – seemingly from within AWS datacenters around the world. And especially with this, newRelic has a very compelling way of displaying the respective information on their Synthetics dashboard. Easy to digest and pretty comprehensive.

At the time I started my evaluation, Ruxit didn't offer that. Meanwhile they have added their beta for "Web Checks" to my account. Equally easy to set up, but lacking some of the richer UI features with respect to displaying the information. I am fairly sure this will be added soon. Hopefully. My take is that combining system monitoring or APM with insights into real user usage patterns is an essential part of efficiently correlating events.
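In essence, a synthetic web check is nothing more than a scripted request fired on a schedule, with status and timing recorded. A minimal sketch of what runs behind such a check (Python with the requests library; URL, interval and threshold are just examples, and real services obviously run this distributed from many locations):

# Minimal sketch of a synthetic web check: request a URL on a schedule,
# record HTTP status and response time. URL, interval and threshold are
# illustrative only.
import time

import requests

URL = "https://www.example.com/"
INTERVAL_SECONDS = 60
SLOW_THRESHOLD_SECONDS = 2.0

while True:
    try:
        response = requests.get(URL, timeout=10)
        elapsed = response.elapsed.total_seconds()
        status = "SLOW" if elapsed > SLOW_THRESHOLD_SECONDS else "OK"
        print(f"{status} {response.status_code} {elapsed:.2f}s {URL}")
    except requests.RequestException as exc:
        print(f"DOWN {URL}: {exc}")
    time.sleep(INTERVAL_SECONDS)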

Security

I always spend a second thought on security questions, hence I contemplated Ruxit's way of making sure that an agent really connects to the right tenant when being installed. With newRelic you're confronted with an extra step upon installation: they ask you to copy+paste a security key from your account page during the install procedure.

newRelic security key example

Ruxit doesn't do that. However, they're not really less secure; it's just that they pre-embed this key into the installer package that is downloaded, so they're a little more convenient. The following shows the msiexec command executed upon installation as well as its parameters, taken from the installer log (you can easily find that information after the .exe package unpacks into the system's temp folder):

@msiexec /i "%i_msi_dir%\%i_msi%" /L*v %install_log_file% SERVER="%i_server%" PROCESSHOOKING="%i_hooking%" TENANT="%i_tenant%" TENANT_TOKEN="%i_token%" %1 %2 %3 %4 %5 %6 %7 %8 %9 >con:
MSI (c) (5C:74) [13:35:21:458]: Command Line: SERVER=https://qvp18043.live.ruxit.com:443 PROCESSHOOKING=1 TENANT=qvp18043 TENANT_TOKEN=ABCdefGHI4JKLM5n CURRENTDIRECTORY=C:\Users\thome\Downloads CLIENTUILEVEL=0 CLIENTPROCESSID=43100

Alerting

After having applied the packages onto my Windows Server on EC2, things popped up quickly within the dashboards (note that both dashboard screenshots are from a later evaluation stage; the basic layout, however, was the very same at the beginning – I didn't change anything visually down the road).

newRelic server monitoring dashboard showing the limits of my too-small instance 🙂

The Ruxit dashboard on the same server; with a clear hint on a memory problem 🙂

What instantly struck me here was the simplicity of Ruxit's server monitoring information. It seemed sort of "thin" on information (if you want a whole lot of info right from the start, you'll probably prefer newRelic's dashboard). Things changed, though, when my server went into memory saturation (which it promptly does whenever accessed via RDP). At that stage, newRelic started firing emails alerting me of the problem. Also, the dashboard went red. Ruxit, in turn, did nothing really. Well, of course it displayed the problem once I logged into the dashboard again and had a look at my server's monitoring data; but no alert was triggered, no email, no red flag. Nothing.

If you're into SLA fulfilment, then that is precisely the moment to become concerned. On second thought, however, I figured that actually no one was really bothered by the problem. There was no real user interaction going on on that server instance. I hadn't even really added an app yet. Hence: why bother?

So, the next step was to figure out why newRelic went so crazy about this. It turned out that with newRelic, every newly added server gets assigned a default server policy.

newRelic's monitoring policy configuration

I could turn off that policy easily (editing also appears straightforward; I didn't try). However, having to figure out for every server I add which alerts are actually important because they might impact someone or something seemed less "need to know" than I intended to have.

After having switched off the policy, newRelic went silent.

BTW, alerting via email is not set up by default in Ruxit; it can be added as a so-called "Integration" within the tenant's settings area.

AWS Monitoring

As said above, I was keen to know how both systems integrate multiple monitoring sources into their overviews. My idea was to add my AWS tenant to be monitored (this resulted from the previously mentioned customer conversations I had had earlier; that customer’s utmost concern was to add AWS to their monitoring overview – which in their case was Nagios, as said).

A nice thing with Ruxit is that they fill their dashboard with little demo tiles, which easily lead you into their capabilities without having set up anything yet (the example below shows the database demo tile).

This is one of the demo tiles in Ruxit's dashboard – leading to DB monitoring in this case

I found an AWS demo tile (similar to the example above), clicked, and ended up with a light explanation of how to add an AWS environment to my monitoring ecosystem (https://help.ruxit.com/pages/viewpage.action?pageId=9994248). They offer key-based or role-based access to your AWS tenant. Basically, they need you to do these 3 steps (a rough sketch of the key-based variant follows the list):

  1. Create either a role or a user (for use of access key based connection)
  2. Apply the respective AWS policy to that role/user
  3. Create a new cloud monitoring instance within Ruxit and connect it to that newly created AWS resource from step 1
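For illustration, the key-based variant roughly boils down to the following sketch (Python with boto3; the user name is made up and the AWS-managed ReadOnlyAccess policy is only a stand-in, the exact policy Ruxit expects is listed on the help page linked above):

# Rough sketch of steps 1 and 2 of the key-based variant: create an IAM user
# and attach a read-only monitoring policy. The user name is illustrative;
# ReadOnlyAccess is a stand-in for the policy Ruxit actually documents.
import boto3

iam = boto3.client("iam")

user_name = "ruxit-monitoring"  # illustrative name
iam.create_user(UserName=user_name)

iam.attach_user_policy(
    UserName=user_name,
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # stand-in policy
)

# Step 3 happens in the Ruxit UI: paste the access key created here.
key = iam.create_access_key(UserName=user_name)["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])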

Right after executing those steps, the aforementioned demo tile changed into displaying real data and my AWS resources showed up (note that the example below already contains RDS, which I added at a later stage; the cool thing here was that it was added fully unattended as soon as I had created it in AWS).

Ruxit AWS monitoring overview

Ruxit essentially monitors everything within AWS which you can put a CloudWatch metric on – which is a fair lot, indeed.
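That "fair lot" is easy to verify yourself: CloudWatch exposes its metric catalogue per namespace, so a few lines of boto3 (the namespaces listed here are just examples) show what such an integration can pick up:

# Quick look at what "everything with a CloudWatch metric" amounts to:
# list the distinct metrics CloudWatch exposes per namespace.
# The namespaces shown are examples; an account typically offers more.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

for namespace in ("AWS/EC2", "AWS/RDS", "AWS/EBS", "AWS/ELB"):
    paginator = cloudwatch.get_paginator("list_metrics")
    names = set()
    for page in paginator.paginate(Namespace=namespace):
        names.update(metric["MetricName"] for metric in page["Metrics"])
    print(f"{namespace}: {len(names)} distinct metrics")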

So, the next step clearly was to seek the same capability within newRelic. As far as I could work out, newRelic's approach here is to offer plugins – and newRelic's plugin ecosystem is vast. That may mean there's a whole lot of possibilities for integrating monitoring into the respective IT landscape (whatever it may be); however, one may consider the process of adding plugin after plugin (until the whole landscape is covered) a bit cumbersome. Here's a list of AWS plugins within newRelic:

newRelic plugins for AWS

Add APM

Adding APM to my monitoring ecosystem was probably the most interesting experience in this whole test: in preparation for the intended result (i.e. analysing data about a web application's performance under real user interaction) I added IIS to my server and an RDS database to my AWS account (as mentioned before).

The more interesting fact, though, was that after having finalized the IIS installation, Ruxit instantly showed the IIS services in their “Smartscape” view (more on that a little later). I didn’t have to change anything in my Ruxit environment.

newRelic’s approach is a little different here. The below screenshot shows their APM start page with .NET selected.

newRelic APM start page with .NET selected

After having confirmed each selection which popped up step by step, I was presented with a download link for another agent package which I had to apply to my server.

The interesting thing, though, was that still nothing showed up. No services or additional information on any accessible apps. That is logical in a way, as I didn't yet have anything published on that server which really resembled an application. The only thing accessible from the outside was the IIS default web site (just showing the IIS logo).

So, essentially the difference here is that with newRelic you get system monitoring with a system monitoring agent, and by means of an application monitoring agent you can add monitoring of precisely the type of application the agent is intended for.

I didn’t dig further yet (that may be subject for another article), but it seems that with Ruxit I can have monitoring for anything going on on a server by means of just one install package (maybe one more explanation for the aforementioned virus scan alert).

However, after having published my .NET application, everything was fine again in both systems – and the dashboards went red instantly as the server went into CPU saturation due to its weakness (as intended ;)).
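A dashboard like that is easy to reproduce on such an undersized instance; a trivial load generator along the following lines is all it takes (Python with requests and threads; the URL placeholder, thread count and duration are illustrative, not what I actually used):

# Trivial load generator of the kind that drives a weak instance into CPU
# saturation. URL (placeholder host), thread count and duration are
# illustrative only.
import threading
import time

import requests

URL = "http://my-sample-app.example.com/"  # placeholder for the test app URL
THREADS = 20
DURATION_SECONDS = 300

def hammer(stop_at):
    while time.time() < stop_at:
        try:
            requests.get(URL, timeout=10)
        except requests.RequestException:
            pass  # errors are expected once the server is saturated

stop_at = time.time() + DURATION_SECONDS
workers = [threading.Thread(target=hammer, args=(stop_at,)) for _ in range(THREADS)]
for worker in workers:
    worker.start()
for worker in workers:
    worker.join()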

Smartscape – Overview

So, the final question to answer was: what do the dashboards show, and how do they ease (root cause) analysis?

As soon as the app was up and running and web requests started to roll in, newRelic displayed everything there is to know about the application's performance. Particularly nice is the out-of-the-box combination of APM data with browser request data within the first and second menu items (either switch between the two by clicking the menu or use the links within the diagrams displayed).

newRelic APM dashboard

The difficulty with newRelic was to discover the essence of the web application's problem. Transaction and front-end code performance was displayed in every detail, but I knew (from my configuration) that the problem of slow page loads – as displayed – lay in the general weakness of my web server.

And that is basically where Ruxit’s smartscape tile in their dashboard made the essential difference. The below screenshot shows a problem within my web application as initially displayed in Ruxit’s smartscape view:

Ruxit's smartscape view showing a problem in my application

From this view it was obvious that the problem lay either within the application itself or within the server as such. A click on the server not only reveals the path to the dependent web application but also other possibly impacted services (obviously without end user impact, as otherwise there would be an alert on them, too).

Ruxit smartscape with dependencies between servers, services, apps

And digging into the server’s details revealed the problem (CPU saturation, unsurprisingly).

Ruxit revealing CPU saturation as a root cause

Still, the number of dashboard alerts was pretty small. While I got 6 emails from newRelic telling me about the problem on that server, I got only 2 from Ruxit: one telling me about the web app's weak response and another about CPU saturation.

The next step, hence, would be to scale up the server (in my environment), or to scale out or implement an enhanced application architecture (in a realistic production scenario). But that's another story …

Bottom line

Event correlation and alerting on a “need to know” basis – at least for me – remains the right way to go.

This little test was done with just one server, one database, one web application (and a few other services). While newRelic's comprehensive approach to showing information is really compelling and perfectly serves the objective of complete SLA compliance reporting, Ruxit's "need to know" principle much better meets what I would expect from innovative cloud monitoring.

Considering Netflix's philosophy from the beginning of this article, innovative cloud monitoring basically translates into: every extra step is a burden. Every extra piece of information on events without impact means extra OPS effort. And every extra click needed to correlate different events to a probable common root cause critically lengthens MTTR.

A “need to know” monitoring approach while at the same time offering full stack visibility of correlated events is – for me – one step closer to comprehensive Cloud-ready monitoring and DevOps.

And Ruxit really seems to be “spot on” in that respect!

 


DevOps style performance monitoring for .NET

 

{{ this article was originally published on DevOps.com }}

 

Recently I began looking for an application performance management solution for .NET. My requirements are code level visibility, end to end request tracing, and infrastructure monitoring in a DevOps production setup.

DotTrace is clearly the most well-known tool for code level visibility in development setups, but it can’t be used in a 24×7 production setup. DotTrace also doesn’t do typical Ops monitoring.

Unfortunately a Google search didn’t return much in terms of a tool comparison for .NET production monitoring. So I decided to do some research on my own. Following is a short list of well-known tools in the APM space that support .NET. My focus is on finding an end-to-end solution and profiler-like visibility into transactions.

New Relic was the first to do APM SaaS, focused squarely on production with a complete offering. New Relic offers web request monitoring for .NET, Java, and more. It automatically shows a component-based breakdown of the most important requests. The breakdown is fairly intuitive to use and goes down to the SQL level. Code level visibility, at least for .NET, is achieved by manually starting and stopping sampling. This is fine for analyzing currently running applications, but makes analysis of past problems a challenge. New Relic's main advantages are its ease of use, intuitive UI, and a feature set that can help you quickly identify simple issues. Depth is New Relic's main weakness: as soon as you try to dig deeper into the data, you're stuck. This might be a minor point, but if you're used to working with a profiler, you'll miss a CPU breakdown, as New Relic only shows response times.

net-1-newrelic

Dynatrace is the vendor that started the APM revolution and is definitely the strongest horse in this race. Its feature set for .NET is the most complete, offering code level monitoring (including CPU and wait times), end to end tracing, and user experience monitoring. As far as I can determine, it's the only tool with a memory profiler for .NET, and it also features IIS web request insight. It supports the entire application life cycle from development environments, to load testing, to production. As such it's nearly perfect for DevOps. Due to its pricing structure and architecture it's targeted more at the mid-market to enterprise segments. In terms of ease of use it's catching up to the competition with a new web UI. It's rather light on infrastructure monitoring on its own, but shows additional strength with the optional Dynatrace synthetic and network monitoring components.

net-2-dynatrace

Ruxit is a new SaaS solution built by Dynatrace. It's unique in that it unites application performance management and real user monitoring with infrastructure, cloud, and network monitoring in a single product. It is by far the easiest to install – it literally takes 2 minutes. It features full end to end tracing, code level visibility down to the method level, SQL visibility, and RUM for .NET, Java, and other languages, with insight into IIS and Apache. Apart from this it has an analytics engine that delivers both technical and user experience insights. Its main advantages are its ease of use, web UI, fully automated root cause analysis, and, frankly, amazing breadth. Its flexible consumption-based pricing scales from startups, cloud natives, and mid markets up to large web-scale deployments of tens of thousands of servers.

net-3-ruxit

AppNeta's TraceView takes a different approach to application performance management. It does support tracing across most major languages, including database statements and of course .NET. It visualizes things in charts and scatter plots. Even traces across multiple layers and applications are visualized in graphs. This has its advantages but takes some getting used to. Unfortunately, while TraceView does support .NET, it does not yet offer code level visibility for it. This makes sense for AppNeta, which as a whole is more focused on large scale monitoring and has more of a network-centric background. For DevOps in .NET environments, however, it's a bit lacking.

net-4-TraceView

Foglight, originally owned by Quest and now owned by Dell, is a well-known application performance management solution. It is clearly meant for operations monitoring and tracks all web requests. It integrates infrastructure and application monitoring, end to end tracing, and code level visibility for .NET, among other things. It has the required depth, but it's rather complex to set up and, as far as I could tell from my experience, generates alert storms. It takes a while to configure and get the data you need. Once properly set up, though, you get a lot of insight into your .NET application. In a fast-moving DevOps scenario, however, manually adapting it to infrastructure changes might take too long.

net-5-foglight

AppDynamics is well known in the APM space. Its offering is quite complete and it features .NET monitoring, quite nice transaction flow tracing, user experience monitoring, and code level profiling capabilities. It is production capable, though code level visibility may be limited there to reduce overhead. Apart from these features, though, AppDynamics has some weaknesses, mainly the lack of IIS request visibility and the fact that it only shows wall clock time with no CPU breakdown. Its Flash-based web UI and rather cumbersome agent configuration can also be counted as negatives. Compared to others it's also lacking in terms of infrastructure monitoring. Its pricing structure definitely targets the mid market.

net-6-AppDynamics

ManageEngine has traditionally focused on IT monitoring, but in recent years they have added end user and application performance monitoring to their portfolio, called APM Insight. ManageEngine does give you metric level insight into .NET applications and transaction trace snapshots which provide code level stack traces and database interactions. However, it's apparent that ManageEngine is a monitoring tool, and APM Insight doesn't provide the level of depth one might be accustomed to from other APM tools and profilers.

net-7-ME

JenniferSoft is a monitoring solution that provides nice real-time dashboards and gives an overview of the topology of your environment. It enables users to see deviations in the speed of transactions with real-time scatter charts and transaction analysis. It provides "profiling" for IIS/.NET transactions, but only on single tiers, and has no transaction tracing. Their strong suit is clearly cool dashboarding, but not necessarily analytics. For example, they are the only vendor that features 3D animated dashboards.

net-8-JenniferSoft

Conclusion: There's more buzz in the APM space than a Google search would reveal at first sight, and I did discover some cool vendors to meet my needs; however, the field thins out pretty quickly when you dig for end-to-end visibility from code down to infrastructure, including RUM, web service requests and deep SQL insights. And if you want to pair that with a nice, fluent, easy-to-use web UI and efficient analytics, there are actually not many left …


The “Next Big Thing” series: #BigData

{this is No. 2 of the "Next Big Thing" blog post series, which discusses the revolution to come through ongoing innovation in IT and the challenges that come with it}

 

When working on a definition of what BigData really is, I discovered a good blog post by CloudVane from earlier this year. CloudVane nicely outlines why BigData as such is essentially a concept – not a technology, a pattern or an architecture. The term BigData summarizes the

  • legal
  • social
  • technology
  • application and
  • business

dimensions of the fact that – through applications being consumed from the internet, through us being constantly connected, through us sharing loads of content with our social worlds, … – a vast amount of information is generated and needs to be managed and used efficiently.

To begin with, the main challenge of the BigData concept was not technology but businesses' complete lack of vision of what to do with all the information gathered. Technology stacks and architecture weren't a problem for long – though they have, of course, matured over time as well. However, the biggest concern of businesses was (and sometimes still is) how to use the vast amount of data they suddenly became the masters of. Hence, a solid BigData strategy does not only need a clear understanding of how to collect and master data technically, but rather a vision of what to derive from it and how to add business value through it.

Clearly, technology does have a role in it. And IT leaders must back business strategists by striving for mastery of the evolving BigData ecosystems within their IT landscape. Besides becoming specialists in newly introduced BigData and analytics technology (Hadoop, Hive, Pig, Spark, …), this specifically means having an orchestration story ready that enables an enterprise's legacy IT to integrate with all those new services introduced through new data strategies. Automation and orchestration architecture will therefore become a core role within the IT organization in order to support businesses in their striving for data insight and value.

 

{No. 3 of this blog post series is about a social revolution to come}


The Next Big Thing (the even looonger End of the Cloud)

With the Pioneers still in the back of my mind, with all the startup ideas presented there, with predictions of some 40 billion connected "things" by 2020 (source: Gartner) and all the many buzzwords around in these areas, I am even more convinced that "The Cloud" as a discussable topic – as a matter that needs any kind of consideration whatsoever – is really at its end.

In case you have read the write-up of one of my keynotes, you may recall the common thread running through it, which stated a mere end to the early concepts of Cloud Computing: those concepts have matured so much and entered businesses and the internet so deeply that we can safely claim Cloud to be ubiquitous. It is just there. Just as the Internet has been for years now.

So, what’s next? BigData? Social Revolution? Mobile Everywhere? All of that and any combination?

Here comes a series of posts discussing these topics and beyond. It will offer some clarifying definitions and delineations.

The first parts will cover what to expect from the bond of data and analytics, mobility and social media. The second half will discuss the huge transformation challenges involved with the digitalization of business. The concluding part is about how IT has to change in order to properly support businesses in these challenging and ever-changing times.

 

So let’s begin with

The Nexus of Forces

The Nexus of Forces from another perspective

 

I like this paradigm that was originally postulated by Gartner some time ago (I read it first in the “Hype Cycle for Emerging Technologies 2014”). It describes the bonding of Cloud Computing with BigData, Social and Mobile.

Personally – unsurprisingly – I would disagree with Gartner about seeing "Cloud" as one of the 4 forces; rather, my claim would be that Cloud Computing is the underlying basis of everything else. Why? Because every ecosystem supporting the other 3 forces (mobile, social, data) inherently builds on the 5 essential characteristics of Cloud, which still define whether a particular service falls within or outside the definition:

  • On demand self-service: make things available when they’re needed
  • Broad network access: ensure delivery of the service through ubiquitous networking on high bandwidth
  • Resource pooling: Manage resources consumed by the service efficiently for sharing between service tenants
  • Rapid elasticity: Enable the service to scale up and down based on the demand of consumers
  • Measured service: Offer utmost transparency about what service consumers have been using over time and match it clearly and transparently with what they're charged.

Hence, when continuing to discuss the Nexus of Forces, I will keep it to the three of them and will not question The Cloud's role in it (sic! "It's the End of the Cloud as we know it" ;))

 

{No. 2 of this series discusses definition and challenges related to data and analytics}

 

Update: feature image added (found at http://forcelinkpr.net/?p=9612)


Why not Cloud?

Edward Snowden is causing us another headache about privacy and data protection. Even though he didn't even reveal anything truly disruptive. Those pretending surprise about the NSA investigating our most private data (which we share on the internet) have probably forgotten about the US's proclaimed intention to reveal and chase all terrorism in the world (by whatever means it takes). Folks – one thing upfront: the Patriot Act isn't new!

What obviously really hits us here is the fact that so far nobody really thought the capabilities and technical resources to do that investigation efficiently were available (i.e. Big Data analytics is not slideware anymore – trend followers out there: face it!).

And what's the consequence? A new wave of discussion about whether moving data into "the cloud" is really wise. The wisest way to meet this discussion is to clear things up with a few facts.

So, spend a few minutes and think about the following questions, if you will:

Do you send email?

If you're in a company (or if you are a company), you may claim now that you send your email from an on-premise mail server. Good. Whom do you send mail to? Only to parties on other on-premise mail servers? Encrypted? End-to-end? I don't want to argue for moving your mail server into the cloud; it wouldn't make a difference for the question discussed. What I'm emphasizing is the fact that every attachment sent unencrypted through an unencrypted channel could be listened to and caught by any interested party. Without any cloud provider being involved. And already 10 years ago.

Now, what is the real problem here?

The real problem is that the vast majority of email senders don't give a shit about which channels their information traverses. In 90% of the cases this isn't even a problem, as nobody really cares about the 127th slide deck promising a better life when shared with 10 friends. Not even the NSA. The remaining 10% cause a problem if compromised. No matter whether sent through a cloud provider or your server in your own cellar.

Do you use a social network?

No? Then forget about this question!

If yes, whom do you communicate with in it? And about what? Personally, I don't know any relevant social network owned by a provider outside the US (or not co-located within US boundaries). I.e.: you're trapped if you use it. Except – well – except when we're talking about a company social network hosted behind your employer's firewall. You might be trapped in another way there, but that's a different story. Hence, it is fair to say that sharing information within a social network which could use its (your!) data for analysis, or open its data to be transferred and analysed by anybody else, means opening up trackable information about yourself and what you do.

But what is really the problem here?

It’s again the information you share, the information others share about you and the information others share with you without your permission or control; be it your home address and holiday absence or your latest invention you talked about with your friends over a beer. In other words: The real problem is not the cloud as such but what you share with it and how you (can) control openness and transparency (this could – by the way – be a problem with your company social media tool as well).

Do you exchange documents apart from mailing them around?

A company will surely have already introduced a mature, secure and company-compliant private dropbox service (what happens if it hasn't is subject for another post; well, actually it's rather boring to repeat what happens when employees need a dropbox and find dropbox.com blocked). But what if you intend to leverage cross-company collaboration? Without blowing up mailboxes or having documents lying around on public, unsecured mail servers? Rent a cloud collaboration service supposed to be more secure and reliable than any employee's uncontrolled dropbox account. Or get your IT to set up an extranet service to collaborate with your external partners (including a lengthy process for adding more collaborators to it).

Is this the real problem here?

In a way, yes. It is the move to cross-company collaboration that causes headaches for your IT. You could solve this by simply avoiding any open service supporting such collaboration, in which case you can easily skip cloud (and the collaboration itself, too; congratulations; case closed). Or by accepting how long it takes to add collaborators to your extranet. Or by using email (see above ;)).

Do you use a mobile phone or tablet PC?

If not, forget this paragraph, too.

If yes, you probably use apps which go beyond email, facebook or the weather forecast. A photo app, e.g., to share a quick scan of some document page, or some instant messaging tool (whatsapp?). I reckon you do know the vendor of the instant messaging app on your mobile phone, and he has transparently explained to you where your communication threads are stored and which investigation means he offers to international homeland security agencies. And of course these means are in line with your privacy expectations. They aren't? Well …

So, what’s the real problem here?

Flexibility. This is what poses the challenge. Fewer and fewer people are willing to trade mobility and work-life flexibility for lock-downs in the name of security. Which again essentially boils down to thinking about what to share, controlling the apps accordingly and managing the mobile devices so they can be locked down or wiped in case they are compromised.

So, face it:

Cloud is not black or white.

Moving data into the cloud isn’t a question of “like” or “dislike”. When servers, networks, the Internet, … evolved from mainframe computers (some time ago), IT bent into a path of openness. Today, something has not become less secure just because of the 3rd Industrial Revolution we are facing.

To claim that moving company data into the hands of a cloud provider means making it open to anybody is just as stubborn as stating that an email sent from (a) to (b) means making its content available to the whole internet. It is true for certain ways of transporting that mail. And for those ways it was already true decades ago. Not only now.

Hence, a mature cloud provider would make its service secure, confidential and (most of all) transparent. With that in mind there’s no real way of stopping the move.

P.S.:

Here’s a nice one about transport security and about it being compromised and how: http://news.cnet.com/8301-13578_3-57590389-38/how-web-mail-providers-leave-door-open-for-nsa-surveillance/
