The Smile-IT Blog » Blog Archives

Tag Archives: trend

What is Social Media still worth?

I’m pretty pissed by the recent rumours (let’s call them that) about the social media platform twitter introducing an algorithmic timeline. (Want to know more about the matter? Either follow the #RIPtwitter hashtag or read this great and insightful article by @setlinger to learn about the possible impact.)

So why am I annoyed? Here’s a little personal history:

When I joined twitter and facebook in 2009, things in both networks were pretty straightforward: your feed filled with updates from the people you followed; you could watch the things you liked more closely and quickly skim over the boring stuff. Step by step, facebook started to tailor my feed. It sort of commenced when I noticed that they kept changing my feed setting to (I don’t remember the exact wording) “trending stuff first” and I had to manually set it back to “chronological” over and over again. At some point that setting vanished entirely, and my feed remained tailored to – well – what, actually?

Did I back out then? No! Because by that time I had discovered the advertising possibilities of facebook. Today I run about 6 different pages (sometimes I add one, such as the recent “I AM ELEVEN – Austrian Premiere” page, to promote a cause I am committed to; these go offline again some time later). I am co-administrator of a page with more than 37,000 followers (CISV International), and it is totally interesting to observe the effects you achieve with one or another post, comment, engagement, … whatever. Beautiful things happen from time to time. Personally, in my own feed, I mainly share things randomly (you wouldn’t know me if you only knew my feed); sometimes it just feels like fun to share an update. Honestly, I’ve fully given up thinking that any real engagement is possible through these kinds of online encounters – it’s just fun.

Twitter is a bit different: I like getting in touch with people I don’t really know. Funny, interesting, insightful exchanges of information happen within 140 characters. And it gives me food for thought job-wise as well as cause-wise (#CISV, #PeaceOneDay, … and more). I came upon the recently introduced “While you were away” section on my mobile, shook my head at it and kept skipping it, never really bothering to find out where to switch it off (my answer to the recurring twitter question “Did you like this?”: always “NO”).

And then there was the “algorithmic timeline” announcement!

So, why is this utter bullshit?

I’ll give you three simple answers from my facebook experience:

  • Some weeks back – in November, right after the Paris attacks – I was responsible for posting an update to our CISV International facebook followers. It was tough to find the right words. Obviously I didn’t get it too wrong, as the reported “reach” was around 150k users in the end. Think about that: a page with some 37k followers reaches some 150k users with one post. I was happy it was that much, but thinking twice about it: how can I really know the real impact? In truth, that counter tells me simply nothing.
facebook post on “CISV International” reaching nearly 150k users

  • Some days ago, I spent a few bucks to boost a post from the “I AM ELEVEN – Austria” page. In the end it reported a reach of 1.8k! “Likes”, however, came mostly from users who – according to facebook – don’t even live in Vienna, although I had targeted the ad to “Vienna + 20km”. One may argue that even the best algorithm cannot control friends-of-friends engagement, and I do value that argument; but what is the boosting worth if it doesn’t get a single additional person into the cinema to see the film?
facebook I AM ELEVEN boosted post

  • Lately I am flooded with constant appearances of “Secret Escape” ads. I’ve never clicked one (and won’t add a link here – I don’t want to add to their view count); I’m not interested in it; yet facebook keeps showing me which of my friends like it and adds the ad to my feed more than once a day. Annoying. And to stop it I’d have to interact with the ad – which I do not want to do. I simply don’t have the choice of opting out of it …

Thinking of all that – and more – what would I personally gain from an algorithmic timeline on twitter, when facebook hasn’t really helped me in my endeavours recently? Nothing, I think. I simply don’t have the amount of money to feed the tentacles of the guys having such ideas, so that their ideas would by any means become worthwhile for my business or causes. Period.

But as those tentacles rarely listen to users like me, rather to potent advertisers (like “Secret Escape”, e.g.), the only alternative will probably again be to opt out:

Twitter: NO to “best tweets”

 

Having recently read “The Circle”, that seems a more and more sensible alternative anyway …

 


What is “trending” – anyway?

Source: Gartner (August 2015)

The report “Hype Cycle of Emerging Technologies” – the eagerly awaited annual Gartner report on what’s trending in IT – has been out for a few weeks now. Time to pore over it and analyze the most important messages:

1. Evolution

Gartner continues to categorize technologies on the Hype Cycle by their model of “business eras” (see my post about last year’s Hype Cycle for more details on that). The technologies analyzed for this year’s report are said to belong to the last 3 stages of this model: “Digital Marketing”, “Digital Business” and “Autonomous”. Little has changed among the most important technologies supporting these stages:

  • “Internet of Things” is still at its peak
  • “Wearable User Interfaces” has obviously been replaced by simply “Wearables” (which makes total sense)
  • “Speech-to-Speech Translation” has advanced beyond its peak
  • “Autonomous Vehicles” is probably the most-hyped area around Digital Business at the moment

2. Revolution

However, there is a significant change in the world of technologies to be seen this year: while the plateau of productivity was pretty crowded last year with all sorts of 3D, analytics and social stuff (streams, e.g.), this year’s Hype Cycle doesn’t show much in that area. Which proves nothing less than that we are living in an era of major disruption. Formerly hyped technologies like “Cloud” have vanished from the graph – they have become a commodity. New stuff like all things digital, “Cryptocurrencies” or “Machine Learning” is still far from maturity. So it’s a great time for re-shaping IT – let’s go for it!

Still, besides that, there remain some questions:

  • Why is “Hybrid Cloud” not moving forward, while “Cloud” is long gone from the Hype Cycle and CIOs – according to my experience with customers – are mainly looking to adopt cloud in a hybrid way? Is there still too little on offer from vendors? Are IT architects still unable to consume hybrid cloud models in a sufficiently significant way? Personally, I suspect “Hybrid” has advanced further towards productivity than is claimed here; it’s just not talked about that much.
  • Why has Gartner quietly dropped “Software Defined Anything” (it was shown on the rise last year)? All that can be found on this year’s Hype Cycle is “Software-Defined Security”. While I agree that in low-level infrastructure design the trend of software-defining components co-addresses important aspects of security, “Software-Defined Anything” reaches much more broadly into how IT will be changed over the next couple of years by programmers of all kinds and languages of many sorts.
  • “IoT Platforms” has been newly introduced – with a 5-10 year adoption time? Really? Gartner, I know businesses working on this right now; I know vendors reshaping their portfolios in this direction at an awesome pace. I thoroughly doubt this timeframe.

3. and More

What is really important about this year’s Hype Cycle, though, is the concentration of technologies that address “biology” in some sense. Look at the rising edge of the graph and collect what’s hyped there. We’ve got:

  • Brain Computer Interface
  • Human Augmentation
  • 3D Bioprinting Systems
  • Biochips
  • Bioacoustic Sensing

Not to mention “Smart Robots” and “Connected Homes” … Technologies like these will shape our future lives. And it cannot be overestimated how drastically this change will affect us all – even if many of these technologies are still shown with a 5-10 year adoption time until they reach production maturity (however, it wouldn’t be the first time that a timeframe on the Hype Cycle needed revision after a year of increased insight).

 

While reading a lot of comments on the Hype Cycle these days, I also came across “the five most over-hyped technologies” on venturebeat.com: the author, Chris O’Brien, takes a humorous view of some of the “peaked” technologies on the graph (autonomous vehicles, self-service analytics, IoT, speech-to-speech translation and machine learning) – and shares a couple of really useful arguments on why the respective technologies will not be adopted that fast.

I can agree with most of O’Brien’s arguments – however: while some of the things-based stuff invented might be of limited applicability or use (connected forks? huh?), the overall meaningfulness of what “Digital Business” will bring to us all is beyond doubt. The question – as so often before – is not whether we’ll use all that new stuff to come, but whether we’ll be educated enough to use it to our benefit … ?

If you got questions and opinions of your own on that – or if you can answer some of my questions above – please, drop a comment! 🙂

The input for this post, “Gartner’s 2015 Hype Cycle for Emerging Technologies”, is published in the Gartner Newsroom.

Published by: SmileIT

Evaluation Report – Monitoring Comparison: newRelic vs. Ruxit

I’ve meanwhile worked on cloud computing frameworks with a couple of companies. DevOps-like processes are always an issue in these cooperations – even more so when it comes to monitoring and how to approach the matter innovatively.

As an example, I keep emphasizing Netflix’s approach in these conversations: I very much like Netflix’s philosophy of how to deploy, operate and continuously change environments and services. Netflix’s component teams have no clue about the activities of other component teams; the policy is that every team is itself responsible for ensuring its changes don’t break anything in the overall system. Also, no one really knows in detail which servers, instances and services are up and running to serve requests; servers and services are constantly and automatically re-instantiated, rebooted, added, removed, etc. That is a philosophy that makes DevOps real.

Clearly, traditional (SLA-fulfilment oriented) methods must fail when monitoring such a landscape. It simply isn’t sufficient for a cloud-aware, continuous-delivery oriented monitoring system to just integrate traditional on-premise monitoring solutions like Nagios with, e.g., AWS CloudWatch. We know that this works fine, but it does not ease the cumbersome work of NOCs or application operators to quickly identify

  1. the impact of a certain alert, hence its priority for ongoing operations and
  2. the root cause for a possible error

After discussing these facts for the umpteenth time and (again) being confronted with the same old arguments about the importance of ubiquitous information on every single event within a system (for the sake of proving SLA compliance), I thought I’d give it a try and dig deeper myself to find out whether these arguments are valid (and I am therefore wrong) or whether there is a possibility to substantially reduce event occurrence and let IT personnel follow up only on the really important stuff. Efficiently.
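To make that “need to know” idea concrete, here is a minimal sketch of the kind of filtering I have in mind – the event structure and field names are entirely hypothetical, not any vendor’s API: events without user impact are suppressed, and chains of dependent events are collapsed into a single root-cause alert.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    source: str                      # e.g. "web-app", "server-01"
    metric: str                      # e.g. "response_time", "cpu"
    user_impact: bool                # does any real user notice this?
    caused_by: Optional[str] = None  # upstream dependency, if known

def alerts_worth_raising(events):
    """Drop events nobody is affected by; collapse dependency chains
    so one underlying problem yields one alert."""
    by_source = {e.source: e for e in events}
    roots = {}
    for e in events:
        if not e.user_impact:
            continue                 # nobody notices: no alert
        root = e
        while root.caused_by and root.caused_by in by_source:
            root = by_source[root.caused_by]   # walk to the root cause
        roots[root.source] = root
    return list(roots.values())
```

Fed with a slow web app whose slowness is caused by a saturated server, this raises one alert for the server instead of two separate ones.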

At this stage, it is time for a little

DISCLAIMER: I am not a monitoring or APM expert; neither am I a .NET programming expert. Both skill areas are fairly familiar to me, but in this case I intentionally approached the matter from a business perspective – as non-technically as possible.

The Preps

In autumn last year I had the chance to get some insight into 2 pure-SaaS monitoring products: Ruxit and newRelic. Ruxit back then was – well – a baby: early beta, little real functionality, but a well-received glimpse of what the guys were aiming for. newRelic was already pretty strong, and I very much liked their light and quick way of getting started.

As that project got stuck back then and I ended my evaluations midway, I thought getting back to it could be a good starting point (especially as I wasn’t able to find any other monitoring product going the SaaS path that radically, i.e. not even thinking of offering an on-premise option; and as a cloud “aficionado” I was very keen on seeing a full-stack SaaS approach). So the product scope was set pretty straight.

The investigative scope, this time, was to answer questions in a somewhat more structured way:

  1. How easy is it to kick off monitoring within one system?
  2. How easy is it to combine multiple systems (on-premise and cloud) within one easy-to-digest overview?
  3. What’s alerted and why?
  4. What steps are needed in order to add APM to a system already monitored?
  5. How are events correlated, and how is that correlation presented?
  6. The “need to know” principle: how does actual impact relate to what is alerted?

The setup I used was fairly simple (and reduced – as I didn’t want to bother our customer’s workloads in any of their datacenters): I had an old t1.micro instance still lurking around in my AWS account; that’s 1 vCPU with 613MB RAM – far too small to really perform the stuff I wanted it to do, and I intentionally decided to use it for my tests. Later, the following was added to the overall setup:

  • An RDS SQL Server database (which I used for the application I wanted to add to the environment at a later stage)
  • IIS 6 (as available within the Server image that my EC2 instance is using)
  • .NET framework 4
  • Some .NET sample application (some “Contoso” app; deployed directly from within Visual Studio – no changes to the defaults)

Immediate Observations

Two things caught my eye only hours (if not minutes) after commencing my activities in newRelic and Ruxit, but let’s start with the basics first.

Setting up accounts is easy and straightforward in both systems; both truly follow the cloud-affine “on-demand” characteristic. newRelic creates a free “Pro” trial account which converts into a lifetime free account if not upgraded to “paid” after 14 days. Ruxit also sets up a free account but takes a totally different approach, closer to consumption-based pricing: you get 1,000 hours of APM and 50k user visits for free.

Both systems follow pretty much the same path after an account has been created:

  • In the best case, access your account from within the system you want to monitor (or deploy the downloaded installer package – see below – to the target system manually)
  • Download the appropriate monitoring agent and run the installer. Done.

Both agents started to collect data immediately and the browser-based dashboards produced the first overview of my system within some minutes.

As a second step, I also installed the agents on my local client machine, as I wanted to see how the dashboards display multiple systems – and here’s a bummer with Ruxit: my antivirus scanner alerted me with a Win32.Evo-Gen suspicion:

Avast virus alert upon Ruxit agent install

It wasn’t really a problem for the agent to install, operate properly and produce data; it was just a little confusing. In essence, the reason is fairly obvious: the agent uses a technique comparable to typical virus intrusion patterns, i.e. it sticks its fingers deep into the system.

The second observation was newRelic’s approach to implementing web browser remote checks, called “Synthetics”. It was indeed astonishingly easy to add a URL to the system and let newRelic do its thing – seemingly from within AWS datacenters around the world. And especially with this, newRelic has a very compelling way of displaying the respective information on its Synthetics dashboard. Easy to digest and pretty comprehensive.

At the time I started my evaluation, Ruxit didn’t offer that. Meanwhile they have added a beta of “Web Checks” to my account. Equally easy to set up, but lacking some of the richer UI features with respect to the display of information. I am fairly sure this will be added soon. Hopefully. My take is that combining system monitoring or APM with insight into real user usage patterns is essential for correlating events efficiently.

Security

I always spend a second thought on security questions, hence I contemplated how Ruxit makes sure that an agent really connects to the right tenant when being installed. newRelic confronts you with an extra step upon installation: they ask you to copy and paste a security key from your account page during the install procedure.

newRelic security key example

Ruxit doesn’t do that. However, they’re not really less secure; they just pre-embed this key into the installer package that is downloaded, which is a little more convenient. The following shows the msiexec command executed upon installation, along with its parameters, taken from the installer log (you can easily find this information after the .exe package unpacks into the system’s temp folder):

@msiexec /i "%i_msi_dir%\%i_msi%" /L*v %install_log_file% SERVER="%i_server%" PROCESSHOOKING="%i_hooking%" TENANT="%i_tenant%" TENANT_TOKEN="%i_token%" %1 %2 %3 %4 %5 %6 %7 %8 %9 >con:
MSI (c) (5C:74) [13:35:21:458]: Command Line: SERVER=https://qvp18043.live.ruxit.com:443 PROCESSHOOKING=1 TENANT=qvp18043 TENANT_TOKEN=ABCdefGHI4JKLM5n CURRENTDIRECTORY=C:\Users\thome\Downloads CLIENTUILEVEL=0 CLIENTPROCESSID=43100

Alerting

After having applied the package (both packages) to my Windows Server on EC2, things popped up quickly within the dashboards (note that both dashboard screenshots are from a later evaluation stage; however, the basic layout was the same at the beginning – I didn’t change anything visually along the way).

newRelic server monitoring dashboard showing the limits of my too-small instance 🙂

The Ruxit dashboard on the same server; with a clear hint on a memory problem 🙂

What instantly struck me here was the simplicity of Ruxit’s server monitoring information. It seemed somewhat “thin” on information (if you want a whole lot of info right from the start, you’ll probably prefer newRelic’s dashboard). Things changed, though, when my server went into memory saturation (which it does right away whenever it is accessed via RDP). At that stage, newRelic started firing emails alerting me of the problem, and the dashboard went red. Ruxit, in turn, did nothing really. Well, of course it displayed the problem once I logged into the dashboard and looked at my server’s monitoring data; but no alert was triggered, no email, no red flag. Nothing.

If you’re into SLA fulfilment, that is precisely the moment to become concerned. On second thought, however, I figured that actually no one was bothered by the problem. There was no real user interaction going on with that server instance; I hadn’t even really added an app yet. Hence: why bother?

So the next step was to figure out why newRelic went so crazy about it. It turned out that in newRelic every newly added server gets assigned to a default server policy.

newRelic’s monitoring policy configuration

I could easily turn that policy off (editing also appears straightforward; I didn’t try). However, having to figure out for every server I add which alerts are actually important – because they might impact someone or something – seemed less “need to know” than I intended to have.

After having switched off the policy, newRelic went silent.

BTW, alerting via email is not set up by default in Ruxit; it can be added in the tenant’s settings area as a so-called “integration”.

AWS Monitoring

As said above, I was keen to know how both systems integrate multiple monitoring sources into their overviews. My idea was to add my AWS tenant as a monitored source (this resulted from the customer conversations mentioned earlier; that customer’s utmost concern was to add AWS to their monitoring overview – which in their case was Nagios, as said).

A nice thing with Ruxit is that they fill their dashboard with little demo tiles, which easily lead you into their capabilities before you have set up anything (the example below shows the database demo tile).

This is one of the demo tiles in Ruxit’s dashboard – leading to DB monitoring in this case

I found an AWS demo tile (similar to the example above), clicked it, and ended up with a light explanation of how to add an AWS environment to my monitoring ecosystem (https://help.ruxit.com/pages/viewpage.action?pageId=9994248). They offer key-based or role-based access to your AWS tenant. Basically, they need you to do these 3 steps:

  1. Create either a role or a user (for an access-key based connection)
  2. Apply the respective AWS policy to that role/user
  3. Create a new cloud monitoring instance within Ruxit and connect it to that newly created AWS resource from step 1

Right after executing these steps, the aforementioned demo tile changed into displaying real data and my AWS resources showed up (note that the example below already contains RDS, which I added at a later stage; the cool thing was that it was picked up fully unattended as soon as I had created it in AWS).

Ruxit AWS monitoring overview

Ruxit essentially monitors everything within AWS which you can put a CloudWatch metric on – which is a fair lot, indeed.

So the next step clearly was to seek the same capability within newRelic. As far as I could work out, newRelic’s approach here is to offer plugins – and newRelic’s plugin ecosystem is vast. That may mean there is a whole host of possibilities for integrating monitoring into the respective IT landscape (whatever it may be); however, one may consider the process of adding plugin after plugin (until the whole landscape is covered) a bit cumbersome. Here’s a list of AWS plugins in newRelic:

newRelic plugins for AWS


Add APM

Adding APM to my monitoring ecosystem was probably the most interesting experience in this whole test. In preparation for the intended result (i.e. analysing data about a web application’s performance under real user interaction), I added IIS to my server and an RDS database to my AWS account (as mentioned before).

The more interesting fact, though, was that right after the IIS installation was finalized, Ruxit instantly showed the IIS services in its “Smartscape” view (more on that a little later). I didn’t have to change anything in my Ruxit environment.

newRelic’s approach is a little different here. The below screenshot shows their APM start page with .NET selected.

newRelic APM start page with .NET selected

After confirming each selection as it popped up step by step, I was presented with a download link for another agent package, which I had to apply to my server.

The interesting thing, though, was that still nothing showed up. No services or additional information on any accessible apps. That is logical in a way, as I didn’t yet have anything published on that server that really resembled an application. The only thing accessible from the outside was the IIS default web page (just showing the IIS logo).

So, essentially, the difference is that with newRelic you get system monitoring through a system monitoring agent, and by means of an application monitoring agent you add monitoring of precisely the type of application that agent is intended for.

I haven’t dug further yet (that may be the subject of another article), but it seems that with Ruxit I get monitoring for anything going on on a server by means of just one install package (maybe one more explanation for the aforementioned virus scan alert).

However, after I had published my .NET application, everything was fine in both systems – and the dashboards went red instantly as the server ran into CPU saturation due to its weakness (as intended ;)).

Smartscape – Overview

So, the final question to answer was: what do the dashboards show, and how do they ease (root cause) analysis?

As soon as the app was up and running and web requests started to roll in, newRelic displayed everything there is to know about the application’s performance. Particularly nice is the out-of-the-box combination of APM data with browser request data within the first and second menu items (switch between the two by clicking the menu, or use the links within the displayed diagrams).

newRelic APM dashboard

The difficulty with newRelic was discovering the essence of the web application’s problem. Transactions and front-end code performance were displayed in every detail, but I knew (from my configuration) that the cause of the slow page loads – as displayed – lay in the general weakness of my web server.

And that is basically where Ruxit’s Smartscape tile in the dashboard made the essential difference. The screenshot below shows a problem within my web application as initially displayed in Ruxit’s Smartscape view:

Ruxit’s smartscape view showing a problem in my application

From this view it was obvious that the problem lay either within the application itself or within the server as such. A click on the server reveals not only the path to the dependent web application but also other possibly impacted services (obviously without end-user impact, as otherwise there would be alerts on them, too).

Ruxit smartscape with dependencies between servers, services, apps

And digging into the server’s details revealed the problem (CPU saturation, unsurprisingly).

Ruxit revealing CPU saturation as a root cause

Still, the number of dashboard alerts was pretty small. While I got 6 emails from newRelic telling me about the problem on that server, I got only 2 from Ruxit: one about the web app’s weak response and another about CPU saturation.

The next step, hence, would be to scale up the server (in my environment), or to scale out or implement an enhanced application architecture (in a realistic production scenario). But that’s another story …

Bottom line

Event correlation and alerting on a “need to know” basis – at least for me – remains the right way to go.

This little test was done with just one server, one database and one web application (plus a few other services). While newRelic’s comprehensive approach to showing information is really compelling and perfectly serves the objective of complete SLA compliance reporting, Ruxit’s “need to know” principle much better meets what I would expect from innovative cloud monitoring.

Considering Netflix’s philosophy from the beginning of this article, innovative cloud monitoring basically translates into: every extra step is a burden; every extra piece of information on events without impact means extra OPS effort; and every extra click needed to correlate different events to a probable common root cause critically lengthens MTTR.

A “need to know” monitoring approach while at the same time offering full stack visibility of correlated events is – for me – one step closer to comprehensive Cloud-ready monitoring and DevOps.

And Ruxit really seems to be “spot on” in that respect!

 


Bedürfnispyramide / Hierarchy of Needs



 

… and even though digitalization and the ubiquitous connection of everything are interesting and enriching advancements of mankind, we’re surely allowed – especially during these days – to place “WiFi” and battery life somewhat higher up Maslow’s “hierarchy of needs”. Tim Minchin has some nice ideas on this, indeed …

 


The “Next Big Thing” series wrap-up: How to rule them all?

What remains for the 8th and last issue of the “Next Big Thing” blog post series: to “rule them all” (all the forces, disruptive challenges and game-changing innovations) and keep services connected, operating and integrated in order to deliver value to the business.

A while ago I came upon Jonathan Murray’s concept of the Composable Enterprise – a paradigm which essentially preaches fully decoupled infrastructure and applications as services for company IT. Whether the Composable Enterprise is an entirely new approach or just a pin-pointed translation of what is essential for businesses mastering digital transformation challenges is all the same.

The importance lies in the core concepts of what Jonathan’s paradigm preaches. These are to

  • decouple the infrastructure
  • make data a service
  • decompose applications
  • and automate everything

Decouple the Infrastructure.

Rewind to my own application development and delivery days in the 1990s and 2000s: when we were ready to launch a new business application, we would – as part of the rollout process – inform IT of the resources (servers, databases, connections, interface configurations) needed to run the thing. Today, large IT ecosystems sometimes still function that way, making them a slow, heavyweight inhibitor of business agility. The change to incorporate here is twofold: on the one hand, infrastructure owners must understand that they need to deliver on the scale, time and demand of their business customers (which implies more uniform, more agile and, in terms of sourcing, more flexible delivery mechanisms). On the other hand, application architects need to understand that it is no longer their architecture that defines IT needs; instead, their architecture needs to adapt to and adopt agile IT infrastructure resources from wherever they may be sourced. By following that pattern, CIOs will enable their IT landscapes to leverage cloud-like infrastructure sourcing on-premise (thereby enabling private clouds), and they will also become capable of consuming resources from anywhere, following hybrid sourcing models.
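As a toy illustration of that twofold change (all names are made up; this is not a real cloud API), the application merely declares what it needs, and a uniform provisioning interface decides where the capacity actually comes from:

```python
class Provisioner:
    """Uniform on-demand interface; callers never care about sourcing."""

    def __init__(self, pools):
        # free capacity (in vCPUs) per sourcing location
        self._pools = pools

    def acquire(self, vcpus):
        # take the first pool with enough capacity; spill over otherwise
        for location, free in self._pools.items():
            if free >= vcpus:
                self._pools[location] = free - vcpus
                return {"location": location, "vcpus": vcpus}
        raise RuntimeError("no capacity available in any pool")

# the application asks for resources instead of filing a ticket with IT
infra = Provisioner({"on-prem": 4, "public-cloud": 100})
small = infra.acquire(2)   # fits on-premise
big = infra.acquire(8)     # on-prem pool is now too small: public cloud
```

The point is not the (naive) scheduling, but that the consuming application never encodes where its resources live.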

Make Data a Service.

This isn’t really about BigData-like services. It might be (in the long run). But this is essentially about where the properties and information of IT – of applications and services – really reside. Rewind again, this time only 1 or 2 years. The second-to-last delivery framework that my team of gorgeous cloud aficionados and I created was still built around a central source of information – essentially a master-data database. That simply was the logical framework architecture back then. Even a few months ago – when my then team (another awesome one) and I already knew that information needs to live within the service – it was still less complex (hence: quicker) to build our framework around such a central source of (service) wisdom. What the Composable Enterprise rightly preaches, though, is a complete shift of where information resides. Every single service offering its capabilities to the IT world around it needs to provide a well-defined, easy-to-consume, transparently reachable interface to query and store any information relevant to the consumption of the service. Applications or other services using that service simply engage via that interface – not only to leverage the service’s capabilities but even more to store and retrieve data and information relevant to the service and the interaction with it. And there is no central database. In essence, there is no database at all; there is no need for one. When services inherently know what they manage, need and provide, all db-centric architecture for the sole benefit of the db as such becomes void.
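A minimal sketch of that shift (class and method names are purely illustrative): each service owns its state and exposes it only through its interface, and consumers never see a shared database.

```python
class CustomerService:
    """Owns all customer data; nobody else touches its storage."""

    def __init__(self):
        self._store = {}   # private to this service; could be any backend

    def put(self, customer_id, profile):
        self._store[customer_id] = dict(profile)

    def get(self, customer_id):
        return dict(self._store.get(customer_id, {}))


class InvoiceService:
    """Consumes CustomerService via its interface, not its database."""

    def __init__(self, customers):
        self._customers = customers

    def invoice_header(self, customer_id):
        profile = self._customers.get(customer_id)
        return "Invoice for " + profile.get("name", "unknown")
```

`InvoiceService` would keep working unchanged if `CustomerService` swapped its dictionary for any other storage – which is exactly the decoupling the paradigm asks for.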

Decompose Applications.

The aforementioned leads one way into the decomposition pattern. More important, however, is to think more thoroughly about what a single business-related activity – a business process – really needs in terms of application support. And in turn, what the applications providing this support to the business precisely need to be capable of. Decomposing applications means identifying useful service entities which follow the above patterns and offer certain functionality in an atomic way via well-defined interfaces (APIs) to the outside world – thereby creating an application landscape which delivers on scale, time and demand just by being composed, through service orchestration, in the right – the needed – way. This is the end of huge monolithic ERP systems which claim to offer everything a business needs (you just need to customize them rightly). This is the beginning of lightweight services which rapidly adapt to changing underlying infrastructures and can be consumed not only for the benefit of the business owning them but – through orchestration – form whole new business process support systems for cross-company integration along new digitalized business models.
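The composition idea can be sketched in a few lines (the services named here are illustrative placeholders): each atomic service does one thing behind a uniform interface, and an orchestrator composes them into a business process.

```python
def check_stock(order):
    """Atomic service: verify availability."""
    order["in_stock"] = True
    return order


def charge_payment(order):
    """Atomic service: handle payment."""
    order["paid"] = True
    return order


def arrange_shipping(order):
    """Atomic service: trigger delivery."""
    order["shipped"] = True
    return order


def orchestrate(order, steps):
    """Compose atomic services into one business process."""
    for step in steps:
        order = step(order)
    return order


# The same services can be re-orchestrated into new processes -- even
# across company borders -- without changing the services themselves.
result = orchestrate({"id": 1}, [check_stock, charge_payment, arrange_shipping])
print(result)  # {'id': 1, 'in_stock': True, 'paid': True, 'shipped': True}
```

The design choice worth noting: the process lives in the orchestration, not in any one service, which is exactly what makes recombination for new business models cheap.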

Automate Everything.

So, eventually we’ve arrived at the heart of how to breathe life into an IT which supports businesses in their digital transformation challenge.

Let me talk you through one final example emphasizing the importance of facing all these disruptive challenges openly: An Austrian bank of high reputation (and respectable success in the market) gave a talk at the Pioneers about how they discovered that they were actually not a good bank anymore – that in some years’ time they would no longer be able to live up to market challenges and customers’ demands. What they realized was simply that within a few years they would lose customers because of their inability to offer a user experience integrated with the mobile and social demands of today’s generations. What they did in turn was to found a development hub within their IT unit, solely focussing on creating a new app-based ecosystem around their offerings in order to deliver an innovative, modern, digital experience to their account holders.

Some time prior to the Pioneers, I had received a text that “my” bank (yes, I am one of their customers) now offers a currency exchange app through which I can simply order the amount of currency needed and receive a confirmation once it’s ready to be handed to me in the nearest branch office. And a few days after the Pioneers I received an email that a new “virtual bank servant” would be ready as an app to serve all my account-related needs. Needless to say, a few moments later I was in – and the experience was just perfect, even though they follow an “early validation” policy with their new developments, accepting possible errors and flaws for the benefit of reduced time to market and more accurate customer feedback.

Now, for a moment imagine just a few of the important patterns behind this approach:

  • System maintenance and keeping-the-lights-on IT management
  • Flexible scaling of infrastructures
  • Core banking applications and services delivering the relevant information to the customer facing apps
  • App deployment on a regular – maybe a daily – basis
  • Integration of third-party service information
  • Data and information collection and aggregation for the benefit of enhanced customer behaviour insight
  • Provision of information to social platforms (to influence customer decisions)
  • Monitoring and dashboards (customer-facing as well as internally to business and IT leaders)
  • Risk mitigation
  • … (I could probably go on for hours)

All of the above capabilities can – and shall – be automated to a great extent. And this is precisely what the “automate everything” pattern is about.
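As a tiny, hypothetical illustration of the pattern, take just one item from the list above – flexible scaling of infrastructure – expressed as a rule in code rather than a manual operations task (thresholds and logic are invented for the sketch):

```python
def autoscale(current_instances, cpu_load, low=0.2, high=0.8):
    """Return the new instance count for a given average CPU load."""
    if cpu_load > high:
        return current_instances + 1  # scale out under pressure
    if cpu_load < low and current_instances > 1:
        return current_instances - 1  # scale in when idle
    return current_instances          # steady state


print(autoscale(3, 0.9))  # 4 -- one more instance under high load
print(autoscale(3, 0.1))  # 2 -- release an instance when load is low
print(autoscale(3, 0.5))  # 3 -- nothing to do
```

Run on a schedule against real monitoring data, a rule like this replaces a recurring human decision – and the same move applies to deployment, risk checks, data aggregation and most of the other bullets above.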

Conclusion

There is a huge business shift going on. Software, back in the 1980s and 1990s, was a driver for growth, had its downturn during and after the dot-com age, and now enters an era of being ubiquitously demanded.

Through the innovative possibilities of combining existing mobile, social and data technologies, through the merging of physical and digital worlds and through the tremendously rapid invention of new thing-based daily-life support, businesses of all kinds will face the need for software – even if they have not felt that need so far.

The Composable Enterprise – or whatever one wants to call a paradigm of loosely coupled services orchestrated through well-defined, transparently consumable interfaces – is a way for businesses to accommodate this challenge more rapidly. Automating daily routine – such as the aforementioned tasks – will be key for enterprises that want to stay on the edge of innovation within these fast-changing times.

Most important, though, is to stay focussed within the blurring worlds of things, humans and businesses. To keep the focus on innovation not for the benefit of innovation as such but for the benefit of growing the business behind it.

Innovation Architects will be the business angels of tomorrow – navigating their stakeholders through an ongoing revolution and supporting or driving the right decisions for implementing and orchestrating services in a business-focussed way.

 

{the feature image of this last “The Next Big Thing” series post shows a design by New Jersey and New York-based architects and designers Patricia Sabater, Christopher Booth and Aditya Chauan: The Sky Cloud Skyscraper – found on evolo.us/architecture}

Published by:

The “Next Big Thing” series: Digital Transformation

Beware! No. 7 of the “Next Big Thing” blog post series is probably going to be at the heart of all the big business disruptions to come:

 

“Digital Business”

as a term has more or less become a substitute for the formerly heavily stressed “Industry 4.0”. Digital Business can best be described by a couple of examples illustrating how every business – without exception – will be disrupted by the huge innovative potential rolling along:

Example No. 1 – Retail and Education

The school notifies the parents of a boy that he needs certain educational material by tomorrow; they do so by means of a private message to the parents from the school’s facebook profile. The boy’s mother investigates through her mobile phone where the particular material can be purchased, connects to the store chain by means of a mobile app and requests availability information. The store responds (through their app) with availability and price, informs her that the particular item has to be sent from a remote outlet, and requests confirmation for the purchase and delivery. The mother responds with payment data and the school’s address for delivery. The store chain triggers delivery of the item to the nearest train station and notifies the train operating company that a parcel needs to be delivered by tomorrow to the respective address; the train company in turn arranges for delivery to the school’s nearest train station and from there by drone directly to the school.

Example No. 2 – Weather and Insurance

A terrible thunderstorm destroys a house’s window. The respective sensors thoroughly determine that the breakage of glass resulted not from human intervention but from bad weather conditions and notify the smart home automation gateway of what has happened. The gateway holds police, hospital and insurance contact information as well as the necessary private customer IDs; the location address is derived via GPS positioning. The gateway self-triggers a notification and remediation workflow with the insurance company, which in turn assesses the incident to be a valid insurance case and triggers a repair order with an associated window glassworks company. The glassworks company fits the order into their schedule – it is treated as an emergency under the given circumstances – rushes to the given location and repairs the window; the workers report back to the insurance via mobile app and the insurance closes the case. All this happens without any human intervention other than the final approval by the house’s owner that everything is OK again.
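The self-triggered workflow in this example could be sketched as a chain of event handlers, one per party (all names and steps here are hypothetical simplifications of the scenario, not a real insurance API):

```python
def gateway_detects(event):
    """Sensors attribute the glass breakage to weather, not intrusion."""
    return {"cause": "weather", "location": event["location"], "status": "reported"}


def insurance_assesses(case):
    """Weather damage is a valid insurance case; trigger a repair order."""
    if case["cause"] == "weather":
        case["status"] = "repair_ordered"
    return case


def glassworks_repairs(case):
    """The glassworks company fits in the emergency and repairs the window."""
    case["status"] = "repaired"
    return case


def insurance_closes(case):
    """Workers have reported back; the insurance closes the case."""
    if case["status"] == "repaired":
        case["status"] = "closed"
    return case


case = gateway_detects({"location": "51 Example Street"})
for handler in (insurance_assesses, glassworks_repairs, insurance_closes):
    case = handler(case)

print(case["status"])  # "closed" -- no human intervention along the way
```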

Example No. 3 – Holiday and Healthcare

The wearable body control device of an elderly lady records an arrhythmic, slowly decelerating heartbeat. The pattern is maintained within the device as indicating a life-endangering heart condition, hence the device commences transfer of detailed health monitoring data via the lady’s mobile phone to her children on the one hand and to her doctor in charge on the other. Both parties have (by means of device configuration) agreed to confirm the reception of data within 5 minutes after the start of transmission. As none of this happens (because the kids are on holiday and the doctor is busy doing surgery), the device triggers notification of the nearest ambulance and transmits the patterns of her normal health condition plus her current condition, including her name, location, health insurance and nearest-relatives data as well as the electronic apartment access key. The ambulance’s customer request system notifies the doctor in charge as well as the lady’s children that it is taking over the case; an ambulance rushes to the location, personnel open the apartment via mobile phone using the received electronic key, find the lady short of breath and save her life by commencing the respective treatment immediately.

Fictitious?

Well – maybe, today. But technology for all this is available and business models around it have begun to mature.

What these examples show – besides that they all encompass the integration of Things with several or all aspects of the Nexus of Forces discussed earlier in this article series – is an aspect essential to understanding “Digital Business” and the immense digitalization of our daily life: “Digital Business” is nothing else than the seamless (I mean it; literally: s-e-a-m-l-e-s-s) connection of humans, businesses and things (as in the IoT definition). “Digital Business” is a merger of physical and digital worlds!

In turn, this quite simply means that there will be no business whatsoever that goes without software. Businesses already penetrated by software will experience increasing software, automation and integration challenges, and businesses that haven’t yet introduced software into their models will face an increased challenge in doing so, as well as in integrating with the digital world around them – essentially for nothing else than staying in business.

 

{the 8th issue of this blog post series covers a way to approach all those challenges through creating a true services ecosystem for the enterprise; and as it’s the last it also wraps up and concludes}

{feature image found on http://marketingland.com/}

 


The “Next Big Thing” series: Discussing The Thing

Opening chapter No. 6 of the “Next Big Thing” blog post series by discussing the Internet of Things!

What’s essentially so entirely new about the Internet of Things? Things are not. Connectivity and protocols might be – but not entirely. Mastering the data produced by things – well: we’ve discussed that bit in one of the earlier posts of this series.

What’s really entirely new is the speed of adoption that the topic now reaches; and this speed is based on the possibilities that technology innovation has begun to offer to new things-based ecosystems.

In order to understand that, one has to consider the elements of a things-based architecture. While the article “Understanding the IoT Landscape” offers a quite comprehensive, simplified IoT architecture, I would tend to be a little more detailed in assessing the elements of a functioning IoT ecosystem:


Figure: Architecture building blocks in an IoT Ecosystem

  1. The Thing itself. The variety of what a “Thing” may be is vast: medical human wearable monitoring devices (e.g. heart monitor), medical measurement devices (such as a diabetes meter), sensors, transmitters and alert devices in automated cars, smart home automation sensors, fitness wearables or simply watches that take over several of the above capabilities at once, … and many more things that aren’t even invented or thought of yet (the aforementioned article gives a decent list of what Things could possibly be). Discussing implementation architectures for the Thing itself would exceed the scope of this article by far, obviously.
  2. The Thing’s UI: This is already an interestingly ambiguous architecture element. Do Things have UIs? Yes, they do – sometimes comprised of only one or a few LEDs or the like. They could, of course, also have none at all, if interfacing with a Thing’s user is delegated to either a mobile phone or a gateway which the Thing is connected to.
  3. Thing Connectivity and Communication Layer: The purpose of which is solely to bridge the gap between the Thing itself and the first connectivity point capable of transferring data through well-established protocols. Thing Connectivity may sometimes be reached through WiFi but often also by just using Bluetooth or any other wireless near field communication protocols.
  4. Thing Gateway: Rarely will Things directly feed data into any backend analytics or application layer, simply because it is too costly and complicated to accomplish a highly performant, secure and reliable data connection based on proprietary protocols over long connectivity routes. Hence, we’ll often see some kind of gateway introduced with the Thing, which in simple cases could just be a mobile phone.
  5. Data Store: By whatever way Things might be leveraged by the business’s backend IT, we will always see a data collection and storage layer introduced to directly capture and provide Thing data for further compute, analysis and application integration.
  6. Application Integration: One essential topic to consider when introducing Things into business models is to envision an application landscape around the Thing in order to offer app-based Thing interaction to the end consumer as well as information from Things and their usage to the Thing’s business plus to third-party consumers in order to drive cross-business integration. New cross-enterprise business models will evolve anyway – the better Things-centered businesses allow for integration and orchestration, the better they will be able to leverage and let others leverage their disruptive innovations.
  7. Analytics: No Thing-based business – no Things introduction – will make any sense without creating the ability to leverage the information produced for analysis and even more for prediction or even prescription. We will see more of that in the next section of the article.
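The flow through these building blocks can be sketched end to end (the classes below are hypothetical stand-ins, not a real IoT framework): a Thing emits raw readings, a gateway translates them to a well-established format, a data store captures them, and an analytics step consumes them.

```python
class Thing:
    """E.g. a heart monitor or smart-home sensor emitting raw readings."""
    def read(self):
        return {"raw": 72}  # proprietary, near-field payload


class Gateway:
    """Bridges proprietary Thing protocols to a well-established one."""
    def forward(self, reading):
        return {"metric": "heart_rate", "value": reading["raw"]}


class DataStore:
    """Captures Thing data for further compute, analysis and integration."""
    def __init__(self):
        self.records = []

    def capture(self, record):
        self.records.append(record)


def analyze(store):
    """Analytics layer: here just an average over captured values."""
    values = [r["value"] for r in store.records]
    return sum(values) / len(values)


store = DataStore()
gateway = Gateway()
for _ in range(3):
    store.capture(gateway.forward(Thing().read()))

print(analyze(store))  # 72.0
```

Application integration and third-party consumption (blocks 6 and 7 above) would then sit on top of the data store, consuming the same captured records via APIs.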

The impact on IT of the change through IoT cannot be overestimated. Just by assessing the layers described above it becomes obvious that we will see quite a lot of new architectural approaches evolve which in turn need to be integrated with existing IT landscapes. Also – as with all the more recent disruptions in enterprise IT – the orchestration of different services maturing in the “Things” space will be key for IT organizations to offer the utmost business value when leveraging the Internet of Things.

 

{No. 7 of this blog post series will cover “Digitalization” and “Digital Transformation” – and what it really means to any business}

{feature image borrowed from the IoT wikipedia article}

 


The “Next Big Thing” series: What’s Industry 4.0 anyway?

So, here’s to continue with “The Next Big Thing” blog post series. Let’s take a leap into what really matters in the coming years – to all kinds of businesses:

Once upon a time

there was The Web. Then Web 2.0. Then Web 3.0 (semantics and augmentation). Then talk of the “3rd Industrial Revolution”.

I recall that in the early days of the term, people explained it to be the rise of Cloud Computing and of ubiquitous social and mobile interconnection, whereas later many corrected themselves to see it as the Industrial Revolution that started with the rise of the personal computer.

Nowadays, no one seems to be really talking of any Industrial Revolution anymore (might be that they’re unsure whether it’s the 3rd, the 4th or whether we’re in the midst of a constant revolution anyway), but businesses needed a term to describe their striving for technologies that constantly get smarter and help them grow.

Industry 4.0 was born. And it seemed for some time that the core concepts of Industry 4.0 are robotics and the “Internet of Things” (IoT). Whereas the first is still true, “Industry 4.0” has become a term used mainly in the field of manufacturing: Smart factories supported by intense introduction of robotics-based technologies and machines as well as heavy adoption of Automation form the cornerstones of the Industry 4.0 age.

And while there are expert sources that extend the coverage of the Industry 4.0 term into a world outside of factories (with smart machines like drones, driverless cars and human-support robots – see e.g. my German-only blog post “Innovationskraft ist nicht das Problem” or the keynote discussed there), the most confusing definition of Industry 4.0 occurred to me in both the English and the German version of Wikipedia, where the article defining the term (at the moment of writing this post) starts by saying: “Industry 4.0 is a project in the high-tech strategy of the German government”.

Hence, I trust that, for the benefit of a clear and focussed discussion within this little blog series, it is advantageous to omit the term “Industry 4.0” for a moment and talk about what will really disrupt business and IT in the next couple of years.

And these are mainly

3 Aspects

of an extensively integrated and orchestrated world:

  • Things
  • Digitalized business
  • and a great amount of lightweight well-orchestrated and automated services

The upcoming issues of this series will cover these aspects in more detail – stay tuned.

 

{We’ll start into the “Things” stuff with No. 6 of this blog post series}

{feature image found on www.automationworld.com}


The “Next Big Thing” series: #Mobile Everywhere

{this is No. 4 of the “Next Big Thing” blog post series, which discusses the revolution to come through ongoing innovation in IT and the challenges involved with it}

 

I would be interested in getting to know, how many readers of this series still know a person not owning a smartphone? (I do, by the way ;))

Even though I have written several times about the danger of it and how important behaviour is for a healthy adoption of “Mobile Everywhere” (e.g. in “Switch Off” or “3 importances for a self-aware social networker”), I am still a strong believer in the advantages that elaborate mobile technology brings into day-to-day life.

Not only do mobile phone technology and mobile app ecosystems add significant value to the other two forces (data and social), but they have meanwhile also “learned” to make vast use of them. You could actually describe this bond of disruptive technologies discussed in this series as a stacked model in which

  • data is the back-end business layer of the future
  • social platforms are the middleware to bring together information offers and information needs
  • and mobile technology is the front end to support information and data consumption in both ways

The image below turns the “Nexus”-model from the beginning of this series into a stack appearance:

 

Nexus of Forces (stacked)

 

Which – essentially – closes the loop on why we see a bond not only of the technologies in mobility, social media, and data and analytics, but even more of the visions, strategies and concepts behind these three. Needless to say, therefore, that businesses which have a strong strategy and vision around the Nexus of Forces and – at the same time – are backed by a strong Service Orchestration roadmap will be the winners of the “race of embrace” of this bond.

Now, thinking of the Pioneers, with which I started this blog series, I recall that one could see all forms of leveraging the aforementioned concepts in the ideas of the startups presenting there. And – unsurprisingly to me – at not a single moment during those 2 festival days back in October this year was “Cloud” even mentioned, let alone discussed. It is no topic anymore. Period.

However, there’s more: The Nexus of Forces as such is only the beginning of a path leading into the next industrial revolution, and we’re already well under way. Hence, this blog series will continue discussing concepts and challenges which build upon the Nexus of Forces and take it to the next level, with change to come for each and every enterprise – software-based, hardware-based or not even technology-based at all.

 

{No. 5 of this blog post series takes the first step into “Industry 4.0” and related disruptive topics}


The “Next Big Thing” series: From Social Network to #Social #Revolution

{this is No. 3 of the “Next Big Thing” blog post series, which discusses the revolution to come through ongoing innovation in IT and the challenges involved with it}

 

Along with Cloud patterns, the delivery of large engagement platforms – essentially web applications architected, of course, specifically to serve a vast number of simultaneous accesses and a huge stream of information – became possible.

If one takes a look back at the history of social media, these platforms step-by-step evolved from pure public-chat and tweet apps into full-blown arenas for (group) communications, gaming, advertising and (sometimes) simply storing information. Not by what they were originally intended to be (facebook’s core goal was – and still is, if you trust Zuckerberg – to connect everyone) but by how the consumers (private or business ones) developed themselves within them as well as developed and matured their usage patterns.

However, there is a “meta level” beyond the obvious: Observing youth and their approach to using the technology surrounding them might lead one to think: those guys have completely forgotten about communication and engagement. I trust the opposite is the case. When I talk to my kids, I learn that they read everything, absorb everything, and are much faster at noticing news and information and at consuming different channels. The only thing is: they do not react if it doesn’t touch them. And that pattern applies not only to advertisement-backed social media feeds but also – and maybe foremost – to direct 1:1 or group conversations. And this is why I believe that the social aspect within the Nexus of Forces will have a much stronger impact than we currently notice.

I tend to claim that a social revolution is approaching us because – together with the other forces – social media will become the integrative middleware between what we want to consume, what businesses want to drive us to consume and how we consume it. No advertising phone calls anymore, no spamming in our mailboxes (hurray!), but a social feed of information which is far better suited to create the impression of personal engagement while in truth being just an efficient aggregation and combination of data that we all have earlier produced ourselves.

Are businesses ready for that revolution? Can they adapt their marketing strategies to leverage those vast new possibilities? Orchestrating services and data in order to feed social platforms with what is considered relevant to the customers of a certain enterprise will become a core IT capability in order to be able to become a player of relevance in the social revolution.

 

{No. 4 of this blog post series talks about the challenges of the “mobile everywhere” culture – soon to come – stay tuned}

{feature image found at AFAO talks (http://afaotalks.blogspot.com.au/2012/07/going-social_20.html)}
