The Smile-IT Blog » Blog Archives

Tag Archives: IT

Integrated File Transfer still required for Automation

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

There is a wide range of sophistication in the IT systems that businesses operate. Some came online around 2000, while others have been in use for much longer. Some businesses have built their systems to keep delivering high-quality services for years to come; still others constantly adapt their systems to take advantage of the latest technology.

Because of this wide disparity, an automation solution must be able to handle current and future innovations in integration, orchestration and performance. It must also be backwards compatible so it can support legacy technologies.

That said, one of the technologies an automation solution must support is file transfer between systems. Along with this, it must also support the processing, interpretation, and transformation of file content to create new levels of automation integration for enterprise IT.

Experience with multiple customers shows that replacing legacy file transfer applications with state-of-the-art APIs is sometimes simply too time-consuming and costly. However, it is crucial that these legacy capabilities remain available in an automated and integrated IT landscape. Unglamorous as it may sound, being able to address, process, interpret, and transfer files under the demands and challenges of an automated IT environment is therefore still a must-have criterion for an enterprise automation solution.

Support of multiple different file transfer protocols

One file transfer protocol does not equal another (see a list here[1]). The following are the most common protocols still in use, all of which must be supported by the selected automation solution (a minimal connection sketch follows the list):

  • FTP: This is the standard protocol definition for transferring files in an insecure manner. When operating behind a firewall, using FTP for transporting files is convenient and needs to be an integrated feature of your enterprise automation solution.
  • FTPS: adds support for “Transport Layer Security” (TLS) and the “Secure Socket Layer” (SSL) encryption protocols to FTP. Many enterprises rely on this type of protocol for security reasons, especially when it comes to moving beyond the network.
  • FTPES: This differs from FTPS only in the timing of the encryption and the transfer of login information. It adds an additional security control to FTPS-based file transfers.
  • SFTP: has been added to the Secure Shell protocol (SSH) by the Internet Engineering Task Force (IETF)[2] in order to allow for access, transfer and management of files through any reliable (SSH) data stream.
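As a minimal sketch of what such connections look like in practice, the following Python snippet opens an FTPS (explicit TLS, i.e. FTPES-style) and an SFTP connection. It assumes Python's standard ftplib and the third-party paramiko library; host names, credentials and paths are placeholders, and this is purely illustrative, not part of any specific automation product.

```python
# Minimal sketch: opening FTPS and SFTP connections from Python.
# Host names, credentials and paths are placeholders, not real systems.
from ftplib import FTP_TLS   # FTPS with explicit TLS (FTPES-style)
import paramiko              # third-party library for SSH/SFTP

def ftps_upload(host: str, user: str, password: str, local: str, remote: str) -> None:
    """Upload a file over FTP with explicit TLS."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)
    ftps.prot_p()                      # switch the data channel to TLS as well
    with open(local, "rb") as fh:
        ftps.storbinary(f"STOR {remote}", fh)
    ftps.quit()

def sftp_upload(host: str, user: str, password: str, local: str, remote: str) -> None:
    """Upload a file over SFTP (file transfer on top of an SSH session)."""
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put(local, remote)
    sftp.close()
    transport.close()
```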

In addition to supporting all of the above protocols, an automation solution can enhance file transfer integration in automation scenarios by offering direct endpoint-to-endpoint file transfer based on a proprietary protocol. Providing this protocol removes the need for a central management engine implementation solely to transport files from one system to another.

Standard FT protocols

The most convenient way to connect FTP-capable remote systems via the protocols listed above is through a graphical UI that allows defining the transfer much the way it is done with standard FTP clients. The actual transfer itself is normally executed by a dedicated adapter and only initiated by the centrally managed and executed automation workflows. To comply with security requirements that limit login information to top-level administrators, sensitive information such as usernames, passwords, or certificates is stored in separate objects. At the same time, file transfers are integrated into automation flows by specialists who do not have access to the detailed login information but can still make use of the prepared security objects.
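To picture how such a separation between security objects and transfer definitions might look, here is a generic Python sketch. The class and field names are invented for illustration only and do not represent any particular product's object model.

```python
# Generic illustration of separating login objects from transfer definitions.
# Class and field names are made up for this sketch.
from dataclasses import dataclass

@dataclass
class CredentialObject:
    """Maintained by top-level administrators only."""
    name: str
    username: str
    password: str          # or a certificate reference

@dataclass
class FileTransferTask:
    """Designed by automation specialists, who reference credentials by name only."""
    source_host: str
    target_host: str
    remote_path: str
    credential_ref: str    # points at a CredentialObject, never at raw secrets

# An admin-managed vault of credential objects:
vault = {"erp-ftp-login": CredentialObject("erp-ftp-login", "svc_erp", "********")}

# A workflow designer builds the transfer without ever seeing the password:
task = FileTransferTask("erp01", "dwh01", "/out/invoices.csv", credential_ref="erp-ftp-login")
resolved = vault[task.credential_ref]   # resolved only at execution time
```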

Endpoint-to-endpoint File Transfer

In addition to supporting the standard FTP protocols, the automation solution’s ecosystem should offer a direct secure file transfer between two endpoints within the IT landscape.

In this case the automation solution initiates a direct, encrypted connection between the affected endpoints – normally using a proprietary internal protocol. This mechanism eliminates the need for additional tools and increases the performance of file transfers significantly.

Other features the solution should support include (two of them are sketched in code after the list):

  • Multiple code translation (e.g. from ASCII to EBCDIC)
  • Data compression
  • Wildcard transfer
  • Regular checkpoint log (in order to re-issue aborted transfers from the last checkpoint recorded)
  • Checksum verification based on hashing algorithms (e.g. MD5)
  • Industry standard transfer encryption (e.g. AES-128, AES-192 or AES-256)
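As a small illustration of two of these features – data compression and checksum verification – here is a minimal Python sketch using the standard library. The choice of MD5 and the file names are examples only.

```python
# Sketch of two of the listed features: data compression before transfer and
# checksum verification based on a hashing algorithm (MD5 here as an example).
import gzip
import hashlib
import shutil

def compress(src: str, dst: str) -> None:
    """Compress a file before transfer to reduce transfer volume."""
    with open(src, "rb") as fin, gzip.open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute an MD5 digest in chunks so large files do not exhaust memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# After the transfer, sender and receiver compare digests to detect corruption:
# assert md5_of("invoices.csv.gz") == digest_reported_by_sender
```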

Process integration

Finally, the key to offering enterprise ready integrated file transfer through any protocol is to allow seamless integration into existing automation workflows while leveraging all the automation functionality without additional re-coding or re-interpretation of transferred files. This includes:

  • Using file transfer results and file content in other automation objects.
  • Including file transfer invocation, execution, and result processing in the scripting environment.
  • Using files within pre- or post-conditions of action or workflow execution, or augmenting those conditions with results from transferred files.
  • Bundling file transfers to groups of endpoints executing similar – but not necessarily identical – file transfer processes.

This allows the integration of legacy file transfers into innovative business processes without losing transparency and flexibility.
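To make the idea of conditions driven by transferred file content a bit more tangible, here is a generic Python sketch. The file format, column name and helper functions are hypothetical and merely illustrate the pattern of letting a transfer result decide whether the next workflow step runs.

```python
# Generic illustration of "process integration": the content of a transferred
# file drives whether the next workflow step runs. File and column names are
# hypothetical placeholders for whatever the automation solution provides.
import csv

def post_condition_met(result_file: str) -> bool:
    """Check a transferred result file; only rows with status 'OK' pass."""
    with open(result_file, newline="") as fh:
        return all(row["status"] == "OK" for row in csv.DictReader(fh))

def run_follow_up_workflow() -> None:
    print("All records OK - triggering the downstream workflow.")

if post_condition_met("transfer_result.csv"):
    run_follow_up_workflow()
```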


Automation and Orchestration for Innovative IT-aaS Architectures

This blog post kicks off a series of connected publications about automation and orchestration solution architecture patterns. The series commences with multiple chapters discussing key criteria for automation solutions and will subsequently continue by outlining typical patterns and requirements for "Orchestrators". Posts in this series are tagged with "Automation-Orchestration" and will ultimately together compose a whitepaper about the subject matter.

Introduction & Summary

Recent customer projects in the field of cloud computing framework architecture have repeatedly revealed one clear fact: Automation is the key to success – not only technically, so that the cloud solution is performant, scalable and highly available, but also for the business using the cloudified IT, in order to retain an advantage over the competition and stay on the leading edge of innovation.

On top of the automation building block within a cloud framework, an Orchestration solution ensures that atomic automation “black boxes” together form a platform for successful business execution.

As traditional IT landscapes take their leap into adopting (hybrid) cloud solutions for IaaS or maybe PaaS, automation and orchestration likewise have to move from job scheduling or workload automation to more sophisticated IT Ops or DevOps tasks such as:

  • Provisioning of infrastructure and applications
  • Orchestration and deployment of services
  • Data consolidation
  • Information collection and reporting
  • Systematic forecasting and planning

In a time of constrained budgets, IT must always look to manage resources as efficiently as possible. One of the ways to accomplish that goal is through use of an IT solution that automates mundane tasks, orchestrates them into larger solution blocks and eventually frees up enough IT resources to focus on driving business success.

This blog post is the first of a series of posts targeted at

  • explaining key criteria for a resilient, secure and scalable automation solution fit for the cloud
  • clearly identifying the separation between “automation” and “orchestration”
  • providing IT decision makers with a set of criteria for selecting the right solutions for their innovation needs

Together, this blog post series will comprise a complete whitepaper on “Automation and Orchestration for Innovative IT-aaS Architectures”, supporting every IT organization in its effort to succeed with the move to (hybrid) cloud adoption.

The first section of the paper will list key criteria for automation solutions and explain their relevance for cloud frameworks as well as innovative IT landscapes in general.

The second section deals with Orchestration. It will differentiate system orchestration from service orchestration, explain key features and provide decision support for choosing an appropriate solution.

Target audience

Who should continue reading this blog series:

  • Technical decision makers
  • Cloud and solution architects in the field of innovative IT environments
  • IT-oriented pre- and post-sales consultants

If you consider yourself to belong to one of these groups, subscribe to the Smile-IT blog in order to get notified right away whenever a new chapter of this blog series and whitepaper gets published.

Finally, to conclude the introduction, let me give you the main findings that this paper will discuss in detail in the upcoming chapters:

Key findings

  • Traditional “old style” integration capabilities – such as file transfer, object orientation or audit readiness – remain key criteria even for a cloud-ready automation platform.
  • In an era where the cloud has become a commodity, just like the internet itself, service-centered IT landscapes demand maximum scalability and adaptability as well as multi-tenancy in order to create a service-oriented ecosystem for the advancement of the businesses using them.
  • Security, maximum availability, and centralized management and control are fundamental necessities for transforming an IT environment into an integrated service center supporting business expansion, transformation, and growth.
  • Service orchestration might be the ultimate goal to achieve for an IT landscape, but system orchestration is a first step towards creating an abstraction layer between basic IT systems and business-oriented IT-services.

So, with these findings in mind, let us start diving into the key capabilities for a cloud- and innovation-ready automation solution.

 


Skype-for-Business (Lync) on 1und1

Some time ago I promised to add a post about how to configure all the Lync DNS records on 1und1 (in case you are hosting your domain there – the information might, however, apply to other DNS providers as well).

Well … as time flies, 1und1 has obviously improved their domain management portal, so all the “fancy stuff” we originally did to make it work is void now, and the best and pretty straightforward explanation of how to do it right can be found in the Office365 support pages.

Be happy and enjoy (I am and did – and it works perfectly for us)

 


Autodiscover!

OR: How to successfully migrate from POP to an Office365 mailbox, when your hoster doesn’t support you!

Yes, @katharinakanns was right when she recently said I'd be bothering with Office365 to get all our IT stuff migrated onto it. Sounds ridiculous, maybe, but it's 2 domains, 2 mailboxes, a bunch of subdomains, aliases used all around the net, etc. etc. etc. … and we'd like to merge all of that into one place.

Before you read on: after a long while, this one is something more down-to-earth again, so get ready for some bits and bytes 🙂

What’s this about?

These days I started to set up our Office365 tenant to serve both our single-person businesses as well as become the place for joint collaboration (and maybe more later on). One thing that bothered me a bit beyond normal was OneDrive – but that's a different story to come … Another pretty interesting process was the domain migration. And even though umpteen blog posts already describe the way to take from different angles, we ran into one bit that couldn't be solved with just a click. I'll share a few straightforward steps for domain migration here, but I'll also share some hints beyond that.

1. Know your hoster/provider

The domain you want to migrate into Office365 will most probably be managed by some ISP (e.g. “1und1.de” or any other hoster; one whom you trust, maybe). From our experience, I'd suggest you get in touch with the support hotline of your hoster first and find out

  • (a) whether editing DNS records for your domain is possible for you yourself (e.g. by some web interface) and (!) to which extent
  • (b) how accurately the hotline reacts in case of problems
  • (c) whether they can help in case of any failure over the weekend (one would want to have a business mailbox up and running on Monday morning, I guess)

I had to migrate 2 domains, one of which was with a hoster that did not allow me to edit DNS myself but reacted swiftly to support requests and executed them just perfectly. The other one let me edit the DNS myself but only allowed TXT and MX records (no CNAME records – at least not for the primary domain). Or to be precise: the self-service web interface would let me do that, but clearly stated that any existing records for this domain would become invalid through this step – and I wasn't too sure whether this might get us into trouble with our business website …

(Screenshot: the 1und1.de warning about deactivating existing records)

 

Note: The second one was “1und1.de”, and they do not offer any possibility of doing anything in terms of DNS beyond what is provided for self-service. I tried really hard with their support guys. No(!) way(!).

2. How migration works when your ISP cooperates

To begin with, it would of course be possible to simply move DNS management from the ISP to Office365. In that case, all the ISP would have to do is change the addresses of the name servers managing the respective domain. We didn't want that for several reasons, so we went for the domain migration option, which is actually pretty straightforward.

The Office365 domain management admin console is totally self-explanatory in this, and there are umpteen educational how-to posts. The keyword is – surprise(!) – “Domains”, and you just follow steps 1-2-3 as suggested.

(Screenshot: where you start – the Office365 admin console, the place to start off into domain migration)

One can either start at the “Email address” section here (if there's not yet any custom domain managed within the tenant) or at “Domains” further below:

  1. Office365 wants to know whether the domain is yours. Therefore, in the first step Office365 shows you a TXT DNS record, which you have to forward to your hoster to be entered as part of “your” DNS. If you're able to enter that yourself, this step is accomplished in no time. Otherwise it simply depends on the response time of the support line. BTW: DNS propagation in general may take up to 72 hours, as we know – however, in reality I didn't experience any delay after having received the confirmation that the TXT record had been entered. I could proceed to step 2 instantly.
  2. With step (2) Office365 changes the name of any user that you want to make part of the migrated domain. Essentially that's a no-brainer, but an Office365 user can currently only send eMails identified with exactly this username. Receiving works via multiple aliases, which can be configured separately in the user management console; but sending always binds to the username (there are ways around this as well – but that's again a different story). Hence, it is worth some consideration which users you select in this step.
  3. Proceeding to the next step means stepping into the crucial part: after this change is completed, your eMail, Lync and – if chosen – website URL will be redirected to Office365. Admittedly, in both cases I only chose “eMail and Lync” for migration, which means the website remained with the ISP – for now … As the penultimate step, after having chosen the services you want to switch over, Office365 gives you a list of records that need to be entered into your domain's DNS.

Let's have a brief look at those DNS records, as they are the ones that eventually bring your migration to life (a quick verification sketch follows the list):

  • MX records: Normally this is one record that identifies where eMails for the domain in question shall be routed (to: name@yourdomain.tld). No rocket science here, and getting that into your DNS shouldn't be a bummer, really.
  • CNAME records: The most important of these is the “autodiscover” record; I'd argue it to be the “most compulsory” one. Not having “autodiscover” set in the DNS of your domain means that eMail clients will not be able to discover the server for the respective user automatically, i.e. users will “pull their hair out” trying to configure their mail clients for their Office365 Exchange account. In all honesty, I was not able to find a way to figure out the correct mail server string in Outlook for our users, as it contains the mailbox ID (a GUID@domainname.tld; if anyone out there knows a way, please drop your solution as a comment). So, without the “autodiscover” record you'll be pretty lost, I think – at least with mobiles and stuff … The other CNAME records are for Lync and the Office365 authentication services. Here‘s a pretty good TechNet article listing them all.
  • The SPF TXT record helps prevent your domain from being used for spam mailing.
  • And finally, 2 SRV records serve the Lync information flow and enable SIP federation.
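If you want to verify that the records have actually propagated, a few lines of Python are enough. This is just a sketch; it assumes the third-party dnspython package ("pip install dnspython"), and "yourdomain.tld" is a placeholder for the migrated domain.

```python
# Quick sanity check of the Office365 DNS records after the ISP has entered
# them, using the third-party dnspython package.
import dns.resolver

domain = "yourdomain.tld"   # placeholder for the migrated domain

for rdata in dns.resolver.resolve(domain, "MX"):
    print("MX   :", rdata.exchange)           # should point at Office365

for rdata in dns.resolver.resolve(f"autodiscover.{domain}", "CNAME"):
    print("CNAME:", rdata.target)             # should be autodiscover.outlook.com.

for rdata in dns.resolver.resolve(domain, "TXT"):
    print("TXT  :", rdata.to_text())          # should include the SPF record
```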

[update] Here are some hints on how we got Lync to work for our accounts; but for eMail, of all the records above the MX alone would be fully sufficient. I'd just once more emphasize “autodiscover”, as this caused us some headache, because …

3. What do you do, if your ISP does not add “autodiscover”?

As explained above, one is in bad shape if an ISP refuses to add the “autodiscover” CNAME record demanded by Office365 for a custom domain. In the case of “1und1” this was exactly what got us into trouble. However, there's a pretty simple solution to it; but to begin with, here are some things that don't work (i.e. you don't need to dig into them):

  • Enter CNAME records into the respective PC's hosts file: normally a hosts file can be used locally to replace DNS lookups – but only for resolving names to IPs, not for CNAMEs.
  • Install a local DNS server: Might work, but seemed like some more work. I didn’t want to dive into this for one little DNS record.
  • Find out the mailbox server for manual configuration: Well – as said above: I didn’t succeed in due course.

Finally @katharinakanns found the – utterly simple – solution by just asking the world for “autodiscover 1und1“. So here’s what probably works with any petulant ISP:

  • create a subdomain named “autodiscover.yourdomain.tld” with your ISP (normally, every ISP allows creation of unlimited subdomains)
  • create a CNAME record for this new subdomain and enter “autodiscover.outlook.com” as its value/address portion
(Screenshot: the CNAME config screen again – and now we're fine with checking the box at the bottom)

Done. That's it. Mail clients discovered the correct mail server automatically, and configuration instantly became a matter of seconds 🙂

[update] 1und1 has updated their domain dashboard, so configuration is easier now – find hints here!

 

{feature image from Ken Stone's site http://masterstrack.com/ – I hope he doesn't mind me using it here}

 


The “Next Big Thing” series wrap-up: How to rule them all?

What remains for the 8th and last issue of the “Next Big Thing” blog post series? To “rule them all” (all the forces, disruptive challenges and game-changing innovations) and keep services connected, operating, integrated, … to deliver value to the business.

A while ago, I came upon Jonathan Murray's concept of the Composable Enterprise – a paradigm which essentially preaches fully decoupled infrastructure and applications as services for company IT. Whether the Composable Enterprise is an entirely new approach or just a pinpointed translation of what is essential for businesses mastering digital transformation challenges makes no difference.

The importance lies in the core concepts that Jonathan's paradigm preaches. These are to

  • decouple the infrastructure
  • make data a service
  • decompose applications
  • and automate everything

Decouple the Infrastructure.

Rewind to my own application development and delivery times during the 1990s and the 2000s: when we were ready to launch a new business application we would – as part of the rollout process – inform IT of the resources (servers, databases, connections, interface configurations) needed to run the thing. Today, large IT ecosystems sometimes still function that way, making them a slow and heavyweight inhibitor of business agility. The change to incorporate here is twofold: on the one hand, infrastructure owners must understand that they need to deliver at the scale, time and demand of their business customers (which includes more uniform, more agile and – in terms of sourcing – more flexible delivery mechanisms). On the other hand, application architects need to understand that it is no longer their architecture that defines IT needs; rather, their architecture needs to adapt to and adopt agile IT infrastructure resources from wherever they may be sourced. By following that pattern, CIOs will enable their IT landscapes to leverage more cloud-like infrastructure sourcing on-premise (thereby enabling private clouds) and will also become capable of ubiquitously using resources following hybrid sourcing models.

Make Data a Service.

This isn't really about BigData-like services. It might be, in the long run, but essentially it is about where the properties and information of IT – of applications and services – really reside. Rewind again, this time only by 1 or 2 years. The second-to-last delivery framework that my team of gorgeous cloud aficionados and I created was still built around a central source of information – essentially a master data database. This was simply the logical framework architecture approach back then. Even only a few months ago – when admittedly my then team (another awesome one) and I already knew that information needs to lie within the service – it was still less complex (hence: quicker) to construct our framework around such a central source of (service) wisdom. What the Composable Enterprise, though, rightly preaches is a complete shift of where information resides. Every single service which offers its capabilities to the IT world around it needs to provide a well-defined, easy-to-consume, transparently reachable interface to query and store any information relevant to the consumption of the service. Applications or other services using that service simply engage via that interface – not only to leverage the service's capabilities but even more to store and retrieve data and information relevant to the service and the interaction with it. And there is no central database. In essence there is no database at all. There is no need for any. When services inherently know what they manage, need and provide, all db-centric architecture for the sole benefit of the db as such becomes void.
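To make the "data as a service" idea a bit more concrete, here is a minimal sketch of a service that owns its data and exposes it only through its own interface – with no shared central database. It assumes the third-party Flask package, and the endpoint and field names are invented purely for illustration.

```python
# Minimal sketch of "make data a service": the service owns its data and
# exposes it only through its own interface. Uses the third-party Flask
# package; endpoint and field names are made up for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

# The service's own, private state - reachable only through the API below.
_subscriptions = {"42": {"customer": "ACME", "plan": "gold"}}

@app.route("/subscriptions/<sub_id>", methods=["GET"])
def read_subscription(sub_id):
    """Consumers query service data here instead of reading a shared database."""
    record = _subscriptions.get(sub_id)
    return (jsonify(record), 200) if record else (jsonify(error="not found"), 404)

@app.route("/subscriptions/<sub_id>", methods=["PUT"])
def write_subscription(sub_id):
    """Consumers store data relevant to the service through the same interface."""
    _subscriptions[sub_id] = request.get_json()
    return jsonify(_subscriptions[sub_id]), 200

if __name__ == "__main__":
    app.run(port=5000)
```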

Decompose Applications.

The aforementioned leads one way into the decomposition pattern. More important, however, is to think more thoroughly about what a single business-related activity – a business process – really needs in terms of application support, and in turn what the applications providing this support to the business precisely need to be capable of. Decomposing applications means identifying useful service entities which follow the above patterns and offer certain functionality in an atomic way via well-defined interfaces (APIs) to the outside world, thereby creating an application landscape which delivers at scale, time, demand, … just by being composed through service orchestration in the right – the needed – way. This is the end of huge monolithic ERP systems, which claim to offer everything a business needs (you just have to customize them right). This is the beginning of lightweight services which rapidly adapt to changing underlying infrastructures and can be consumed not only for the benefit of the business owning them but – through orchestration – form whole new business process support systems for cross-company integration along new digitalized business models.

Automate Everything.

So, eventually we have arrived at the heart of how to breathe life into an IT which supports businesses in their digital transformation challenge.

Let me walk you through one final example emphasizing the importance of facing all these disruptive challenges openly: an Austrian bank of high reputation (and respectable success in the market) gave a talk at the Pioneers Festival about how they discovered that they are actually not a good bank anymore – how they discovered that, within a few years, they would no longer be able to live up to market challenges and customers' demands. What they discovered was simply that within some years they would lose customers just because of their inability to offer a user experience integrated with the mobile and social demands of today's generations. What they did in turn was to found a development hub within their IT unit, solely focussing on creating a new app-based ecosystem around their offerings in order to deliver an innovative, modern, digital experience to their account holders.

Some time before the Pioneers, I had received a text that “my” bank (yes, I am one of their customers) now offers a currency exchange app through which I can simply order the amount of currency needed and receive a confirmation once it's ready to be handed to me at the nearest branch office. And some days after the Pioneers I received an eMail that a new “virtual bank servant” would be ready as an app on the net to serve all my account-related needs. Needless to say, a few moments later I was in, and the experience was just perfect – even though they follow an “early validation” policy with their new developments, accepting possible errors and flaws for the benefit of reduced time to market and more accurate customer feedback.

Now, for a moment imagine just a few of the important patterns behind this approach:

  • System maintenance and keeping-the-lights-on IT management
  • Flexible scaling of infrastructures
  • Core banking applications and services delivering the relevant information to the customer facing apps
  • App deployment on a regular – maybe a daily – basis
  • Integration of third-party service information
  • Data and information collection and aggregation for the benefit of enhanced customer behaviour insight
  • Provision of information to social platforms (to influence customer decisions)
  • Monitoring and dashboards (customer-facing as well as internally to business and IT leaders)
  • Risk mitigation
  • … (I could probably go on for hours)

All of the above capabilities can – and shall – be automated to a great extent. And this is precisely what the “automate everything” pattern is about.

Conclusion

There is a huge business shift going on. Software, back in the 80s and 90s, was a driver of growth, had its downturn in and after the .com age, and is now entering an era of being ubiquitously demanded.

Through the innovative possibilities of combining existing mobile, social and data technologies, through the merging of physical and digital worlds, and through the tremendously rapid invention of new thing-based daily-life support, businesses of all kinds will face the need for software – even if they have not felt that need so far.

The Composable Enterprise – or whatever one wants to call a paradigm of loosely coupled services being orchestrated through well-defined, transparently consumable interfaces – is a way for businesses to accommodate this challenge more rapidly. Automating daily routine – such as the aforementioned tasks – will be key for enterprises which want to stay on the edge of innovation within these fast-changing times.

Most important, though, is to stay focussed within the blurring worlds of things, humans and businesses – to keep the focus on innovation not for the benefit of innovation as such but for the benefit of growing the business behind it.

Innovation Architects will be the business angels of tomorrow – navigating their stakeholders through an ongoing revolution and supporting or driving the right decisions for implementing and orchestrating services in a business-focussed way.

 

{the feature image of this last “The Next Big Thing” series post shows a design by New Jersey and New York-based architects and designers Patricia Sabater, Christopher Booth and Aditya Chauan: The Sky Cloud Skyscraper – found on evolo.us/architecture}


Gartner ITxpo: IT in the midst of the greatest conceivable change

From Monday the 9th to Thursday the 13th of November, this year's Gartner Symposium with its ITxpo took place in Barcelona. If you believe the emphases of the Gartner analysts themselves, this is one of the most important trend conferences of the year – and indeed, the assessments of the IT research company are thoroughly sound.

In the keynote – how could it be otherwise – the “digital economy” (in itself a somewhat clunky rendering of the term “digital business”) naturally takes centre stage, and Peter Sondergaard, Senior Vice President and Head of Research, offers 3 essential statements about the future:

  • IT and business applications have to be both rock-solid and very flexible in order to keep pace with the ever faster changing requirements for integrations, relationships, communication paths, etc. that come with the “Internet of Things” and the seamless linking of people, businesses and things.
  • Every company is a technology start-up: with the massive influx of software into nearly every business model (Gartner assumes IT spending of 1.3 billion in the EMEA region in 2015), the starting position changes completely for technology and non-technology companies alike.
  • In a digitalized economy, IT organizations have to fundamentally change their approach to security questions and risk management: in a world in which everyone can be connected to everything and everything to everyone, in which these connections can redefine themselves at any time and in which substantially faster innovation lets new connections emerge at any moment, there is no time left to minimize risk scenarios up front – indeed, they cannot even be fully delimited in advance. On the contrary, risks have to be consciously accepted and proactively managed.

The press release for Peter Sondergaard's keynote can be read in full in the Gartner Newsroom.

 


Innovative power is not the problem!

And so this is my first Austrian (that is: German-language) blog post. Not that easy, I'm realizing just now, when you're used to writing in English … And why all this? Because this method – “Arse First”, Greg Ferro's approach to blogging – simply still works.

Which leaves the simple question:

What was it this time

… that prompted me to write something?

Last week I attended the Pioneers Festival in the Vienna Hofburg: a manifestation of innovative power in IT, a pointer in the direction IT is heading – not just in this country, the region or Europe as such, but simply everywhere. A magnificent event that even motivates someone like me, who is not exactly a born founder (by basic disposition, that is) – simply through the “spirit” that blew through the venerable halls of the Hofburg for two days …

And then, on the evening of the second day, I found time once again to attend one of the regular – and thoroughly good – events of the APA EBC (eBusiness Community): an impulse talk with panel discussion on “The new machine age: how automation is changing the world of work”. Peter Brandl (evolaris) gave the talk, which essentially dealt with Industry 4.0 and IoT (the man had studied Gartner thoroughly and summarized the most important developments well and wittily). Representatives of IBM, Kapsch and TU Wien then discussed with him the burning questions around the event's topic, the hottest of which apparently seemed to be the possible loss of jobs through the imminent IT developments of the near future (care for a summary?).

While Andreas Kugi (TU Wien) brought up several times that the innovative and disruptive developments of the coming years above all require a reformed kind of education, the other panelists visibly struggled to move beyond platitudes. Why? Because one topic fell completely under the table in the entire discussion – including the contributions from the audience (of which, for reasons of time, only 3 could be admitted at all): the influence of legislation on the further development of IT in our country and the other European countries!

In the end, the situation in our latitudes is relatively simple: there are

3 simple points

for the failure of the digital age (“digital business”, in modern parlance) in our lands:

  1. While elsewhere it has long been beyond discussion that the connection and seamless, technology-driven communication of people, businesses and things will find its way into our daily (no: not just our working) lives, the discussion quoted above was, for long stretches, still shaped by the question of the extent to which these disruptive changes will affect us. Fully and completely – that is relatively easy to predict.
  2. At the Pioneers Festival – likewise already mentioned above – the American venture capitalist Erik Bovee (http://speedinvest.com/ – Vienna, Silicon Valley) said, literally: “Venture capitalists hate Austrian law and German meeting minutes.” What was meant as a witty remark in a one-hour presentation of start-up tips shows one thing very clearly: start-ups and young entrepreneurs who know how to realize their ideas above all with the new possibilities of this era's IT changes settle in countries that lend them a supporting hand rather than in those that know how to stop the development and ascent of a brilliant idea through their legislation or regulatory power.
  3. Another question given far too much weight in the aforementioned APA panel discussion was that of privacy. Verbatim: “Of course it is necessary, as all these Industry 4.0 and IoT technologies take hold, to become clear about the handling of sensitive data and to take suitable measures.” (A level, by the way, that was already climbed again and again 6 years ago in the cloud discussion then beginning in this country – presumably to wriggle around the concrete cloud computing facts – see also this blog post on the subject.) The question of opportunities is thus obviously only considered, if at all, after careful examination, answering and regulation of possible risks.

I believe we should be clear that the further development of everything that has entered our everyday lives on the basis of cloud computing – mobile availability, the use of social networks for all sorts of things, real-time data analysis including the corresponding conclusions, the linking of information about us, our behaviour and the things we interact with, … – cannot be stopped. We should also be clear that this development brings with it an enormous number of opportunities to enrich our lives in every respect – given a correspondingly wise and conscious approach to it. And we should be clear that somewhere out there an almost unbelievable number of intelligent, creative people are running around (over 3,000 at the Pioneers Festival alone) who pick up this development with new ideas every day, integrate it into solutions and drive it forward.

And if we hide behind regulations and laws anchored for our alleged protection – well, then these people will simply go elsewhere to realize their ideas. Indeed: the “Internet of Things”, intelligent machines and Industry 4.0 will merely change jobs, not destroy them – on this point I fully agree with the panelists of the APA EBC event. Jobs in our country are destroyed by not giving the possibilities arising from the further development of technologies and innovative approaches enough space, room and legal backing.

In Austria, innovative power has never been the actual problem! The problem has mostly been that it could only really be put to beneficial use in other countries. It is about time to change that. Urgently!

 

Update: link to Peter Brandl's keynote


I wasn’t cheating (It’s still the End of the Cloud)

I swear, I didn't know any of what I'm going to share with you in this post when I wrote “The End of the Cloud (as we know it)“. If I had, I probably wouldn't have written it.

Anyway, this is about Jonathan Murray, CTO of Warner Music Group (@Adamalthus), defining something that isn't even mentioned yet by Gartner and the like (google it for proof; even bing it; the result remains the same). And – hey! – he did that back in April 2013.

What he's essentially saying is that in order to enable businesses to compete in an ever more demanding, more dynamic, more agile era of interacting services and capabilities, in an era where things are connected to the net, to systems, to businesses, to humans, in an era where everything is going to be digitalized, IT needs to disruptively change not only the way it is delivered to businesses but actually the way it is built.

What Jonathan calls for (and, according to his own account, is realizing within Warner) is “IT Factories”. No silos anymore. No monolithic application engineering and delivery. A service-oriented component architecture for everything IT needs to deliver and connect with. And the term “Cloud” isn't even mentioned, because we've moved beyond even discussing it. The models are clear. The need to adopt them for the benefit of a new – an agile – IT is now!

What Jonathan calls for is “The Composable Enterprise” – essentially consisting of 4 core patterns:

  • Decouple the Infrastructure
  • Make Data a Service (awesome, disruptive quote from Jonathan’s September 2013 “CloudFoundry Conference” talk: “Stored procedures need to be strangled at birth!”)
  • Decompose Applications
  • and finally, of course: Automate Everything

Read the concept in his blog post on adamalthus.com.

And here are two recordings of talks given by Jonathan – both really worth watching:

And then, let’s go build it …!

 


Security is none of my business!

… says the home IT expert. “I'm just not experienced enough to bother with this kind of nightmare”, he says – until malware kicks in …

I've gone through a bit of sweat and threat over the past few hours, as I wasn't really able to securely (!) manage my NAS while away from home and at the same time reading about the SynoLocker ransomware attack.

This is neither particularly pleasant nor comforting in any way, but luckily it seems that I had at least done a few things right when configuring my gear (my data has remained untouched so far). So I thought I'd share some thoughts:

1 password – 1 service

The simplest and surely one of the silliest reasons for being attacked is using some kind of default password (like “12345” or the username) or using one password for many services. This obviously wasn't the particular reason SynoLocker was able to attack, but it still gave comfort earlier today, when it was unclear whether the attack was being run via a password vulnerability.

I have to admit I gained some confidence from using an easy-to-remember algorithm which leads to a different password for whatever new service I consume.
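The post doesn't reveal the actual algorithm, so purely as an assumption, here is one way such a per-service scheme could look in Python: derive each password from a memorized master secret and the service name, so that no two services share the same password.

```python
# One possible (purely illustrative) way to derive a different password per
# service from a single memorized master secret - NOT the author's actual
# algorithm, which is not disclosed in the post.
import base64
import hashlib
import hmac

def password_for(master_secret: str, service_name: str, length: int = 16) -> str:
    """Derive a service-specific password: same inputs always give the same output."""
    digest = hmac.new(master_secret.encode(), service_name.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).decode()[:length]

print(password_for("correct horse battery staple", "nas.example.local"))
print(password_for("correct horse battery staple", "webmail.example.com"))
```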

No standard ports

For some reason I have this weird habit of changing ports when opening up a new service from the private LAN to the Internet. I never really go with what is suggested, so when configuring my NAS for the first time, I chose to deviate from the given ports 5000 and 5001 for the control panel. I cannot be sure, but possibly this action saved me from SynoLocker's attack.

So I'd better keep the habit: whenever I have the possibility, I change any port exposed to that dangerous world outside.

Yes, I appreciate that for certain services one cannot do that, as the service consumers – like a client application – rely on fixed, predefined ports. Let's hope, then, that the data accessible through such a service is just not that important and that you have a backup of it.

Firewall rules are OK

Yes. They are. Especially the ones allowing you to manage your gear from within the local private network. I have a rule allowing access from the private subnet

Ports: All - Protocol: All - Source IP: <my subnet IP, 16 bit + subnet mask> - Action: Allow

while, under normal circumstances, everything not matching an existing rule is allowed as well. Now, when I need to close everything down, I just enter the panel, choose to “deny all that doesn't match a rule” and – BOOM – the doors are shut, while the explicit subnet rule keeps local management working. That leaves me without management control from the outside. But – hey! – that's minor compared to encrypted data held to ransom. And sooner rather than later I'll make my way into the private LAN anyway to fix things.

Let the service dictate

My NAS has quite a nice feature – which probably every NAS has anyway: when everything is closed down for access from the outside, it tells you what to open for the particular service to work. Hence, I always close the firewall completely when installing or starting a new service. With that – through the respective warning popup – I instantly learn what needs to be opened for this service to operate. There are just a few things to keep in mind:

  • No, you do NOT want to choose the “suppress firewall messages in future” option
  • No, you do NOT want to click OK without investigating the list
  • Yes, you DO want to spend time to figure out how to – here we are again! – change the default ports without disrupting the respective service

And so – to conclude: Here’s what put me under sweat and threat, eventually:

Update!

I didn't. It turned out that I was running my gear with exactly the version vulnerable to SynoLocker, without ever having even thought of checking for updates – and without having ticked the option to install critical updates automatically in the background.

“Why”, you ask? Honestly, because I tend to think that malfunctioning updates can break stuff as well. The only question is: what is worse – a malfunctioning update or a malware injection?

Well – the option is ticked now, the measures above saved me from a nightmare of data loss or at least financial implications, and my gear remains closed to the public until further notice. From the vendor.

 


Arse First

@etherealmind says: “Get angry about something. That’s where you find your inspiration for a blog post.” (more about @etherealmind’s book here)

I’ll pick that, turn it a bit and rather state: “Get LoL about something …”.

… that’s what I said in the primary edition of the Smile-IT blog; but as there’s more to discover, more to laugh about, more to think about, … and more to write about – here’s the second edition: Another collection of smiley thoughts about Life, the Universe and Everything …
