
Tag Archives: Security

My Password

Within a few hours – two hacking incidents among my friends. In both cases – presumably! – a password that was too simple.

In both cases I helped the people affected with a tip that seems anything but groundbreaking to me. In both cases they were very grateful for it. So I am writing the tip down now, so that in future I only have to send a link 😉

A secure password without a password safe

If you are someone who uses a tool to manage the many different passwords we all need, then this article ends here for you. For reasons I cannot quite explain, I have never been a fan of such tools (no judgement implied).

My system works without a password safe. It is based on the principle that passwords do not have to be complicated. They only have to

  • be sufficiently varied so that they cannot be guessed
  • be short enough to be easy to type
  • and unfortunately – thanks to the silly password rules of various platforms – always contain a few digits or special characters so that they get accepted

This leads to two simple options:

  1. Pick your favourite sentence: a statement that inspires you, that you would never forget, that holds something special for you (limit it to 6-8 words if possible). Write it down in whatever spelling feels natural to you (you will throw the note away later ;)). Now take the first letters of the sentence’s words, keeping their original case (as a rule you will now have a mix of upper- and lower-case letters).
  2. Pick your favourite word: simply a word you would always remember all by itself.

With either variant you end up with a jumble of letters that nobody but you knows.

And now the digits

Somewhere in that jumble of letters you now insert a combination of digits that you can remember just as easily as the letters themselves (e.g. the date of birth of your pet anteater). Inserting it at the beginning or the end requires the least memorization. The middle works well if the word or sentence has a natural, easy-to-remember breaking point (e.g. “CISV inspires action -break- for a more just and peaceful World”).

Special characters

Unfortunately, some systems require a password to also contain special characters (e.g. ! $%&[?). Had computer users followed instructions like this one from the very beginning, we would be spared that, because programmers would know that nobody uses e.g. “abcdefg” or “qweasd” as a password. Since such creations still exist, systems sadly have to keep demanding a little extra security. So if you are forced to use a special character in your password, simply replace e.g. every occurrence of “1” with “!”. Done.

Variety

Up to this point our password is hard to guess; unfortunately, though, we are still using one and the same password for every system that needs one.

We can change that (thanks to @katharinakanns, to whom I owe this part of the tip): every platform (e.g. website, online shop, customer portal, e-mail system, …) has an address, normally an internet address or URL (e.g. www.amazon.de). We pick one letter out of that address:

  • the first one (easy)
  • the last one (not exactly rocket science either)
  • the 4th one (because we have 4 children, for example)
  • or any other one

In the example above that would be the “z” (the 4th letter) – we drop the “www.” (upper or lower case is your call – just, please: stick with it!). We insert that letter (two are of course fine as well) at a suitable position – and your unique password is done!

Example

Let’s assume we are absolutely enthusiastic about the children’s and youth organisation “CISV” (if you want to know what that is, there is some information here: http://www.cisv.org). That would result in the following example password:

  • Guiding sentence: Building global Friendship
  • Letters: BgF
  • Digits – the year CISV was founded: 1951
  • Letter from the respective web address: the last one
  • Position in the password: at the beginning

The password for AMAZON.DE would therefore be: n1951BgF – or, with the special-character substitution applied, n!95!BgF
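
For the technically inclined, here is a minimal sketch of the scheme in Python. It hard-codes the rules of the example above (last letter of the site name, digits and phrase letters after it, every “1” swapped for “!”) – obviously you would pick your own, secret variant:

```python
def site_password(phrase: str, digits: str, url: str) -> str:
    """Derive a per-site password from a memorable phrase, a memorable
    number and one letter picked from the site's address."""
    letters = "".join(word[0] for word in phrase.split())   # "Building global Friendship" -> "BgF"
    host = url.lower()
    if host.startswith("www."):
        host = host[len("www."):]                           # drop the leading "www."
    site_letter = host.split(".")[0][-1]                    # rule used here: last letter of the site name
    password = site_letter + digits + letters               # position used here: site letter at the beginning
    return password.replace("1", "!")                       # special-character rule: every "1" becomes "!"

print(site_password("Building global Friendship", "1951", "www.amazon.de"))   # -> n!95!BgF
```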

And now show me the bot that is supposed to figure that one out …

You, on the other hand, will probably remember it forever, thanks to the personal, emotional connection. Even without a tool …

 

{feature image courtesy of IDR-Welle}


Vicious Circle into the Past

We are on the edge of what businessinsider.com recently called an exploding era: the IoT era. An interesting infographic tells us stunning figures about a bright future (at least when it comes to investment and sales; see the full picture further below or in the article).

The infographic in fact stresses the usual numbers (billions of devices, trillions of dollars of ROI) and draws the following simple explanation of the ecosystem:


A simple explanation of IoT and BigData Analysis

Devices receive requests to send data; in return they send data, and that data gets analyzed. Period.

Of course, this falls short of any system integration or business strategy aspect of the IoT evolution. But there’s more of a problem with this (and other similar) views of IoT. In order to understand that, let us have a bullet-point look at the mentioned domains and their relation to IoT (second part of the graphic; I am intentionally omitting all numbers):

  • Manufacturing: use of smart sensors increases
  • Transportation: connected cars on the advance
  • Defense: more drones used
  • Agriculture: more soil sensors for measurements
  • Infrastructure, City: spending on IoT systems increases
  • Retail: more beacons used
  • Logistics: tracking chips usage increases
  • Banking: more teller-assist ATMs
  • Mining: IoT systems increase on extraction sites
  • Insurance (the worst assessment): IoT systems will disrupt insurance (surprise me!)
  • Home: more homes will be connected to the internet
  • Food Services: majority of IoT systems will be digital signs
  • Utilities: more smart meter installations
  • Hospitality: room control, connected TVs, beacons
  • Healthcare: this paragraph even contents itself with saying what devices can do (collect data, automate processes, be hacked?)
  • Smart Buildings: IoT devices will affect how buildings are run (no! really?)

All of these assessments fall short of qualifying which data is being produced, collected, and processed – and for what purpose.

And then – at the very beginning – the infographic lists 4 barriers to IoT market adoption:

  • Security concerns
  • Privacy concerns
  • Implementation problems
  • Technological fragmentation

BusinessInsider, with this you have become part of the problem (as so many others already have): just like in the early days of cloud, the most discussed topics are security and privacy – because it is easy to grasp, yet difficult to explain, what the real threat might possibly be.

Let us do ourselves a favour and stop stressing the mere fact that devices will provide data for processing and analysis (as well as more sophisticated integration into backend ERP, by the way). That is a no-brainer.

Let us start talking about “which”, “what for” and “how to show”! That way, security and privacy will become an advantage for IoT and the digital transformation. Transparency remains the only way of dealing with that challenge, because – just as with cloud – those concerns will ultimately not hinder adoption anyway!

 


The IoT Era will explode (BusinessInsider Info Graphic)

{feature image from www.thedigitallife.com}


Automation Security

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

An obvious key point to consider when choosing an automation solution is security. We’ve discussed Audit & Compliance separately from security since audit trails and compliance need architectural support from the solution but are both less technical in themselves than security.

Considering security issues for an automation solution means focusing on the following areas:

  • Confidentiality: How does the solution manage authorized access?
  • Integrity: How does the solution ensure that stored objects and data are consistently traceable at any point in time?
  • Availability: How does the solution guarantee availability as defined, communicated, and agreed upon?
  • Authenticity: How does the solution ensure the authenticity of the identities used for communication between partners (components, objects, users)?
  • Liability: How does the solution support responsibility and accountability of the organization and its managers?

None of these areas rely on one particular architectural structure. Rather they have to be assessed by reviewing the particular solution’s overall architecture and how it relates to security.

User security

Authentication

Any reputable automation solution will offer industry-standard authentication mechanisms such as password encryption, a strong password policy, and login protection upon failed attempts. Integrating with common identity directories such as LDAP or AD provides a higher level of security for authenticating users’ access. This allows the “bind request” to be forwarded to the specific directory, thereby leveraging the directory’s technologies not only to protect passwords and users but also to provide audit trail data for login attempts. Going a step further, an authentication system provided through an external, integrated LDAP might offer stronger authentication – such as MFA – out of the box without the need to augment the solution to gain greater security.
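
As an illustration only – not a feature of any specific product – delegating the bind to an external directory could look roughly like the following sketch, written with the Python ldap3 library (the server address and directory layout are assumptions):

```python
from ldap3 import Server, Connection, ALL, SIMPLE

def authenticate(username: str, password: str) -> bool:
    """Forward the login ("bind request") to the corporate directory instead of
    checking a locally stored password; the directory enforces password policy
    and lockout rules, and records the attempt for the audit trail."""
    if not password:
        return False                                          # avoid accidental anonymous bind
    server = Server("ldaps://ldap.example.com", use_ssl=True, get_info=ALL)
    conn = Connection(
        server,
        user=f"uid={username},ou=people,dc=example,dc=com",   # assumed directory layout
        password=password,
        authentication=SIMPLE,
    )
    ok = conn.bind()
    conn.unbind()
    return ok
```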

In addition, the solution should provide a customized interface (e.g. provided through an “exit – callback” mechanism) for customers to integrate any authentication mechanism that is not yet supported by the product out-of-the-box.

Personnel data base

Most organizations use one core personnel database within their master data management (MDM) process. For example, new employees are onboarded through an HR-triggered process which, in addition to organizational policies, ensures creation of access permissions to systems that employees use every day. As part of an automation system’s architecture, such an approach involves the need to offer automatically available interfaces and synchronization methods for users – either as objects or links. The automation workflow itself, which supports the HR onboarding process, would subsequently leverage these interfaces to create necessary authentication and authorization artifacts.

Authorization & Access

Enterprise-grade automation solutions should offer a variety of access controls for managed objects. In addition to the core capabilities already discussed, IT operations should expect the solution to support securing the various layers and objects within it. This involves:

  • Function level authorization: The ability to grant/revoke permission for certain functions of the solution.
  • Object level authorization: The ability to create access control lists (ACLs) at the single object level if necessary.
  • ACL aggregation: The ability to group object level ACLs together through intelligent filter criteria in order to reduce effort for security maintenance.
  • User grouping: The ability to aggregate users into groups for easy management.

In addition, a secure solution should protect user and group management from unauthorized manipulation through use of permission sets within the authorization system.

API

Automation solutions that do not include APIs are rarely enterprise-ready. While compatible APIs (e.g. based on Java libraries) would inherently be able to leverage the previously discussed security features, Web Service APIs need to offer additional authentication technologies aligned with commonly accepted standards. Within REST, we mainly see three different authentication methods:

  1. Basic authentication is the lowest-security option as it involves simply exchanging a base64-encoded username/password. This not only requires additional security measures for storing, transporting, and processing login information, but it also fails to support authenticating against the API. It also opens external access for any authorized user through a password only.
  2. OAuth 1.0a provides the highest level of security since sensitive data is never transmitted. However, implementing authentication validation can be complex, requiring significant effort to set up specific hash algorithms applied in a series of strict steps.
  3. OAuth 2.0 is a simpler implementation, but still considered a sufficiently secure industry standard for API authentication. It eliminates use of signatures and handles all encryption through transport level security (TLS) which simplifies integration.

Basic authentication might be acceptable for an automation solution’s APIs operated solely within the boundaries of the organization. This is becoming less common as more IT operations evolve into service-oriented, orchestrated delivery of business processes operating in a hybrid environment. Operating in such a landscape requires using interfaces for external integration, in which case your automation solution must provide a minimum of OAuth 2.0 security.
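
To make the difference concrete, here is a hedged sketch (hypothetical endpoints, Python requests) of the same API call made once with Basic authentication and once with an OAuth 2.0 client-credentials token:

```python
import requests

API = "https://automation.example.com/api/v1"      # hypothetical automation engine API

# Variant 1: HTTP Basic authentication - acceptable only inside a trusted network,
# since the base64-encoded credentials travel with every single request.
resp = requests.get(f"{API}/workflows", auth=("api_user", "s3cret"), timeout=10)

# Variant 2: OAuth 2.0 client-credentials flow over TLS.
# A short-lived bearer token replaces the password on the wire.
token = requests.post(
    f"{API}/oauth/token",
    data={"grant_type": "client_credentials"},
    auth=("client_id", "client_secret"),
    timeout=10,
).json()["access_token"]

resp = requests.get(
    f"{API}/workflows",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(resp.status_code)
```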

Object level security

The levels of authorization previously mentioned set the stage for defining a detailed authorization matrix within the automation solution’s object management layer. An object represents an execution endpoint within a highly critical target system of automated IT operation. Accessing the object representing the endpoint grants permission for the automation solution to directly impact the target system’s behavior. Therefore, an automation system must provide sufficiently detailed ACL configuration methods to control access to:

  • Endpoint adapters/agents
  • Execution artifacts such as processes and workflows
  • Other objects like statistics, reports, and catalogues
  • Logical tenants/clients

The list could be extended even further. However, the more detailed the authorization system, the greater the need for feasible aggregation and grouping mechanisms to ease complexity. At the same time, the more possibilities there are for controlling and managing authorization, the better the automation solution’s manageability.

Separation of concern

Finally, to allow for a role model that supports a typical IT organizational structure, execution must be separated from design and implementation. Using an object must not automatically imply permission to define or change it. This allows, for example, an automation specialist to construct workflows with a credential object without the underlying credentials ever being revealed.
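
As a purely illustrative sketch (object names, groups and permissions are invented, not any product’s actual model), filter-based ACL aggregation and the separation of execution from definition could be expressed like this:

```python
from dataclasses import dataclass
from enum import Flag, auto
from fnmatch import fnmatch

class Permission(Flag):
    READ = auto()
    EXECUTE = auto()
    DEFINE = auto()      # design-time right, deliberately separate from EXECUTE

@dataclass
class AclRule:
    group: str           # user group the rule applies to
    object_filter: str   # wildcard filter that aggregates many objects into one rule
    granted: Permission

RULES = [
    AclRule("operators",  "JOB.PROD.*", Permission.READ | Permission.EXECUTE),
    AclRule("developers", "JOB.DEV.*",  Permission.READ | Permission.EXECUTE | Permission.DEFINE),
]

def allowed(user_groups: set, object_name: str, needed: Permission) -> bool:
    """True if any rule for one of the user's groups grants the needed permission."""
    return any(
        rule.group in user_groups
        and fnmatch(object_name, rule.object_filter)
        and needed in rule.granted
        for rule in RULES
    )

# An operator may run a production workflow, but not redefine it:
print(allowed({"operators"}, "JOB.PROD.NIGHTLY_BILLING", Permission.EXECUTE))  # True
print(allowed({"operators"}, "JOB.PROD.NIGHTLY_BILLING", Permission.DEFINE))   # False
```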

Communication Security

Securing the communication between systems, objects, and endpoints is the final security issue to be considered when assessing an automation solution. This includes:

  • Encryption
  • Remote endpoint authentication – the ability to configure how target endpoints authenticate themselves when interacting with the core automation management engine

For communication between components, encryption must be able to leverage standard algorithms. The solution should also allow configuration of the desired encryption method. At minimum, it should support AES-256.
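
For illustration, authenticated AES-256 encryption of a component-to-component message could look like this with Python’s cryptography package (the message content and the agent identifier are invented; this is a sketch of the algorithm, not any vendor’s wire protocol):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, e.g. deployed with the agent package
aesgcm = AESGCM(key)

message = b'{"job": "nightly_billing", "action": "start"}'
nonce = os.urandom(12)                      # must never repeat for the same key
ciphertext = aesgcm.encrypt(nonce, message, b"agent-42")   # agent id bound as associated data

# The receiving component decrypts and verifies integrity in one step
assert aesgcm.decrypt(nonce, ciphertext, b"agent-42") == message
```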

Endpoint authentication provides a view of security from the opposite side of automation. Up to this point, we’ve discussed how the solution should support security implementation. When a solution is rolled out, however, endpoints need to interact automatically and securely with the automation core. Ideally, the automation solution generates a certificate or key deployable as a package to endpoint installations – preferably via a separate, secure connection. This configuration gives each endpoint a unique fingerprint and prevents untrusted endpoints from intruding into the automation infrastructure.


Audit & Compliance for Automation Platforms

This post is part of the "Automation-Orchestration" architecture series. Posts of this series together comprise a whitepaper on Automation and Orchestration for Innovative IT-aaS Architectures.

 

Audit and compliance have assumed greater importance in recent years. Following the Global Financial Crisis of 2007-08 – one of the most treacherous crises of our industrial age (Wikipedia cross-references various sources on the matter) – audit and standardization organizations as well as governmental institutions invested heavily in strengthening compliance laws, regulations, and enforcement.

This required enterprises in all industries to make significant investments to comply with these new regulations. Standards have evolved that define necessary policies and controls to be applied as well as requirements and procedures to audit, check, and enhance processes.

Typically, these policies encompass both business and IT related activities such as authentication, authorization, and access to systems. Emphasis is placed on tracking modifications to any IT systems or components through use of timestamps and other verification methods – particularly focused on processes and communications that involve financial transactions.

Therefore, supporting the enforcement and reporting of these requirements, policies, and regulations must be a core function of the automation solution. The following are the key factors to consider when it comes to automation and its impact on audit and compliance.

Traceability

The most important feature of an automation solution for meeting compliance standards is traceability. The solution must provide logging capabilities that track user activity within the system. It must track all modifications to the system’s repository and include the user’s name, the date and time of the change, and a copy of the data before and after the change was made. Such a feature ensures system integrity and compliance with regulatory statutes.
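
A minimal sketch of what such a trail record could contain – the field names and the JSON-lines file are illustrative, not any product’s actual schema:

```python
import json
from datetime import datetime, timezone

def audit_record(user: str, obj: str, before: dict, after: dict,
                 path: str = "audit_trail.jsonl") -> dict:
    """Append one immutable audit-trail entry: who changed which repository
    object, when, and the object's state before and after the change."""
    entry = {
        "user": user,
        "object": obj,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "before": before,
        "after": after,
    }
    with open(path, "a") as log:            # append-only log file
        log.write(json.dumps(entry, sort_keys=True) + "\n")
    return entry

audit_record(
    user="jdoe",
    obj="workflow/nightly_billing",
    before={"schedule": "02:00"},
    after={"schedule": "03:00"},
)
```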

Statistics

Statistical records are a feature that ensures recording of any step performed either by an actual user or initiated by an external interface (API). Such records should be stored in a hierarchy within the system’s backend database, allowing follow-up checks of who performed which action at what specific time. Additionally, the system should allow comments on single or multiple statistical records, thereby supporting complete traceability of automation activities by documenting additional operator actions.

Version Management

Some automation solutions offer the option of integrated version management. Once enabled, the solution keeps track of all changes made to tasks and blueprint definitions as well as to objects like calendars, time zones, etc. Every change creates a new version of the specific object, which is accessible at any time for follow-up investigation. Objects include additional information like version numbers, change dates, and user identification. In some cases, the system allows restoring an older version of the specific object.
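
Conceptually, such integrated version management boils down to an append-only history per object; a rough sketch (not any vendor’s implementation, object names invented):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ObjectVersion:
    number: int
    changed_by: str
    changed_at: datetime
    payload: dict                       # full object definition at this version

@dataclass
class VersionedObject:
    name: str
    history: list = field(default_factory=list)

    def save(self, payload: dict, user: str) -> ObjectVersion:
        version = ObjectVersion(
            number=len(self.history) + 1,
            changed_by=user,
            changed_at=datetime.now(timezone.utc),
            payload=dict(payload),
        )
        self.history.append(version)    # older versions stay retrievable
        return version

    def restore(self, number: int, user: str) -> ObjectVersion:
        """Restoring simply stores the old payload as a new version."""
        return self.save(self.history[number - 1].payload, user)

calendar = VersionedObject("calendar/company_holidays")
calendar.save({"holidays": ["2016-01-01"]}, user="jdoe")
calendar.save({"holidays": ["2016-01-01", "2016-12-26"]}, user="jdoe")
calendar.restore(1, user="auditor")     # roll back to version 1, stored as version 3
```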

Monitoring

All of the above handle, process, and record design-time activity of an automation system, ensuring stored data and data changes are documented to comply with audit needs. During execution, an automation system should also be able to monitor the behavior of every instantiated blueprint. Monitoring records need to track the instance itself as well as every input/output and any changes performed to or by this instance (e.g. putting a certain task on hold manually).

Full Audit Trails

All of the above features contribute to a complete audit trail that complies with the reporting requirements as defined by the various standards. Ultimately an automation system must be able to easily produce an audit trail of all system activity from the central database in order to document specific actions being investigated by the auditor. An additional level of security that also enables compliance with law and regulations is the system’s ability to restrict access to this data on a user/group basis.

Compliance Through Standardization

Finally, to ease compliance adherence, the automation solution must follow common industry standards. While proprietary approaches within a system’s architecture are applicable and necessary (e.g. scripting language – see chapter “Dynamic Processing Control”), the automation solution itself must strictly follow encryption methods, communication protocols, and authentication technologies that are widely considered common industry best practice. Any other approach in these areas would significantly complicate the efforts of IT Operations to prove compliance with audit and regulatory standards. In certain cases, it could even shorten the audit cycle to less than a year, depending on the financial and IT control standard being followed.


Synology SSL Certificate – a guideline

Ever struggled with securing access to your (Synology) NAS box through use of SSL and web server certificates? Well – here’s why you struggled; and maybe also how not to struggle anymore … have fun 🙂

A few things to know first

As always: The boring things go first! There’s some knowledge captured in them, but if you’re the impatient one (like me ;)) -> get over it!

What is:

  • SSL/TLS: Secure Sockets Layer – now more correctly referred to as “Transport Layer Security” – is a cryptographic protocol which on the one hand ensures that the identity of the web server being accessed is securely confirmed and on the other hand provides encryption of the connection between a client (browser) and a web server.
  • a cert (certificate): (for the purpose of this article let’s keep it at that) a file that holds identity information about the web server it comes from, as well as the server’s public key used for encryption
  • a CA: “certificate authority” – a company or entity which is capable of signing a certificate to confirm its validity; this could be a Root-CA – in that case being the last in a -> “certificate chain” – or an Intermediate-CA, thus being able to sign certs but itself needing to be signed by a Root-CA; CAs are presented through certs themselves (so, you’ll see a whole bunch’o’ different cert files handled here)
  • a private key: the private – confidential, not to be distributed – part of a pair of keys for asymmetric encryption; one party encrypts content using the public key and sends that content to the other party, which is able to decrypt it using its private key. Obviously the sending party must use a public key that corresponds to the private key
  • a certificate chain (cert chain): a chain of certificates, the first in the chain being used for the actual purpose (e.g. encrypting traffic, confirming identity), the last one representing and confirming the ultimately trusted party. Each cert in a chain confirms the correctness of its respective successor
  • a SAN cert: a “Subject Alternative Name” certificate; this is a certificate confirming a web server’s identity under more than one name (URL), e.g. yourbankaccount.com; login.yourbankaccount.com; mobile.yourbankaccount.com; yourbankshortname.com; … whatsoever … (note: using SANs with your server’s cert only depends on how you’ve configured DNS and IP addresses to access the server; if there’s only one URL for your server, you won’t need this)

What happens when surfing https in your browser?

The “s” in https says: This is an SSL connection.

  • When accessing a web server via https, your browser expects a cert being presented by the server.
  • This cert is checked in 2 ways:
    • (1) can I trust this cert to be what it claims to be, and
    • (2) is it telling me the true identity of the server that I am accessing?

So if, for example, you’re going to https://www.yourbankaccount.com and the cert holds “www.yourbankaccount.com”, and your browser can tell from the cert chain that it can trust this cert, all is fine and you can safely access that server.
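
If you want to see those two checks outside a browser, a few lines of Python do the same thing (any public site with a properly CA-signed certificate works as the example host):

```python
import socket
import ssl

host = "www.cisv.org"                   # example host; replace with any https site
context = ssl.create_default_context()  # loads the system's trusted CA store

with socket.create_connection((host, 443)) as sock:
    # wrap_socket performs check (1): validate the cert chain against trusted CAs,
    # and check (2): match the certificate against the requested host name.
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

print(cert["subject"])   # the identity the server claims
print(cert["issuer"])    # who vouches for it - the next link in the chain
```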

How can trust be established?

As explained above, trust is confirmed through a chain of certificates. The last cert in the chain (say: the Root-CA cert) says “yes, that cert you’re showing me is OK”; the next one (say: the intermediary) says “yes, that next one is OK, too” … and so on until eventually the first cert – the one having been presented to your browser – is confirmed to be OK.

In other words: at least the last cert must be one of which the client (your browser) can say: “I can trust you!”

And how can it do that? Because that last cert in the chain belongs to a commonly trusted, public CA; commonly trusted authorities are e.g. Symantec (who purchased VeriSign), Comodo, GoDaddy, DigiCert, … (Wikipedia has a list plus some references to studies and surveys).

And finally: How can YOU get hold of a cert for YOUR web server, that is confirmed by one of those trusted certificate authorities? By paying them money to sign your cert. As simple as that.

Do you wanna do that just for your home disk station? No – of course not.


And this is where self-signed certificates kick in …

Now: What does Synology do?

Your disk station (whatever the vendor, most probably) offers browser access for administration and so on … plus a lot of other services that can be (and should be) accessed via SSL, hence: you need a cert. Synology has certificate management in the control panel (rightmost tab in “Security”).

The following screenshot shows the options that are available when clicking “Create certificate”:


Synology: Create SSL Certificate – Options

Create certificate options briefly explained (with steps that happen when executing them):

  1. Create a self signed certificate:
    1. In the first step, enter data for the root certificate (see screenshot – mind the headline!)
    2. Second step: Enter data for the server certificate itself (here’s a screenshot also for this; note, that you can even use IP addresses in the SAN field at the end – more on this a little later or – sic! – further above)
    3. When hitting apply, the disc station generates 2 certificates + their corresponding private keys and
    4. uses the first one to sign the second one
    5. Eventually, both are applied to the disc station’s web server and the web server is restarted (all in one single go; expect to be disconnected briefly)
    6. Once you’re back online in the control panel, the “Export Certificate” button allows download of both certificates and their corresponding private keys (keep the private keys really, really – REALLY! – secluded!)
  2. Create certificate signing request (CSR) – this is a file to be sent to a certificate authority for signing and creation of a corresponding certificate:
    1. The only step to be done is to provide data for a server certificate (same information as in step 2 of (1) above)
    2. When done, click next and the disc station generates the CSR file + its corresponding private key
    3. Eventually the results are provided for download
    4. Download the files; in theory, you could now send the CSR to a trusted CA. All CAs have slightly different procedures and charge slightly different rates for doing this
  3. Renew certificate:
    1. When clicking this option, a new CSR + private key of the existing certificate (the one already loaded) is generated and
    2. offered for download
    3. Same as step 4 in (2) above applies for the downloaded files
  4. Sign certificate signing request (CSR) – this allows you to use the root certificate that you e.g. created in (1) above to sign another certificate for which you e.g. created a CSR in (2) above:
    1. The only step to execute is to select the CSR file,
    2. enter a validity period and
    3. specify alternate names for the target server (if need be)
    4. Again – upon completion – download of the resulting files is offered

That last step would e.g. allow you to generate only 1 root certificate (by using option (1) above) and use that to sign another server certificate for a second disc station. Note that the respective CSR file needs to be generated on that second disc station. As a result in this case (after having downloaded everything) you should have:

  • 1 ca.crt file + corresponding ca.key file
  • 2 server.crt files + corresponding server.key files

The second server.crt + server.key file would be the one to be uploaded into your second disc station by using the “Import certificate” button right next to “Create certificate” in the control panel.
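
What the disc station does behind those buttons is, roughly, standard X.509 plumbing. Here is a hedged sketch of equivalent logic with Python’s cryptography package – the host name and validity periods are placeholders, and this is of course not the code DSM actually runs:

```python
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# (1) Root CA: key pair + self-signed certificate
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"My Home CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)                          # self-signed: issuer == subject
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# (2) Server: key pair + certificate signing request (what "Create CSR" produces)
srv_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"diskstation.local")]))
    .sign(srv_key, hashes.SHA256())
)

# (4) Sign the CSR with the CA key (what "Sign certificate signing request" does)
srv_cert = (
    x509.CertificateBuilder()
    .subject_name(csr.subject)
    .issuer_name(ca_cert.subject)
    .public_key(csr.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=1095))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(u"diskstation.local")]), critical=False)
    .sign(ca_key, hashes.SHA256())
)

# The artifacts the disc station offers for download
with open("ca.crt", "wb") as f:
    f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
with open("server.crt", "wb") as f:
    f.write(srv_cert.public_bytes(serialization.Encoding.PEM))
with open("server.key", "wb") as f:
    f.write(srv_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),               # keep this file really, really secluded
    ))
```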

What did I do in fact?

In order to secure my 2 disc stations I executed the following steps:

  • “Create certificate” – (1) above – on the first disc station
  • Download all files
  • “Create CSR” – (2) above – on the second disc station
  • Download all files
  • Execute “Sign cert” – (4) above – on the first disc station with the results of the “Create CSR” step just before
  • Again, download everything
  • “Import cert” on the second disc station with the results from the signing operation just before

Now both my disc stations have valid certificates + corresponding private keys installed and working.

Does that suffice?

Well, not quite. How would any browser in the world know that my certificates can be trusted? Mind, we’re using self-signed certs here; the CA is my own disc station; this is nowhere mentioned in any trusted cert stores of any computer (hopefully ;)).

What I can do next is distribute the cert files to clients that I want to provide with secure access to my disc station.

On an OS X client:

  • Open the “Keychain Access” app
  • Import the .crt files from before (NOT the .key files – these are highly private to your disc station – never distribute those!)
  • For the certificate entry resulting from the ca.crt file
    • Ctrl+click on the file
    • From the menu choose “Get Info”
    • Expand the “Trust” node
    • Set “When using this certificate” to “Always Trust” (as you can safely trust your own CA cert)

The 2 other cert files should now read “The certificate is valid” in the information on top.

On a Windows client:

  • Open the Management Console (click “Start” -> type “MMC”)
  • Insert the “Certificate” snap-in for “Computer Account” -> “Local Computer”
  • Expand the “Trusted Root Certification Authorities” node until you can see all the certificates in the middle pane
  • Right click the “Certificates” node
  • Go to “All tasks” -> “Import…” and
  • Import the ca.crt file
  • Expand the “Trusted Sites” node until you can see all the certificates in the middle pane
  • Right click the “Certificates” node
  • Go to “All tasks” -> “Import…” and
  • Import both the server.crt files
  • Close MMC w/o saving; you’re done here

With that, the respective client now trusts your disc station servers; browsers on your client would not issue a security warning anymore, when accessing your disc station.

Does that solve everything?

Well … erm … no!

Because I’ve used SAN certificates with my disc stations and because Google Chrome does not support SAN certificates correctly, Chrome still issues a security warning – a pretty weird one telling me that the cert was issued for server abc.def.com while the server presents itself as abc.def.com; well, Google, you better work on that 🙂

But Apple Safari and Microsoft Edge securely route access to my DSs – and that’s pretty neat 🙂

And is this the ultimate solution?

Well … I guess … not quite, because: I cannot really easily distribute all my cert files to anyone who’d possibly need to access my disc stations. This is where commonly trusted, public CAs come into play. Money makes the world go ’round, doesn’t it?

 


Private Cloud Storage: 1 + 1 = 27 backup solutions

Why would a convinced “pro-cloudian” invest in a geo-redundant backup and restore solution for private (cloud) storage? The reasons for this were fairly simple:

  1. I store quite a bit of media (imagery and audio alike); storing that solely in the cloud is (a) expensive and (b) slow when streamed (Austrian downstream is not yet really that fast)
  2. In addition to that, I meanwhile store quite a lot of important project data (in different public clouds, of course, but also on my office NAS); at one point I needed a second location to further secure this data
  3. I wanted a home media streaming solution close to my hi-fi

Until now, my NAS was a Synology DS411 (4 × 2 TB discs, Synology Hybrid RAID – SHR – which essentially is RAID5). My new one is a DS416 (same configuration; I just upgraded the discs so that both NASs now run 2 × 2 TB and 2 × 3 TB discs – mainly disc lifetime considerations led to this, plus the fact that I didn’t wanna throw away still-good hard discs). If you’re interested in the upgrade process, just post a quick comment and I’ll gladly come back to that – with Synology it’s pretty straightforward anyway.

Bored already and not keen on learning all the nitty-gritty details? You can jump to the end, if you really need to 😉

More than 1 backup

Of course, it’s not 27 options – as the headline claims – but it’s still a fair number of possibilities for moving data between two essentially identical NASs for the benefit of data resilience. Besides that, a few additional constraints come into play when setting things up for geo-redundancy:

  • Is one of the 2 passively taking backup data only or are both actively offering services? (in my case: the latter, as one of the 2 would be the projects’ storage residing in the office and the other would be storage for media mainly – but not only – used at home)
  • How much upstream/downstream can I get for which amount of data to be synced? (ease of thought for me: both locations are identical in that respect, so it boiled down to data volume considerations)
  • Which of the data is really needed actively and where?
  • Which of the data is actively accessed but not changed (I do have quite a few archive folder trees stored on my NAS which I infrequently need)

Conclusion: For some of the data incremental geo-backup suffices fully; other data needs to be replicated to the respective other location but kept read-only; for some data I wanted to have readable replications on both locations.

First things first: Options


Synology Backup related Packages

The above screenshot shows available backup packages that can be installed on any Synology disc station:

  • Time Backup is a Synology-owned solution that offers incremental/differential backup; I recently heard of incompatibilities with certain disc stations and/or hard discs, hence this wasn’t my first option (whoever has experience with this, please leave a comment; thanks)
  • Of all the public cloud backup clients (ElephantDrive, HiDrive, Symform and Glacier), AWS Glacier seemed the most attractive, as I’m constantly working within AWS anyway and wasn’t keen on diving into an extended analysis of the others. However, Glacier costs for an estimated 3 TB would be $36 in Frankfurt and $21 in the US. Per month. Still quite a bit when you’re already running 2 disc stations anyway, both of which are far from being fully used – yet.
  • Symform offers an interesting concept: in return for contributing to a peer-to-peer network, one gets ever more free cloud storage for backup; still, I was more keen on finding an alternative without ongoing effort and cost
BTW: Overall CAPEX for the new NAS was around EUR 800 (or less than 2 years of AWS Glacier storage costs for not even the full capacity of the new NAS). The cloud is no option if elasticity and flexibility aren't that important …

The NAS-to-NAS way of backup and restore

For the benefit of completeness:

  • Synology “Cloud Sync” (see screenshot above) isn’t really backup: it’s a way of replicating files and folders from your NAS to some public cloud file service like GoogleDrive or Dropbox. I can confirm it works flawlessly, but it is no more than a bit of a playground if one intends to have some files available publicly – for whatever reason (I use it to easily mirror and share my collection of papers with others without granting them access to my NAS).
  • Synology Cloud Station – mind(!) – is IMHO one of the best tools Synology has built so far (besides DSM itself). It’s pretty reliable – in my case – and even offers NAS-2-NAS synchronization of files and folders; hence, we’ll get back to this piece a little later.
  • Finally – and that’s key for what’s to come – there are the DSM built-in “Backup & Replication” options to be found in the App Launcher. And this is mainly what I bothered with in the first few days of running two of these beasts.

Synology Backup and Replication AppLauncher

“Backup and Replication” offers:

  • The activation and configuration of a backup server
  • Backup and Restore (either iSCSI LUN backup, if used, or data backup, the latter with either a multi-version data volume or “readable” option)
  • Shared Folder Sync (the utter Synology anachronism – see a bit further below)

So, eventually, there’s

  • 4 Cloud backup apps
  • 4 Synology owned backup options (Time Backup, iSCSI LUN backup, data volume backup and “readable” data backup) and
  • 3 Synology sync options (Cloud Sync, Cloud Station and Shared Folder Sync)

Not 27, but still enough to struggle hard to find the right one …

So what’s wrong with syncing?

Nothing. Actually.

Cloud Station is one of the best private cloud file synchronization solutions I have ever experienced; Dropbox has a comparable user experience (and is still the service that cares least about data privacy). So – anyway – I could just have set up the two NASs to sync using Cloud Station: make one station the master and connect all my devices to it, and make the other the backup station and connect it to the master as well.

However, the thought of waiting for the initial sync of that amount of data – especially as quite a bit of it was largely static – made me disregard this option in the first place.

Shared Folder Sync sounded like a convenient idea to try. Its configuration is pretty straightforward.

1: Enable Backup Services

The destination station needs to have the backup service running, so that is the first thing to go for. Launching the backup service essentially kicks off an rsync server which can accept rsync requests from any source (this would even enable your disc station to accept workstation backups from PCs, notebooks, etc., if they’re capable of running rsync – see the sketch below).
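
For example – a sketch with assumed user, paths and port; adjust everything to your own setup – a notebook could push a folder to the disc station like this:

```python
import subprocess

# Assumed values - replace with your own
NAS = "admin@diskstation.local"          # NAS user and host name
SSH_PORT = "22022"                       # the customized SSH port (see the NOTE further below)
SOURCE = "/Users/me/Documents/"          # local folder to back up (trailing slash: copy contents)
TARGET = "/volume1/NetBackup/notebook"   # destination path on the disc station

subprocess.run(
    [
        "rsync",
        "-az",                           # archive mode + compression
        "--delete",                      # mirror deletions (omit for a purely additive copy)
        "-e", f"ssh -p {SSH_PORT}",      # tunnel rsync through SSH on the custom port
        SOURCE,
        f"{NAS}:{TARGET}",
    ],
    check=True,
)
```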

To configure the backup service, one needs to launch the “Backup and Replication” App and go to “Backup Service”:


Synology Backup Service Configuration

NOTE: I always make a point of changing the standard ports (22 in this case) to something unfamiliar – for security reasons (see this post: that habit saved me once)!

Other than that, one just enables the service and decides on possible data transfer speed limits (which can even be scheduled). The “Time Backup” tab allows enabling the service to accept time backups; (update) a third tab makes volume backups possible by just ticking a checkbox. But that’s essentially it.

2: Shared Folder Sync Server


Synology Shared Folder Sync Server

In order to accept sync client linkage, the target disc station needs to have the shared folder sync server enabled, in addition to the backup service. As the screenshot suggests, this is no big deal, really. Mind, though, that this is also where you check and release any linked shared folders (a button for this appears under the server status).

Once “Apply” is hit, the disc station is ready to accept shared folder sync requests.

3: Initiate “Shared Folder Sync”

This is where it gets weird for the first time:

  • In the source station, go to the same page as shown above, but stay at the “Client” tab
  • Launch the wizard with a click on “Create”
  • It asks for a name
  • And next it asks to select the folders to sync
  • On this very page it says: “I understand that if the selected destination contains folders with identical names as source folders, the folders at destination will be renamed. If they don’t exist at destination they will be created.” – You can’t proceed without explicitly accepting this by checking the box.
  • The next page asks for the server connection (mind: it uses the same port as specified previously in your destination’s backup service configuration – see (1) above)
  • Finally, a confirmation page allows verification – or, by going back, correction – of settings and when “Apply” is hit, the service commences its work.

Now, what’s it doing?

Shared Folder Sync essentially copies contents of selected shared folders to shared folders on the destination disc station. As mentioned above, it initially needs to explicitly create its link folder on the destination, so don’t create any folders in advance when using this service.

When investigating the destination in-depth, though, things instantly collapse into agony:

  1. All destination shared folders created by shared folder sync have no user/group rights set except for “read-only” for administrators
  2. Consequently, any attempt to create or push a file to one of the destination shared folders fails
  3. And altering shared folder permissions on one of these folders results in a disturbing message

Synology Permission change on Shared Folder Sync folders

“Changing its privilege settings may cause sync errors.” – WTF! Any IT guy knows that “may” in essence means “will”. So, hands off!

Further:

  • It did not allow me to create more than two different sync tasks
  • I randomly experienced failures being reported during execution which I couldn’t track down to their root cause via the log. It just said “sync failed”.

Eventually, a closer look into Synology’s online documentation reveals: “Shared Folder Sync is a one way sync solution, meaning that the files at the source will be synced to the destination, but not the other way around. If you are looking for a 2-way sync solution, please use Cloud Station.” – Synology! Hey! Something like this isn’t called “synchronization”, that’s a copy!

While writing these lines, I still cannot think of any real advantage of this over

  • Cloud Station (2-way sync)
  • Data backup (readable 1-way copy)
  • Volume backup (non-readable, incremental, 1-way “copy”)

For the moment, I’ve given up on that piece … (can anyone tell me where I would really use this?)

BUR: Backup & Restore

The essential objective of a successful BUR strategy is to get back to life with sufficiently recent data (RPO – recovery point objective) in sufficiently quick time (RTO – recovery time objective). For the small scale of a private storage solution, Synology already offers quite compelling data security by its RAID implementation. When adding geo redundancy, the backup options in the “Backup & Replication” App would be a logical thing to try …

1: Destination first

As was previously mentioned, the destination station needs to have the backup service running; this also creates a new – administrable, in this case – shared folder “NetBackup” which could (but doesn’t need to) be the target for all backups.

Targets (called “Backup Destination” here) that are to be used for backups must additionally be configured at the source station. This is done in the “Backup & Replication” App under “Backup Destination”:

Even at this point – besides “Local” (which would e.g. be another volume or some USB-attached hard disc) and “Network” – it is still possible to push backups to AWS S3 or other public cloud services by choosing “Public Cloud Backup Destination” (see the following screenshots for S3).


Synology Cloud Backup: Selecting the cloud provider

 


Synology Cloud Backup: Configuring AWS S3

NOTE that the wizard even allows for bucket selection in China (pretty useless outside China, but obviously they sell there and do not differentiate anywhere else in the system ;))

As we’re still keen on getting data replicated between two privately owned NASs, let’s now skip that option and go for the Network Backup Destination:

  • Firstly, choose and enter the settings for the “Synology server” target station (mind to use the customized SSH port from above – Backup Service Configuration)

Synology Network Backup Destination Selection

  • Secondly, decide which kind of target backup data format to use. The screenshot below is self-explanatory: either go for a multi-version solution or a readable one (there we go!). All backup sets relying on this destination configuration will produce target backup data according to this selection.

Synology Network Backup Destination Target Format

2: And now: For the backup set

Unsurprisingly, backup sets are created in the “Backup” section of the “Backup and Replication” App:

  • The first choice – prior to the wizard even starting – is either to create a “Data Backup Task” or an iSCSI “LUN Backup Task” (details on iSCSI LUNs can be found in the online documentation; however, if your Storage App isn’t mentioning any LUNs in use, forget about that option – it obviously wouldn’t have anything to back up)
  • Next, choose the backup destination (ideally configured beforehand)

Synology Backup Task – Select Destination

  • After that, all shared folders are presented and the ones to be included in the backup can be checkmarked
  • In addition, the wizard allows you to include app data in the backup (Surveillance Station is the only example I had running)

Synology Backup Task – Selecting Apps

  • Finally some pretty important detail settings can be done:

Synology Backup Task – Details Settings

  • Encryption, compression and/or block-level backup
  • Preserve files on destination, even when source is deleted (note the ambiguous wording here!)
  • Backup metadata of files as well as adjacent thumbnails (obviously more target storage consumed)
  • Enable backup of configurations along with this task
  • Schedule the backup task to run regularly
  • And last but not least: bandwidth limitations! It is highly recommended to consider these carefully. While testing the whole thing, I ran into a serious bandwidth decrease within my local area network, as both disc stations were running locally for the tests. So, a running backup task does indeed consume quite a bit of performance!

Once the settings are applied, the task is created and stored in the App – waiting to be triggered by a scheduler event or a click to “Backup Now”

So, what is this one doing?

It shovels data from (a) to (b). Period. Having selected “readable” at the beginning, you can even see folders and files being created or updated step by step in the destination directory. One nice advantage (especially for first-time backups) is that the execution visibly shows its progress in the App:


Synology Backup Task – Progression

Also, when done, it pushes a notification (by eMail, too, if configured) to inform about successful completion (or any failure that occurred).


Synology Backup Completion Notification

The screenshot below shows what the folders look like at the destination:


Synology Backup Destination Directory Structure

And when a new or updated file appears in the source, the next run would update it on the destination in the same folder (tested and confirmed, whatever others claim)!

So, in essence this method is pretty usable and useful for bringing data across to another location, plus: keeping it readable there. However, there are still some disadvantages, which I’ll discuss in a moment …

So, what about Cloud Station?

Well, I’ve been using Cloud Station for years now. Without any ado; without any serious fault; with

  • around 100,000 pictures
  • several thousand business data files, various sizes, types, formats, …
  • a nice collection of MP3 music – around 10,000 files
  • and some really large music recording folders (some with uncut raw recordings in WAV format)

Cloud Station works flawlessly under these conditions. For the benefit of Mr. Adam Armstrong of storagereview.com, I’ve skipped a detailed explanation of Cloud Station and will just refer to his – IMHO – very good article!

Why did I look into that, even though data backup (explained before) did a pretty good job? Well – one major disadvantage of backup sets in Synology is that even if you choose “readable” as the desired destination format, there is still no real way of producing destination results that closely resemble the source: with backup tasks, the backed-up data goes into some subdirectory within the backup destination folder – thereby making permission management on the destination data an utter nightmare (no useful permission inheritance from the source shared folder, different permissions intended on different sub-sub-folders of the data, etc.).

Cloud Station solves this, but in turn has the disadvantage that initial sync runs are always tremendously tedious and consume loads of transfer resources (though, when using Cloud Station between two NASs, this disadvantage is more or less reduced to significantly higher CPU and network usage during the sync process). So actually we’d do best to go with Cloud Station and just cloud-sync the two NASs.

BUT: There’s one more thing with this – and any other sync – solution: files are kept in line on both endpoints, meaning: when a file is deleted on one side, its mirror on the other side is deleted, too. This risk can be mitigated by enabling the recycle bin function for shared folders and versioning for Cloud Station, but it is still no real backup solution suitable for full disaster recovery.

What the hell did I do then?

None of the options tested was fully perfect for me, so I took all of them (well, not quite, in the end; as said, I can’t get my head around that shared folder sync, so at the moment I am going without it).

Let’s once more have a quick glance at the key capabilities of each of the discussed options:


Synology: Backup Options

  • Shared Folder Sync is no sync; and it leaves the target essentially unusable. Further: A file deleted in the source would – by the sync process – instantly be deleted in the destination as well.
  • Data Backup (if “readable” is chosen) just shifts data 1:1 into the destination – into a sub-folder structure; the multi-version volume option would create a backup package instead. IMHO great to use if you don’t need the destination data to be instantly accessible and managed exactly like the source.
  • Cloud Station: tedious initial sync, but after that the perfect way of keeping two folder trees (shared folders plus sub-items) in sync; mind: “in sync” means that destroying a file destroys it at both locations (this can be mitigated to a certain extent by using versioning).

I did it my way:

  1. Business projects are “Cloud Station” synced from the office NAS (source and master) to the home NAS; all devices using business projects connect to the office NAS folders of that category.
  2. Media files (photos, videos, MP3 and other music, recordings, …) have been replicated 1:1 to the new NAS by a one-time data backup task. At the moment, Cloud Station is building up its database for these shared folders and will maybe become the final solution for these categories. Master and source is the home NAS (also serving UPnP, of course); the office NAS (for syncing) and all devices that want to stream media or manage photos connect to this one.
  3. Archive shared folders (with rare data change) have been replicated to the new NAS and are not synced at the moment. I may go back to a pure incremental backup solution or even set some of these folders to read-only by permission and just leave them as they are.

Will that be final? Probably not … we’ll see.

Do you have a better plan? Please share … I’m curious!

 


This is how advertising should be

For once, advertising on this blog:

A1 has earned itself a small measure of respect for creativity in advertising – with the front page (actually: the front cover wrap) of the Saturday edition of our newspaper of choice. Yesterday it looked like this:

A1 newspaper page – information mock-encrypted

And after a moment of astonishment and a head-shaking turn of the page, the inside revealed this:

A1 “data encryption” ad – the inside page

Well done, A1. That is how advertising should be: clever and attention-grabbing.

 


Patriot Act: Illegal?

Woke up this morning to find this in my newsfeed: A New York Times article about the NSA collection of bulk call data being illegal!

“Significant”, to quote Ed Snowden.

In essence, the ruling comes to the conclusion that

a provision of the U.S.A. Patriot Act, known as Section 215, cannot be legitimately interpreted to allow the bulk collection of domestic calling records.

This is the first time ever that a higher court has reviewed this program and declared at least a section of it illegal. I cannot emphasize enough how important it is for anyone with the slightest interest in privacy and security to read this article, the details of the ruling, and the consequences to expect from it.

Speaking of the consequences, however, I am asking myself: when in the past has any national security and/or investigative agency acted within the boundaries of legitimacy? Best case: they extend ’em … I dearly hope that one consequence of this is that surveillance practice continues on a legal basis where applicable, with a tremendous increase in transparency about it!

 


Digitalization, IoT, networks and money

I am sitting at the Celtic-Plus spring event here in Vienna. Ridiculously high suit rate, considering that they intend to target the future dynamics of how technology is going to be delivered to people and businesses. I’m in jeans and probably a bit of an outlaw – which fits pretty well, in fact, because none of the pitched projects really offers a mature collaboration opportunity for me.

However, it is still interesting to learn what the innovative potential in the field of wireless networking and media is. Interested in more detail? You may wanna check out their hashtag: #celticevent

What’s pretty obvious here is that mobile network providers and researchers in that area also want to leverage and support innovation with regard to digital business and the Internet of Things. Every other project takes you on a trip on how they will improve the world by providing people, businesses, and things with an enhanced connectivity experience. 5G and the 5G Infrastructure Public Private Partnership is one of the best-known and most heavily discussed topics here (heavily criticized, too, for its lack of research into flexible broadband spectrum usage).

While listening to the project pitches, which mainly aim at finding partners within the community in order to execute on their research objectives, I was presented with a couple of really cool ideas; and the project exhibition area, too, offered good insight into what is being done to improve worldwide connectivity (take e.g. the initiative to improve the reach of traditional wireless transmitter stations in order to bring wireless connectivity to remote areas; the Serengeti as a concrete example).

However, there was one particular problem with nearly all of the presented award-winning or pitching projects: a severe lack of productizable and monetizable results.

Here are a few examples:

1. Tilas

TILAS explores possibilities to provide wireless connectivity to a huge number of interconnected devices (like e.g. in heavily populated rural areas) and thereby make large deployments of huge numbers of wearables and devices possible in the future.

In their brochure, their achievements are described as “solutions to overcome the already detected technical problems in current large cities.” And it continues: “The demonstrator will highlight the main achievements in the different fields including figures that asset the benefits of the proposal in terms of capabilities and economic savings […]”. No tangible, monetizable (product) results or implementation plans are highlighted.

2. Seed4C

This is an acronym for “Security Embedded Element and Data privacy for Cloud”, and their objective is to propose an approach to attach hardware-based secure elements (SE) to “cloud nodes” in order to offer strong security enforcement and to support an end-2-end process ranging from security modelling to security assurance.

Good thought. Here are their achieved results according to their flyer: “Seed4C has defined an end-2-end security process which consists of the following stages […]”; the stages then explained include a modelling approach, an OpenStack platform for application deployment according to the model, and security policy configuration. The project ended in February 2015 after 3 years of work. So, that’s more or less it. No claim of any company actually implementing this.
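
Just to make that process a bit more tangible for myself, here is a tiny sketch (entirely mine, not Seed4C code; `SecurityPolicy`, `CloudNode` and `attestation_ok` are made-up names) of how a deployment step could be gated by a modelled policy plus a secure-element attestation check:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecurityPolicy:
    """Made-up stand-in for the modelled security requirements."""
    require_secure_element: bool
    allowed_regions: set

@dataclass
class CloudNode:
    """Made-up cloud node; `se_quote` mimics an attestation quote from a hardware SE."""
    name: str
    region: str
    se_quote: Optional[str]

def attestation_ok(node: CloudNode) -> bool:
    # Placeholder for a real hardware attestation check against the secure element.
    return node.se_quote is not None and node.se_quote.startswith("VALID")

def deploy(app: str, node: CloudNode, policy: SecurityPolicy) -> bool:
    """Deploy the application only if the target node satisfies the modelled policy."""
    if policy.require_secure_element and not attestation_ok(node):
        print(f"{app}: rejected - {node.name} has no valid secure element")
        return False
    if node.region not in policy.allowed_regions:
        print(f"{app}: rejected - region {node.region} not allowed by policy")
        return False
    print(f"{app}: deployed to {node.name}")
    return True

policy = SecurityPolicy(require_secure_element=True, allowed_regions={"eu-west"})
deploy("billing-service", CloudNode("node-1", "eu-west", "VALID:abc123"), policy)  # deployed
deploy("billing-service", CloudNode("node-2", "eu-west", None), policy)            # rejected
```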

3. H2B2VS

The full name of this is “HEVC Hybrid Broadcast Broadband Video Services”. And this one is pretty interesting, as it proposes to use broadband networks in addition to broadcast networks for hybrid distribution of TV programs and services, leveraging both in a synchronized way in order to overcome the capacity limitations of traditional broadcasting methods.
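
To illustrate the synchronization idea for myself (a toy sketch of my own, not project code; the 90 kHz clock and the `hybrid_play_time` helper are assumptions), the core of it boils down to aligning the presentation timestamps of a broadband-delivered component with the broadcast reference:

```python
# Minimal sketch of hybrid broadcast/broadband sync, assuming both components
# carry presentation timestamps (PTS) on a common 90 kHz media clock.

PTS_CLOCK_HZ = 90_000  # MPEG-style 90 kHz presentation clock (assumption)

def hybrid_play_time(broadcast_pts: int, broadband_pts: int,
                     broadcast_wallclock: float) -> float:
    """Return the wall-clock time at which a broadband frame with
    `broadband_pts` should be rendered so that it lines up with the
    broadcast frame shown at `broadcast_wallclock` with `broadcast_pts`."""
    pts_delta = broadband_pts - broadcast_pts          # media-time difference
    return broadcast_wallclock + pts_delta / PTS_CLOCK_HZ

# Example: the broadband frame is 45_000 ticks (0.5 s) ahead of the broadcast
# frame currently on screen, so it must be buffered and shown 0.5 s later.
t = hybrid_play_time(broadcast_pts=1_000_000,
                     broadband_pts=1_045_000,
                     broadcast_wallclock=12.0)
print(t)  # 12.5
```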

Achievements so far (as the project is going to end in October 2015):

  • 20 use cases on hybrid distribution described
  • 3 HEVC encoders and 2 decoders available
  • CDNs adapted to hybrid delivery
  • a proposal to MPEG on how to efficiently synchronize broadcast and broadband (acceptance status not reported)

Tangible in the sense of productization? None, as far as I could see.

Future projects

The pitch session, with about 20 different projects mostly asking for cooperation partners, offered some interesting ideas as well: e.g. a framework to combine information from wearables (such as the detection of an accident of an elderly person) with the location, mobile and skill data of nearby healthcare personnel, a platform for predictable management of public transport, or a worldwide database of usable broadband spectrum that would allow services to leverage any spectrum in a flexible, dynamic, agile way (closing a gap that the 5G research projects’ scope is claimed to leave open).
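
Just to illustrate the first of those ideas (again my own toy sketch, nothing the project showed; the data model and `nearest_qualified_responder` are made up), the matching step could be as simple as:

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Responder:
    name: str
    lat: float
    lon: float
    skills: set

def distance_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle (haversine) distance between two coordinates in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_qualified_responder(alert_lat: float, alert_lon: float,
                                required_skill: str,
                                responders: list) -> Optional[Responder]:
    """Pick the closest responder who has the skill the alert requires."""
    qualified = [r for r in responders if required_skill in r.skills]
    if not qualified:
        return None
    return min(qualified, key=lambda r: distance_km(alert_lat, alert_lon, r.lat, r.lon))

responders = [
    Responder("nurse on duty", 48.21, 16.37, {"first-aid"}),
    Responder("paramedic", 48.19, 16.35, {"first-aid", "defibrillator"}),
]
# Fall detected by a wearable at these coordinates, defibrillator skill needed:
print(nearest_qualified_responder(48.20, 16.36, "defibrillator", responders))
```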

None of them, however, offered any glimpse of a competition analysis or of tangible results and deliverables from which monetization opportunities could have been deduced.

So, in the end

… I was asking myself: with all those cool and highly innovative approaches being presented, and with hundreds of millions in government money pumped into these initiatives, where are the results that companies take into their portfolios and products, to be offered on the market or included in solutions? Where is the real, practical change of ecosystems and services for consumers or businesses that shows that the funds provided through collaboration under the Celtic-Plus cluster are rightfully used and spent? How can the European Union and EU governments spend huge sums of money on projects which (mostly) in the end do not come up with anything more than academic research results?

Don’t get me wrong: I thoroughly and fully trust that funds are important, that research on an innovative (maybe sometimes a bit academic) level is utterly necessary to drive digital innovation, and that not every project can end with a tangible new solution in production. But the spending on these kinds of projects is tremendous, the duration of most of them is pretty long, and the results, as far as I could see, are mostly so limited that I would really love to demand that funding organizations bind their spending to actual revenue achieved with the respective project results.

If any angel investor or private equity firm acted that way and did not measure their engagement against real, practically usable results, they’d be dead before they even started.

I think that, in order to really be successful in terms of innovation, there need to be innovative projects alongside monetizable and productizable business needs, and funds for those; meaning there will be less for purely academic research as long as it can’t be brought back into the market and, ultimately, benefit the end user and an improved (digital) world.

 


3 reasons why it doesn’t matter what facebook’s terms and conditions say

There it was again – the outcry of the online community, two or three times a year, over the terms and conditions of a social network. Not just any social network: THE social network.

Facebook had once again revised its “general terms of use”, and I inevitably stumbled upon the related article by ORF futurezone (there were certainly others).

Shortly afterwards, critics and appeasers fell over each other, accusing one another of handling the bare fact of the change in the wrong way (the only refreshing part: those facebook (sic!) posts calling on people to put something on their personal profile in order to object to the new terms; my unmatched favourite: the unicorn – I’m sure there are a few “believers” in that one, too).

In the end, nothing remains of such uproar anyway – and that’s a good thing. Because it simply doesn’t matter at all what facebook’s terms and conditions say. For the following simple reasons:

1. The world is advertising!

That’s just how it is. Whatever we do (more precisely: whatever we have always done) has been and is being used by companies trying to tell us what we should do, buy, use, book, … live in the future. Just look at the evolution of advertising (from the poster, to the radio spot, to the TV commercial, two, three, four times a day, before and after programmes, in the middle of the film, now before the youtube video, … and so on): companies and media, in a mutual race of creativity, keep coming up with ever more ways to shower us with their “information”. Lately I get to see the ad of a SharePoint migration tool before every youtube video (guess what I have been dealing with online recently).

And honestly, I ask myself: what is so wrong with that? If I want to book a hotel room in Madrid, I briefly visit booking.com, search extensively, and then wait until booking.com suggests something cheap. If I’ve been there and it was good, I note down the email address and booking.com never sees me for that city again. Advertising can be tuned out that easily and, at the same time, put to targeted use. This reason alone is enough to ignore the facebook terms change if – as futurezone states in its introduction – it is only about enabling more targeted advertising.

2. Which law really counts?

Ever taken a closer look at the terms? Here is the link again. If you search for the place of jurisdiction, you will find this:

“You will resolve any claim, cause of action or dispute (claim) you have with us arising out of or relating to this Statement or Facebook exclusively in the U.S. District Court for the Northern District of California or a state court located in San Mateo County, and you agree to submit to the personal jurisdiction of such courts for the purpose of litigating all such claims. The laws of the State of California will govern this Statement, as well as any claim that might arise between you and us, without regard to conflict of law provisions.”

Well then! Off to the States. Let’s go and complain about what facebook is doing to us.

Don’t get me wrong, please: the class action brought by the Austrian law student Max Schrems, for example, is in my view fundamentally right and even necessary. Unfortunately, the original trigger for that action is somewhat being forgotten: the case began with the attempt to obtain all the data facebook had collected; I consider it a fundamental right of every person in this world to be able to learn in detail what is stored about them, and where (cf. also my call for transparency in the “Citizenfour” article).

Of course, I also consider it a fundamental right to decide for oneself which personal data get used – and that is exactly why facebook’s terms are, strictly speaking, waste paper, because (last but not least):

3. I decide for myself what I use and how!

facebook in no way forces me to use facebook. facebook does not even force me to use facebook in a particular way. facebook offers me possibilities. Possibilities for communication, for information, … yes: for self-promotion. I can use the medium myself to advertise something I care about. This goes so far that, in exchange for a few small coins, I can use the data machine “facebook” for my own purposes: facebook will then place my status updates and page updates, precisely targeted, in the “newsfeed” of my friends to draw their attention to my cause. Perfect. That is exactly how I want it.

If I want to see certain information, I will publish certain things, topics, content and keywords on the net. If I do not want to be found for, or identified with, a certain topic, I will simply keep my mouth shut about it.

The point is this:

Our irrepressible urge to share and our irrepressible curiosity play a nasty trick on us when we use online media: today’s technologies simply allow for more precise targeting than good old TV advertising, dropped into the most exciting moment of the prime-time movie, ever could – they simply allow the information provider to place its information with a more exact fit.

The argument of some vocal critics of the new facebook terms – that these days one simply cannot avoid using facebook anymore – is plain, undifferentiated nonsense. It may be true that schools, clubs and other human “networks” use the medium “facebook” as their only communication platform, and that you therefore cannot get around a facebook user profile if you want to take part in that communication. The contents of that profile, however, are determined by me. And I can very well limit those contents to the purpose of my being there.

And apart from that: do you, too, sometimes search the internet for things, topics, content or certain keywords? And what does the search engine of your choice then show right at the top?

It is simply too easy to shift the responsibility for my own actions (posts, search queries, pictures or videos, …) onto the terms and conditions of a company that has made the highly effective use of exactly these “actions” of mine its own business purpose.

 

{feature image “Digital Footprint” via Flickr/Creative Commons}
