The Smile-IT Blog » January 2015

Monthly Archives: January 2015

Digital Business Trends – First Event of the New APA Series

{Pictured: the participants of the panel discussion at the DBT event “Health Gadgets”, 29.01.2015, Haus der Musik}

 

Yesterday (Thursday), the first evening event of the new APA event series “Digital Business Trends” took place at the Haus der Musik, on the topic of “Health Gadgets”. The discussion and networking events are organized together with Styria Digital One and sponsored by renowned Austrian IT(-related) companies.

Modeled on the character of the APA eBusiness community, known since the early 2000s, this time again a keynote talk on the topic was followed by a panel discussion, rounded off by – this time very intense – audience participation.

And the general tenor of the various opinions was new to me particularly in two respects:

1. Security and data privacy

It goes without saying that in a discussion essentially about measuring, collecting and providing data through gadgets, the question of protecting this data comes up at some point. Florian Schumacher, keynote speaker and expert on the panel, however, immediately took the gravity out of the topic that it usually carries in such discussions by stating that the permanent questioning of data protection is an inhibitor of innovation. People’s attitude, he argued, is increasingly changing towards willingly entrusting their data to various systems, because they can recognize and feel the personal benefit of making it usable.

So while at previous trend events the majority of attendees often found their way out of ignorance about the discussed trend in a kind of scaremongering about the loss of data protection and control, this time the majority of opinions swung to the line: “By connecting them with suitable services, my data becomes a useful tool for myself.”

The examples given admittedly remained – in a way – in their infancy (essentially they revolved around innovative insurance premium models or health diagnosis systems), but the trend towards a more open handling of personal data – within ethical boundaries – was clearly discernible. A positive novelty, then.

2. Business models

For the most part, the discussion got stuck at the “coolness” factor and the personal data “analysis” of the gadgets in question. In my opinion (and that of some other participants), this falls far too short: in the end, a hype becomes a trend when individuals or groups can derive benefit and ultimately business from it. Justifying the use of health gadgets by the fact that individuals can compare their own data with themselves on a web portal does not yet define a business model. Neither – albeit with more justification – does the effect of self-motivation.

More plausible is the approach that insurers could grant premium discounts based on data proving a healthy lifestyle (the ethical background of such a practice, which was indeed raised several times, shall be left aside for the moment), or that medical diagnoses could be replaced, or at least supported, by online diagnosis systems.

While the cited insurance approach could certainly become one of the most plausible and possibly most obvious business models, the discussion otherwise remained rather thin on this question, and I think health gadgets will remain stuck in their current hype status if service companies, platforms and/or manufacturers do not find more revenue opportunities than selling wristbands, watches, brooches or – perhaps soon – implants.

One of the most interesting approaches came from Eugenius Kaniusas (TU Wien), who argued several times during the discussion that comparing numbers (pulse and blood values, step counts, fitness index values, etc.) may be a nice gimmick for the end consumer, but in the end offers little substantive added value, because the meaning and consequences of the values and their changes remain hidden from the medical layperson. Real benefit would only arise from an everyday-suitable translation of the collected information. Platforms would have to be created that offer suitable contextual interpretation and translation of the collected data.

Considering the large number of people who already struggle with medical reports, perhaps indeed a first idea worth pursuing for innovative business around health gadgets, going beyond insurance and diagnosis models …

 

Update:

  • Link to the press release
  • Link to the photo gallery

 


Synced – but where?

We had eventually set up our Office 365 (O365) tenant for eMail (read about that e.g. in the “Autodiscover” post) and, of course, wanted to leverage Sharepoint as well. My-sharepoint, too. And the “OneDrive for Business” sync client (ODB) … what else.

This wasn’t accomplished without some ado, though …

Setup

is very straightforward, indeed. Go to your “OneDrive” in the O365 portal and click the “sync” link at the top of the page:

O365 Sync Link, displayed


Presuming you’ve got the Office on-premise applications installed on your PC, items will quickly commence showing up in your “OneDrive for Business” folder within the “Favorites” area of Explorer.

Also, ODB is nice enough to offer to take you straight to that folder: just click the little button in the bottom right corner of the confirmation dialog that appears after syncing has been issued:

O365 ODB Confirmation popup


Easy, isn’t it?

Sharing

Now, having made files accessible in Explorer, the next thing would be to share them with others in your organization. ODB is nice in that as well: it offers a “Share…” option in the Explorer context menu, by which you’re able to launch a convenient “share’n’invite” browser popup with all necessary options:

O365 ODB Share Options


Also, that one is very straightforward in that

  • you just type in a name,
  • O365 helps you with auto-completion of known names,
  • you select whether people shall be able to edit or only to view items,
  • you’re even able to create an “edit” or “view” link which allows people to access items without a dedicated invitation,
  • etc.

So – no rocket science here. Users will easily be able to share their files with others. And once one is done with sharing, invited colleagues will receive an eMail invite to the newly shared items, which takes them into their browser and into O365 to actually see what’s newly shared with them.

Great!

And now …

Get that into your Windows Explorer!

Once the necessary items were shared with every user of our tenant as needed, I (at least) went straight to my ODB sync folder in Explorer to await the newly shared files showing up. OK, ODB takes a little while to sync it all (Hans Brender, Microsoft MVP for ODB, hence a real expert, wrote a great post on how syncing works in ODB). However, even waiting infinitely wouldn’t have let us see any shared files. What we learned pretty quickly was that the ODB sync client will – in its initial default setup – never ever sync anything that was shared with you. Only your own files will show up in Explorer. Period.

Makes no sense really for collaboration, does it? But …:

Here are some solutions:

1. Access files that are shared only through your browser

Anything that has been shared with you is accessible within your browser-based ODB access. Just click “shared with me” in the navigation on the left of the ODB portal and you’ll see it.

O365 ODB: Shared with me


Pretty lame, though, for anyone who’s used to working from within Explorer rather than, say, the browser or one of the Office applications.

2. Create a teamsite for documents shared between multiple colleagues

O365 with its Sharepoint functionality, of course, does offer the ability to create a site – which also contains a document library. Documents put there are available for anyone with permissions on that site. Permissions can even be set on a more granular level (e.g. management-only for certain folders, team-A on team-A-folders only, etc.).

Navigating to that site’s document library offers you the same “sync” link as with your own files (see screenshot above), i.e. within a few moments ODB will sync any file that you’re eligible to view or edit.

Nice. But what if creating umpteen sites for all the different project and team setups within your company is just not what you want? Or what if managing all the various permission sets within one site is just beyond the acceptable effort for your IT team? There’s at least one more possibility that might help:

3. Sync your team-mates’ ODB document libraries

As you already know, every O365 user has their own ODB site, which is invoked when one clicks OneDrive in the O365 portal. When being invited to view or edit shared files and taken to the respective ODB site in your browser, you actually end up within someone else’s document library:

O365 ODB: Sync someone else's docs


Well — sync that! Just click the “sync” link at the top as described before, and the ODB client on your PC adds another folder to the ODB folders in Explorer. That folder will show exactly what has been shared with you from that library. Not 100% perfect maybe, as it leaves you having to know “who” shared “what” with you, but still a way to work around creating a teamsite or working only from within the browser, if you don’t want to.

Does anybody out there know other options for conveniently adding shared files and folders to the local ODB folder tree? Please share your insight in a comment!

P.S. – what about the doc-lib links?

In case you do not want to go by the “sync” link in the ODB portal to invoke ODB synchronization but want to add libraries from within the ODB sync client on your PC, right-click the ODB tray icon (the little blue cloud – not the white one; that’s OneDrive, formerly aka “SkyDrive” ;)) and click “Sync a new library”. And here’s what to use for the syncing options discussed above:

  1. Your own ODB library: https://<company-name>-my.sharepoint.com/personal/<your-username>_<domain>_<TLD>/Documents (where <your-username> is what you’re called in that O365 tenant, e.g. johndoe, and <domain> and <TLD> are what your company is called in that tenant, e.g. contoso.com – in that case it would be “johndoe_contoso_com”)
  2. Teamsite: https://<company-name>.sharepoint.com/<name-of-shared-documents-library> (which depends on the language chosen at initial site setup; e.g. “Freigegebene%20Dokumente” in case setup was done in German)
  3. Someone else’s ODB library: https://<company-name>-my.sharepoint.com/personal/<that-person’s-username>_<domain>_<TLD>/Documents – i.e. instead of using your own name as described in (1) above, you just exchange it for the username of the other person whose library you want to sync
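For illustration, composing those URL patterns can be sketched in a few lines of code – the tenant “contoso”, the user “johndoe@contoso.com” and the library name are made-up placeholders, just as in the examples above:

```python
# Sketch of how the "Sync a new library" URLs are composed.
# Tenant, user and library names are made-up placeholders.

def odb_personal_url(tenant: str, email: str) -> str:
    """Personal ODB document library URL for the given user."""
    user, domain = email.split("@")
    # O365 flattens the dots of the mail domain into underscores
    slug = user + "_" + domain.replace(".", "_")
    return f"https://{tenant}-my.sharepoint.com/personal/{slug}/Documents"

def teamsite_library_url(tenant: str, library: str) -> str:
    """Teamsite document library URL; the library name depends on the
    language chosen at initial site setup."""
    return f"https://{tenant}.sharepoint.com/{library}"

print(odb_personal_url("contoso", "johndoe@contoso.com"))
print(teamsite_library_url("contoso", "Freigegebene%20Dokumente"))
```

For someone else’s library, the same personal-URL helper applies – just pass the other person’s eMail address instead of your own.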

But I think how to format all those links correctly for the ODB client’s “Sync a new library” dialog has already been discussed in multiple posts all around the web, anyway.

 


Autodiscover!

OR: How to successfully migrate from POP to an Office365 mailbox, when your hoster doesn’t support you!

Yes, @katharinakanns was right when she recently said I’d bother with Office365 until all our IT stuff was migrated onto it. Sounds ridiculous, maybe, but it’s 2 domains, 2 mailboxes, a bunch of subdomains, aliases used all around the net, etc. etc. etc. … and we’d like to merge all that into one place.

Before you read on: this one here is – after quite a while – something more down to earth again, so get ready for some bits and bytes 🙂

What’s this about?

These days I started to set up our Office365 tenant to serve both our single-person businesses as well as become the place for joint collaboration (and maybe more later on). One thing that bothered me a bit beyond normal in this was OneDrive – but that’s a different story to come … Another pretty interesting process was the domain migration. And even though umpteen blog posts already describe the way to take from different angles, we ran into one bit that couldn’t be solved with just a click. I’ll share a few straightforward steps for domain migration here; but I’ll also share some hints beyond.

1. Know your hoster/provider

The domain you want to migrate into Office365 will most probably be managed by some ISP (like e.g. “1und1.de” or any other hoster; one whom you trust, maybe). From our experience, I’d suggest you get in touch with your hoster’s support hotline first and make sure

  • (a) whether editing DNS records for your domain is possible for you yourself (e.g. by some web interface) and (!) to which extent
  • (b) how accurately the hotline reacts in case of problems
  • (c) whether they can help in case of any failure over the weekend (one would want to have a business mailbox up and running on Monday morning, I guess)

I had to migrate 2 domains, one of which was with a hoster that didn’t allow me to edit DNS myself but reacted swiftly to support requests and executed them just perfectly. The other one allowed me to edit the DNS but only let me enter TXT and MX records (no CNAME records – at least not for the primary domain). Or to be precise: the self-service web interface would let me do that, but clearly stated that any existing records for this domain would become invalid by this step – and I wasn’t too sure whether this might run us into trouble with our business website …


The 1und1.de warning about deactivating existing records

 

Note: The second one was "1und1.de", and they do not offer any possibility of doing anything in terms of DNS beyond what is provided for self-service. I tried really hard with their support guys. No(!) way(!).

2. How migration works when your ISP cooperates

To begin with, it would of course be possible to simply move DNS management from the ISP to Office365. In that case, all the ISP would have to do is change the addresses of the name servers managing the respective domain. We didn’t want that for several reasons, hence went for the domain migration option, which is actually pretty straightforward.

The Office365 domain management admin console is totally self-explanatory in this, and there are umpteen educational how-to posts. The keyword is – surprise(!) – “Domains”, and you just follow steps 1-2-3 as suggested.


Where you start: Office365 admin console

One can either start at the “Email address” section here (if no custom domain is yet managed within the tenant) or at “Domains” further below:

  1. Office365 wants to know whether the domain is yours. Therefore, in the first step Office365 shows you a TXT DNS record which you have to forward to your hoster to be entered as part of “your” DNS. If you’re able to enter that yourself, this step is accomplished in no time. Otherwise it simply depends on the response time of the support line. BTW: DNS propagation in general may take up to 72 hours, as we know – however, in reality I didn’t experience any delay after having received confirmation that the TXT record had been entered. I could proceed to step 2 instantly.
  2. With step (2), Office365 changes the name of any user that you want to make part of the migrated domain. Essentially that’s a no-brainer, but an Office365 user can currently only send eMails identified by exactly this username. Receiving works via multiple aliases, which can be configured separately in the user management console; but sending always binds to the username (there are ways around this as well – but that’s again a different story). Hence, it is worth some consideration which users you select in this step.
  3. Proceeding to the next step means stepping into the crucial part: after this change is completed, your eMail, Lync and – if chosen – website URL will be redirected to Office365. Admittedly, in both cases I only chose “eMail and Lync” for migration, which means that the website remained with the ISP – for now … As the penultimate step, after having chosen the services that you want to switch over, Office365 gives you a list of records that need to be entered as DNS records for your domain.

Let’s have a brief look into those DNS records as they are the ones that eventually bring your migration to life:

  • MX records: This is normally one record that identifies where eMails addressed to the domain in question (to: name@yourdomain.tld) shall be routed. No rocket science here, and getting it into your DNS shouldn’t be a bummer, really.
  • CNAME records: The most important of these is the “autodiscover” record; I’d argue it to be the “most compulsory” one. Not having “autodiscover” set in the DNS of your domain means that eMail clients will not be able to discover the server for the respective user automatically, i.e. users will “pull their hair out” trying to configure their mail clients for their Office365 Exchange account. In all honesty, I was not able to find a way to figure out the correct mail server string for Outlook for our users, as it contains the mailbox ID (a GUID@domainname.tld; if anyone of you out there knows one, please drop your solution as a comment). So, without the “autodiscover” record you’ll be pretty lost, I think – at least with mobiles and the like … The other CNAME records are for Lync and Office365 authentication services. Here‘s a pretty good TechNet article listing them all.
  • The SPF TXT record helps prevent your domain from being used for spam mailing.
  • And finally, 2 SRV records serve the Lync information flow and enable SIP federation.
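To illustrate, here is roughly what the complete record set looks like in zone-file notation. This is a sketch only: “yourdomain.tld” is a placeholder, and the authoritative target values are always the ones your tenant’s domain setup page lists – the hosts below are merely the commonly documented Office365 defaults of that era:

```
yourdomain.tld.                        IN MX    0   yourdomain-tld.mail.protection.outlook.com.
autodiscover.yourdomain.tld.           IN CNAME     autodiscover.outlook.com.
sip.yourdomain.tld.                    IN CNAME     sipdir.online.lync.com.
lyncdiscover.yourdomain.tld.           IN CNAME     webdir.online.lync.com.
yourdomain.tld.                        IN TXT       "v=spf1 include:spf.protection.outlook.com -all"
_sip._tls.yourdomain.tld.              IN SRV   100 1 443  sipdir.online.lync.com.
_sipfederationtls._tcp.yourdomain.tld. IN SRV   100 1 5061 sipfed.online.lync.com.
```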

[update] Here are some hints on how we got Lync to work for our accounts; for eMail, though, of all the records above the MX alone would be fully sufficient. I’d just once more emphasize “autodiscover”, as this caused us some headache, because …

3. What do you do, if your ISP does not add “autodiscover”?

As explained above, one’s in bad shape if an ISP refuses to add the “autodiscover” CNAME record demanded by Office365 for a custom domain. In the case of “1und1” this was exactly what ran us into trouble. However, there’s a pretty simple solution to it. But to begin with, here are some things that don’t work (i.e. you don’t need to dig into them):

  • Entering CNAME records into the respective PC’s hosts file: Normally a hosts file can be used locally to override DNS – but only for resolving names to IPs, not for CNAMEs.
  • Installing a local DNS server: Might work, but seemed like too much work. I didn’t want to dive into this for one little DNS record.
  • Finding out the mailbox server for manual configuration: Well – as said above, I didn’t succeed in due course.

Finally, @katharinakanns found the – utterly simple – solution by just asking the world for “autodiscover 1und1”. So here’s what will probably work with any petulant ISP:

  • create a subdomain named “autodiscover.yourdomain.tld” with your ISP (normally, every ISP allows creation of unlimited subdomains)
  • create a CNAME record for this new subdomain and enter “autodiscover.outlook.com” as its value/address portion

The CNAME config screen again – and now we’re fine with checking the box at the bottom

Done. That’s it. Mail clients discovered the correct mail server automatically, and configuration instantly became a matter of seconds 🙂

[update] 1und1 has updated their domain dashboard, hence the configuration is easier now – find hints here!

 

{feature image from Ken Stone’s site http://masterstrack.com/ – I hope he doesn’t mind me using it here}

 


#CITIZENFOUR – a film by Laura Poitras

Thanx to rolfgeneratedcontent, I decided on one of my meanwhile (unfortunately) rare visits to the cinema and watched CITIZENFOUR, the documentary film by Laura Poitras outlining the chronology of events leading to the disclosure of some of the biggest spying endeavours of Western governments (no, this is not only about the US, indeed).

The film is great work! It offers a glimpse of the vastness of data collected by programs like PRISM or TEMPORA, and of how security agencies engaged with big telecommunication providers in order to simply intercept lines, communications, traffic, … at the very source of transmission instead of at its origin – the persons involved.

The technology behind these interceptions isn’t really rocket science (except, of course, for the decryption intelligence that those national security agencies possess simply by virtue of their exceptionally high budgets – brain power is venal, too).

However, the question I keep pondering since watching this documentary is: What’s really the revelation? Not only of the film, but of Ed Snowden’s work as such? Don’t get me wrong! I won’t argue for dropping human rights and personal privacy laws. Not at all. Neither will I say that the collection and structured analysis of data from millions of people against whom there is no legal suspicion has any rightful legal basis. No. What I do want to query, though, are all those post-Snowden arguments against Cloud vendors and service providers which state that no data can be given off-premise anymore because of all the various programs that Snowden blew the whistle on.

Let me give you three simple considerations why I think that Snowden may have shaken us up (as awareness was so low prior to his revelations) but has not really disclosed the unknown:

  1. In a talk in 2013, Dr. Gerd Polli, ex-head of the Austrian national security agency, stated in essence that national security agencies have, throughout the years, always had the possibilities, the money and the brain power to not only be ahead of, but by far supersede, any technological intelligence within any non-governmental endeavour. Not only were they able to create the respective programs; governments and businesses have additionally been their best-paying customers for espionage services – over decades. So the fact as such is far less new than, e.g., Cloud Computing as a disruptive technology.
  2. Last year, facebook claimed 2.23 billion active users. All of them disclose information about their current, future and past positions, their activities, the people they surround themselves with, … Even though facebook – in my humble opinion – does a good job of allowing people to keep a respective level of privacy, it still lets through quite a bit, even when I’m not connected to someone. Very useful information for anyone intending to stalk out that little extra about me.
  3. At any time in the past – long before 9/11 and long before the capabilities of Cloud and social services – I could have been observed by governmental institutions just because I may have been mistakenly judged to have illegal objectives of some kind. In the quest to identify truly dangerous characters in a society, it is highly unfortunate that sometimes legally acting people become victims. I’m by no means claiming this to be a good thing. And I believe it is everyone’s responsibility to help clear up wrong accusations, and even more is it the core responsibility of governmental executives to treat observation and investigation cases with ever more care. However, the fact remains – such things happen, and they also happened in the past.

My claim here is: This isn’t new. This isn’t a revelation. This isn’t a disclosure of the unknown. And this is by no means a reason why any kind of online service should be considered less secure than it has ever been before.

What remains is the utterly hardest question: What can – what should – be done about it? Nothing? Abandon those programs? Let the agencies forever act freely, just upon their own will?

There’s no right answer to that, I believe. And I will always appreciate the aim of governments to reduce the danger of the next silly poacher taking a human life in the name of some religious interpretation …

I do think the only rightful answer for acting and living within the fact of ubiquitous observation and data collection is twofold:

  • Every single person has to act transparently, openly and humanely, in a manner which obeys the laws, rules and regulations of their society, for the benefit of a calm and secure life for everyone.
  • And every company – especially, but by no means only, telcos, security agencies and/or service providers – has to be fully transparent about every – literally: EACH and EVERY – interception of information running through their lines, services, …, their business.

I as a citizen have a right to know what is known about me, and by whom. And that includes security agencies to the full extent. In that respect, Ed Snowden’s revelations indeed serve the greater purpose of changing how governmental security has treated privacy so far – and in that respect, they do need to be continued.

 

{ feature image from www.thehollywoodnews.com }


The Big Data Evolution (updated)

In 1969 (the year I was born), an 8″ floppy disc by IBM was able to capture 80 KB (or 80 * 1024 = 81.920 byte). Only IBM could write to it; for “normal” people it was read-only.

In 1976, a 5,25″ single-sided floppy disc could store 110 KB (112.640 byte); by 1984 it had made it to double-sided and high-density with the famous capacity of 1.200 KB (or 1.228.800 byte).

The 3,5″ double-density classic appeared in the same year. Unforgettable: its epic storage capacity of 720 KB (or 737.280 byte). It took until 1987 for its high-density sibling to come up with 1.440 KB (1.474.560 byte). MS-DOS needed 3 of those floppy discs. My Amiga computer back then still worked with DD types of 880 KB, and in 1990, when I seriously started to program, all of my private data fitted onto just 1 of those little HD wonders.

I do not really recall the tape drive era; however, I do recall the guy responsible for source control in our projects in the office back then, who could watch flying sheets of paper in Windows Explorer for hours whenever he backed up our VSS database (probably onto a 2,4 GB – 2.516.582.400 byte – Sony tape).

More or less at the same time I discovered the tremendously brilliant invention of the IoMega Zip Drive, bought a 100 MB (104.857.600 byte) model and was utterly happy to be able to back up not only all of my private data but also all of the source code of one project onto just one disc.

And just when I had purchased a second Zip Drive for a – then – reasonable price, in order to be able to carry data between home and office without carrying a drive, our company had ordered its first CD-RW writer – an epic piece made by — ? not sure ? — Plextor, I think. What I do remember clearly, however, is an audio CD project which I did at that time, using the utterly famous “Feurio!”. For an 8th grade school class I sampled English songs with strong lyrics like “Free Electric Band” or “Me and Bobby McGee” for their English lesson and as a school’s-out present. I think I destroyed like 20 raw discs before getting 13 working ones 😉

It was 1997++, and there were the 74 and the 80 minute CD-RWs. 74 minutes equalled around 640 MB (671.088.640 byte), and 80 minutes equalled 700 MB, which by “overburning” could sometimes be extended to 730 MB (765.460.480 byte) – involving the risk that the target drive couldn’t read the disc anymore.

It was in those years that I started backing up my data onto disc about once a year (incl. the transfer of some very old projects from the early 90ies), and the backups captured from 1995 until 2010 hold 159.666 files or 40,3 GB (43.363.611.776 byte) of data – which is

  • 57 times the 80 minute CD
  • 414 times my Zip Drive
  • 29.408 times the 3,5″ HD
  • and 529.341 times the 8″ floppy disc
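Those multiples can be double-checked with a few lines of arithmetic on the byte counts given in the text:

```python
# Byte counts as given in the text (thousands separators dropped)
backup_bytes = 43_363_611_776   # 40,3 GB of backups, 1995-2010

media = {
    "80 min CD-RW (overburned)": 765_460_480,
    "100 MB Zip disc":           104_857_600,
    '3,5" HD floppy':            1_474_560,
    '8" IBM floppy':             81_920,
}

for name, capacity in media.items():
    # how many of each medium the backup set would fill
    print(f"{name}: {round(backup_bytes / capacity):,} times")
```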

Which is still a ridiculously small amount of data compared to a EUR 59,- 1 TB (1.099.511.627.776 byte) external HDD or my EUR 800,- 4-bay 6 TB RAID-5 NAS (6.597.069.766.656 byte).

One Windows Azure storage account has a capacity limit of 200 TB (219.902.325.555.200 byte) – which obviously is still quite small compared to the overall storage capacity of all Azure data centers worldwide. And facebook is reported to hold the equivalent of over 100 PB (112.589.990.684.262.400 byte) of data, while its more than a billion users add 7 PB of photo storage each and every month.

Which means that within 45 years the storage used by a private person for private reasons has increased by a factor of more than 96 billion!

I just do hope this isn’t all food and cat pics.


Update:

I’ve just purchased a MiniSD card for my mobile: 16 GB at the size of a fingernail (roughly 0,3″, or 5,7% of the size of a 5,25″ floppy disk). It cost EUR 7,70. I dug out the price of a 5,25″ double-sided high-density disc; according to my records, that was about EUR 0,17 back in 1995. Which equals a price drop per gigabyte of more than 99% over those years … Gives an interesting twist to the quote “this is worth nothing anymore” …
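The “more than 99%” is quickly verified from the figures mentioned – a back-of-the-envelope sketch, taking 1 GB as 2^30 bytes:

```python
GB = 2 ** 30  # bytes per (binary) gigabyte

# 5,25" DS/HD floppy: EUR 0,17 for 1.200 KB (1.228.800 byte)
floppy_eur_per_gb = 0.17 / (1_228_800 / GB)

# MiniSD card: EUR 7,70 for 16 GB
sd_eur_per_gb = 7.70 / 16

price_drop = 1 - sd_eur_per_gb / floppy_eur_per_gb
print(f"{floppy_eur_per_gb:.2f} EUR/GB then, {sd_eur_per_gb:.2f} EUR/GB now "
      f"-> a drop of {price_drop:.2%}")
```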

 
