
Tag Archives: code

Therefore test before you bind yourself forever!

Chello UPC prides itself on fast internet. Hyper-fast internet. Unfortunately, at least in Vienna's inner districts, that mostly remains a myth! And since alternatives such as "blizznet" et al. are scarce in these parts, and LTE does not deliver better results either due to the building density, you are at the mercy of the quasi-monopolist's miserable service quality.

Or are you?

No, you are not. A guaranteed bandwidth simply has to be delivered; if it is not, the customer has a warranty claim according to the VKI (see the derstandard.at article of May 25 of this year).

The Silberschneider script on the Mac in 4 steps

The article mentioned above comes with a "speed test" script that periodically checks your internet speed. Ideally, you configure script and cron job on a permanently running Linux server (that is what it was tailored for). With a few adaptations, though, it also works on a Mac. Here are the details:

1. Download and install

The workhorse speedtest_cron is available on GitLab! It makes use of a speedtest-cli script by "Sivel" (github download). Download both and place them in a new, dedicated folder under ~/Library (~ being your user directory, e.g. /<main-hd>/Users/<my-name>/). The speedtest-cli files go into the prepared subdirectory "speedtest_cli" (note: speedtest-cli is released under the Apache license; speedtest_cron is completely free to use, without any warranty).
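On the shell, the setup could look roughly like this (a sketch: the GitLab URL is a placeholder, use the link above; the folder name speedtest is my choice; the script's file name is the one speedtest_cron expects):

mkdir -p ~/Library/speedtest && cd ~/Library/speedtest
# speedtest_cron from GitLab - the URL is a placeholder, use the link above
git clone https://gitlab.com/<user>/speedtest_cron.git .
# speedtest-cli by Sivel; its script goes into the prepared subdirectory speedtest_cli
git clone https://github.com/sivel/speedtest-cli.git /tmp/speedtest-cli
cp /tmp/speedtest-cli/speedtest_cli.py speedtest_cli/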

2. Adjust the paths

Thanks to its README instructions, speedtest_cron is perfectly prepared for customization; essentially, all you have to do is adapt the paths to the actual layout of your own machine. In the script, these are all the occurrences of /path/to/this/folder.
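If you'd rather not edit by hand, one sed pass over the file does the trick (a sketch, assuming the folder from step 1; the empty '' is required by the BSD sed that ships with OS X):

sed -i '' "s|/path/to/this/folder|$HOME/Library/speedtest|g" ~/Library/speedtest/speedtest_cron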

3. Adapt the network interface

Under Linux, the network interfaces are numbered eth0..n. Under Mac OS X they are called en0..n! Since the cron script tries to take the source of the speed test (the source IP address) into account, this part needs to be adapted. To do so, change the following line in the file speedtest_cron:

/<my-path-to-speedtest>/speedtest_cli/speedtest_cli.py --share --server 5351 --simple --source `/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'` > /<my-path-to-speedtest>/speedtests/$DATE.log

The essential part starts at "/sbin/ifconfig …". ifconfig returns the network configuration of all interfaces, on OS X too. eth0 does not exist there, so the command fails. Using en0 there is a result, but one formatted differently than under Linux; hence, the subsequent extraction of the IP address works differently as well. The adapted command looks like this:

/<my-path-to-speedtest>/speedtest_cli/speedtest_cli.py --share --server 5351 --simple --source `/sbin/ifconfig en0 | grep 'inet' | cut -d: -f2 | awk '{ print $2}'` > /<my-path-to-speedtest>/speedtests/$DATE.log
  • ifconfig en0 returns the data of the machine's first network interface (feel free to use another one if the test is supposed to run over it)
  • grep 'inet' extracts, from the entire ifconfig output, the part containing the IP address
  • cut -d: -f2 cuts away everything before a colon and returns only the second field of the line (this could actually be omitted on OS X)
  • awk '{ print $2}' returns the second field of the "inet" line: the IP address

And that address is then fed to the speedtest script as the source.
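To see why awk picks the second field, here is what the relevant output typically looks like on OS X (the addresses are, of course, examples):

$ /sbin/ifconfig en0 | grep 'inet'
	inet6 fe80::1234:abcd:5678:9abc%en0 prefixlen 64 scopeid 0x4
	inet 192.168.0.10 netmask 0xffffff00 broadcast 192.168.0.255

The second field of the "inet" line is the IPv4 address. Note that grep 'inet' also matches the "inet6" line; should that throw off the result on your machine, grep for 'inet ' (with a trailing blank) instead.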

4: Create the cron job

Admittedly, this is a bit tedious on the Mac. crontab is not recommended; instead, all scheduled jobs on OS X run via launchd. Its scheduling, however, offers no syntax like "run every 10 minutes between X and Y o'clock"; that unfortunately has to be expressed with multiple identical parameter lines:

<dict><key>Hour</key><integer>8</integer><key>Minute</key><integer>30</integer></dict>

The line above essentially says: start the job at 8:30. A line like this now goes into the launchd configuration file as many times, with as many times of day, as you want runs of speedtest_cron. Somewhat tedious, but well … if that's too annoying for you, simply use the LaunchControl UI (download here).

So, setting up launchd step by step:

  • Create a …plist file with a name of your choice
  • Place it in the directory ~/Library/LaunchAgents (this is where all user-defined launchd job configurations live on OS X)
  • Label (anything you like): <key>Label</key><string>local.speedtest</string>
  • Program to execute: <key>Program</key><string>/<my-path-to-speedtest>/speedtest_cron</string>
  • Define the start times with the key <key>StartCalendarInterval</key>
  • Insert the line mentioned above as many times as you like (see the sketch below)
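Put together, a minimal plist might look like this (a sketch; label, path, and the three example runs at 8:30, 8:40, and 8:50 are mine to choose):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
 <key>Label</key><string>local.speedtest</string>
 <key>Program</key><string>/<my-path-to-speedtest>/speedtest_cron</string>
 <key>StartCalendarInterval</key>
 <array>
  <dict><key>Hour</key><integer>8</integer><key>Minute</key><integer>30</integer></dict>
  <dict><key>Hour</key><integer>8</integer><key>Minute</key><integer>40</integer></dict>
  <dict><key>Hour</key><integer>8</integer><key>Minute</key><integer>50</integer></dict>
 </array>
</dict>
</plist>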

A complete and very good launchd guide is available here: http://launchd.info/

4a: Starting without a reboot

launchd jobs start at boot or at login; alternatively, you can start the job manually right away using the command

launchctl load ~/Library/LaunchAgents/<my-plist-name>.plist

From then on, the speed test runs according to the configured schedule and drops a file for each run into the subdirectory ~/Library/<my-path-to-speedtest>/speedtests.
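Whether launchd has picked the job up can be verified right away (using the label chosen above):

launchctl list | grep local.speedtest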

If you like, all of these log files can then be merged into a single CSV file with the included speedcsv script.

And that CSV you can then happily present to UPC as evidence of their poor service quality, in order to at least pay somewhat less, hoping that a large number of such proofs will finally tempt the provider into sustainably improving its service in Vienna's inner districts.

 


DevOps style performance monitoring for .NET

 

{{ this article was originally published on DevOps.com }}

 

Recently I began looking for an application performance management solution for .NET. My requirements are code level visibility, end to end request tracing, and infrastructure monitoring in a DevOps production setup.

dotTrace is clearly the best-known tool for code level visibility in development setups, but it can't be used in a 24×7 production setup. dotTrace also doesn't do typical Ops monitoring.

Unfortunately a Google search didn’t return much in terms of a tool comparison for .NET production monitoring. So I decided to do some research on my own. Following is a short list of well-known tools in the APM space that support .NET. My focus is on finding an end-to-end solution and profiler-like visibility into transactions.

New Relic was the first to do APM as SaaS, focused squarely on production with a complete offering. New Relic offers web request monitoring for .NET, Java, and more. It automatically shows a component-based breakdown of the most important requests. The breakdown is fairly intuitive to use and goes down to the SQL level. Code level visibility, at least for .NET, is achieved by manually starting and stopping sampling. This is fine for analyzing currently running applications, but makes analysis of past problems a challenge. New Relic's main advantages are its ease of use, intuitive UI, and a feature set that helps you quickly identify simple issues. Depth is its main weakness: as soon as you try to dig deeper into the data, you're stuck. This might be a minor point, but if you're used to working with a profiler, you'll miss a CPU breakdown, as New Relic only shows response times.


Dynatrace is the vendor that started the APM revolution and is definitely the strongest horse in this race. Its .NET feature set is the most complete, offering code level monitoring (including CPU and wait times), end to end tracing, and user experience monitoring. As far as I can determine, it's the only tool with a memory profiler for .NET, and it also features IIS web request insight. It supports the entire application life cycle, from development environments through load testing to production, and as such is nearly perfect for DevOps. Due to its pricing structure and architecture it's targeted more at the mid-market and enterprise segments. In terms of ease of use it's catching up to the competition with a new web UI. On its own it's rather light on infrastructure monitoring, but it shows additional strength with the optional Dynatrace synthetic and network monitoring components.


Ruxit is a new SaaS solution built by Dynatrace. It's unique in that it unites application performance management and real user monitoring with infrastructure, cloud, and network monitoring in a single product. It is by far the easiest to install; it literally takes 2 minutes. It features full end to end tracing, code level visibility down to the method level, SQL visibility, and RUM for .NET, Java, and other languages, with insight into IIS and Apache. Apart from this, it has an analytics engine that delivers both technical and user experience insights. Its main advantages are its ease of use, web UI, fully automated root cause analysis, and, frankly, amazing breadth. Its flexible consumption-based pricing scales from startups, cloud natives, and mid-markets up to large web scale deployments of tens of thousands of servers.


AppNeta's TraceView takes a different approach to application performance management. It does support tracing across most major languages, including database statements, and of course .NET. It visualizes things in charts and scatter plots; even traces across multiple layers and applications are visualized in graphs. This has its advantages but takes some getting used to. Unfortunately, while TraceView does support .NET, it does not yet offer code level visibility for it. This makes sense for AppNeta, which as a whole is more focused on large scale monitoring and has more of a network-centric background. For DevOps in .NET environments, however, it's a bit lacking.


Foglight, originally owned by Quest and now owned by Dell, is a well-known application performance management solution. It is clearly meant for operations monitoring and tracks all web requests. It integrates infrastructure and application monitoring, end to end tracing, and code level visibility for .NET, among other things. It has the required depth, but it's rather complex to set up and, as far as I could tell, prone to alert storms. It takes a while to configure and to get the data you need; once properly set up, though, you get a lot of insight into your .NET application. In a fast-moving DevOps scenario, however, manually adapting it to infrastructure changes might take too long.


AppDynamics is well known in the APM space. Its offering is quite complete and features .NET monitoring, quite nice transaction flow tracing, user experience monitoring, and code level profiling capabilities. It is production capable, though code level visibility may be limited there to reduce overhead. Apart from these features, AppDynamics has some weaknesses: mainly the lack of IIS request visibility, and the fact that it only features wall clock time with no CPU breakdown. Its Flash-based web UI and rather cumbersome agent configuration can also be counted as negatives. Compared to others it's also lacking in terms of infrastructure monitoring. Its pricing structure definitely targets the mid-market.


ManageEngine has traditionally focused on IT monitoring, but in recent years it has added end user and application performance monitoring to its portfolio, called APM Insight. ManageEngine does give you metric level insight into .NET applications, plus transaction trace snapshots which provide code level stack traces and database interactions. However, it's apparent that ManageEngine is a monitoring tool, and APM Insight doesn't provide the depth one might be accustomed to from other APM tools and profilers.


JenniferSoft is a monitoring solution that provides nice real-time dashboards and an overview of your environment's topology. It enables users to spot deviations in transaction speed through real-time scatter charts and transaction analysis. It provides "profiling" for IIS/.NET transactions, but only on single tiers, and has no transaction tracing. Its strong suit is clearly cool dashboarding, not necessarily analytics; for example, it is the only vendor featuring 3D-animated dashboards.


Conclusion: there's more going on in the APM space than a Google search reveals at first sight, and I did discover some cool vendors addressing my needs. However, the field thins out quickly once you dig for end-to-end visibility from code down to infrastructure, including RUM, web service requests, and deep SQL insight. And if you want to pair that with a nice, fluent, easy-to-use web UI and efficient analytics, there actually aren't many left …


How to Recover from Enterprise Vault

… that moment when you're in the cloud (a real one, i.e. in a plane somewhere over an ocean) and you've eventually got nothing else to do than read those loads of docs you dropped into your mailbox for later use … that very moment when your enterprise's archiver kicks in and Outlook tells you it can't load your eMail because you are (guess what?) OFFLINE!

Here’s what I did.

 

Why?

Enterprise Vault is a great archiving solution. It integrates pretty seamlessly with Outlook: you don't notice any difference in accessing eMails, whether they have been archived in the meantime or not. There is a difference, though: once Vault has gotten hold of one of your eMails, all you really have left in your folders is a torso of some 300 characters with an embedded link to the respective Vault item of your eMail.

And then there are those occasions when you want to access exactly those old eMails that Vault grabbed long ago, even when offline; and, honestly, PST is not such a bad concept (while I do appreciate companies' aim to reduce or restrict PST usage). Anyway, I gave this some thought recently and ultimately created a solution which works perfectly for me and now lets me access all my old mail again, through a PST folder.

Here's how that solution works:

 

The Solution

is a simple piece of Outlook VBA that grabs every vaulted eMail, opens it, and copies it to a corresponding PST folder. Once opened and copied (the "copy" is key), it loses its vault link and gets its entire content back.

 

1: Search vaulted eMails

First of all, I defined an Outlook Search Folder to grab all vaulted eMails. This can be done by querying the .MessageClass field:

I went with the Search Folder idea because otherwise I would have had to walk through all eMails to find the vaulted ones. BTW: on vaulted eMails, the MessageClass field reads "IPM.Note.EnterpriseVault.Shortcut" in its entirety.
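If you prefer to create that Search Folder from code rather than clicking it together, a sketch along these lines should do; it uses Outlook's AdvancedSearch with a DASL filter on the message class property and saves the result as a Search Folder (scope, tag, and the folder name "Vaulted" are my own picks):

' Sketch: create the search folder for vaulted items programmatically.
Sub CreateVaultSearchFolder()
 Dim oSearch As Outlook.Search
 ' DASL name of the MessageClass property (PR_MESSAGE_CLASS)
 Const strFilter As String = _
  """http://schemas.microsoft.com/mapi/proptag/0x001A001E"" = 'IPM.Note.EnterpriseVault.Shortcut'"
 Set oSearch = Application.AdvancedSearch("'Inbox'", strFilter, True, "VaultSearch")
 oSearch.Save "Vaulted" ' the search runs asynchronously; the saved folder keeps filling up
End Sub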

2: Folder structure

I then wanted to replicate my folder tree in the target PST, just … well: just 'cause I'm used to it. That's a little recursion:

' Recursively mirrors aFolder (including its subfolders) underneath aRootFolder
' and returns the found or newly created counterpart of aFolder.
Function CreateFolder_Recursive(aRootFolder As Outlook.MAPIFolder, aFolder As Outlook.MAPIFolder, bMailOnly As Boolean) _
  As Outlook.MAPIFolder
Dim fldReturn As Outlook.MAPIFolder
Dim itm As Object, itm2 As Object
 ' reuse the folder if it already exists underneath the root
 For Each itm In aRootFolder.Folders
  If itm.Name = aFolder.Name Then
   Set fldReturn = itm
   Exit For
  End If
 Next itm
 If fldReturn Is Nothing Then
 ' create the folder only if it is a mail folder or if the parameter flag indicates that we shall create all folders
  If aFolder.DefaultItemType = olMailItem Or Not bMailOnly Then
   Set fldReturn = aRootFolder.Folders.Add(aFolder.Name)
  End If
 End If
 ' recurse into the subfolders
 If Not (fldReturn Is Nothing) Then
  For Each itm2 In aFolder.Folders
   CreateFolder_Recursive fldReturn, itm2, bMailOnly
  Next itm2
 End If
 Set CreateFolder_Recursive = fldReturn ' return the found or created folder
End Function
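Called once per top-level folder, this replicates the whole tree. For instance (a sketch; both folder picks are mine):

' Sketch: mirror the mailbox's folder tree underneath a chosen PST root (mail folders only).
Sub ReplicateTree()
 Dim mailRoot As Outlook.MAPIFolder, pstRoot As Outlook.MAPIFolder
 Dim fld As Outlook.MAPIFolder
 Set mailRoot = Application.Session.GetDefaultFolder(olFolderInbox).Parent ' the mailbox root
 Set pstRoot = Application.Session.PickFolder ' choose the PST's top folder
 For Each fld In mailRoot.Folders
  CreateFolder_Recursive pstRoot, fld, True
 Next fld
End Sub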

3: Get the search folder to retrieve the vaulted eMails from

Finding the respective search folder is just an iteration over all stores to figure out the SearchFolder object with the right name; wrapped into a function, it looks like this:

Function FindMySearchFolder(aFolderName As String) As Outlook.MAPIFolder
Dim colStores As Outlook.Stores
Dim oStore As Outlook.Store
Dim oSearchFolders As Outlook.Folders
Dim oFolder As Outlook.MAPIFolder
 On Error Resume Next ' some stores do not support search folders
 Set colStores = Application.Session.Stores
 For Each oStore In colStores
  Set oSearchFolders = oStore.GetSearchFolders
  For Each oFolder In oSearchFolders
   'Debug.Print (oFolder.FolderPath)
   If Right$(oFolder.FolderPath, Len(aFolderName)) = aFolderName Then
    Set FindMySearchFolder = oFolder
   End If
  Next
 Next
End Function

 

4: Finally – the eMail copy routine

That one's the major piece of it; with every eMail retrieved from the SearchFolder you have to

  • Open it via the MailItem.Display command; this creates an Inspector object
  • Grab the Application.ActiveInspector and, from that, the Inspector.CurrentItem
  • Once the MailItem is discovered, you can copy it: currentItem.Copy. That's a major step. You could move the item right away into the target folder in your PST, but that would not void the vault link.
  • Finally, after that copy operation, you can move the MailItem into the destined target folder (I made sure it is the same as in the original mail store): MailItem.Move targetFolderName
  • After moving, close the item without changes: MailItem.Close olDiscard

With that operation applied to any of the vaulted eMails, they get freed and become accessible without a vault connection.
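Put into code, the whole loop could look like this (a minimal sketch, assuming the helper routines from above; the search folder name "Vaulted" is just my example):

' Sketch: unvault every item the search folder finds, into a PST folder of your choice.
Sub UnvaultAll()
 Dim srcFolder As Outlook.MAPIFolder, targetFolder As Outlook.MAPIFolder
 Dim itm As Object
 Dim openItem As Outlook.MailItem, copiedItem As Outlook.MailItem
 Set srcFolder = FindMySearchFolder("Vaulted") ' the search folder from step 3
 Set targetFolder = Application.Session.PickFolder ' choose the PST target folder
 For Each itm In srcFolder.Items
  itm.Display ' opening loads the full item ...
  Set openItem = Application.ActiveInspector.CurrentItem
  Set copiedItem = openItem.Copy ' ... and the copy voids the vault link
  copiedItem.Move targetFolder
  openItem.Close olDiscard ' close the original without changes
  DoEvents ' lets Outlook recover from the forms cache load (see the hints below)
 Next itm
End Sub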

 

Now, a few useful hints

for the benefit of your patience:

  • The Outlook forms cache is a tricky beast. As Enterprise Vault uses a bunch of custom forms to handle vaulted eMails, the forms cache is heavily used during this operation. I removed it before execution and also made sure that, in case it got scrambled again, forms would be loaded from their original source instead of from the cache. Here are a few sources on the Outlook forms cache and the ForceFormReload registry key.
  • This still did not allow the macro to execute on all of the 1300-something eMails I had to unvault. Ultimately, a simple DoEvents command in the macro's main loop allowed Outlook to regularly recover from its heavy use of the forms cache.
  • Where to start? I used the namespace method PickFolder and simply chose the folder to target my eMails to in the dialog it brings up.
  • Deletion after unvaulting: you might want to consider deleting any vaulted eMail from your main mail store once it has been copied to the PST.

So, the end result now resides within my Outlook application as a VBA routine and lets me regularly unvault and PST-archive my eMail.

Nice .. I think.

 
