The saga of the Oracle client and the “Attempted to read or write protected memory” exception

On one of the projects I worked on recently, we had a strange problem: the application was deployed to multiple servers and ran perfectly on all but one. On that one server the log file showed exceptions with the message “Attempted to read or write protected memory”.

Let’s face it, if the code works on all the other boxes, the likelihood was a configuration or installation issue on the one anomalous machine. So I knocked together a very simple Oracle client (sample code listed below) to prove that the issue was unrelated to the software I maintained (you know how it is, you might be pretty sure where the problem is but sometimes you have to prove it to others).

// requires: using System; using System.Data; using Oracle.DataAccess.Client;
try
{
   using (var connection = new OracleConnection(connectionString))
   {
      connection.Open();

      using (var command = connection.CreateCommand())
      {
         command.CommandType = CommandType.Text;
         command.CommandText = queryString;

         using (var reader = command.ExecuteReader())
         {
            while (reader.Read())
            {
               Console.WriteLine(reader.GetString(0));
            }
         }
      }
   }
}
catch (Exception e)
{
   Console.WriteLine(e.Message);
   Console.WriteLine(e.StackTrace);
}

Obviously you’ll need to supply your own connectionString and queryString if using this snippet of code.

Indeed, this simple client code failed with the same issue, so it was definitely nothing to do with my application (and yes, the same sample code was also tested on the working servers, where it worked perfectly).

We also connected to Oracle via sqlplus and all seemed fine, using the same connection details and query that we were using in the failing application.

Upon further investigation it became clear that multiple (well two) Oracle client installs existed on the machine and therefore it seemed likely our app was somehow trying to connect via a newer version of the Oracle client.

Luckily the web came to the rescue with the following configuration, which allows us to point our .NET client code at a specific Oracle client installation.

<configSections>
   <section name="oracle.dataaccess.client"
    type="System.Data.Common.DbProviderConfigurationHandler, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
</configSections>
  
<oracle.dataaccess.client>
   <settings>
      <add name="DllPath" value="c:\oracle\product\10.2.0\client_2\BIN"/>
   </settings>
</oracle.dataaccess.client>

C# 6 features

A look at some of the new C# 6 features (not in any particular order).

The Null-conditional operator

Finally we have a way to reduce the usual

if (PropertyChanged != null)
   PropertyChanged(this, new PropertyChangedEventArgs(propertyName));

to something a little more succinct

PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
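In context, a minimal INotifyPropertyChanged implementation (the ViewModel class and Name property here are purely illustrative) might now look like this

using System.ComponentModel;

public class ViewModel : INotifyPropertyChanged
{
   private string name;

   public event PropertyChangedEventHandler PropertyChanged;

   public string Name
   {
      get { return name; }
      set
      {
         name = value;
         // the ?. operator skips the Invoke when there are no subscribers
         PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
      }
   }
}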

Read-only auto properties

In the past we’d have to supply a private setter for read-only properties, but C# 6 allows us to do away with the setter altogether; we can either assign a value to the property within the constructor or via an initializer, i.e.

public class MyPoint
{
   public MyPoint()
   {
      // assign with the ctor
      Y = 10;
   }

   // assign the initial value via the initializers
   public int X { get; } = 8;
   public int Y { get; }
}

Using static members

We can now “open up” the static members of a class (and the members of an enum) using the using static directive

using static System.Math;

// Now instead of Math.Sqrt we can use
Sqrt(10);
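This also works with enum members; for instance (using System.ConsoleColor purely as an example)

using System;
using static System.ConsoleColor;

// DarkBlue here resolves to ConsoleColor.DarkBlue
Console.ForegroundColor = DarkBlue;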

String interpolation

Finally we have something similar to PHP (if I recall my PHP from so many years back) for embedding values into a string. So for example we might normally write String.Format like this

var s = String.Format("({0}, {1})", X, Y);

Now we can instead write

var s = $"({X}, {Y})";
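The usual format specifiers still work inside the braces, after a colon. A quick sketch

var X = 1.2345;
var Y = 6.7;

// equivalent to String.Format("({0:F2}, {1:F2})", X, Y)
// e.g. "(1.23, 6.70)" under an English culture
var s = $"({X:F2}, {Y:F2})";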

Expression-bodied methods

In a move towards the way F# might write a single-line function, we can now simplify “simple” methods, such as

public override string ToString()
{
   return String.Format("({0}, {1})", X, Y);
}

can now be written as

public override string ToString() => String.Format("({0}, {1})", X, Y);

// or using the previously defined string interpolation
public override string ToString() => $"({X}, {Y})";

The nameof expression

Another improvement to remove aspects of magic strings, we now have the nameof expression. So for example we might have something like this

public void DoSomething(string someArgument)
{
   if(someArgument == null)
      throw new ArgumentNullException(nameof(someArgument));

   // do something useful
}

Now if we rename the someArgument parameter to something else, the nameof expression will correctly pass the new name of the argument to the ArgumentNullException.

However, nameof is not constrained to just arguments in a method; we can apply nameof to a class type or a method, for example.
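For example (the MyService type here is purely illustrative)

public class MyService
{
   public void Refresh() { }
}

// ...
var typeName = nameof(MyService);           // "MyService"
var methodName = nameof(MyService.Refresh); // "Refresh"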

References

What’s new in C# 6

Filtering listbox data in WPF

Every now and then we’ll need to display items in a ListBox (or other ItemsControl) which we can filter.

One might simply create two ObservableCollections, one containing all items and the other the filtered items, then bind the ListBox’s ItemsSource to the filtered collection.

A simple alternate would be to use a CollectionView, for example

public class MyViewModel
{
   public MyViewModel()
   {
      Unfiltered = new ObservableCollection<string>();
      Filtered = CollectionViewSource.GetDefaultView(Unfiltered);
   }

   public ObservableCollection<string> Unfiltered { get; private set; }
   public ICollectionView Filtered { get; private set; }
}

Now to filter the collection view we can use the following (a contrived example which filters to show only strings longer than three characters)

Filtered.Filter = i => ((string)i).Length > 3;

to remove the filter we can just assign null to it, thus

Filtered.Filter = null;

In use, all we need to do is bind our Filtered property, for example to a ListBox control’s ItemsSource property, and then apply or remove a filter as required.
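As a sketch, assuming the window’s DataContext is set to an instance of MyViewModel, the binding might look like this

<ListBox ItemsSource="{Binding Filtered}" />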

Up and running with Modern UI (mui)

So I actually used this library a couple of years back but didn’t blog about it at the time. As I no longer have access to that application’s code I realized I needed a quick start tutorial for myself on how to get up and running with mui.

First steps

It’s simple enough to get the new styles etc. up and running, just follow these steps

  • Create a WPF application
  • Using NuGet install Modern UI
  • Change the default Window to a ModernWindow (both in the XAML and by deriving your code-behind class from ModernWindow)
  • Add the following to your App.xaml resources
    <ResourceDictionary>
       <!-- WPF 4.0 workaround -->
       <Style TargetType="{x:Type Rectangle}" />
       <!-- end of workaround -->
       <ResourceDictionary.MergedDictionaries>
          <ResourceDictionary Source="/FirstFloor.ModernUI;component/Assets/ModernUI.xaml" />
          <ResourceDictionary Source="/FirstFloor.ModernUI;component/Assets/ModernUI.Light.xaml"/>
       </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
    

So that was easy enough. By default a grayed-out back button is shown; we can hide that by setting the window style to

Style="{StaticResource BlankWindow}"

You can show/hide the window title by setting the following property on the ModernWindow

IsTitleVisible="False"

The tab style navigation

In these new UI paradigms we may use the equivalent of a tab control to display the different views in the main window, we achieve this in mui using

<mui:ModernWindow.MenuLinkGroups>
   <mui:LinkGroup DisplayName="Pages">
      <mui:LinkGroup.Links>
         <mui:Link DisplayName="Page1" Source="/Views/Page1.xaml" />
         <mui:Link DisplayName="Page2" Source="/Views/Page2.xaml" />
      </mui:LinkGroup.Links>
   </mui:LinkGroup>
</mui:ModernWindow.MenuLinkGroups>

This code should be placed within the ModernWindow element (not within a Grid element). In this example I created a Views folder with two UserControls, Page1 & Page2 (in my case I placed a TextBlock in each, with the Text set to Page1 and Page2 respectively, to differentiate the two).

Running this code we now have a UI with the tab-like menu and two pages; the back button also now enables (if you are using it) and allows navigation back to the previously selected tab(s).

One thing you might notice: when the app starts no “page” is selected by default. There’s a ContentSource property on ModernWindow which we can set to the page we want displayed, but if you do this you’ll also need to update the LinkGroup to tell it what the current page is.

The easiest way to do this is using code behind, in the MainWindow ctor, simply type

ContentSource = MenuLinkGroups.First().Links.First().Source;

Colour accents

By default the colour accent used in mui is the subtle blue style (we’ve probably seen elsewhere); to change the accent colour we can add the following

AppearanceManager.Current.AccentColor = Colors.Red;

to the MainWindow ctor.

Okay that’s a simple starter guide, more (probably) to follow.

References

https://github.com/firstfloorsoftware/mui/wiki

Getting RESTful with Suave

I wanted to implement some microservices and thought, what’s more micro than REST style functions, executing single functions using a functional language (functional everywhere!). So let’s take a dip into the world of Suave using F#.

Suave is a lightweight, non-blocking, webserver which can run on Linux, OSX and Windows. It’s amazingly simple to get up and running and includes routing capabilities and more. Let’s try it out.

Getting Started

Create an F# application and then run Install-Package Suave via the Package Manager Console.

Now, this code (below) is taken from the Suave website.

open Suave
open Suave.Filters
open Suave.Operators
open Suave.Successful

[<EntryPoint>]
let main argv = 

    let app =
        choose
            [ GET >=> choose
                [ path "/hello" >=> OK "Hello GET"
                  path "/goodbye" >=> OK "Good bye GET" ]
              POST >=> choose
                [ path "/hello" >=> OK "Hello POST"
                  path "/goodbye" >=> OK "Good bye POST" ] ]

    startWebServer defaultConfig app

    0 // return an integer exit code

Run your application and then, from your favourite web browser, navigate to http://localhost:8083/hello and/or http://localhost:8083/goodbye and you should see “Hello GET” and/or “Good bye GET”. From the code you can see the application also supports POST.

Let’s test this using PowerShell’s Invoke-RestMethod. Type Invoke-RestMethod -Uri http://localhost:8083/hello -Method Post and you will see “Hello POST”.

Passing arguments

Obviously invoking a REST-style method is great, but what about passing arguments to the service? We’re going to need to add the import open Suave.RequestErrors to support error responses. We can read parameters from the request using HttpRequest’s queryParam

    let browse =
        request (fun r ->
            match r.queryParam "a" with
            | Choice1Of2 a ->
                match r.queryParam "b" with
                | Choice1Of2 b -> OK (sprintf "a: %s b: %s" a b)
                | Choice2Of2 msg -> BAD_REQUEST msg
            | Choice2Of2 msg -> BAD_REQUEST msg)

    let app =
        choose
            [ GET >=> choose
                [ path "/math/add" >=> browse ]
            ]

    startWebServer defaultConfig app

Disclaimer: this is literally my first attempt at such code and there may be a better way to achieve this, but I felt the code was worth recording anyway. So from our preferred web browser we can navigate to http://localhost:8083/math/add?a=10&b=100 and you should see a: 10 b: 100.

Passing JSON to our service

We can also pass data in the form of JSON. For example, we’re now going to pass JSON containing two integers to our new service. So first off, add the following

open Suave.Json
open System.Runtime.Serialization

Now we’ll create the data contracts for sending and receiving our data; these will be serialized automatically for us through the function mapJson, which we’ll see in use soon. Notice we’re also able to deal with specific types, such as integers in this case (instead of just strings everywhere).

[<DataContract>]
type Calc =
   { 
      [<field: DataMember(Name = "a")>]
      a : int;
      [<field: DataMember(Name = "b")>]
      b : int;
   }

[<DataContract>]
type Result =
   { 
      [<field: DataMember(Name = "result")>]
      result : int;
   }

Finally, let’s see how we start up our server. Here we use mapJson to map the JSON request onto the type Calc; from there we carry out some function (in this case addition) and the result is returned (type inference turns it into a Result type).

startWebServer defaultConfig (mapJson (fun (calc:Calc) -> { result = calc.a + calc.b }))

Let’s test this using Invoke-RestMethod again. We can create the JSON body for this method in the following way.

Invoke-RestMethod -Uri http://localhost:8083/ -Method Post -Body '{"a":10, "b":20}'

References

Suave Music Store
Invoke-RestMethod
Building RESTful Web Services
Building REST Api in Fsharp Using Suave

Setup Powershell to use the Visual Studio paths etc.

This one’s straight off of How can I use PowerShell with the Visual Studio Command Prompt? and it works a treat.

So I amend the $profile file with the following (updated to include VS 2015)

function Set-VsCmd
{
    param(
        [parameter(Mandatory=$true, HelpMessage="Enter VS version as 2010, 2012, 2013, 2015")]
        [ValidateSet(2010,2012,2013,2015)]
        [int]$version
    )
    $VS_VERSION = @{ 2010 = "10.0"; 2012 = "11.0"; 2013 = "12.0"; 2015 = "14.0" }
    if($version -eq 2015)
    {
        $targetDir = "c:\Program Files (x86)\Microsoft Visual Studio $($VS_VERSION[$version])\Common7\Tools"
        $vcvars = "VsMSBuildCmd.bat"
    }
    else
    {
        $targetDir = "c:\Program Files (x86)\Microsoft Visual Studio $($VS_VERSION[$version])\VC"
        $vcvars = "vcvarsall.bat"
    }
 
    if (!(Test-Path (Join-Path $targetDir $vcvars))) {
        "Error: Visual Studio $version not installed"
        return
    }
    pushd $targetDir
    cmd /c "$vcvars&set" |
    foreach {
      if ($_ -match "(.*?)=(.*)") {
        Set-Item -force -path "ENV:\$($matches[1])" -value "$($matches[2])"
      }
    }
    popd
    write-host "`nVisual Studio $version Command Prompt variables set." -ForegroundColor Yellow
}

The previous version (non-VS 2015) is listed below in case it’s still needed

function Set-VsCmd
{
    param(
        [parameter(Mandatory=$true, HelpMessage="Enter VS version as 2010, 2012, or 2013")]
        [ValidateSet(2010,2012,2013)]
        [int]$version
    )
    $VS_VERSION = @{ 2010 = "10.0"; 2012 = "11.0"; 2013 = "12.0" }
    $targetDir = "c:\Program Files (x86)\Microsoft Visual Studio $($VS_VERSION[$version])\VC"
    if (!(Test-Path (Join-Path $targetDir "vcvarsall.bat"))) {
        "Error: Visual Studio $version not installed"
        return
    }
    pushd $targetDir
    cmd /c "vcvarsall.bat&set" |
    foreach {
      if ($_ -match "(.*?)=(.*)") {
        Set-Item -force -path "ENV:\$($matches[1])" -value "$($matches[2])"
      }
    }
    popd
    write-host "`nVisual Studio $version Command Prompt variables set." -ForegroundColor Yellow
}

Another user on the same Stack Overflow question also put forward the idea of simply changing the shortcut that Visual Studio supplies, appending & powershell, like this

%comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\VsDevCmd.bat" & powershell"

Creating a Powershell function

We can create commands/cmdlets using C# or combine commands in ps1 files (for example), but we can also implement PowerShell functions, which give us the ability to combine existing PowerShell commands wrapped in their own function, with their own help etc.

Let’s look at a simple implementation of a tail-like command

Get-Content -Path c:\logs\logfile.log -Wait 

Of course the first thing we might like to do is wrap this in a simple function, like this

function Get-Tail 
{ 
   Get-Content -Path $args[0] -Wait 
}

This works fine, but now maybe we’d like to make it more obvious what argument(s) Get-Tail expects. So we’ll add a parameter which better illustrates the intent via its name. So now we might have

function Get-Tail 
{ 
   param (
      [Parameter(Mandatory=$true, 
       Position=0, 
       HelpMessage="Path of file to tail")]
      [ValidateNotNullOrEmpty()]
      [string]$Path
   )

   Get-Content -Path $Path -Wait 
}

Here we’ve named the parameter that we expect as Path and we’ve stipulated its type. We’ve also added a Parameter attribute to ensure it’s obvious that this is a mandatory field; if the user forgets to supply it, PowerShell will ask for it. To that end we’ve also supplied a help message, so if the user types !? into the prompt for Path they’ll get the help message telling them what’s expected.

This is looking good, but running Get-Help Get-Tail tells us very little about the function, so now we can extend it further to look like this

function Get-Tail 
{ 
<#
.SYNOPSIS
   Lists the file to standard out and waits
   for any changes, which are then also output
.DESCRIPTION
   Get-Tail works a little like the tail
   command; it allows the user to write a
   file to standard out and then waits for
   any further changes, which too are written to
   standard out until the user exits the command.
.PARAMETER Path
   The Path to the file to be tailed
.EXAMPLE
   Get-Tail c:\logs\logfile.log
#>
param
(
[Parameter(Mandatory=$true, 
 Position=0, 
 HelpMessage="Path of file to tail")]
[ValidateNotNullOrEmpty()]
[string]$Path
)
 
Get-Content -Path $Path -Wait 
}

Now we have a command written in PowerShell which looks and acts like the built-in commands, complete with help.

Let’s quickly review the lines we’ve added. The <# #> is the comment block for PowerShell; within it we have headings of the form .HEADING and below each we have some text describing the command. The .PARAMETER heading is more interesting in that we write the parameter name after it (we can have multiple .PARAMETER blocks for multiple params).

To find out what options are available for the help block, type

Get-Help about_Comment_Based_Help

References

Documenting Your PowerShell Binary Cmdlets

Creating a C# CmdLet

So PowerShell comes with a lot of commands/cmdlets, but as a developer I’m always interested in how I might write my own. Whilst it’s likely that combining existing commands will produce the results you’re after, if it doesn’t we can resort to writing our own command using C#.

Getting Started

  • Create a new Class Library project
  • Go to the project properties and target the version of .NET supported by your installed version of Powershell (to find this out simply type $PSVersionTable into Powershell and check the CLRVersion)
  • Add a reference to System.Management.Automation; to locate this, browse to C:\Program Files (x86)\Reference Assemblies\Microsoft\WindowsPowerShell\3.0
  • The namespace we need to add is System.Management.Automation

Hello World CmdLet

Now we’ve got everything set-up we need to make our CmdLet do something. Here’s the source for a good old HelloWorld Cmdlet

[Cmdlet(VerbsCommon.Get, "HelloWorld")]
public class GetHelloWorld : Cmdlet
{
   protected override void ProcessRecord()
   {
      WriteObject("Hello World");
   }
}

The CmdletAttribute takes a string for the verb; in this case I’m reusing the VerbsCommon.Get string. The next requirement is the noun, the name of the cmdlet. So in this case the two go together to give us the cmdlet Get-HelloWorld.

We derive our class from Cmdlet as we’re not dependent upon the PowerShell runtime; if we were, we’d derive from PSCmdlet.

Importing and using the new CmdLet

Once we’ve built our CmdLet and assuming it’s been built with the same version of .NET as supported by the installed Powershell, we can import the “module” into Powershell using

Import-Module c:\Dev\MyCmdLets.dll

Obviously replace the path and DLL with the location and name of the DLL you’re installing.

Once imported we can simply run

Get-HelloWorld

Autocomplete also works if you type Get-He then press tab and you’ll find Get-HelloWorld presented.

If you need to rebuild your cmdlet you’ll need to close the PowerShell instance to release the DLL. I tried Remove-Module MyCmdLets, but this only removes its availability to PowerShell, i.e. you can no longer run it; I guess, as in C# applications, once the assembly is loaded into the AppDomain it cannot be fully unloaded.

Parameters

Let’s add a parameter to the Cmdlet.

[Cmdlet(VerbsCommon.Get, "HelloWorld")]
public class GetHelloWorld : Cmdlet
{
   [Parameter(Mandatory = true)]
   public string Name { get; set; }

   protected override void ProcessRecord()
   {
      WriteObject("Hello World " + Name);
   }
}

So now, we’ve added a mandatory parameter Name. When we run the Get-HelloWorld Cmdlet we can now supply the name, thus

Get-HelloWorld -Name Scooby

Returning objects

So far we’ve returned a string, but what if we want to return an object, or better still multiple objects, like Get-Process might?

[Cmdlet(VerbsCommon.Get, "HelloWorld")]
public class GetHelloWorld : Cmdlet
{
   protected override void ProcessRecord()
   {
      WriteObject(new HelloObject { Name = "Scooby", Description = "Dog"} );
      WriteObject(new HelloObject { Name = "Shaggy", Description = "Man" });
      WriteObject(new HelloObject { Name = "Daphne", Description = "Woman" });
   }
}

public class HelloObject
{
   public string Name { get; set; }
   public string Description { get; set; }
}

Now running this from PowerShell will list two columns, Name and Description, and three rows with the name and description as per our objects.

Better still we can now write commands such as

Get-HelloWorld | where {$_.Description -eq "Dog"}

References

Cmdlet Methods

Writing Powershell command (.ps1) files

In a previous post I started to look into using PowerShell, but of course the power of commands/cmdlets comes when they’re either combined or when our most common commands exist in files to be run again and again.

So let’s turn an often used command into a ps1 file.

Windows PowerShell ISE (Integrated Scripting Environment)

From the Run dialog (Win+R) we can start the Windows PowerShell ISE. This gives us both an editor and a command prompt for writing and testing our command scripts.

You can also run this application from the command line using powershell_ise or, better still, via the alias ise.

In the editor let’s type

Get-Process | where {$_.CPU -gt 1000}

Now save this file as Top-Cpu.ps1

Running our new command

We can simply drag and drop a ps1 file from Windows Explorer onto the Powershell window to fill in the fully qualified path on the command line, pressing enter we can then run the file. Obviously if you know the full path you can do this yourself by typing the same into the command prompt.

Specifying parameters

It may be that our ps1 file is perfect as it is, but it’s also quite likely we’ll want to allow the user to change some values in it at runtime, i.e. using command line arguments/parameters.

To specify parameters in our script we write

param($cpu=1000)

This defines a parameter named cpu and gives a default value, in this case 1000. If we change our script to look like this

param($cpu=1000)
Get-Process | where {$_.CPU -gt $cpu}

where you’ll notice $cpu is the placeholder/variable where the parameter is used. We can now run the script as .\Top-Cpu.ps1 and the default parameter is used, or we might write .\Top-Cpu.ps1 7000 to supply a new value.

Multiple parameters

We can comma-separate our parameters to include multiple params, like this

param($value=1000, $field="CPU")
Get-Process | where {$_.$field -gt $value}

Maybe not so useful in this specific script, but you get the idea.
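With multiple parameters it’s often clearer to pass them by name; for example (WorkingSet being another property on the Get-Process output)

.\Top-Cpu.ps1 -value 500000 -field "WorkingSet"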

References

Windows PowerShell: Defining Parameters

Powershell $profile

What’s the purpose of the $profile

The $profile (much like a bash profile) allows us to configure the way our PowerShell command shell looks, set the default location, add commands and aliases etc.

Where’s the $profile and does it exist?

Typing the following will result in Powershell telling us where the Microsoft.PowerShell_profile.ps1 file is expected to be

$profile

Just because $profile outputs a path does not mean there’s a file there; it’s simply telling us where PowerShell goes to get the profile, so we may still need to create one.

We can find out whether a profile file already exists using

Test-Path $profile

Test-Path determines whether all elements of a path exist, i.e. in this case, whether the file exists.
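Putting the two together, a common pattern is to only create the profile when it doesn’t already exist

if (!(Test-Path $profile)) {
    New-Item -Path $profile -ItemType File -Force
}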

Creating the profile file

Typing

New-Item -path $profile -itemType file -force

will create a new item (in this case a file) at the location (and with the name) supplied by the $profile variable. The -Force switch ensures the file is overwritten if it already exists.

The ps1 file is just a text file, so from the command line you can run Notepad or powershell_ise (or of course from the Windows GUI you can do the same) and edit the file, allowing us to enter the commands we want available from session to session.