Setting a project item’s BuildAction from a NuGet package

I created a package for internal use which has an App.xaml file in its content. Naturally, I would like to see it in the target project with BuildAction set to “ApplicationDefinition”, but Visual Studio treats it as “Page”, because that is the default for the .xaml extension.

I found a hopeful solution here promising a fast track. My first version of install.ps1 was this:

param($installPath, $toolsPath, $package, $project)

$item = $project.ProjectItems.Item("App.xaml")
$item.Properties.Item("BuildAction").Value = ???

The problem arose when I didn’t find an “ApplicationDefinition” value in the prjBuildAction enumeration on MSDN: the documented values are only prjBuildActionNone, prjBuildActionCompile, prjBuildActionContent and prjBuildActionEmbeddedResource…

I found some similar examples on the net. Some of them had a comment asking: what to do if someone wants to set a value that is not in the enumeration? None of those comments had an answer, which distressed me a bit.

Here I found a clue, so I tried to enumerate those undefined prjBuildAction values with this code:

param($installPath, $toolsPath, $package, $project)

Add-Type -AssemblyName 'Microsoft.Build.Engine'
$msbuildproject = new-object Microsoft.Build.BuildEngine.Project
$msbuildproject.Load($project.FullName)

[System.Collections.ArrayList]$buildActions = "None", "Compile", "Content","EmbeddedResource"

$msbuildproject.ItemGroups | Where-Object { $_.Name -eq "AvailableItemName" } | Select-Object -Property "Include" | ForEach-Object {
  $act = $_
  $buildActions.Add($act)
}

$item = $project.ProjectItems.Item("App.xaml")
$item.Properties.Item("BuildAction").Value = [int]$buildActions.IndexOf("ApplicationDefinition")

I don’t know why, but the enumeration of the ItemGroups didn’t work. When I did a

Write-Host ($msbuildproject.ItemGroups | Format-List | Out-String)

it showed me a nice list of BuildItems, but when I ran with a Where-Object against it, I found nothing. (In hindsight, the Where-Object above runs against the item groups themselves, which have no Name property; the BuildItems inside the groups do, which is why the rewritten version below works.) The problem was in the PS syntax or the object instances. I don’t maintain my PS knowledge, which is based on my .NET and Linux scripting practice combined with snippets from code examples around the net. I simply don’t want to go more deeply into it, because I feel PS is something “created” and not “born”, if you understand what I mean.

I rewrote the script to get the available BuildAction values as follows:

...
$msbuildproject.ItemGroups | ForEach-Object {
    $ig = $_
    @($ig.GetEnumerator()) | ForEach-Object {
        $i = $_
        if ($i.Name -eq "AvailableItemName")
        {
            [void]$buildActions.Add($i.Include)
        }
    }
}
...

And voilà, I got a nice list of values in $buildActions:

None
Compile
Content
EmbeddedResource
CodeAnalysisDictionary
ApplicationDefinition
Page
Resource
SplashScreen
DesignData
DesignDataWithDesignTimeCreatableTypes
EntityDeploy
XamlAppDef

The computed index became 5. I checked the value in Solution Explorer… and found “CodeAnalysisDictionary” there. No problem, it must be some 0/1-based indexing issue, so let’s set the index+1, run, check: value okay! Let’s try another value just to be sure… bad value again! Unfortunately, it seems the order of values collected with this algorithm does not match the numeric enumeration values, contrary to what the link I mentioned above implies. Back to the starting line.

While looking for solutions I found something somewhere about ProjectItem’s “ItemType” property, so I tried to play with it. And suddenly the Sun rose, the sky became blue, etc.:

param($installPath, $toolsPath, $package, $project)
$item = $project.ProjectItems.Item("App.xaml")
$item.Properties.Item("ItemType").Value = "ApplicationDefinition"

So simple and it works!

Error 286 The “BuildShadowTask” task failed unexpectedly. System.NullReferenceException: Object reference not set to an instance of an object.

One of my colleagues met the message above while building a solution containing a test project. Because the fix he found isn’t trivial, I decided to share it here too:

Refresh the accessors under Test References, or add them if some are missing.

Using accessors is obsolete now, but if you have an ancient project with them which won’t compile due to this error, I hope you find this info useful.

Assembly generation failed — Referenced assembly ‘…’ does not have a strong name

A lot of useful NuGet packages can be found around the net. But some of them are useless when you try to reference them from an assembly which must be signed. During compilation you will get the error message:

Assembly generation failed — Referenced assembly ‘…’ does not have a strong name

How do you add a strong name to a third-party assembly?

I used to download the original sources and recompile them with assembly signing enabled. But this time I could find neither the original sources nor the original download location of the version of the binaries in use. I tried to disassemble the DLL, but without success: ILSpy and Reflector both generated uncompilable code 🙁

But while googling I found a simple 4-step solution here:

Step 1: Run the Visual Studio command prompt and go to the directory where your DLL is located.

For example, my DLL is located at D:/hiren/Test.dll

Step 2: Now create the IL file using the command below.

D:/hiren> ildasm /all /out=Test.il Test.dll
(this command disassembles the library into IL code)

Step 3: Generate a new key to sign your project.

D:/hiren> sn -k mykey.snk

Step 4: Now sign your library using the ilasm command.

D:/hiren> ilasm /dll /key=mykey.snk Test.il

Nice, eh?

Naturally, you should repeat these steps each time the referenced NuGet packages are updated,
but this way is still much easier than the download-the-original-sources-and-recompile one…
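
You can verify that the re-signed assembly really got a strong name with sn -vf Test.dll, or from code. Here is a minimal sketch using System.Reflection (the path is just the example one from the steps above):

    using System;
    using System.Reflection;

    class StrongNameCheck
    {
        static void Main()
        {
            // Example path from the steps above; point this at your re-signed DLL.
            var name = AssemblyName.GetAssemblyName(@"D:\hiren\Test.dll");
            var token = name.GetPublicKeyToken();

            // A strong-named assembly carries a non-empty public key token.
            Console.WriteLine(token != null && token.Length > 0
                ? "Strong-named, public key token: " + BitConverter.ToString(token)
                : "NOT strong-named");
        }
    }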

NSubstitute’s unexpected default return value

NSubstitute is a great tool when you are writing unit tests for a product which was designed with dependency injection in mind.

Let’s see it:

    public interface IWorker
    {
        object GetResult();
    }

    [TestMethod]
    public void ExecuterTest_WorkerCalledOrNot()
    {
        var workerSubstitute = Substitute.For<IWorker>();

        var executerToTest = new Executer(workerSubstitute);

        // we test here
        executerToTest.Execute();

        workerSubstitute.Received(1).GetResult();
    }

In the code above I was not interested in the real value of the worker’s result. I only wanted to know whether the worker’s GetResult method got called or not. But all of my workers do some hard work, so for this test I didn’t want to instantiate them and implicitly convert my unit test into an integration test. So I created a substitute for the given interface via NSubstitute and gave that to my Executer.

The substitute tries to be as neutral as it can be. All void methods return immediately, and all non-void methods return the default value of their return type. Naturally, you can modify that behaviour and explicitly tell the substitute what to do when its methods get called; for example, you can redirect all database calls to in-memory data structures during tests if you created your DAL layer with DI in mind. Meanwhile the substitute collects data about its use, so we can ask it whether it was called or not, with what arguments, etc.
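
For example, with the IWorker interface from above, overriding the default return value is a one-liner (a minimal sketch; it assumes using NSubstitute; and the MSTest assertions used elsewhere in this post):

    var workerSubstitute = Substitute.For<IWorker>();

    // Override the default return value (which would be null for object):
    workerSubstitute.GetResult().Returns("canned result");

    // The substitute now replays the configured value:
    Assert.AreEqual("canned result", workerSubstitute.GetResult());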

But today I ran into something unexpected.

Let’s change the example above a bit to demonstrate it:

    public interface IWorker
    {
        TRet GetResult<TRet>();
    }

    [TestMethod]
    public void SubstituteReturnValueTest()
    {
        var workerSubstitute = Substitute.For<IWorker>();

        // test #1
        var r1 = workerSubstitute.GetResult<int>();
        Assert.AreEqual(default(int), r1);

        // test #2
        var r2 = workerSubstitute.GetResult<List<int>>();
        Assert.AreEqual(default(List<int>), r2);

        // test #3
        var r3 = workerSubstitute.GetResult<IList<int>>();
        Assert.AreEqual(default(IList<int>), r3);
    }

This fails at the Assert of test #3, because r3 won’t be null as you would expect, but an instance of “Castle.Proxies.IList`1Proxy”!

I think it is a bug, but it may be the result of a design decision which prioritized some functionality (when the return type is an interface, the substitute wraps it in another substitute created on the fly) over consistency.
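
If a test relies on getting the default null back, you can force it by configuring the return value explicitly; a minimal sketch:

    var workerSubstitute = Substitute.For<IWorker>();

    // Force null instead of the auto-created proxy:
    workerSubstitute.GetResult<IList<int>>().Returns((IList<int>)null);

    Assert.IsNull(workerSubstitute.GetResult<IList<int>>());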

So be careful 🙂

DateTime.Now vs DateTime.UtcNow

We often used DateTime.Now for logging purposes. Once we got the task of finding and removing performance bottlenecks in a data distributor project. As the simplest measurement, we put some logging around the code blocks where we suspected the bottlenecks. The log showed that a lot of time was spent in a method which was really simple, and we didn’t understand why it was so slow.

After switching to a professional performance profiler (RedGate’s ANTS Performance Profiler is the best one! 🙂 ) we found that it wasn’t the method that was slow, but the logging! More precisely, the DateTime.Now calls made while writing the timestamp on the log lines!

After some googling we found that DateTime.Now, after determining the UTC time, asks the OS for the time zone set on the machine and computes the local time from it. That time zone lookup in the OS was the real bottleneck.
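
You can see the difference on your own machine with a trivial Stopwatch loop. A rough sketch (the iteration count is arbitrary, and the absolute numbers will vary by OS and hardware):

    using System;
    using System.Diagnostics;

    class NowVsUtcNow
    {
        static void Main()
        {
            const int iterations = 10000000;
            long ticks = 0; // accumulate something so the calls are not dead code

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                ticks += DateTime.Now.Ticks; // UTC time + time zone lookup + conversion
            }
            Console.WriteLine("DateTime.Now:    {0} ms", sw.ElapsedMilliseconds);

            sw.Restart();
            for (int i = 0; i < iterations; i++)
            {
                ticks += DateTime.UtcNow.Ticks; // UTC time only
            }
            Console.WriteLine("DateTime.UtcNow: {0} ms", sw.ElapsedMilliseconds);

            GC.KeepAlive(ticks);
        }
    }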

So in high-performance solutions use DateTime.UtcNow instead of DateTime.Now if you don’t want to run into things like this.

Method implementation pattern

During the development of many projects I have tried to standardize the look of methods, to help anybody distinguish their individual parts.
Check the code below:

        public int SaveNewPartner(Partner partner, Operator modifier)
        {
            if (partner != null)
            {
                if (modifier != null)
                {
                    if (partner.ID == null)
                    {
                        if (!SanityCheck(partner))
                        {
                            throw new ApplicationException("partner failed sanity check");
                        }

                        partner.ModificationTime = DateTime.Now;
                        partner.Modifier = modifier;
                        return Save(partner);
                    }
                    else
                    {
                        throw new InvalidOperationException("Already saved!"); //LOCSTR
                    }
                }
                else
                {
                    throw new ArgumentNullException("partner");
                }
            }
            else
            {
                throw new ArgumentNullException("partner");
            }
        }

I have problems with this code, and if I have them, others who meet it later in our project will have them too. If you had to point out its business functionality, it would probably be a shot in the dark. Parameter checking, control flow and exit points are mixed with the business part, so I really need to understand the whole method before I can tell how it works.

That’s why I write all my methods following this pattern:

  1. check parameters
  2. define return value
  3. do business functionality
  4. return retval

Here is the rewritten method:

        public int SaveNewPartner(Partner partner, Operator modifier)
        {
            if (partner == null)
            {
                throw new ArgumentNullException("partner");
            }
            if (modifier == null)
            {
                throw new ArgumentNullException("modifier");
            }

            int ret = 0;

            if (partner.ID != null)
            {
                throw new InvalidOperationException("Already saved!"); //LOCSTR
            }
            if (!SanityCheck(partner))
            {
                throw new ApplicationException("partner failed sanity check");
            }

            partner.ModificationTime = DateTime.Now;
            ret = Save(partner);

            return ret;
        }

First come the parameter checks. There should be parameter checking at every public entry point of our class (methods, properties). I check all the parameters used in the given method, directly or in the private members called from here. It is not necessary to check parameters which are simply handed over to other public methods (we may not know all the constraints on those; it’s not our business). Business validity is not checked here either: the null check on partner is a parameter check, while the check on partner.ID belongs to the business rules.

Next, I define the return value, which I always name ‘ret’. So whichever line of the method you are looking at, you can clearly identify where the retval is set, and you don’t need to scroll anywhere to figure out which variable holds it.

Then comes the business logic. All corner cases are closed as soon as possible, so there are no long-lasting ifs and no unnecessarily deep indentation.

Finally, the single return statement. There are no other inline returns in the method body, so the control flow is clear: we enter at the beginning and exit at the end.

SOS does not support the current target architecture

WinDbg is a great tool if you know how to use it. But there are a lot of caveats, like the one in the subject line.

0:000> .loadby sos clr
0:000> !threads
SOS does not support the current target architecture.

If you create a 32-bit dump on a 64-bit machine via Task Manager, don’t forget to use the appropriate taskmgr.exe! You should use C:\Windows\SysWOW64\taskmgr.exe instead of the 64-bit one started from the Ctrl-Alt-Del menu or by other means. Otherwise the dump may be unusable.

for vs. foreach

If you write a loop to visit every item in a collection, you should use foreach instead of for.
So instead of:

            for (int i = 0; i < list.Length; i++)
            {
                list[i].DoSomething();
            }

write this:

            foreach (var item in list)
            {
                item.DoSomething();
            }

In the first case the compiler may not be able to determine what you want to do with the variable i, so the collection cannot simply remember its current position and step to the next item in the next iteration. If the collection is not a flat array but, say, a linked structure behind an IList interface, the code may walk it from the beginning to the requested element on every single indexing operation. foreach avoids this: the enumerator keeps its position between items.
There are exceptions; for real arrays, indexing compiles down to a CPU-level instruction and can be faster than instantiating an enumerator.
But remember my Think Runtime Principle.
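
Here is a small illustration of the difference, assuming a LinkedList<int> (which has no indexer, so indexed access has to be emulated with Enumerable.ElementAt):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class ForVsForeach
    {
        static void Main()
        {
            var list = new LinkedList<int>(Enumerable.Range(0, 10000));

            // O(n^2): ElementAt walks the list from the head on every single call.
            long sum1 = 0;
            for (int i = 0; i < list.Count; i++)
            {
                sum1 += list.ElementAt(i);
            }

            // O(n): the enumerator remembers its position between items.
            long sum2 = 0;
            foreach (var item in list)
            {
                sum2 += item;
            }

            Console.WriteLine("{0} == {1}", sum1, sum2);
        }
    }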

My Think Runtime Principle

I realized that while working I always keep in mind a thought which now has a name: the Think Runtime Principle.

While programming the implementation details of your current task, you should keep a performance-guard thread running in your mind. You should always think about how your code will be executed. It will accomplish what is in your specs, but will it be optimal?

Nowadays we have high-level programming languages, our code is compiled by clever compilers and executed by clever runtimes on fast computers, so we don’t really need to think about the CPU, memory exhaustion, etc. But some years ago programmers used punch cards, did you know? Processor speeds were measured in MHz, not GHz. And a man who is very rich now said that 640 KB should be enough for everything… Programmers of those ages had the ability to write programs which fulfilled their specs and were optimal from the hardware’s point of view too.

I understand that today it isn’t so important to think about the hardware. But I am not talking about the hardware at all. I am talking about a philosophy of thinking about the runtime environment of your product. If you don’t think about it, you lose a skill, or never grow one: a skill which has its place in the pantheon of skills that help us design and write good code. Without that skill there will be a growing distance between you and the device you write for. You may never have used in real life some of the maths you learned in elementary school, but it helped you organize your thoughts, learn algorithmic thinking, etc.

What kind of music is born from a composer who doesn’t really know how a note will sound on a piano or a violin? My favorite church has a chime. The organist realized that a lot of songs simply cannot be played on a chime, because the bells keep sounding for seconds after being struck, and that sound may not be in harmony with the notes played later. He must take care of the device he works on.

Caution! I didn’t say you should write code which somehow relies on the implementation details of the underlying classes, runtime or hardware! That would break the OOP rules in a wider sense. Do you remember your favorite PC games which became unplayable right after you upgraded to a faster PC?

I said you should write code which relies appropriately on the contracts of the underlying things!

You should be not only a programmer but an engineer!


Reuse intermediate results

Everybody knows that a good programmer writes reusable code.
If somebody writes the same instruction sequence twice, she/he extracts it into a method and the code becomes reusable.
But what about data? Reuse it too!

            for (int i = 0; i < array.Length - 1; i++)
            {
                if (array[i] > array[i + 1])
                {
                    sum = array[i] + array[i + 1];
                }
            }

Depending on the array’s content, you index the array up to four times in every iteration instead of the two necessary reads.
Indexing an array is a single CPU instruction, very fast; but imagine if it were some IList implementation and not an array!
The implementation may be backed by linked items, and then reaching the item at a specific index takes a lot of dereferencing steps.
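
One possible rewrite of the loop above, reading each element only once per iteration:

    for (int i = 0; i < array.Length - 1; i++)
    {
        int current = array[i];   // read each element once
        int next = array[i + 1];

        if (current > next)
        {
            sum = current + next; // reuse the locals instead of re-indexing
        }
    }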

Go further:

                if (MyObject1.AnObjectProperty.ItsSubProperty.Value > MyObject2.AnObjectProperty.ItsSubProperty.Value)
                {
                    sum = MyObject1.AnObjectProperty.ItsSubProperty.Value + MyObject2.AnObjectProperty.ItsSubProperty.Value;
                }

How many times is the object graph walked down to the Value property? And what happens if the properties along the path are calculated ones and not just field wrappers?
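
The same treatment applies here: walk the graph once and keep the results in locals (a sketch reusing the property chain from the snippet above):

    var value1 = MyObject1.AnObjectProperty.ItsSubProperty.Value;
    var value2 = MyObject2.AnObjectProperty.ItsSubProperty.Value;

    if (value1 > value2)
    {
        sum = value1 + value2;
    }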

Go further:

            for (int i = 0; i < array.Length - 1; i++)
            {
                if (CalculateValue(i) > CalculateValue(i+1))
                {
                    sum = CalculateValue(i) + CalculateValue(i + 1);
                }
            }

We are calculating the values twice!
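
And again, a sketch caching the calculated values so each one is computed only once per iteration:

    for (int i = 0; i < array.Length - 1; i++)
    {
        var current = CalculateValue(i);
        var next = CalculateValue(i + 1);

        if (current > next)
        {
            sum = current + next;
        }
    }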

Think about how your code will be executed, not only about what you should accomplish!
See my Think Runtime Principle.