Read Committed Snapshot Isolation (RCSI)–Know before you use it for your Dynamics CRM Database

‘Read Committed Snapshot Isolation’, or RCSI in short, is something I keep hearing about in my CRM implementations every time there is a discussion related to CRM performance. On the lighter side, the term itself is very catchy, isn’t it?

Since this is not a database-related blog, I will keep the concepts very simple here and focus more on the Dynamics CRM performance side of it. From a SQL Server standpoint, an isolation level is the degree to which one transaction must be isolated from resource or data modifications made by other transactions. The isolation level under which a Transact-SQL statement executes determines its locking and row versioning behavior.

By default, the isolation level for your CRM organization connection is ‘Read Committed’. You can check it by running the command below; you should get a result similar to the screenshot that follows.

DBCC UserOptions

[Screenshot: DBCC USEROPTIONS output showing the isolation level]

In ‘read committed’ isolation, your SQL statement sees the most recently committed data as of the moment each row is physically read. To put it simply, under read committed isolation each row is briefly locked with a shared lock while it is read.

RCSI improves on this by removing the row locking entirely while rows are read. It provides a point-in-time view of the committed data: under RCSI each statement sees the data as it existed when the statement started, and under full snapshot isolation the whole transaction sees the data as it existed when the transaction started. To achieve this, SQL Server maintains row versions in tempdb, and reads acquire no shared locks because the data is read from the version store rather than directly from the rows being modified.

To put it in the words of MSDN – “Transactions that modify data do not block transactions that read data, and transactions that read data do not block transactions that write data, as they normally would under the default READ COMMITTED isolation level in SQL Server.”

From the explanation above, it makes obvious sense that this should result in an increase in performance. And normally, whenever there is a performance discussion around CRM, the question I come across is – ‘we are thinking of implementing RCSI and it should increase performance. What are your thoughts?’

Before we implement anything, we should also learn its caveats, and personally I know many customers who have suffered (transactions rolled back, among other things). While RCSI makes an attractive proposition to implement, let’s see what the downsides can be.

  • RCSI leverages the tempdb database to store a copy of the original row and adds a transaction sequence number to the row. So it is important that the physical environment is configured to cope with this, primarily in terms of tempdb performance and memory/disk space requirements (a sample query to keep an eye on this follows the list).
  • Many CRM customers have complained that updates are rolled back frequently after enabling RCSI. This is because snapshot isolation uses an optimistic concurrency model: if a snapshot transaction attempts to commit modifications to data that has changed since the transaction began, the transaction rolls back and an error is raised.
  • When snapshot is enabled, there are no shared locks. Any statement running with snapshot isolation cannot see any committed database modifications that occur after the statement starts executing. The longer the statement runs, the more out of date its view of the database becomes, and the greater the scope for possibly unintended consequences.
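For the first point, the query below is a minimal sketch for keeping an eye on how much space the row version store is consuming in tempdb. It only uses the standard sys.dm_db_file_space_usage DMV; what counts as “too much” is of course specific to your environment.

-- Approximate space (in MB) currently reserved by the row version store in tempdb.
-- Pages are 8 KB each, so multiply the page count by 8 and divide by 1024 to get MB.
USE tempdb;

SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_reserved_mb
FROM sys.dm_db_file_space_usage;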

For example – you have a long-running query which, at a certain step, determines whether an email should be sent based on some attribute value of an entity. Since this is running under snapshot, the transaction took a copy of the data into the tempdb version store at the beginning. So when the email-sending condition is evaluated, the value of the determining field might already have changed in the database. But since the query is working on a snapshot of the data from some time back (maybe a few seconds), it would still evaluate the email-sending condition to true.

This is of course just an example, but I hope you get the point I am trying to make here.

So before you implement it, think about the concurrency level of your CRM usage and discuss it with your DBA before reaching a decision.

Finally, if you do go ahead and enable RCSI, run the commands below.

ALTER DATABASE <Your CRM database>
SET ALLOW_SNAPSHOT_ISOLATION ON

ALTER DATABASE <Your CRM database>
SET READ_COMMITTED_SNAPSHOT ON
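Note that turning READ_COMMITTED_SNAPSHOT on requires that there are no other open connections in the database, so plan it for a maintenance window. To verify that both settings took effect, a quick check against the sys.databases catalog view looks like this:

-- snapshot_isolation_state_desc should read ON and is_read_committed_snapshot_on should be 1.
SELECT name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on
FROM sys.databases
WHERE name = '<Your CRM database>';
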
Hope this helps!

{Utility}–Access team template migrator for Microsoft Dynamics

Access Teams! A great feature that came with Microsoft Dynamics CRM 2013. However, after repeated implementations, and despite the benefits that access team templates provide, one common complaint I have heard is:

“Why are access teams not solution aware?”

Well, let’s dig a bit deeper here. It is not the access teams loosely mentioned in the question above that are our problem. It is the access team templates and the access team grid configurations on the entity forms which need to be repeated across environments. And if you have more than a few of them to handle, it can become annoying.

Well, in case you are in the same boat as me, the following utility might come as a relief to you. I have uploaded it to CodePlex and it is free to use.

https://accessteamsmigrator.codeplex.com/releases/view/618241

Please read the detailed documentation on how to use this tool to your advantage – https://accessteamsmigrator.codeplex.com/documentation

This is a beta version. So in case you have any issues with this tool, please drop an email to debajit.prod@gmail.com and I will get back to you.

Hope this helps!


Registering custom client handlers for your business process flow stages – Dynamics CRM

This post is a continuation of my previous post – https://debajmecrm.com/2014/04/18/control-crm-2013-business-process-next-stage-and-previous-stage-flow-using-jscript/

In that post I showed how you can override the OOB next and previous stage clicks and have your own functions fire when the next or previous stage movements happen. Please note that everything I mentioned in that post is a totally unsupported customization and should not be done unless you have no other option to try.

This comes in handy especially in Dynamics CRM 2013, where you do not have stage change event handlers on the client side but you still have some complex business logic to validate whether the stage movement is legitimate.

Also, for CRM 2015, addOnStageChange is a great addition to the client API. But that event fires after the stage has already changed. If you want to run something before the stage changes on the client side, you might need to use the trick mentioned in the link above.

However, many of my blog readers have reported that the code in that post is not working for them. After some research I found that, to fetch the OOB event handlers for the next and previous stage clicks, that post uses the code below.

$originalNextStageHandler = $("#stageAdvanceActionContainer").data("events")["click"][0].handler

However, fetching event handlers through jQuery's .data("events") no longer works from jQuery 1.8 onwards.

If you do not reference jQuery yourself, CRM by default uses jQuery version 1.7.2. You can find the version of jQuery your page is using by opening your browser's developer tools and typing the code below in the console.

jQuery.fn.jquery

However, many a time we reference later versions of jQuery on our forms, and why not? In that case, if your form references jQuery 1.8 or later, the above code will not work.

You would then need to change the code that fetches the event handler to the one below:

$originalNextStageHandler = $._data($("#stageAdvanceActionContainer").get(0), "events")["click"][0].handler;

The above code makes use of the private data API in jQuery.
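Putting it together, below is a minimal sketch of the overall pattern, assuming jQuery 1.8 or later is loaded on the form. It is still the same unsupported approach as in the original post, and validateStageMove is a hypothetical placeholder for your own business checks.

// Grab the OOB "next stage" click handler via jQuery's internal data API (jQuery 1.8+).
var $nextStageContainer = $("#stageAdvanceActionContainer");
var $originalNextStageHandler = $._data($nextStageContainer.get(0), "events")["click"][0].handler;

// Detach the OOB handler and attach a wrapper that validates before moving the stage.
$nextStageContainer.off("click");
$nextStageContainer.on("click", function (e) {
    // validateStageMove is a hypothetical function containing your own business logic.
    if (validateStageMove()) {
        $originalNextStageHandler(e); // let the original CRM handler advance the stage
    } else {
        alert("Please complete the required information before moving to the next stage.");
    }
});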

Again a word of caution – It’s an unsupported customization and should not be attempted unless you have no other option.

Hope this helps!

{Fix}–SSRS Reports in Dynamics CRM not running under executing user’s context

Wondering how that is even possible? I myself was confused when one of the developers on our project reported this. Surely, I thought, it is not possible – the developer must be confusing something.

So I checked the report, and what I did was print the logged-in user id in the report using the function dbo.fn_finduserguid(). And the developer was right: the report was not fetching the executing user's GUID.

Also, in parallel to that, the reports were not connecting to the organization in which they were uploaded. They were running against the data source that was specified in BIDS while developing the report. So I was pretty sure it had something to do with the way the report got uploaded.

 

So I logged in to the report server and browsed to the report. There is a trick to how you should browse to the report so that you can troubleshoot it. One of our developers was using the URL http://<servername>/ReportServer, and below is the screen you get once you browse to that URL. We have three organizations in our environment, and for each organization there is a report folder, which I have blurred here.

 

[Screenshot: Report Server root showing one report folder per organization]

If you go inside any of these folders, you will find something like the below. These are all the GUIDs of the reports you have uploaded for your organization. Not of much use, is it?

[Screenshot: organization report folder listing reports by their GUIDs]

 

What you need to do is go to the URL http://<servername>/Reports. Then go inside your organization folder and switch to the Detailed View. Then click on the custom reports folder, and there you will find all the reports that you have uploaded for your organization. Please check the screenshots below.

[Screenshots: the organization folder in the Reports site, the Detailed View, and the custom reports folder]

 

Now, going back to the main topic, I selected my report, clicked Manage from the report menu, and then clicked Data Sources.

[Screenshots: the report's Manage option and its Data Sources page]

 

When you publish a report in Dynamics CRM, it internally configures the connection to use the shared data source for its organization and sets the ‘Connect using’ option to ‘Credentials supplied by the user running the report’.

However, in this case the ‘Custom data source’ option was selected, with the connection string that was used to develop the report. I had to black it out for security reasons.

So first of all I changed the report to use the shared data source and selected the option ‘Credentials supplied by the user running the report’. Then everything started working as expected.

[Screenshot: the data source reconfigured to use the shared data source with the executing user's credentials]

 

So far so good. But why did the report upload like that? And what about other reports that were uploaded the same way from CRM? Do I have to go and fix each report by changing the data source?

The fix: publish the reports again using the PublishReports.exe utility in CRM.

Open a command window, navigate to C:\Program Files\Microsoft Dynamics CRM\Tools\ and run PublishReports.exe. It requires a parameter specifying your organization name: PublishReports.exe "<Organization_name>".

Hope this helps!

{Fix}–Report Parameter changes not taking effect when uploaded–SSRS Reports in Dynamics CRM

Every day brings a new surprise in the life of a consultant, and today was no exception.

We had some SSRS reports for our client which required changes to the default values of multiple parameters in each report. Our SSRS report developer made all the changes, which was pretty quick for him. He tested them in BIDS and everything worked fine.

Then, once I uploaded the reports and viewed each of them, I could see that the default value selections were empty. My initial thought was that there was some glitch on the report side (I should not have thought that, considering how good he is, but you know how humans are). I went up to his desk and he showed me each report working fine.

Then I realized it had something to do with CRM. I searched a few blogs, which suggested restarting Reporting Services and the like.

Finally, we had no option left but to delete and re-create the reports. After that, everything started working fine.

Please note: before you delete a report, take note of all the sub-reports that reference it, because once you re-create the report you will need to point the sub-reports back to the parent report.

I am sure there must be a better way to get rid of this error. However, we had a deadline to meet for our QA team and could not afford much R&D. So if you are facing the same issue, deleting and re-creating the report might do the trick for you.

 

Hope this helps!

{knowhow} Clone an entity record programmatically in Microsoft Dynamics CRM using Clone method

Recently in my project, the customer came up with a requirement to clone a record programmatically. They wanted a common API which could be used to clone records of any entity.

Normally the requirement is something like this: there is a button on the entity record form called ‘Copy Record’ or ‘Clone Record’, and once you click it, a new form opens up with the data copied from the parent record. I have explained that scenario in the blog post – https://debajmecrm.com/2015/01/27/how-to-using-xrm-utility-openentityform-to-clone-all-fields-of-one-record-to-another-in-dynamics-crm/.

However, this time the requirement was a bit different: the client needed an API which could clone a single entity or a collection of entities. The function would accept either an Entity or an EntityCollection as a parameter.

I was wondering whether Dynamics CRM provides something like the MemberwiseClone method in C#. Initially I could not find anything like that and decided to write something of my own. But before that, I decided to go back to the CRM programmer's bible – the CRM SDK. And guess what, CRM indeed has a Clone method which does exactly what I was looking for.

Below is the screenshot of the method definition from the SDK.

[Screenshot: definition of the Clone method from the SDK]

As you can see, the method is present in the Microsoft.Xrm.Client.dll.

With the help of this, you could do something like the below to clone a record.

 

var customer = proxy.Retrieve("account", Guid.Parse("2767215B-C2D2-E111-B664-005056AB021D"), new ColumnSet(true)); // or a ColumnSet listing only the columns you want to copy
var newCustomer = EntityExtensions.Clone(customer, true); // You can also write customer.Clone(true);
newCustomer.Id = Guid.Empty;
newCustomer.Attributes.Remove("accountid"); // remove the primary key so Create does not fail with a duplicate key error
proxy.Create(newCustomer);

A small note here: after you clone the parent record and before you create the new one, remember to remove the primary key field. Otherwise you will get an error like ‘Cannot insert duplicate key’.

And voila! It worked.

A little bit more searching and I came across this wonderful blog post: http://inogic.com/blog/2014/08/clone-records-in-dynamics-crm/. A great post indeed, like all other posts from Inogic. I wish I had searched for it before. It contains multiple ways to clone a record in Dynamics CRM.

 

P.S. – The approach I have mentioned here is only available for CRM on-premise. The Inogic blog gives you a very informative pointer for achieving this across all environments.

 

Hope this helps!

 

{Best Practices} Naming convention for your JavaScript webresources in Dynamics CRM

Let’s agree on this: in any complex CRM implementation you will end up with a lot of client-side code. No matter how hard you try, you simply cannot avoid it. And why not? With the new additions in CRM 2015, the client API of Dynamics CRM is as powerful as ever.

However, believe me, these scripts can be a nightmare to maintain and debug if not organized properly. And here I was, in exactly that situation, doing a code review for a client. They had a huge bunch of custom entities and JScript webresources, and they named their scripts like this:

For account, they had a single file named ‘account.js’. Similarly, for opportunity – opportunity.js.

And all the code related to that entity was stored in that single file, be it form events, custom ribbon button events, field events and what not. So I thought of sharing the suggestions I provided with the community as well, since my clients have so far benefitted greatly from this approach and I still get thank-you notes from them.

Well, then let’s start.

Suggestion 1

————————

Use namespaces in your JScript code, the same way you use them in your plugins and workflows. The biggest advantage of using namespaces is that even if you end up writing a method with the same name in multiple JScript files, and all those files are loaded by the browser at the same time, they will still be distinct, since they are qualified by their namespaces. So what is the convention?

Imagine you are doing some work for a company named ‘Contoso’. The root namespace you can have is Contoso.

example:

if (typeof (Contoso) == "undefined") {
    Contoso = { __namespace: true };
}

Suggestion 2

—————————–

Keep a common helper file for all the utility functions that will be used across all entities and custom HTML webresources. I have seen that in projects with multiple developers working on different modules, each developer tends to write the same method as and when they need it, specific to the entity they are working on. This results in lost time as well as reduced re-usability and maintainability. A classic example would be methods like getUserRoles() and isUserInRole(). So instead of this you can do something like the following.

  • Create a webresource named Contoso.Utilities.js
  • Then you can define the methods in the file like below:

    if (typeof (Contoso) == "undefined") {
        Contoso = { __namespace: true };
    }

    Contoso.Utilities = {
        _getUserInfo: function () {},

        _isUserSysAdmin: function () {}
    };

 

  • You can then include the webresource wherever you need it. To call a method from the file, you need to use its fully qualified name.

For example:

var isAdmin = Contoso.Utilities._isUserSysAdmin();

  • You can then direct all your developers to write any utility function, if not already present, in this file rather than in the entity-specific JavaScript files.

 

Suggestion 3

————————————

Name the files according to their intended functionality. Let's take the opportunity entity as an example. For opportunity you can have something like the following.

  • Opportunity.FormEvents.js – This can be the place for all the form on-load and on-save handlers that you register for the form. So if you get any error during save or load of the form, you can rest assured that you can trace its origin from this file.

 

  • Opportunity.FieldEvents.js – Any field on-change events, tab state change events etc. can be placed in this file. The good thing is that if there is any error during the on-change event of some field, you know which file to look into. Also, if you follow a specific naming convention for your on-change handlers – for example, naming the handler for the customerid field on the opportunity customer_changed – you do not need to open the customizations to check what the function name is and which file it lives in.

 

  • Opportunity.RibbonEvents.js – This can be very useful. If you place all the custom ribbon button event handlers in this file, you no longer need to dig through the ribbon diff XML to find out which file a ribbon event handler is located in.

 

  • Opportunity.Constants.js – This can be the place to store all the constants that you need for your processing. In plugins and workflows you can use C# enums instead of hard-coding values; in JScript, if you want the same effect, you can place them in this file (a sketch of such a file follows this list).
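
For illustration, a minimal sketch of what such a constants file could look like is given below. The option set names and values are only placeholders – replace them with whatever your own customizations actually use.

if (typeof (Contoso) == "undefined") {
    Contoso = { __namespace: true };
}

if (typeof (Contoso.Opportunity) == "undefined") {
    Contoso.Opportunity = { __namespace: true };
}

// All option set values and other magic numbers for the opportunity entity live here
// instead of being hard-coded inside the event handlers.
Contoso.Opportunity.Constants = {
    Rating: {
        HOT: 1,
        WARM: 2,
        COLD: 3
    },
    FollowUpTaskSubject: "Follow up with customer"
};

// Usage from any file that includes this webresource:
// if (rating === Contoso.Opportunity.Constants.Rating.HOT) { ... }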

And of course, don't forget to wrap the code in these files in the appropriate namespaces, as in the sketch above.

For example, for opportunity.formevents.js:

if (typeof (Contoso) == "undefined") {
    Contoso = { __namespace: true };
}

if (typeof (Contoso.Opportunity) == "undefined") {
    Contoso.Opportunity = { __namespace: true };
}

Contoso.Opportunity.FormEvents = {
    _formLoad: function (eContext) {},

    _formSave: function (eContext) {}
};

And believe me, if you organize your stuff like this, you will find it much more comfortable to maintain and use in the long run.

 

Hope this helps!