Performance Enhancements

So in my last blog I talked about how important it is to have a thorough understanding of the Windows Hosting environment before diving into performance optimizations in your web application. Another critical aspect is having a Performance Test Plan, as well as a Test Lab where you can simulate load under various scenarios and measure the results. At the beginning of November, Charles Nurse spent one week in Redmond working side by side with Microsoft performance experts, putting DotNetNuke through its paces in a robust test lab. The lab resources were provided by the Microsoft Web Platform & Tools team and the Microsoft Patterns & Practices team, and we are eternally grateful to Microsoft for their assistance in this area.

The purpose of the performance testing was to establish some baseline expectations by measuring how DotNetNuke reacted under a variety of load scenarios. Since DotNetNuke is primarily used in a shared hosting environment, we focused on simulating a highly dense server configuration - with many active IIS web sites competing for server resources. We also limited our testing to ASP.NET 2.0 on IIS 6.0, as it is now the predominant platform for Windows hosting. We compared an early version of DotNetNuke 4.0 with the development code for DotNetNuke 4.4.0 to determine whether the recent optimizations resulted in performance and/or scalability gains.

Like many of our previous releases, DotNetNuke 4.4.0 has a central theme - in this case, performance improvements. In DotNetNuke 4.4.0, we analyzed every single application layer and service to try to identify bottlenecks. We used a wide variety of tools and processes and gained some very deep technical knowledge of the intricacies of ASP.NET, IIS, and the Windows Hosting environment. It is never easy to produce a comprehensive solution for a complex problem, but after nearly four months of full-time effort, we feel that we have finally reached a point where we can confidently deliver results to the community.

Why did it take so long?

An important thing to remember is that performance tuning is more of an art than a science, as an improvement in one area can very easily lead to a degradation in another. For example, the common methodology developers use to improve the response time of an application is to reduce the number of calls to the database. Typically this is accomplished with caching. However, based on the information in my previous blog post, we know that memory is a gating resource on a web server; therefore, there is an upper limit to the amount of data you can cache before your application's working set ( memory footprint ) becomes too large and affects scalability. Another important thing to remember is that although a small gain/loss in one area may not seem like a big deal, when you multiply it across hundreds of active sites on a server, it can make a huge difference. These are just a couple of the many variables you need to contend with as you work towards optimizing a web application.

So how did we do it?

Code Refactoring

Using the Red Gate ANTS Profiler, we were able to profile both the code and the memory footprint of the application. The ANTS Profiler is an incredible tool for identifying bottlenecks and providing actual metrics on the application working set. And since the tool is relatively easy to configure, it is simple to test the application under a variety of different scenarios. A good example is testing a new web site install versus testing a fully provisioned production web site - an application can behave much differently depending on the data volume it is supporting ( ie. a new DotNetNuke install has 1 page, 1 user, and 3 roles, whereas a site like dotnetnuke.com has 1000+ pages, 360,000+ users, and 50+ roles ). Another important aspect of performance testing is that the largest gains are obtained by optimizing your primary code path ( ie. the code execution path which is followed in 90% of your web requests ). ANTS Profiler does a good job of identifying the methods in your primary code path, regardless of where those methods exist within your application architecture. A few of the big offenders were XML document handling, serialization, and the use of reflection in our CBO utility class.
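
To illustrate why reflection shows up so prominently in a profiler, here is a minimal sketch of the reflection-based hydration pattern that utility classes like CBO typically use ( the ObjectHydrator name and generic signature are illustrative, not the actual core code ). The per-column, per-row property lookups are the expensive part; a common remedy is to cache the PropertyInfo results per type.

    using System.Data;
    using System.Reflection;

    // Illustrative sketch of reflection-based object hydration, similar in
    // spirit to the CBO pattern described above ( not the actual core code ).
    public static class ObjectHydrator
    {
        public static T FillObject<T>(IDataReader reader) where T : new()
        {
            T item = new T();
            for (int i = 0; i < reader.FieldCount; i++)
            {
                // A reflection lookup per column, per row - a hot spot under load.
                PropertyInfo property = typeof(T).GetProperty(
                    reader.GetName(i),
                    BindingFlags.Public | BindingFlags.Instance);
                if (property != null && !reader.IsDBNull(i))
                {
                    property.SetValue(item, reader.GetValue(i), null);
                }
            }
            return item;
        }
    }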

Caching

Steven Smith created a useful free tool ( Cache Manager - http://aspalliance.com/cachemanager/ ) which allows you to view, at a granular level, the memory footprint of the various objects in your environment. This tool ( along with the ANTS Profiler ) was important for identifying and verifying some issues with our caching architecture.

Since DotNetNuke 2.0, we had been using the ASP.NET Cache for storing our commonly used application entities. This allowed us to reduce the number of database hits, resulting in better response time. Unfortunately, the caching was initially implemented in the Begin_Request event. This made a lot of sense in terms of optimizing the response time in the primary code path, but it ignored the fact that background threads and administrative interfaces could also benefit from the caching of these objects. Another issue with our caching implementation was that we were storing redundant objects in the cache. Typically we were storing both a collection of objects and each individual object under unique keys in the cache. As a result, our working set was nearly twice as large as it needed to be.

To address these issues, we moved the caching logic from the Begin_Request event to the business application layer ( domain ). This allowed the caching logic to benefit all data access scenarios. We also changed our caching logic to use a Dictionary collection rather than an ArrayList, which lets us store the objects in a single collection while still accessing individual objects by their ID. These caching improvements allowed us to improve response time as well as reduce the working set.
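
For illustration, here is a minimal sketch of the single-collection approach, assuming a hypothetical Tab entity and cache key ( the actual core implementation differs ). One cached Dictionary keyed by ID replaces the old "collection plus one entry per object" layout:

    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public static class TabCache
    {
        private const string CacheKey = "Tabs";   // illustrative cache key

        public static Dictionary<int, Tab> GetTabs()
        {
            Cache cache = HttpRuntime.Cache;
            Dictionary<int, Tab> tabs = cache[CacheKey] as Dictionary<int, Tab>;
            if (tabs == null)
            {
                tabs = LoadTabsFromDatabase();    // single database hit on a cache miss
                cache.Insert(CacheKey, tabs);     // one cache entry, not N + 1
            }
            return tabs;
        }

        public static Tab GetTab(int tabId)
        {
            // ID lookup without a second, redundant cache entry per object.
            Tab tab;
            GetTabs().TryGetValue(tabId, out tab);
            return tab;
        }

        private static Dictionary<int, Tab> LoadTabsFromDatabase()
        {
            // Placeholder for the real data access call.
            return new Dictionary<int, Tab>();
        }
    }

    public class Tab { public int TabId; }       // illustrative entity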

Assembly Management

As I mentioned in my last blog, all assemblies in the /bin folder are loaded into the AppDomain when it starts up. So when you distribute optional assemblies with your application ( ie. alternate providers, handlers, etc... ) which are not necessary in the 90% use case scenario, there are bad side effects: these optional assemblies contribute to the memory footprint of the application ( and once they are loaded, they are never unloaded - even if they are never referenced ). As usual, there are many ways to deal with this problem - each with its own advantages and disadvantages.

One solution we implemented in DotNetNuke 4.4.0 is a new "lightweight" install model. Basically, DotNetNuke will only install a few basic modules by default. The other modules will exist in the /Install/Module folder ( with a different file extension ) and can be optionally installed through the Host / Module Definitions interface. The result is that you only install the modules you actually need for your site - and the other modules will not unnecessarily contribute to your memory footprint.

Another solution we have investigated is the use of the Relative Search Path in the CLR, which allows you to reference assemblies stored outside of the /bin folder. This is accomplished through the <probing> node in the <runtime> section of web.config. The benefit of this technique is that the CLR will load individual assemblies only when they are referenced by your application ( rather than loading ALL assemblies on app start ). For core framework items such as Providers, HTTP Modules, etc... we have verified that there are performance gains ( we are still investigating the upgrade complications ). For custom modules, it is not so simple, and it appears that module developers would need to follow some strict guidelines in order to support this model. We are still investigating this scenario.
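
For reference, the relevant web.config section looks something like this ( the folder names here are illustrative; privatePath entries must be subdirectories of the application root ):

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <!-- Illustrative folder names; assemblies in these folders are
               probed and loaded only when first referenced. -->
          <probing privatePath="bin;bin\Providers;bin\HttpModules" />
        </assemblyBinding>
      </runtime>
    </configuration>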

One other solution which is often suggested is the use of the Global Assembly Cache ( GAC ). In order to support GAC installs, all of the assemblies in DotNetNuke need to be strongly named and the APTCA bit needs to be set. However, the practical limitations of using the GAC really do not make sense for DotNetNuke. Most Hosters tightly control the components which are installed in the GAC ( since they run in Full Trust ), and would not consider installing an application like DotNetNuke. This is not because DotNetNuke is untrusted, but rather because DotNetNuke is a pluggable application framework which allows third party modules to be installed at runtime. The combination of these factors represents a significant security risk in a shared hosting environment. In addition, the GAC allows for side-by-side installs of different versions of the same assembly. Given DotNetNuke's agile release schedule, the administration of different release versions in the GAC would be extremely challenging to manage effectively. Use of the GAC in a Dedicated Server or Intranet environment may be feasible, but it would certainly require some due diligence prior to production use.
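
For context, this is roughly what GAC readiness would require in each assembly - a strong name ( typically applied via a key file through project settings or sn.exe ) plus the APTCA attribute so that partially trusted callers, such as third party modules, could still use the assembly:

    // AssemblyInfo.cs - sketch of the attribute the paragraph above refers to.
    using System.Security;

    // The "APTCA bit": allows partially trusted code to call this
    // strongly named assembly once it is installed in the GAC.
    [assembly: AllowPartiallyTrustedCallers]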

Database

Using the SQL Server Profiler, as well as some custom profiling scripts created by Bert Corderman, we were able to capture database performance metrics for both a clean install scenario and a high traffic production scenario ( dotnetnuke.com ). This type of profiling identified the most frequent and the slowest database queries. Optimization involved offloading some calls to the cache, tuning the database schema with strategic indexes, and rewriting some stored procedures to make them more efficient.
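
As a purely hypothetical illustration of a "strategic index" ( the table and column names below are illustrative, not the actual schema changes we made ), the idea is to cover the lookup pattern the profiler flagged as most frequent:

    -- Hypothetical example: index a foreign key column that profiling
    -- showed was scanned on nearly every page request.
    CREATE NONCLUSTERED INDEX IX_Tabs_PortalID
        ON dbo.Tabs (PortalID);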

Compression

One of the things which many of us developers take for granted is that not everyone in the world has broadband Internet access. The infrastructure in some countries has not been upgraded to provide the data transfer rates we have come to expect in North America. This results in latency issues, where the amount of time required to transfer a standard HTML page to a client browser can vary drastically. As a result, reducing the payload ( size ) of each web response becomes a critical aspect of delivering a responsive web experience.

One of the common techniques for reducing payload is to leverage HTTP Compression. HTTP Compression uses an algorithm to compress the response stream and reduce its size. IIS 6.0 has an integrated GZIP compression service, but most Hosters do not enable it by default for a couple of reasons. First, compression has the potential to increase CPU usage, as the compression algorithm is processor intensive. Second, compression does not cooperate with certain web application activities such as streaming ( uploading/downloading ), which means that the generic IIS 6.0 compression feature has the potential to break ASP.NET applications ( resulting in support tickets for the Hoster ).

More than 18 months ago, we were privileged to get support from Ben Lowery in the form of a contribution of his popular HTTP Compression module. Unfortunately, as a separate DotNetNuke project, this module did not get the attention it deserved from our team. As a result, the HTTP Compression module was never bundled with the standard DotNetNuke distribution, which severely limited its adoption within the DotNetNuke community. In DotNetNuke 4.4.0, we have made the HTTP Compression module a first class citizen and have integrated it directly with the core framework. The module has a couple of features specific to DotNetNuke - a whitespace filter using RegExp, and a simple configuration file and UI for managing "exclusions" ( URLs which should not be compressed ).
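
For those curious about the mechanics, here is a minimal sketch of how a compression module hooks the ASP.NET response stream - this is illustrative only, not the actual module, and it omits the whitespace filtering and exclusion handling described above:

    using System;
    using System.IO.Compression;
    using System.Web;

    public class CompressionModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            application.BeginRequest += OnBeginRequest;
        }

        private static void OnBeginRequest(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;
            string acceptEncoding = app.Request.Headers["Accept-Encoding"];

            // Only compress when the browser advertises gzip support.
            if (!string.IsNullOrEmpty(acceptEncoding) &&
                acceptEncoding.Contains("gzip"))
            {
                // Chain a GZip filter onto the response stream and tell
                // the browser the body is compressed.
                app.Response.Filter = new GZipStream(
                    app.Response.Filter, CompressionMode.Compress);
                app.Response.AppendHeader("Content-Encoding", "gzip");
            }
        }

        public void Dispose() { }
    }

A module like this is registered in the <httpModules> section of web.config, which is also where an exclusion list can be consulted before the filter is attached.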

Page State

In the same vein as Compression, all ASP.NET applications use a technique known as ViewState for persisting state across web requests. ViewState is stored within the page payload and has the potential to bloat your response stream, resulting in latency issues in lower bandwidth scenarios. A common technique for dealing with ViewState is to remove it from the page payload and store it on the web server. ASP.NET 2.0 makes this very simple and we have now integrated an alternate Page State persistence mechanism into the core framework.
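
For illustration, here is the ASP.NET 2.0 mechanism in its simplest form - overriding the page's PageStatePersister so that ViewState lives in Session, leaving only a small token in the page payload ( the class name is illustrative; DotNetNuke wires this up inside the core framework ):

    using System.Web.UI;

    public class ServerStatePage : Page
    {
        protected override PageStatePersister PageStatePersister
        {
            // Persist page state in Session on the web server instead of
            // serializing it into a hidden field in the response.
            get { return new SessionPageStatePersister(this); }
        }
    }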

As you can see, there has been significant focus applied to the theme of Performance in DotNetNuke 4.4.0. I think we all need to recognize Charles Nurse for his incredible effort during this development phase. An initial beta package will be released to Platinum Benefactors at the end of this week for early testing.
