John Gallardo's Weblog

All those temporary files (RSTempFiles)


When you install Reporting Services, we create a few directories:

  1. LogFiles
  2. ReportManager
  3. ReportServer
  4. RSTempFiles

Most of these are fairly self-explanatory.  LogFiles... well, we put our log files in there.  ReportManager contains the Report Manager application (what you get when you browse to http://<server>/reports), and ReportServer contains the Report Server application proper, which implements our Web Service and HTTP handler endpoints... it is what you get when you go to http://<server>/reportserver.

So what about that RSTempFiles thing?  Well it contains files that are temporary of course!

These temporary files can be broken down into a few categories:

  • Temporary Report Snapshot files.
    • These files are only created if you have opted into using temporary file storage (see the WebServiceUseFileShareStorage setting; a sample config fragment follows this list).
    • For RS2000/2005, snapshots are more or less completely independent and each is contained in its own directory (identified by a GUID).  Each snapshot contains a set of files (internally, they are referred to as "chunks"). 
    • For RS2008, snapshots oftentimes contain some shared data.  The folder-per-snapshot hierarchy used in RS2000/2005 is replaced by a single directory called "Chunks."  Each chunk is a discrete file in this directory.
    • These files are automatically cleaned up by the Report Server on a time-based interval (same as snapshot data stored in ReportServer and ReportServerTempDB catalog databases).
  • Output/Intermediate Streams
    • These files are all created directly within the RSTempFiles directory.  There is currently no way to differentiate between these two sub-categories of files.
    • Output streams - these streams are generated as output from the renderers.  The server spools them to disk if they grow large enough.  RS has an output cache that may cause these streams to survive beyond the lifetime of the request so that subsequent accesses can be served directly from the cache. 
    • Intermediate files - these files contain results of intermediate calculations during report processing and rendering.  Generally, these contain data which is never returned to the client, but rather holds temporary results in order to relieve memory pressure.  These files are cleaned up when the request completes (no caching across requests).
  • Conversion files
    • These are stored under a folder named _Conversion. 
    • Our compression format for snapshots changed between RS2000 and RS2005.  This folder would contain temporary files supporting the on-demand one-time upgrade of these persisted snapshots.  Unless you are upgrading from an RS2000 instance, you should never see this directory or files created. 
  • Shadow Copy Files
    • This is only for RS2008.
    • After the CTP6 release of RS2008, we enabled shadow copy for our ASP.Net worker domains under the new hosting model.
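
If you want to opt into file share storage for those temporary snapshot files, the switch mentioned above lives in the RSReportServer.config file.  Just as a sketch, the entry looks something like this (the value shown is only an example; by default this is off and temporary snapshot data goes to the catalog databases instead):

<WebServiceUseFileShareStorage>True</WebServiceUseFileShareStorage>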

Scaling Up: SSRS 2008 vs. SSRS 2005 (spoiler: 2008 wins)


The SQL Customer Advisory Team just released a Technical Note comparing SQL Server Reporting Services 2008 vs. 2005 from a scale-up perspective.  It's good to see that a lot of the work we did over this release focusing on performance and scalability (across the board, from the core server/processing infrastructure to specific rendering extensions) has really paid off.  You can read the entire article here:

http://sqlcat.com/technicalnotes/archive/2008/07/09/scaling-up-reporting-services-2008-vs-reporting-services-2005-lessons-learned.aspx

Quoting from the summary (emphasis is mine):

Reporting Services 2008 was able to respond to 3–4 times the total number of users and their requests on the same hardware without HTTP 503 Service Is Unavailable errors compared with Reporting Services 2005, regardless of the type of renderer. In stark contrast, Reporting Services 2005 generated excessive HTTP 503 Service Is Unavailable errors as the number of users and their requests increased, regardless of the report renderer.

Our tests clearly show that the new memory management architecture of the report server enables Reporting Services 2008 to scale very well, particularly on the new four-processor, quad-core processors. With our test workload, Reporting Services 2008 consistently outperformed SQL Server 2005 with the PDF and XLS renderers on the four-processor, quad-core hardware platform (16 cores) both in terms of response time and in terms of total throughput. Furthermore, with these renderers on this hardware platform, Reporting Services dramatically outperformed other hardware platforms regardless of Reporting Services version, responding to 3–5 times the number of requests than when running on either of the other hardware platforms. As a result, we recommend that you scale up to four-processor, quad-core servers for performance and scale out to a two-node deployment for high availability. Thereafter, as demand for more capacity occurs, add more four-processor, quad-core servers.

Scaling out the Viewer Control and rsExecutionNotFound


One of the criteria that the report server uses to match the provided SessionID with a stored report is that the SessionID has to be provided by the same user that initially created the session.  Usually, this is the case.  Someone browses the report in IE, they click around to paginate or expand toggles, and things just work because they are the same user they were when they initially ran the report.

Sometimes though, things go wrong.

One way this can happen is when you are hosting the ASP.Net Viewer Control in your own application and you are not impersonating the incoming user all the way to the backend report server.  This is a totally supported configuration; however, there is a little caveat that you have to keep in mind.  Since the report server requires that the user names match across session retrievals, you have to ensure that the viewer control impersonates the same user every time.  Sounds easy, right?  Well, not so fast if you are using a machine-specific account for the application pool hosting the viewer control.  The specific topology that can get you into trouble is something like the following:

[Diagram: client machines going through a load balancer to multiple web frontends that host the viewer control, with each frontend calling the same backend report server]

In this scenario you have the following:

  • Client machines accessing multiple web frontends hosting the report viewer control via a load balancer.
  • The web frontend machines are using a machine-specific account to communicate with the report server (for example, they are using the built-in NETWORK SERVICE account).
  • You are not impersonating the incoming user in the viewer control when accessing the backend report server.

The sequence of operations that can lead to trouble are:

  1. The initial request from ClientA is routed to MachineA. 
  2. The viewer control in MachineA instantiates a session with the report server.  Since the $MachineA credentials are sent, the session is associated with this user.  Keep in mind at this point the report server actually has no idea that there is some other logical user beyond the web frontend that is actually making the request.
  3. The user views the first page of the report, and navigates to the second page.
  4. The request for the second page is actually routed to MachineB by the load balancer.
  5. The viewer control on MachineB attempts to load the user session; however, this fails since the request to the report server comes from the $MachineB user and not the original $MachineA user.  The report server generates an rsExecutionNotFound error and returns it, resulting in an error being displayed to the user.

There are a couple of ways to address this problem:

  1. Architect your application such that you can flow credentials from the client all the way to the report server.  -- Or --
  2. Ensure that your web frontend nodes impersonate the same user when accessing the report server regardless of which machine the request is routed to (so use something like domain credentials to access the report server); a sketch of this approach follows below.
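
If you go with option 2 and you are using the ASP.Net ReportViewer control, the usual hook is the IReportServerCredentials interface on ServerReport.  The following is only a minimal sketch under the assumption that you keep a shared domain account around for this purpose; FixedReportServerCredentials, the CONTOSO\ReportViewerSvc account, and the password placeholder are all hypothetical names you would replace with your own:

using System;
using System.Net;
using System.Security.Principal;
using Microsoft.Reporting.WebForms;

// Sketch: every web frontend presents the same domain account to the report server,
// so session lookups succeed no matter which node handled the request.
[Serializable]  // the viewer may stash this object in session state
public sealed class FixedReportServerCredentials : IReportServerCredentials
{
    // Placeholder values -- read these from secure configuration in a real application.
    private const string Domain   = "CONTOSO";
    private const string UserName = "ReportViewerSvc";
    private const string Password = "<read from config>";

    // We are not impersonating a Windows identity...
    public WindowsIdentity ImpersonationUser
    {
        get { return null; }
    }

    // ...instead we hand the viewer explicit network credentials for the report server.
    public ICredentials NetworkCredentials
    {
        get { return new NetworkCredential(UserName, Password, Domain); }
    }

    // Not using Forms authentication against the report server.
    public bool GetFormsCredentials(out Cookie authCookie, out string userName,
                                    out string password, out string authority)
    {
        authCookie = null;
        userName = password = authority = null;
        return false;
    }
}

Wiring it up in your page (ReportViewer1 being your viewer instance) would then look something like:

ReportViewer1.ServerReport.ReportServerCredentials = new FixedReportServerCredentials();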

Hiding Rendering Extensions


This is documented behavior, but we see lots of questions on it.  A good mechanism for preventing users from accidentally exporting reports to a format that you don’t want (for example, you might know that the report doesn’t render quite right in a particular format) is to mark the extension as “invisible” in the Report Server config file.

This MSDN documentation explains how to configure a rendering extension:

http://msdn.microsoft.com/en-us/library/ms154516.aspx

The important one here is the Visible attribute:

Visible

A value of false indicates that the rendering extension should not be visible in user interfaces. If the attribute is not included, the default value is true.

 

Setting this attribute to false for a given extension hides it both in the Report Viewer control and in the delivery configuration pages.
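
Just as a sketch, the corresponding entry in the rsreportserver.config file ends up looking something like this (the Name and Type shown here are placeholders; you would add the Visible attribute to the Extension entry that already exists for the format you want to hide):

<Render>
    <Extension Name="CSV" Type="..." Visible="false"/>
</Render>

Keep in mind this only controls what the UI offers; the extension itself should still be callable explicitly, for example via rs:Format on a URL access request.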

Hiding parameter area when viewing reports


[Screenshot: a report rendered in the Report Viewer with the parameter area displayed across the top]

When rendering reports, oftentimes the application would like to minimize the non-report area of the viewer control as much as possible.  This is easily accomplished when rendering the report through the built-in report viewer hosted in the ReportServer web site (http://servername/reportserver).

There are two parameters which control the rendering of the viewer control area:

  • rc:Parameters
  • rc:Toolbar

rc:Parameters can have 3 values:

  • On: Displays the parameter area, as in the screenshot above.
  • Off: Completely hides the parameter area.
  • Collapsed: Renders the parameter area initially minimized, so a user can expand it if they need to view/change the parameters for a report.

Here is an example URL which demonstrates rendering a report with rc:Parameters=Collapsed:

https://servername/ReportServer?%2fSamples%2fSQL+2008+Samples%2fAdventureWorks2008+Sample+Reports%2fEmployee+Sales+Summary+2008&rs:Command=Render&rc:Parameters=Collapsed

The other way of hiding the parameter area is to render the report with rc:Toolbar=false.  For many applications, though, this approach has side-effects which I don’t think are desirable.  These are:

  1. It disables the automatic ping-mechanism to keep the user’s report session alive.  This can lead to rsExecutionNotFound errors if the user leaves the page inactive for several minutes.
  2. It also removes the navigation (next page, find, etc…) UI.
  3. By default, it will cause every page of the report to be returned to the client.  For long reports, this is typically not desirable.

So if you are in a situation where you want to minimize the amount of extraneous UI initially presented to the user, while still allowing the end user the flexibility to change parameter values, consider using rc:Parameters=Collapsed when rendering the report.

Avoid using HttpContext in your reports


On one of our internal mailing lists, someone was asking how to retrieve, from within a report, some HTTP headers that their internal application was submitting to the Report Server.  Someone else responded with a snippet of custom code that you can put into an RDL expression to do just that.

This is a bad idea.

HttpContext has the interesting property of exposing per-request state via a static property.  Inherently, there is nothing really wrong with this approach.  It allows access to data without having to plumb the data all the way through every single API or object on the call stack.  However, one gotcha here is that HttpContext is only available on the thread that received the request, and not on any asynchronous threads that may be spawned by that request.  This can lead to subtle problems in your reports under the following conditions:

  • When printing the report, the Report Server actually uses an asynchronous mechanism to render the report in order to return the first page quickly while continuing to render the additional pages in the background.
  • In the case of a subscription delivery, there is no HTTP request at all that initiated the execution.

Additionally, in the future we will more than likely make additional portions of report execution asynchronous, both to improve perceived latency and to take advantage of multi-core processors during non-peak load.

That said, there are a couple of scenarios I can think of where you might be tempted to examine the HTTP Request during report execution.  These are:

  1. You want to determine the path or location of the report, and you are doing so by examining the incoming request URL.
  2. Your application is passing some value to the report via an HTTP header.

For each of these, there are built-in supported mechanisms.  For (1), there is a set of values exposed by the global collections which allow you to get information about the report server and the current report.  For example, there are Globals.ReportFolder, Globals.ReportName, and Globals.ReportServerUrl.  You can leverage these values in report expressions rather than trying to tease apart the HTTP request.

In the case of passing data to the report via an HTTP header, there is already a built-in mechanism for passing data to a report execution: report parameters.  They are useful for more than just providing query parameters!  You can also use them to pass application-specific information that your custom code may require.
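
For illustration, expressions along these lines cover both cases (AppContext is a hypothetical report parameter that your application would define and supply):

=Globals!ReportFolder
=Globals!ReportName
=Globals!ReportServerUrl
=Parameters!AppContext.Value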

So what is the moral of the story?  Be very careful when accessing static objects that are request-specific in the context of your report via custom code.  Doing so exposes you to potential back-compat issues as we continue to bring the platform forward.  Wherever possible we try to expose enough context through the supported report processing and rendering object models, and if there are additions you need we would love to hear about them.


RS Blogger shout-outs


If you are interested in Reporting Services, you probably already know about these blogs … because frankly they have more interesting things to say than me!  At any rate, just in case they happened to slip through the cracks for you, I thought I would point you towards their blogs:

Robert Bruckner – Robert is a developer who works on the Report Processing engine and basically knows just about everything there is to know about RS.  If you have been to any of the trade shows or other various RS conferences, you have probably met him.  If you have spent time on the RS forums or newsgroups, you have probably read some insightful response of his to a question.  He has a great ‘advanced’ RS blog that covers a broad range of topics.

Brian Hartman – Brian developed the Report Viewer control and has been hanging out on the MSDN forums answering your questions for years.  So if you are integrating Reporting Services with your application, you have probably used his stuff.  He’s finally succumbed to the pressure of starting his own blog to answer a lot of the common questions and themes he sees related to the Report Viewer.

RunningJobContext.IsClientConnected


I’ve seen a few people get confused over what this error message in the Reporting Services log file indicates. This message is generated when the Reporting Services web server detects that an HTTP request has experienced a remote disconnect.  In practice, this means that we queried the property HttpResponse.IsClientConnected and it returned false.

So how does this work exactly, and are all requests subject to this kind of check? As Reporting Services is currently architected, not all operations are subject to this check. For potentially long-running operations (such as report renderings and interactivity operations – internally we refer to these as Cancelable operations), requests are registered with a central request manager, which has a thread that wakes up periodically to check whether the clients are still connected.  If they are not, we begin aborting those requests on the server side in order to reclaim any resources they are consuming.

Basically every operation on the ReportExecution2005.asmx endpoint registers itself as a potentially long-running operation, as do the UpdateReportExecutionSnapshot(), CreateReportHistorySnapshot(), GetReportParameters(), and GetUserModel() methods on the ReportService2005.asmx endpoint.

So what should you do if you are seeing a lot of these errors in the log? Generally this is an indicator that requests are being torn down for some reason. It could be that a certain set of users are particularly impatient waiting for report results to return and are closing their browsers. Alternatively, we have seen some cases in the wild where certain proxies would internally time out requests after waiting for some period of time, at which point they close the connection to the Report Server. If you have such a deployment topology (a proxy or load balancer between clients and the Report Server), definitely consider checking whether such a configuration knob exists in your environment.
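
If you are curious what this looks like at the ASP.Net level, it boils down to polling HttpResponse.IsClientConnected.  Here is a minimal sketch of the general pattern in a plain IHttpHandler; this is purely an illustration and not the Report Server's actual implementation, and DoNextChunkOfWork is a hypothetical stand-in for real work:

using System.Threading;
using System.Web;

public class LongRunningHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        for (int step = 0; step < 100; step++)
        {
            // IsClientConnected flips to false once the remote side has torn down
            // the connection (impatient user, proxy timeout, and so on).
            if (!context.Response.IsClientConnected)
            {
                // Abandon the work so the resources it holds can be reclaimed.
                return;
            }

            DoNextChunkOfWork();   // hypothetical unit of expensive work
            Thread.Sleep(100);     // stand-in for real processing time
        }

        context.Response.Write("done");
    }

    private static void DoNextChunkOfWork()
    {
        // placeholder
    }
}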

RS PowerShell Gems – The WMI Provider


For IT administration of a Report Server instance, you will occasionally need to use WMI.  We try to ease this to some extent by exposing a command-line tool (rsconfig.exe) as well as the RS Configuration Tool UI.  That said, sometimes you just need to talk to WMI directly – and for that, PowerShell is a great tool!

Here are a couple of script fragments that I have used in the past:

# Gem 1 – Enumerate All Instances
#

$computer = "JGALLA7"
$namespaces = gwmi -class "__NAMESPACE" -namespace "root\Microsoft\SqlServer\ReportServer" -computer $computer
[string]$fn = $namespaces[0].Name

# any of the namespaces can be used to enumerate all instances
$instances = gwmi -class MSReportServer_Instance -namespace "root\Microsoft\SqlServer\ReportServer\$fn\v10" -computer $computer
$instances | ft InstanceName,EditionName,Version

#InstanceName                            EditionName                             Version
#------------                            -----------                             -------
#MSSQLSERVER                             ENTERPRISE EVALUATION EDITION           ####
#ELEND                                   ENTERPRISE EVALUATION EDITION           ####
#VIN                                     ENTERPRISE EVALUATION EDITION           ####
#SAZED                                   DEVELOPER EDITION                       ####

# Gem 2 – get service account of each instance

$configurations = gwmi -class MSReportServer_ConfigurationSetting -namespace "root\Microsoft\SqlServer\ReportServer\$fn\v10\Admin" -computer $computer
$configurations | ft InstanceName,ServiceName,WindowsServiceIdentityConfigured

#InstanceName                            ServiceName                             WindowsServiceIdentityConfigured
#------------                            -----------                             --------------------------------
#MSSQLSERVER                             ReportServer                            redmond\####
#ELEND                                   ReportServer$ELEND                      NT Authority\NetworkService
#VIN                                     ReportServer$VIN                        NT Authority\NetworkService
#SAZED                                   ReportServer$SAZED                      NT AUTHORITY\NETWORK SERVICE

# Gem 3 – Update Service Account Password of each instance using a particular user name
$username = "redmond\####"
$newpassword = "mynewpassword"
foreach ($update in $configurations | where {$_.WindowsServiceIdentityConfigured -eq $username})
{
  # The first argument (UseBuiltInAccount) is 0 since we are supplying an explicit account.
  $update.SetWindowsServiceIdentity(0, $username, $newpassword)
}


Now From Windows Phone


Last November I decided to leave the SQL Reporting Services team and join the Windows Phone Application Platform team. This is the team responsible for delivering:

  • The overall application model for Windows Phone (how applications are installed, secured, and activated)
  • Runtime integration (hosting of Silverlight Mobile and XNA applications within NetCF)
  • Phone specific frameworks (ex: Sensors)
  • Data and Cloud programmability (Push Notifications, Structured Storage)

I have been working mostly on data programmability since joining the team, bringing Linq To SQL down to the phone from the desktop.

You can play with the new features in the recently released Developer Tools for Windows Phone Mango Beta.

Also, Jesse Liberty has done a good job of capturing some of the best practices for Linq To SQL on Windows Phone.

DataContext.Log and Windows Phone


 

Among the many features that are compatible between Linq To SQL on desktop Windows and on Windows Phone is support for capturing all of the generated SQL as it is sent to the database. This can be a great way both to gain an understanding of how L2S works and to optimize query complexity. One of the challenges with using this feature is actually collecting the data; extracting the log out of isostore can be problematic. Typically I like to visualize the output of my queries in the Debug Output window, but the problem there is that the platform has no built-in TextWriter that writes to it.

The solution is to simply wrap Debug.WriteLine() in your own TextWriter implementation.

I am sure this has been done many times before, but here is my implementation:

namespace PhoneHelper
{
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Text;

    public class DebugStreamWriter : TextWriter
    {
        private const int DefaultBufferSize = 256;
        private StringBuilder _buffer;

        public DebugStreamWriter()
        {
            BufferSize = DefaultBufferSize;
            _buffer = new StringBuilder(BufferSize);
        }

        public int BufferSize
        {
            get;
            private set;
        }

        public override System.Text.Encoding Encoding
        {
            get { return Encoding.UTF8; }
        }

        #region TextWriter Overrides

        // Buffer individual characters and flush to the Debug Output window once the buffer fills.
        public override void Write(char value)
        {
            _buffer.Append(value);
            if (_buffer.Length >= BufferSize)
                Flush();
        }

        public override void WriteLine(string value)
        {
            Flush();

            using(var reader = new StringReader(value))
            {
                string line;
                while( null != (line = reader.ReadLine()))
                    Debug.WriteLine(line);
            }
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
                Flush();
        }

        public override void Flush()
        {
            if (_buffer.Length > 0)
            {
                Debug.WriteLine(_buffer);
                _buffer.Clear();
            }
        }
        #endregion
    }
}
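
Hooking it up is then a one-liner against the DataContext.Log property.  For example (MyDataContext and the connection string are hypothetical placeholders for your own data context):

// MyDataContext is your application's DataContext-derived class.
var db = new MyDataContext("Data Source=isostore:/MyApp.sdf");

// All generated SQL now shows up in the Debug Output window.
db.Log = new PhoneHelper.DebugStreamWriter();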

And the output:

[Screenshot: the generated SQL appearing in the Visual Studio Debug Output window]

