Friday, November 5, 2010

Using a CTE in an SSRS Hidden Cascading Parameter

I had a situation where a client wanted to allow the report developer to tie each year of a report to a different dataset.  The user would choose the report year and, based on a hidden parameter set by the developer, the report would use the correct dataset.  The catch was that I couldn’t create anything on the server.  The solution I came up with was a hidden cascading parameter using a CTE.
The visible parameter is @AcademicYear with the following parameter labels:
20062007
20072008
20082009
20092010
20102011
For the hidden parameter I created a dataset called SnapShot with the following query:
WITH SnapShots
(AcademicYear, SnapShot) AS
(SELECT '20062007', '1'
UNION
SELECT '20072008', '2'
UNION
SELECT '20082009', '2'
UNION
SELECT '20092010', '3'
UNION
SELECT '20102011', '4'
)
SELECT AcademicYear, SnapShot
FROM SnapShots
WHERE AcademicYear = LTRIM(RTRIM(@AcademicYearLabel))




The parameter @AcademicYearLabel is based on @AcademicYear.  To create it, choose Parameters in the Dataset Properties dialog.



Click the expression (fx) button to create an expression and choose the following

Category – Parameters

Item – All

Values – AcademicYear

Under Set expression for: Value, change

=Parameters!AcademicYear.Value


To


=Parameters!AcademicYear.Label








Click OK.  The parameter value should be updated.



Now we’ll create a parameter named @SnapShot to go with the dataset.  Set the Name and Prompt to SnapShot, the Data Type to Text, and Parameter Visibility to Hidden.



Choose Default Values.  Select Get values from a query and choose the SnapShot dataset and the SnapShot Value field.






The report is now customizable by dataset.
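With the hidden parameter populated, the report’s main dataset can branch on it.  As a rough illustration (the table and column names here are hypothetical, not from the actual report), the main query might look like:

```sql
-- Hypothetical main dataset query: the hidden @SnapShot parameter, set by
-- the cascade, selects which snapshot of the data the report reads.
SELECT SchoolName, EnrollmentCount
FROM Reporting.EnrollmentSnapshot    -- made-up table for illustration
WHERE SnapShotID = @SnapShot         -- hidden parameter from the SnapShot dataset
```

The point is that the developer controls the year-to-snapshot mapping entirely inside the report, with nothing deployed to the server.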

Tuesday, October 26, 2010

SSIS Cache Transform as Source Query during For Loop

 

Recently I had a relatively slow-performing source query inside a For Loop.  The loop ran approximately 12 times, executing the query on each iteration.  I solved the problem by running the query once, caching the results, and performing lookups instead of executing the query again.

Here’s the control flow


Going into DFL Cache Data


In order to perform a lookup that returns all of the relevant rows, the query for OLE_SRC School History Src needs a unique identifier.

SELECT ROW_NUMBER() OVER (ORDER BY RAND()) ID, *
FROM ComplexQuery



Since I’m going to use year as the parameter in the For Loop, I’m placing the Cache Connection Manager index on ID and YearID.




Now that I’ve filled the cache, I’m going to loop by year over the data flow DFL Import DimSchool.




Here’s DFL Import DimSchool




Next, generate a list of numbers tied to the For Loop variable.  To do this, create a variable called SQLCommand.  Set EvaluateAsExpression to True with the following expression:


"WITH Num1 (n) AS (SELECT 1 UNION ALL SELECT 1),
Num2 (n) AS (SELECT 1 FROM Num1 AS X, Num1 AS Y),
Num3 (n) AS (SELECT 1 FROM Num2 AS X, Num2 AS Y),
Num4 (n) AS (SELECT 1 FROM Num3 AS X, Num3 AS Y),
Num5 (n) AS (SELECT 1 FROM Num4 AS X, Num4 AS Y),
Num6 (n) AS (SELECT 1 FROM Num5 AS X, Num5 AS Y),
Nums (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY n) FROM Num6)
SELECT n ID, " + (DT_WSTR, 4) @[User::_Year] + " YearID
FROM Nums
WHERE n <= 100000"

 

@[User::_Year] is the variable used in the For Loop, so the value of YearID changes with each iteration.

 

Set the Data access mode to SQL command from variable and select SQLCommand as the variable name.  This results in the following query:

 

WITH Num1 (n) AS (SELECT 1 UNION ALL SELECT 1),
Num2 (n) AS (SELECT 1 FROM Num1 AS X, Num1 AS Y),
Num3 (n) AS (SELECT 1 FROM Num2 AS X, Num2 AS Y),
Num4 (n) AS (SELECT 1 FROM Num3 AS X, Num3 AS Y),
Num5 (n) AS (SELECT 1 FROM Num4 AS X, Num4 AS Y),
Num6 (n) AS (SELECT 1 FROM Num5 AS X, Num5 AS Y),
Nums (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY n) FROM Num6)
SELECT n ID, 2000 YearID
FROM Nums
WHERE n <= 100000


and the following output

 

ID    YearID
1     2000
2     2000
3     2000
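As an aside, each Num CTE cross joins the previous one with itself, so the row counts square at every step: 2, 4, 16, 256, 65,536, and finally over 4 billion in Num6, which is why the WHERE n <= 100000 cap always has enough rows.  Here’s a trimmed sanity check of my own (not part of the package) showing the pattern:

```sql
-- Sketch: counting rows after four levels shows the squaring pattern.
WITH Num1 (n) AS (SELECT 1 UNION ALL SELECT 1),      -- 2 rows
Num2 (n) AS (SELECT 1 FROM Num1 AS X, Num1 AS Y),    -- 2 * 2   = 4
Num3 (n) AS (SELECT 1 FROM Num2 AS X, Num2 AS Y),    -- 4 * 4   = 16
Num4 (n) AS (SELECT 1 FROM Num3 AS X, Num3 AS Y)     -- 16 * 16 = 256
SELECT COUNT(*) AS RowsAvailable FROM Num4           -- returns 256
```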


The lookup is performed on the ID and YearID

 


I now have the same records I would’ve gotten by executing the query using the YearID as a parameter.
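Put another way, each iteration’s lookup output reproduces what a parameterized run of the source query would have returned.  As a sketch (assuming ComplexQuery exposes a YearID column, which the cache index implies, and using a hypothetical @YearID in place of the loop variable):

```sql
-- Hypothetical equivalent of one For Loop iteration (not from the package):
-- the cached complex query filtered down to a single year.
SELECT c.*
FROM (SELECT ROW_NUMBER() OVER (ORDER BY RAND()) AS ID, *
      FROM ComplexQuery) AS c
WHERE c.YearID = @YearID   -- corresponds to the For Loop's _Year variable
```

The difference is that the expensive ComplexQuery runs once to fill the cache instead of once per iteration.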

Tuesday, October 19, 2010

Majority Late Arriving Fact Lookups in SSIS

Usually when I load data into a data warehouse I retrieve only the changes.  Since changes are normally applied to the most recent records, doing a full lookup on the natural key of the current record, with a partial-cache lookup for the rows that don’t match the current record (for Type 2 dimensions), works out well.  I recently had a situation where I needed to reprocess the entire table on every run.  We won’t go into why this was the case.  Needless to say, it’s not good.  Consequently, performance was horrendous because 70% of the lookups were partial.
My solution was to use a Merge and Conditional Split to look at the entire dimension table.
Let’s start with the dimension source (OLE_SRC Dimension).  We’ll use DimStudent as the dimension.  Here’s the query I used:
SELECT StudentID, StudentNaturalKey, EffectiveStartDate,
COALESCE((SELECT MIN(EffectiveStartDate) FROM DW.DimStudent
WHERE EffectiveStartDate > s.EffectiveStartDate AND StudentNaturalKey = s.StudentNaturalKey), '12/31/2099') NextEffectiveStartDate
FROM DW.DimStudent s
ORDER BY StudentNaturalKey



I’m pulling the surrogate key (StudentID), the natural key (StudentNaturalKey), and EffectiveStartDate, and determining NextEffectiveStartDate instead of using EffectiveEndDate because the data warehouse may have gaps or overlaps in the dates.  I’m going to join on the natural key in the Merge Transformation, so I’m ordering by it.

This is the source import query

SELECT DISTINCT StudentNaturalKey, RecordDate
FROM Import.Student WITH (NOLOCK)
ORDER BY StudentNaturalKey


I’m pulling back the natural key and RecordDate from the source, ordering by StudentNaturalKey for the Merge Transformation.


Here’s the Merge Transformation joining on natural key




Next there’s the conditional split with the following condition to determine the correct record


ISNULL(RecordDate) || ISNULL(StudentID) || (RecordDate >= EffectiveStartDate && RecordDate < NextEffectiveStartDate)



If RecordDate is null, the source record has no date and consequently there is no corresponding record in the dimension table.  If StudentID is null, there was no corresponding record in the dimension.  Otherwise, the condition checks whether RecordDate falls between EffectiveStartDate and NextEffectiveStartDate.
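The Merge plus Conditional Split is effectively the SSIS equivalent of a set-based range join.  In plain T-SQL the matched rows would look something like this sketch (using the post’s table names, but not part of the actual package):

```sql
-- Sketch of the range join the Merge + Conditional Split implements:
-- pick the dimension row whose effective range contains the source RecordDate.
SELECT i.StudentNaturalKey, i.RecordDate, d.StudentID
FROM (SELECT DISTINCT StudentNaturalKey, RecordDate FROM Import.Student) i
JOIN DW.DimStudent d
  ON d.StudentNaturalKey = i.StudentNaturalKey
 AND i.RecordDate >= d.EffectiveStartDate
 AND i.RecordDate <  COALESCE((SELECT MIN(d2.EffectiveStartDate)
                               FROM DW.DimStudent d2
                               WHERE d2.StudentNaturalKey = d.StudentNaturalKey
                                 AND d2.EffectiveStartDate > d.EffectiveStartDate),
                              '12/31/2099')
```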

I then load the matching records into a cache connection manager.  This isn’t the only way, but because of the complexity of the transformation data flow I’d have needed the Sort Transformation for the merges, so caching and then using the Lookup Transformation performed much better.


The cache consists of the natural key, record date, and StudentID.  I look up on the natural key and record date to get the surrogate key.  This allows me to keep the number of records to a minimum as records are often loaded in batches with the same record date.

Tuesday, October 5, 2010

Missing Indexes

I’m back from vacation.  It was wonderful.  Here’s the code I use to help me get a jump on indexes that may need to be created before I get complaints about system performance.
SELECT 
migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure, 
'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle) 
+ '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'
+ ' ON ' + mid.statement 
+ ' (' + ISNULL (mid.equality_columns,'') 
+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END 
+ ISNULL (mid.inequality_columns, '')
+ ')' 
+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement, 
migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC


You’ll find queries like it all over the internet, but not necessarily an explanation of what it’s telling you.  The SQL Server DMVs are based on the same concepts used in query plans and query optimization.

sys.dm_db_Missing_Index_Group_Stats – Updated By Every Query Execution

  1. Avg_Total_User_Cost – A number representing the cost of the queries for which the index could have been used
  2. Avg_User_Impact – Percentage by which the average query cost would drop if the index were implemented
  3. User_Seeks – Number of seeks caused by queries for which this index could have been used
  4. User_Scans – Number of scans caused by queries for which this index could have been used

sys.dm_db_Missing_Index_Details – Updated Every Time Query is Optimized by the Query Optimizer

  1. Statement – Table where the index is missing
  2. Equality_Columns – Columns used in equality predicates (Column = 'a')
  3. Inequality_Columns – Columns used in any predicate other than equality, such as >
  4. Included_Columns – Columns needed to cover the query
  5. Database_ID – Database
  6. Object_ID – Table

The higher the improvement_measure, the greater the potential for improvement.  As always with indexes, make sure you weigh all of the pros and cons before creating one.
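To give a feel for what the query emits, one row’s create_index_statement might look like the following (the handles, table, and columns here are entirely made up for illustration):

```sql
-- Illustrative output only; the group/index handles, table, and columns
-- are hypothetical, not from a real sys.dm_db_missing_index_* result.
CREATE INDEX [missing_index_388_387_FactEnrollment]
ON [EdDW].[dbo].[FactEnrollment] ([SchoolID], [YearID])
INCLUDE ([EnrollmentCount])
```

Equality and inequality columns land in the key, while included_columns become the INCLUDE list so the index covers the query without widening the key.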

Tuesday, September 14, 2010

Master Data Services Tutorial

I’m on vacation this week and next, so I’ll just let you know that if you want a quick overview of SQL Server 2008 R2 Master Data Services, there’s a series of 15-minute tutorials on msdev.com.  Here’s the link:
http://www.msdev.com/Directory/SeriesDescription.aspx?CourseId=155
The Master Data Services team also has a blog with links to even more info.
http://blogs.msdn.com/b/mds/
My next posting should be Sept. 28th.

Tuesday, September 7, 2010

Adding Parameter for Analysis Services Data Source in SSRS

I recently needed to add a parameter to a report using the Analysis Services data source.  I didn’t find the process straightforward, so I’m documenting the steps I took.
  1. Go into the query designer’s MDX query mode.
  2. Click the Query Parameters button on the toolbar.
  3. Enter the parameter, dimension, hierarchy, and default values.  Click OK.
  4. Add the parameter to the query.  Click OK.
  5. I don’t want to give the user the choice of every member in the hierarchy, so I have to prevent the parameter from auto-updating.  To do this, switch to the report’s code view.
  6. Find the entry for the parameter’s dataset and add rd:SuppressAutoUpdate
    </DataSet>
     <DataSet Name="EnrolledFromYear">
       <Fields>
         <Field Name="ParameterValue">
           <DataField>&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;Field xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Measure" UniqueName="[Measures].[ParameterValue]" /&gt;</DataField>
         </Field>
         <Field Name="ParameterCaptionIndented">
           <Value>=Space(3*Fields!ParameterLevel.Value) + Fields!ParameterCaption.Value</Value>
         </Field>
         <Field Name="ParameterCaption">
           <DataField>&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;Field xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Measure" UniqueName="[Measures].[ParameterCaption]" /&gt;</DataField>
         </Field>
         <Field Name="ParameterLevel">
           <DataField>&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;Field xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Measure" UniqueName="[Measures].[ParameterLevel]" /&gt;</DataField>
         </Field>
       </Fields>
       <Query>
         <DataSourceName>EdDWDataMart</DataSourceName>
         <CommandText>WITH MEMBER [Measures].[ParameterCaption] AS [Enrolled From Date].[Academic Calendar].CURRENTMEMBER.MEMBER_CAPTION MEMBER [Measures].[ParameterValue] AS [Enrolled From Date].[Academic Calendar].CURRENTMEMBER.UNIQUENAME MEMBER [Measures].[ParameterLevel] AS [Enrolled From Date].[Academic Calendar].CURRENTMEMBER.LEVEL.ORDINAL SELECT {[Measures].[ParameterCaption], [Measures].[ParameterValue], [Measures].[ParameterLevel]} ON COLUMNS , [Enrolled From Date].[Academic Calendar].ALLMEMBERS ON ROWS FROM [EdDW]</CommandText>
         <rd:DesignerState><QueryDefinition xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/AnalysisServices/QueryDefinition"><CommandType>MDX</CommandType><Type>Query</Type><QuerySpecification xsi:type="MDXQuerySpecification"><Select><Items /></Select><From>EdDW</From></QuerySpecification><Query><Statement>WITH MEMBER [Measures].[ParameterCaption] AS [Enrolled From Date].[Academic Calendar].CURRENTMEMBER.MEMBER_CAPTION MEMBER [Measures].[ParameterValue] AS [Enrolled From Date].[Academic Calendar].CURRENTMEMBER.UNIQUENAME MEMBER [Measures].[ParameterLevel] AS [Enrolled From Date].[Academic Calendar].CURRENTMEMBER.LEVEL.ORDINAL SELECT {[Measures].[ParameterCaption], [Measures].[ParameterValue], [Measures].[ParameterLevel]} ON COLUMNS , [Enrolled From Date].[Academic Calendar].ALLMEMBERS ON ROWS FROM [EdDW]</Statement></Query></QueryDefinition></rd:DesignerState>
         <rd:AutoGenerated>true</rd:AutoGenerated>
         <rd:Hidden>true</rd:Hidden>
         <rd:SuppressAutoUpdate>true</rd:SuppressAutoUpdate>
       </Query>
     </DataSet>



  7. Go back to the report’s design view.  Right-click the data source in the Report Data window and choose Show Hidden Datasets.  The EnrolledFromYear dataset will appear and can be edited.

Tuesday, August 31, 2010

Sum with empty (null) measures

When I was working on a report, I ran into a problem when I tried to sum a measure that contained an empty cell.  I expected the empty cells to be treated as zero, but that wasn’t the case: the result returned was null.  Here’s the sample data.
District    Attendance Rate
A           (null)
A           0.0%
A           94.5%
My first solution was just to apply a filter to remove the empty cell from the set.
WITH MEMBER Total AS
SUM(EXISTING(FILTER([Attending School].[District], NOT ISEMPTY(Measures.[Attendance Rate]))), Measures.[Attendance Rate])




The result was still null.  I tried a variety of things to remove the empty cells from the calculated measure or transform them into zero, but with no success.  The result was always the same: null for the sum instead of 94.5.

Consequently, I moved away from manipulating the calculated member itself and came up with a solution using a subcube.  This let me remove the cells without an attendance rate from the cube before summing.

WITH MEMBER Total AS
SUM(EXISTING([Attending School].[District]), Measures.[Attendance Rate])

SELECT Measures.Total ON COLUMNS,
[Attending School].[District] ON ROWS
FROM (SELECT [Enrolled From Date].[Academic Calendar].[Academic Year] ON COLUMNS,
FILTER([Attending School].[District], NOT ISEMPTY([Measures].[Attendance Rate])) ON ROWS
FROM [EdDW])
WHERE [Enrolled From Date].[Academic Calendar].[Academic Year].&[20082009]


 

Thursday, August 26, 2010

Intro

I'm an MCITP DBA and MCTS in Business Intelligence who's been working with SQL Server for over 10 years, starting as a VB programmer, becoming a SQL analyst, graduating to a SQL developer, and finally a SQL DBA.  I'm currently employed by Mariner in Charlotte, NC, a company specializing in BI, where I hope to gain competency in Analysis Services to add to my SQL portfolio.  While I worked with SSIS and SSRS in my former position, SSAS was not a direction they wished to go.

I've decided to start this blog with the hope that others can learn from my experiences (good and bad).  I plan to post once a week.