Tuesday, October 26, 2010

SSIS Cache Transform as Source Query during For Loop

 

Recently I had a relatively slow source query inside a For Loop.  The loop ran approximately 12 times, executing the query on every pass.  I solved the problem by running the query once, caching the results, and performing lookups against the cache instead of executing the query again.

Here’s the control flow

[Image: control flow]

Going into DFL Cache Data

[Image: DFL Cache Data dataflow]

In order to perform a lookup that returns all of the relevant rows, the query for OLE_SRC School History Src needs a unique identifier:

SELECT ROW_NUMBER() OVER (ORDER BY RAND()) ID, *
FROM ComplexQuery



Since I'm going to use year as the parameter in the For Loop, I'm placing the Cache Connection Manager index on ID and YearID.


[Image: Cache Connection Manager index columns, ID and YearID]


Now that I've filled the cache, I'm going to loop by year over the dataflow DFL Import DimSchool.


[Image: For Loop over DFL Import DimSchool]


Here’s DFL Import DimSchool


[Image: DFL Import DimSchool dataflow]


Next, generate a list of numbers tied to the For Loop variable.  To do this, create a variable called SQLCommand and set EvaluateAsExpression to True, with the following expression:


"WITH Num1 (n) AS (SELECT 1 UNION ALL SELECT 1),
Num2 (n) AS (SELECT 1 FROM Num1 AS X, Num1 AS Y),
Num3 (n) AS (SELECT 1 FROM Num2 AS X, Num2 AS Y),
Num4 (n) AS (SELECT 1 FROM Num3 AS X, Num3 AS Y),
Num5 (n) AS (SELECT 1 FROM Num4 AS X, Num4 AS Y),
Num6 (n) AS (SELECT 1 FROM Num5 AS X, Num5 AS Y),
Nums (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY n) FROM Num6)
SELECT n ID, " + (DT_WSTR, 4) @[User::_Year] + " YearID
FROM Nums
WHERE n <= 100000"

 

@[User::_Year] is the variable used in the For Loop, so the value of YearID changes with each iteration.
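
For reference, the For Loop itself can be driven by expressions along these lines (the year range here is illustrative, not from the original package; 2000 through 2011 gives the roughly 12 iterations mentioned above):

InitExpression:    @[User::_Year] = 2000
EvalExpression:    @[User::_Year] <= 2011
AssignExpression:  @[User::_Year] = @[User::_Year] + 1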

 

In the OLE DB source, choose SQL command from variable as the Data access mode and select SQLCommand as the variable name.  With _Year set to 2000, this results in the following query:

 

WITH Num1 (n) AS (SELECT 1 UNION ALL SELECT 1),
Num2 (n) AS (SELECT 1 FROM Num1 AS X, Num1 AS Y),
Num3 (n) AS (SELECT 1 FROM Num2 AS X, Num2 AS Y),
Num4 (n) AS (SELECT 1 FROM Num3 AS X, Num3 AS Y),
Num5 (n) AS (SELECT 1 FROM Num4 AS X, Num4 AS Y),
Num6 (n) AS (SELECT 1 FROM Num5 AS X, Num5 AS Y),
Nums (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY n) FROM Num6)
SELECT n ID, 2000 YearID
FROM Nums
WHERE n <= 100000


and the following output:

 

ID    YearID
1     2000
2     2000
3     2000


The lookup is performed on ID and YearID:

 

[Image: Lookup on ID and YearID]

I now have the same records I would’ve gotten by executing the query using the YearID as a parameter.
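
That is, something like the sketch below (the YearID filter is my assumption, inferred from the cache index; ComplexQuery again stands in for the slow query):

DECLARE @YearID INT = 2000;

-- What the For Loop would otherwise execute roughly 12 times, once per year
SELECT *
FROM ComplexQuery
WHERE YearID = @YearID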

Tuesday, October 19, 2010

Majority Late Arriving Fact Lookups in SSIS

Usually when I load data into a data warehouse I retrieve only the changes.  Since changes are normally applied to the most recent records, a lookup on the natural key of the current record, with a partial lookup for any rows that don't match the current record, works out well for type 2 dimensions.  I recently had a situation where I needed to reprocess the entire table on every run.  We won't go into why that was the case; needless to say, it's not good.  Consequently, performance was horrendous because 70% of the lookups were partial.
My solution was to use a Merge Join and a Conditional Split to compare against the entire dimension table.
[Image: dataflow with Merge Join and Conditional Split]
Let's start with the dimension (OLE_SRC Dimension).  We'll use DimStudent as the dimension.  Here's the query I used:
SELECT StudentID, StudentNaturalKey, EffectiveStartDate,
    COALESCE((SELECT MIN(EffectiveStartDate) FROM DW.DimStudent
              WHERE EffectiveStartDate > s.EffectiveStartDate
                AND StudentNaturalKey = s.StudentNaturalKey), '12/31/2099') NextEffectiveStartDate
FROM DW.DimStudent s
ORDER BY StudentNaturalKey



I'm pulling the surrogate key (StudentID), the natural key (StudentNaturalKey), and EffectiveStartDate, and deriving NextEffectiveStartDate instead of using EffectiveEndDate because the data warehouse may have gaps or overlaps in the dates.  Since I'm going to join on the natural key in the Merge Join, I'm ordering by it.
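
To make the NextEffectiveStartDate logic concrete, here's a self-contained sketch with invented rows (the real query runs against DW.DimStudent):

DECLARE @DimStudent TABLE (
    StudentID INT,
    StudentNaturalKey INT,
    EffectiveStartDate DATE
);

-- Invented data: two type 2 versions of student 1001, one version of 1002
INSERT INTO @DimStudent VALUES (1, 1001, '2005-08-01');
INSERT INTO @DimStudent VALUES (2, 1001, '2008-08-01');
INSERT INTO @DimStudent VALUES (3, 1002, '2006-08-01');

SELECT StudentID, StudentNaturalKey, EffectiveStartDate,
    COALESCE((SELECT MIN(EffectiveStartDate) FROM @DimStudent
              WHERE EffectiveStartDate > s.EffectiveStartDate
                AND StudentNaturalKey = s.StudentNaturalKey), '12/31/2099') NextEffectiveStartDate
FROM @DimStudent s
ORDER BY StudentNaturalKey

-- StudentID 1 gets NextEffectiveStartDate 2008-08-01; StudentID 2 and 3 are the
-- latest versions of their keys, so they get the 12/31/2099 sentinel.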

This is the source import query

SELECT DISTINCT StudentNaturalKey, RecordDate
FROM Import.Student WITH (NOLOCK)
ORDER BY StudentNaturalKey


I'm pulling back the natural key and RecordDate from the source and ordering by StudentNaturalKey for the Merge Join.


Here's the Merge Join transformation joining on the natural key:


[Image: Merge Join transformation editor]


Next there's the Conditional Split, with the following condition to determine the correct record:


ISNULL(RecordDate) || ISNULL(StudentID) || (RecordDate >= EffectiveStartDate && RecordDate < NextEffectiveStartDate)



If RecordDate is null, the source record has no date, and consequently there is no corresponding record in the dimension table.  If StudentID is null, there was no corresponding record in the dimension.  Otherwise, the condition checks whether RecordDate falls between EffectiveStartDate and NextEffectiveStartDate.
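
Set-based, the Merge Join plus Conditional Split works out to roughly the sketch below (the left outer join and the DimRanges wrapper are my reading of the dataflow; only the WHERE clause mirrors the split condition):

WITH DimRanges AS (
    SELECT StudentID, StudentNaturalKey, EffectiveStartDate,
        COALESCE((SELECT MIN(EffectiveStartDate) FROM DW.DimStudent
                  WHERE EffectiveStartDate > s.EffectiveStartDate
                    AND StudentNaturalKey = s.StudentNaturalKey), '12/31/2099') NextEffectiveStartDate
    FROM DW.DimStudent s
)
SELECT i.StudentNaturalKey, i.RecordDate, d.StudentID
FROM (SELECT DISTINCT StudentNaturalKey, RecordDate FROM Import.Student) i
LEFT OUTER JOIN DimRanges d ON i.StudentNaturalKey = d.StudentNaturalKey
WHERE i.RecordDate IS NULL                             -- source record has no date
   OR d.StudentID IS NULL                              -- no dimension record for this key
   OR (i.RecordDate >= d.EffectiveStartDate
       AND i.RecordDate < d.NextEffectiveStartDate)    -- the correct type 2 version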

I then load the matching records into a Cache Connection Manager.  This isn't the only way to do it, but because of the complexity of the transformation dataflow I'd have to use the Sort transformation to feed any further merges, so caching the results and then using the Lookup transformation performed much better.

[Image: loading matched rows into the Cache Connection Manager]

The cache consists of the natural key, record date, and StudentID.  I look up on the natural key and record date to get the surrogate key.  This allows me to keep the number of records to a minimum as records are often loaded in batches with the same record date.

Tuesday, October 5, 2010

Missing Indexes

I'm back from vacation.  It was wonderful.  Here's the code I use to get a jump on indexes that may need to be created, before I start getting complaints about system performance:
SELECT 
migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure, 
'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle) 
+ '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'
+ ' ON ' + mid.statement 
+ ' (' + ISNULL (mid.equality_columns,'') 
+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END 
+ ISNULL (mid.inequality_columns, '')
+ ')' 
+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement, 
migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC


You'll find queries like it all over the internet, but not necessarily an explanation of what they're telling you.  The SQL Server DMVs are based on the same concepts used in query plans and query optimization.

sys.dm_db_missing_index_group_stats – Updated by Every Query Execution

  1. Avg_Total_User_Cost – Average cost of the queries for which the index could have been used
  2. Avg_User_Impact – Percentage by which the average query cost would drop if the index were implemented
  3. User_Seeks – Number of seeks caused by queries for which this index could have been used
  4. User_Scans – Number of scans caused by queries for which this index could have been used

sys.dm_db_missing_index_details – Updated Every Time a Query Is Optimized by the Query Optimizer

  1. Statement – Table where the index is missing
  2. Equality_Columns – Columns used in equality predicates (Column = 'a')
  3. Inequality_Columns – Columns used in any predicate other than equality, such as >
  4. Included_Columns – Columns needed to cover the query
  5. Database_ID – Database
  6. Object_ID – Table

The higher the improvement_measure, the greater the potential for improvement.  As always with indexes, make sure you look at all of the pros and cons before creating one.
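
As a quick sanity check on the improvement_measure math (numbers invented): an index candidate with an average query cost of 5.0, an 80% predicted impact, and 1,000 combined seeks and scans scores well above the query's cutoff of 10.

-- Invented numbers plugged into the improvement_measure formula
SELECT 5.0 * (80 / 100.0) * (950 + 50) AS improvement_measure  -- = 4000.0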