
Restore Azure database to Tier-one
In my current project there is high demand for restoring Azure databases to Tier 1 environments. Out of the box, LCS does not support this, but that does not mean it is not possible.
At the moment we use an additional Azure DevOps release pipeline that is triggered manually. The steps of that pipeline are shown in the next picture.

The Steps
In general, the steps we take are:
- Request access token to LCS
- Download the bacpac file from the LCS asset library
- Import the bacpac file into a new database
- Stop all services
- Swap the new database with the old database
- Synchronize the database (there could be new tables & fields)
- Reset Financial Reporter
- Start all services
With the current security on the Microsoft-hosted Tier 1 boxes, completing all these steps is getting more complex. The complexity comes from reduced privileges: you no longer have administrator access.
When you want to perform actions that require administrator access, you can host a D365 environment on your own Azure subscription. There you can open Pandora's box, but Microsoft takes no responsibility for, and has no privileges on, such an environment.
We prefer to keep all PowerShell inline in the pipeline. In case you prefer to use local PowerShell files instead, the example below shows how to pass the pipeline parameters to those files.


Now let us go through the steps one by one.
Let's Download the Database
Below is an example of downloading and restoring the bacpac file. For more details, please read https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/database/dbmovement-scenario-exportuat#import-the-database
cd C:\temp\

<# asset list #>
$refreshUrl = "https://lcsapi.lcs.dynamics.com/databasemovement/v1/databases/project/$(LCSPRPOJID)"
$refreshHeader = @{
    Authorization  = "Bearer $(TOKEN)"
    "x-ms-version" = '2017-09-15'
    "Content-Type" = "application/json"
}
$refreshResponse = Invoke-RestMethod $refreshUrl -Method 'GET' -Headers $refreshHeader
$DatabaseAssets = $refreshResponse.DatabaseAssets

<# find the latest backup on LCS #>
$cstzone = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId((Get-Date), 'W. Europe Standard Time')
$filedate = Get-Date ($cstzone).AddDays($(DAYS)) -Format "yyyy-MM-dd"
$BackupName = "GOLD-$filedate"
Write-Output $BackupName
$url = $DatabaseAssets | Where-Object {$_.Name -eq $BackupName} | Select-Object -Property FileLocation
$output = $DatabaseAssets | Where-Object {$_.Name -eq $BackupName} | Select-Object -Property FileName
Write-Output $url.FileLocation
Write-Output $output.FileName

<# remove old downloads #>
<# Remove-Item -Path C:\temp\* -Include *.bacpac -WhatIf -Force #>

<# start download #>
Import-Module BitsTransfer
Start-BitsTransfer -Source $url.FileLocation -Destination $output.FileName
$importFile = $output.FileName
Write-Host "##vso[task.setvariable variable=BACKUPNAME]$BackupName"
Write-Host "##vso[task.setvariable variable=FILENAME]$importFile"
The next step is importing the bacpac file into a new database:
<# import bacpac file #>
cd C:\temp\
$fileExe = "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe"
& $fileExe /a:import "/sf:$(FILENAME)" /tsn:localhost "/tdn:$(BACKUPNAME)" /p:CommandTimeout=1200
Stop environment
The steps for starting or stopping the services are not that complex; you can even run them inline in the release pipeline (for starting, change stop to start).
<# stop environment #>
net stop W3SVC
net stop DynamicsAxBatch
net stop Microsoft.Dynamics.AX.Framework.Tools.DMF.SSISHelperService.exe
net stop MR2012ProcessService
net stop ReportServer
Swap Databases
Now all services are down, so we can swap the databases.
<# swap databases #>
$server = "."

Write-Output "Drop AXDBOLD"
Invoke-Sqlcmd -ServerInstance $server -Query "DROP DATABASE IF EXISTS AXDBOLD"

Write-Output "Rename AXDB to AXDBOLD"
Invoke-Sqlcmd -ServerInstance $server -Query "ALTER DATABASE AXDB MODIFY NAME = AXDBOLD"

Write-Output "Rename new database to AXDB"
Invoke-Sqlcmd -ServerInstance $server -Query "ALTER DATABASE [$(BACKUPNAME)] MODIFY NAME = AXDB"

Write-Output "Create technical users"
Invoke-Sqlcmd -ServerInstance $server -Database axdb -Query "<the latest MS scripts>"
I will not distribute the latest MS scripts here, because they can change depending on the latest updates from Microsoft. In general, they should be in line with https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/database/dbmovement-scenario-exportuat#update-the-database
Let's Synchronize
The next step is synchronizing the database. There are several ways to do it:
- Start Visual Studio and synchronize manually
- Scripts that need admin privileges that you do not have
- Use the next one, shown below
<# sync database #>
$command = Join-Path "K:\AosService\PackagesLocalDirectory" "\Bin\SyncEngine.exe"
$connectionString = "Data Source=""."";Initial Catalog=AXDB;Integrated Security=True"
$arguments = "-syncmode=fullall -binfolder=K:\AosService\PackagesLocalDirectory -metadatabinaries=K:\AosService\PackagesLocalDirectory -connect=""$connectionString"" -fallbacktonative=False -verbosity=Diagnostic -continueOnError=false"
Start-Process -FilePath $command -ArgumentList $arguments -Wait -PassThru -RedirectStandardError "SortError.txt" -RedirectStandardOutput "log.txt"
Management Reporter
And finally, reset the data mart. Normally you run into the admin privilege issue again, but by adding the option -Scope Local we bypass this requirement.
<# reset management reporter #>
K:
cd K:\MROneBox\MRInstallDirectory
Import-Module .\Server\MRDeploy\MRDeploy.psd1 -Scope Local
Reset-DatamartIntegration -Reason OTHER -ReasonDetail "" -Force
Please be aware that the bacpac-to-Tier 1 process is still a work in progress; we must update the scripts regularly based on the requirements from Microsoft.
Additionally, we have an RSAT provisioning step running in the release pipeline; for more details please read XXXX

Please read the other related articles via this link: https://kaya-consulting.com/category/lcs/
SQL Trace Flag for Dynamics Ax: Do we need it?
What is a SQL Trace Flag?
A SQL Trace Flag is an indicator that changes the way the SQL engine behaves. Why would we want to do that? Well, because the DBA always knows best; or actually, because the standard behavior of the SQL engine will not always give the best possible performance for a Dynamics Ax ERP. Which SQL Trace Flags could be of interest for a Dynamics Ax database? Well, I found 5 that work great in some situations (-T1117, -T1224, -T2371, -T4136, -T4199). Yes, I said "some situations", because which SQL Trace Flag can be enabled depends on a lot of things and should be decided carefully. Let me tell you about these 5 SQL Trace Flags.
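Not part of the original post, but as a quick reference for the sections below: a trace flag can be switched on at runtime with DBCC TRACEON (the -1 makes it global for the instance), checked with DBCC TRACESTATUS, or made permanent by adding it as a -T startup parameter of the SQL Server service. A minimal sketch, using -T2371 as the example:

-- Enable trace flag 2371 globally for the running instance (does not survive a restart)
DBCC TRACEON (2371, -1);

-- Show which trace flags are currently active
DBCC TRACESTATUS (-1);

-- For a permanent setting, add -T2371 as a startup parameter of the SQL Server service
-- (SQL Server Configuration Manager > SQL Server service > Startup Parameters).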
SQL Trace Flag T1117: TempDB in multiple files
When you have divided the TempDB into multiple data files to match the number of CPU cores, as recommended by Microsoft for Dynamics AX, SQL Trace Flag -T1117 comes in handy. It changes the standard behavior of auto-growing database files in a filegroup. The standard behavior is to auto-grow one of the files at a time. With this SQL Trace Flag enabled, all files in a filegroup grow simultaneously.
For example:
1 database divided into 4 data files with an initial size of 100MB and an auto-growth of 20MB each. When your database reaches 400MB and all 4 files are full, one of them will auto-grow to 120MB. But the next data will be written to the file with the most free space. Guess which one that will be... the one that just grew, of course. Now the same example with SQL Trace Flag -T1117 globally enabled: when your database reaches 400MB and all 4 files are full, all of them will auto-grow to 120MB. The next data will be written to the file with the most free space. Guess which one that will be... all of them.
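For completeness, a sketch of how TempDB could be split into equally sized files as described above; the logical names, file path, number of files and sizes are placeholders, not a recommendation:

-- Fix the size/growth of the existing TempDB data file and add extra, equally sized files
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 100MB, FILEGROWTH = 20MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 100MB, FILEGROWTH = 20MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 100MB, FILEGROWTH = 20MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 100MB, FILEGROWTH = 20MB);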
Should we enable this SQL Trace Flag this minute?
No, there is a drawback. This SQL Trace Flag is great for the TempDB, but it affects all data files of all databases in the instance: they will grow simultaneously if they are in the same filegroup. So think before you act; if you don't want the files to grow simultaneously, put them in different filegroups, or don't enable this SQL Trace Flag.
More information on this SQL Trace Flag: Microsoft Dynamics Ax Performance team
SQL Trace Flag T1224: Running big operations during the day
When running big operations (5,000+ rows) during the day, like master planning, they can block your users from doing their daily business. This is due to lock escalation. SQL Server locks a page to make sure only one process affects that page at a time. When a certain number of pages of a table are locked by the same process, SQL Server can decide to lock the whole table if this is less costly on memory. That makes all requests for pages in that table wait until the first process is done. SQL Trace Flag -T1224 disables lock escalation based on the number of locks. However, memory pressure can still activate lock escalation.
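As an illustration (not from the original post): the trace flag works instance-wide, but since SQL Server 2008 lock escalation can also be controlled per table, which can be a more targeted alternative; the table name below is only an example:

-- Disable lock escalation based on the number of locks for the whole instance
DBCC TRACEON (1224, -1);

-- Per-table alternative: switch lock escalation off (or back to the default) for one hot table
ALTER TABLE dbo.SALESLINE SET (LOCK_ESCALATION = DISABLE);
-- ALTER TABLE dbo.SALESLINE SET (LOCK_ESCALATION = TABLE);   -- default behavior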
Should we enable this SQL Trace Flag this minute?
No. If you don't need to run big operations simultaneously or with users in the system, you're probably free of blocking caused by lock escalation. And lock escalation has a positive effect on the performance of your SQL Server. So please plan large operations carefully, outside office hours where possible.
SQL Trace Flag T2371: Keeping statistics up-to-date on large tables
The SQL query optimizer uses statistics to create query plans that improve query performance. The performance of Dynamics AX depends largely on up-to-date statistics. That's why it is recommended to turn 'Auto update statistics' on. But when will the statistics be updated? Well, when 20% of the rows have been updated or inserted. This SQL Trace Flag is only available on SQL Server 2008 R2 SP1 and all later versions. There was a small update to this SQL Trace Flag in SQL Server 2008 R2 SP2 and SQL Server 2012 SP1 that makes it more precise.
For example:
On a table containing 1,000 rows, the statistics are updated when 200 rows have been updated. On the same table containing 1,000,000 rows after a while, the statistics are updated when 200,000 rows have been updated. Did you read that right? Yes, you need to update 200,000 rows before the statistics are updated. Imagine even larger tables. We all know that in an ERP like Dynamics AX these large tables are pretty common. Microsoft also recognized that and released SQL Trace Flag -T2371. This SQL Trace Flag turns the 20% rule into a dynamic percentage rule that kicks in at 25,000 rows.
For example, with SQL Trace Flag -T2371 globally enabled:
SQL Server will update statistics on a table containing 1,000 rows when 200 rows have been updated, and on a table containing 1,000,000 rows when around 32,000 rows have been updated. In a chart, the dynamic percentage rule looks like this:
Should we enable this SQL Trace Flag this minute?
No. First examine whether your database has a lot of large tables (above 100,000 rows is a good rule of thumb); see the query below. Without large tables this SQL Trace Flag is useless.
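A sketch of such a check, assuming SQL Server 2008 R2 SP2 or later for sys.dm_db_stats_properties: it lists the statistics of tables above the rule-of-thumb size, together with the number of modifications accumulated since the last statistics update:

-- Statistics on tables with more than 100,000 rows and their modification counters
SELECT  t.name  AS table_name,
        s.name  AS stats_name,
        sp.rows,
        sp.modification_counter,
        sp.last_updated
FROM    sys.tables t
JOIN    sys.stats s ON s.object_id = t.object_id
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE   sp.rows > 100000
ORDER BY sp.modification_counter DESC;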
SQL Trace Flag T4136: Parameter sniffing optimisation
Dynamics AX uses parameterized SQL statements as long as we don't use the 'forceLiterals' keyword in AX. I have personally never seen a developer who fully understood how 'forceLiterals' works, let alone used it in code. It is even a best practice not to use it.
Why? Well, when using parameterized SQL statements, the query plans are cached and reused. SQL Server will use one plan for all similar queries instead of compiling each query. This can significantly reduce CPU overhead and improve throughput. That sounds like a good thing, and it is.
But there is a drawback: if the queries are complex and/or the data distribution of the columns against which the parameters are compared varies, the cost of different plan choices can change. A plan that is optimal for one parameter value may perform poorly for another value. The query optimizer still needs to estimate the selectivity and cardinality of predicates using a parameter value. In Dynamics AX, especially from version 2012 on, the queries are almost always complex.
One way to avoid parameter sniffing is to add the OPTIMIZE FOR UNKNOWN hint to all SQL statements leaving AX, as shown below. Another way is to enable SQL Trace Flag -T4136. This SQL Trace Flag is only available on SQL Server 2008 R2 Cumulative Update 2, SQL Server 2008 SP1 Cumulative Update 7, SQL Server 2005 SP3 Cumulative Update 9 and later.
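For reference, this is roughly what the OPTIMIZE FOR UNKNOWN hint looks like on a parameterized statement; the query is a simplified sketch in the style of the AX example that follows:

-- Compile the plan for an "average" parameter value instead of sniffing the first value passed in
SELECT *
FROM   CUSTTABLE
WHERE  DATAAREAID = @P2
OPTION (OPTIMIZE FOR UNKNOWN);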
For example:
Say we have 2 companies in our Dynamics Ax application, one with only 2 customers and the other with a couple of thousand. The following could happen: if the first query we send to loop through the customers is executed in the company with a couple of thousand customers, it could look like:
SELECT * FROM CUSTTABLE WHERE DATAAREAID = '@P2'
And SQL Server Engine will solve it by doing a full table scan because almost all records are needed.
The next time we do the same loop in the other company, the query is identical due to the parameter on dataareaid (and partition in AX 2012). That is why SQL Server will solve it in the same manner, with a table scan, where an index seek could be more appropriate. If only SQL could store a query plan per dataareaid...
Should we enable this SQL Trace Flag this minute?
No, because Microsoft provided a way to keep using parameters, but not for dataareaid (and partition in AX 2012). This makes SQL Server store a query plan per dataareaid, making SQL Trace Flag T4136 completely unnecessary:
AX2009 SP1: KB 3000700
AX2012 RTM: KB 2920058
AX2012 R2: KB 2969229
You can find more information on the blog of Dynamics in the field.
More information on this SQL Trace Flag: Microsoft Support on SQL Trace Flag T4136
SQL Trace Flag T4199: Query execution plan updates
Microsoft turned off all query execution plan updates/hotfixes by default, to make sure that an update does not affect the way your queries execute. Several of these hotfixes positively boost performance for Dynamics AX. Microsoft controls all future query processor fixes that are scheduled to be on by default in a later release with trace flag T4199. This SQL Trace Flag is only available from Cumulative Update 6 for SQL Server 2005 Service Pack 3, Cumulative Update 7 for SQL Server 2008, and Cumulative Update 7 for SQL Server 2008 Service Pack 1, and all later versions.
Should we enable this SQL Trace Flag this minute?
No, there are 2 drawbacks. 1: Your DBA will send you off to an insane asylum because of the next drawback. 2: If you enable this SQL Trace Flag, you enable all future hotfixes/updates to the query execution plans, because we all know you're going to forget that you enabled this SQL Trace Flag.
More information on this SQL Trace Flag: Microsoft Support on SQL Trace Flag T4199
SQL Window Functions in Ax? Yes we can
Are SQL Window Functions new?
Window functions have been in the ANSI/ISO SQL standard since 2003 and were extended in 2008. Microsoft implemented the first window functions in SQL Server 2005 (known as the OVER clause) and extended them in the following versions.
Window Functions in all SQL Versions
So, to be short: no, they are not new to the ANSI/ISO standard, and in Microsoft SQL Server window functions have existed for a few versions now. Within Dynamics Ax we use X++, which is translated by the Dynamics Ax kernel to T-SQL to make the database return a result set. This makes us dependent on the implementation of X++, which in Dynamics Ax 2012 does not support window functions in select statements. But I can show you a way to use them in AX 2012 and up.
What are SQL Window Functions?
Let me show you with an example. As this is a blogpost for Dynamics Ax minded people let us use the SalesLine table.
If you want to select a few fields of this table, you would write something like the following (in SQL and X++):
SELECT SALESID, ITEMID, SALESCATEGORY, CUSTACCOUNT, LINEAMOUNT FROM SALESLINE;
If you would like to see the total amount of the salesorder in the same result set you would need to do a “GROUP BY”:
SELECT SALESLINE.SALESID, ITEMID, SALESCATEGORY, CUSTACCOUNT, LINEAMOUNT, A.Total
FROM SALESLINE
JOIN (SELECT SUM(LINEAMOUNT) AS Total, SALESID FROM SALESLINE GROUP BY SALESID) AS A
    ON A.SALESID = SALESLINE.SALESID;
In X++ you would do something like this:
while select SalesId, ItemId, SalesCategory, CustAccount, LineAmount from salesLine
{
    select sum(LineAmount) from salesLine2
        where salesLine2.SalesId == salesLine.SalesId;
}
But it will send a statement for each salesline to the database.
TIP
A better way to do this in Ax would actually be:
- Create a Query object for the ‘select sum(LineAmount), SalesId from salesLine’ bit.
- Put that Query in a View.
- Now create a new Query with in the datasource the SalesLine and join it with the View.
Now let’s make it even more interesting. I would like a statement to get a resultset with the following columns:
- the SalesId, ItemId, SalesCategory, CustAccount, LineAmount.
- the total amount of the salesorder.
- the avg LineAmount of the customer in the same category.
- the LineAmount of the last time this customer ordered this item.
- running total of this customer.
SELECT SALESID, ITEMID, SALESCATEGORY, CUSTACCOUNT, LINEAMOUNT,
    SUM(LINEAMOUNT) OVER (PARTITION BY SALESID) AS [Total amount salesorder],
    AVG(LINEAMOUNT) OVER (PARTITION BY CUSTACCOUNT, SALESCATEGORY) AS [Avg LineAmount customer category],
    LAG(LINEAMOUNT, 1) OVER (PARTITION BY CUSTACCOUNT, ITEMID ORDER BY SALESID) AS [Last time ordered for],
    SUM(LINEAMOUNT) OVER (PARTITION BY CUSTACCOUNT ORDER BY SALESID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS [Running Total]
FROM SALESLINE;
And here we are: our first window functions. They are called window functions because they put a window (a subset) over the rows you get out of your FROM clause.
The <window function> part can be: Ranking functions, Aggregate functions or Analytic functions.
After the OVER keyword you can limit the size of the window:
- OVER() isn’t limiting at all so all rows are considered.
- OVER (PARTITION BY X) limits the window to all rows that have the same value in column X as the current row.
- OVER (ORDER BY) changes the sort order in the window. (Can be nice if you use RANK or ROW_NUMBER window functions)
- OVER( ROWS BETWEEN … AND …) limits the window to the rows between the given criteria seen from current row.
How to use these powerful window functions in Dynamics AX?
As I mentioned before, X++ isn't a real option. But in the View objects in the AOT there is a little gem that we can polish for our purpose.
- Create a Query for the SELECT SALESID, ITEMID, SALESCATEGORY, CUSTACCOUNT, LINEAMOUNT FROM SALESLINE part.
- Use this Query in a new View.
- Create a method on the View with the following code:
public static server str windowFunction()
{
    // Make sure you use DictView, because the table names are changed in the SQL statement sent from Ax to SQL Server.
    // This makes sure the right names are used.
    DictView dv = new DictView(tableNum(<<name of view>>));

    // This is the real magic
    return 'SUM(' + dv.computedColumnString('SalesLine', 'LineAmount')
        + ') OVER (PARTITION BY ' + dv.computedColumnString('SalesLine', 'SalesId') + ')';
}
- Now right-click on the 'Fields' node of the view and choose New > Real computed column.
- Give the new field a nicer name and set the name of the method in the 'ViewMethod' property.
- Save and synchronise the view.
Now what did we just do?
We changed the CREATE statement of the view to include 'SUM(LINEAMOUNT) OVER (PARTITION BY SALESID)' as a column. By synchronizing, we created a view in SQL Server that contains the window function we want.
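To make the effect concrete (this is a simplified sketch, not the exact statement Ax generates; the view name and computed column name are examples), the view definition in SQL Server ends up looking roughly like this:

CREATE VIEW [dbo].[MYSALESLINEVIEW] AS
SELECT  T1.SALESID,
        T1.ITEMID,
        T1.SALESCATEGORY,
        T1.CUSTACCOUNT,
        T1.LINEAMOUNT,
        -- the real computed column added via the view method
        CAST(SUM(T1.LINEAMOUNT) OVER (PARTITION BY T1.SALESID) AS NUMERIC(32,16)) AS TOTALLINEAMOUNT
FROM    dbo.SALESLINE T1;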
Could we use this for other things than window functions?
Yes, you can use all possibilities of the T-SQL language here. But remember that you are changing the SELECT clause of the view. Putting a select statement (or a stored procedure call) in here will for sure make the view run a long time, as every row will trigger a separate statement. But a scalar-valued function can be very valuable sometimes (like ISNULL()).
Interesting links:
A 20 minute training about window functions in SQL 2012.
All code samples, walkthroughs and other instructions are provided as-is. Use any and all information provided in this blog at your own risk. It is never advised to import or write code in a production environment without rigorous testing in a test or development environment.
All opinions expressed in this blog are solely my own and do not necessarily reflect the opinions of my employer.
The names of actual companies and products mentioned in this blog may be the trademarks of their respective owners.
Microsoft and Dynamics AX are registered trademarks of Microsoft Corporation in the United States and other countries.
Compression: Less storage AND improved performance
Let us have a look at SQL compression. By this I mean data compression, not the backup compression that is also available in SQL Server nowadays.
This feature can be implemented in SQL Server without affecting the programs that use the database, because it is only used inside SQL Server and the way it stores its data. This means that SQL Server compresses the data as it is entered into the database and uses the compressed data for all its internal operations. And yes, the data is still compressed when it is put into memory, resulting in more rows in the same memory space, which increases performance. Only when the data is sent to the application that requested it does SQL Server decompress it. Compressing and decompressing data is a CPU-intensive task, which leads to an increase of about 15% in CPU usage. The saving on storage and the increase in the number of rows per page and per memory buffer will certainly outweigh the increase in CPU usage.
SQL Server versions with Compression
Data compression became available in SQL Server 2008 and was further improved in SQL Server 2008 R2, after which it has been continued in all versions, including the latest, SQL Server 2016. Because it is an advanced feature, it is only available in the Enterprise Edition of SQL Server (and the Developer Edition, which is not to be used in a production environment).
Types of compression
There are 2 types of compression available in SQL Server: row compression and page compression. The latter includes the former and more, but let me explain both of them to you.
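Both types are applied per table or per index with a rebuild, and the expected saving can be estimated up front with sp_estimate_data_compression_savings (the same procedure used by the script at the end of this post). A minimal sketch; the table and index names are only examples:

-- Estimate the savings before changing anything (schema, table, index_id, partition_number, type)
EXEC sp_estimate_data_compression_savings 'dbo', 'SALESLINE', NULL, NULL, 'PAGE';

-- Apply row compression to a whole table (heap or clustered index)
ALTER TABLE dbo.SALESLINE REBUILD WITH (DATA_COMPRESSION = ROW);

-- Apply page compression to a single index
ALTER INDEX I_SALESIDX ON dbo.SALESLINE REBUILD WITH (DATA_COMPRESSION = PAGE);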
Row compression
This type of compression does what it says: it compresses the rows of data in an index/table. It does this in several ways, which I'll discuss below.
Metadata compression
Metadata is information about the columns; for some data types the metadata is larger than the actual data (for example, bit).
Remove unused space in fixed-size data types
Row compression removes the space that isn't used in fixed-size strings, which means that a CHAR(10) can effectively become a CHAR(1) if only 1 character is stored in it. Remember, it is ROW compression, so it determines the actually used space per value, not per column over the entire table/index. Unfortunately, Dynamics Ax doesn't use fixed-size strings (it uses NVARCHAR instead), so we cannot gain anything here.
Fortunately, SQL Server also treats numeric values as (a form of) fixed size, and it removes leading and/or trailing zeros if there are any. In Dynamics Ax all amounts are stored as NUMERIC(32,16), but if the actual amount is $1.95, with compression applied it will only store the 1.95, removing the space it would normally reserve to make it (32,16). The same applies to INT and BIGINT, and DATE and DATETIME are converted to integers, which also saves on storage.
Not affected by row compression are memo fields (NVARCHAR(MAX)), images (VARBINARY) and GUIDs (UNIQUEIDENTIFIER).
Standard Compression Scheme for Unicode
SQL Server also uses an implementation of the Standard Compression Scheme for Unicode (SCSU) algorithm to compress Unicode values, because SQL Server stores Unicode characters as 2 bytes regardless of the language of the data. Depending on the language of your data, you can save up to 50% on the storage of all NVARCHAR() and NCHAR() columns. Languages with the highest compression rate are Western European languages such as English, Dutch, German, French and Spanish, but also Hindi. On the lower end we find languages with more characters, like Japanese, which has a compression rate of about 15%.
Dynamics Ax uses Unicode for all strings stored in its database. I'll repeat that: ALL strings. So if your language is, for example, English, this means that all your stored text can be stored in half the space it is using right now. Great news, I would say.
Page compression
Page compression uses row compression as mentioned before, but it also uses the following two techniques.
Prefix compression
SQL Server determines, per column, a prefix value that is used (fully or partially) by the most rows. After determining this, SQL Server places this value in the page header and references it (or part of it) in the row values. This is best visualized by these images from Microsoft MSDN:
Yes, you can do all that... or use the script below to do all the hard work for you and even deliver the statements to update the indexes:
--COLLECT INDEX DATA
IF object_id('index_estimates') IS NOT NULL
    DROP TABLE index_estimates
GO

CREATE TABLE index_estimates (
    database_name SYSNAME NOT NULL,
    [schema_name] SYSNAME NOT NULL,
    table_name SYSNAME NOT NULL,
    index_id INT NOT NULL,
    update_pct DECIMAL(5, 2) NOT NULL,
    select_pct DECIMAL(5, 2) NOT NULL,
    CONSTRAINT pk_index_estimates PRIMARY KEY (database_name, [schema_name], table_name, index_id)
);
GO

INSERT INTO index_estimates
SELECT db_name() AS database_name,
    schema_name(t.schema_id) AS [schema_name],
    t.NAME,
    i.index_id,
    i.leaf_update_count * 100.0 / (i.leaf_delete_count + i.leaf_insert_count + i.leaf_update_count + i.range_scan_count + i.singleton_lookup_count + i.leaf_page_merge_count) AS UpdatePct,
    i.range_scan_count * 100.0 / (i.leaf_delete_count + i.leaf_insert_count + i.leaf_update_count + i.range_scan_count + i.singleton_lookup_count + i.leaf_page_merge_count) AS SelectPct
FROM sys.dm_db_index_operational_stats(db_id(), NULL, NULL, NULL) i
INNER JOIN sys.tables t ON i.object_id = t.object_id
INNER JOIN sys.dm_db_partition_stats p ON t.object_id = p.object_id
WHERE i.leaf_delete_count + i.leaf_insert_count + i.leaf_update_count + i.range_scan_count + i.singleton_lookup_count + i.leaf_page_merge_count > 0
    AND p.index_id < 2
    AND i.range_scan_count / (i.leaf_delete_count + i.leaf_insert_count + i.leaf_update_count + i.range_scan_count + i.singleton_lookup_count + i.leaf_page_merge_count) > .75 -- only consider tables with 75% or greater select percentage
ORDER BY t.NAME, i.index_id
GO

--PREPARE ROW AND PAGE COMPRESSION
IF OBJECT_ID('page_compression_estimates') IS NOT NULL
    DROP TABLE page_compression_estimates;
GO

CREATE TABLE page_compression_estimates (
    [object_name] SYSNAME NOT NULL,
    [schema_name] SYSNAME NOT NULL,
    index_id INT NOT NULL,
    partition_number INT NOT NULL,
    [size_with_current_compression_setting(KB)] BIGINT NOT NULL,
    [size_with_requested_compression_setting(KB)] BIGINT NOT NULL,
    [sample_size_with_current_compression_setting(KB)] BIGINT NOT NULL,
    [sample_size_with_requested_compression_setting(KB)] BIGINT NOT NULL,
    CONSTRAINT pk_page_compression_estimates PRIMARY KEY ([object_name], [schema_name], index_id)
);
GO

IF OBJECT_ID('row_compression_estimates') IS NOT NULL
    DROP TABLE row_compression_estimates;
GO

CREATE TABLE row_compression_estimates (
    [object_name] SYSNAME NOT NULL,
    [schema_name] SYSNAME NOT NULL,
    index_id INT NOT NULL,
    partition_number INT NOT NULL,
    [size_with_current_compression_setting(KB)] BIGINT NOT NULL,
    [size_with_requested_compression_setting(KB)] BIGINT NOT NULL,
    [sample_size_with_current_compression_setting(KB)] BIGINT NOT NULL,
    [sample_size_with_requested_compression_setting(KB)] BIGINT NOT NULL,
    CONSTRAINT pk_row_compression_estimates PRIMARY KEY ([object_name], [schema_name], index_id)
);
GO

--DYNAMICALLY GENERATE OUTCOME
DECLARE @script_template NVARCHAR(max) = 'insert ##compression_mode##_compression_estimates exec sp_estimate_data_compression_savings ''##schema_name##'',''##table_name##'',NULL,NULL,''##compression_mode##''';
DECLARE @executable_script NVARCHAR(max);
DECLARE @schema SYSNAME, @table SYSNAME, @compression_mode NVARCHAR(20);

DECLARE cur CURSOR FAST_FORWARD
FOR
SELECT i.[schema_name], i.[table_name], em.estimate_mode
FROM index_estimates i
CROSS JOIN (VALUES ('row'), ('page')) AS em(estimate_mode)
GROUP BY i.[schema_name], i.[table_name], em.estimate_mode;

OPEN cur;

FETCH NEXT FROM cur INTO @schema, @table, @compression_mode;

WHILE (@@FETCH_STATUS = 0)
BEGIN
    SET @executable_script = REPLACE(REPLACE(REPLACE(@script_template, '##schema_name##', @schema), '##table_name##', @table), '##compression_mode##', @compression_mode);
    PRINT @executable_script;
    EXEC (@executable_script);
    FETCH NEXT FROM cur INTO @schema, @table, @compression_mode;
END

CLOSE cur;
DEALLOCATE cur;

--SHOW RESULTS
WITH all_estimates AS (
    SELECT '[' + i.schema_name + '].[' + i.table_name + ']' AS table_name,
        CASE WHEN i.index_id > 0 THEN '[' + idx.NAME + ']' ELSE NULL END AS index_name,
        i.select_pct,
        i.update_pct,
        CASE WHEN r.[sample_size_with_current_compression_setting(KB)] > 0
            THEN 100 - r.[sample_size_with_requested_compression_setting(KB)] * 100.0 / r.[sample_size_with_current_compression_setting(KB)]
            ELSE 0.0 END AS row_compression_saving_pct,
        CASE WHEN p.[sample_size_with_current_compression_setting(KB)] > 0
            THEN 100 - p.[sample_size_with_requested_compression_setting(KB)] * 100.0 / p.[sample_size_with_current_compression_setting(KB)]
            ELSE 0.0 END AS page_compression_saving_pct
    FROM index_estimates i
    INNER JOIN row_compression_estimates r ON i.schema_name = r.schema_name
        AND i.table_name = r.object_name
        AND i.index_id = r.index_id
    INNER JOIN page_compression_estimates p ON i.schema_name = p.schema_name
        AND i.table_name = p.object_name
        AND i.index_id = p.index_id
    INNER JOIN sys.indexes idx ON i.index_id = idx.index_id
        AND object_name(idx.object_id) = i.table_name
),
recommend_compression AS (
    SELECT table_name,
        index_name,
        select_pct,
        update_pct,
        row_compression_saving_pct,
        page_compression_saving_pct,
        CASE WHEN update_pct = 0 THEN 'Page'
            WHEN update_pct >= 20 THEN 'Row'
            WHEN update_pct > 0 AND update_pct < 20 AND page_compression_saving_pct - row_compression_saving_pct < 10 THEN 'Row'
            ELSE 'Page' END AS recommended_data_compression
    FROM all_estimates
    WHERE row_compression_saving_pct > 0
        AND page_compression_saving_pct > 0
)
SELECT table_name,
    index_name,
    select_pct,
    update_pct,
    row_compression_saving_pct,
    page_compression_saving_pct,
    recommended_data_compression,
    CASE WHEN index_name IS NULL
        THEN 'alter table ' + table_name + ' rebuild with ( data_compression = ' + recommended_data_compression + ')'
        ELSE 'alter index ' + index_name + ' on ' + table_name + ' rebuild with ( data_compression = ' + recommended_data_compression + ')'
        END AS [statement]
FROM recommend_compression
ORDER BY table_name

--CLEAN UP
DROP TABLE index_estimates;
Index fragmentation effects on performance
Before jumping into index fragmentation, I'll first explain what indexes are and how they are used by the SQL Server engine. I have tried to compare it to a real-world way of searching data, to paint a clear picture in your head. I'm not going into the differences between clustered and non-clustered indexes, as that would make a nice new blog.
How do indexes work?
Indexes are used by SQL Server to search for records in tables, just as we use indexes in books to get to certain chapters/words/authors. The way an index in SQL Server works is similar to the way we look up a phone number in an (old-fashioned, hard-copy) phone book:
- We decide we need a phonebook for this search not a cookbook.
- We pick up the phonebook of the state we need.
- We look at the index to find the pages that have the correct city.
- We go to the page that contains the first letter of the last name of whose phone number we need.
- We find the last name and see that there are in fact several.
- We look at the address and find the right number.
SQL Server does something very similar when it uses an index:
- It decides it needs a certain index.
- It will look at the 1st level of the index to find the right part (phonebook of state).
- It will look at the 2nd level of the index to find the right part again (correct city).
- It will look at the 3rd level of the index to find the right part again (last name).
- It finds the page containing the correct last name.
- It traverses through the records for the one that contains the needed address.
What is index fragmentation?
How does index fragmentation start?
- We decide we need a phonebook for this search not a cookbook.
- We pick up the phonebook of the state we need.
- We look at the index to find the pages that have the correct city.
- We go to the page that contains the first letter of the last name of whose phone number we need.
- We find the last name and see that there are in fact several.
- We get to the page our 'Mr Smith' should be on, only to find a reference halfway down the page telling us to look at a page at the end of the book.
- We go to the page at the end of the book.
- We look at the address and find the right number.
What happens with index fragmentation?
How to temporarily or partly prevent index fragmentation?
TIP: Workflow and AIF use GUIDs in indexes, so set the fill factor accordingly and maintain those indexes.
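As an illustration (not from the original post), fragmentation can be checked with sys.dm_db_index_physical_stats, and a fragmented index can be reorganized or rebuilt with a lower fill factor; the threshold, table and index names below are only placeholders:

-- Fragmentation of all indexes in the current database (LIMITED mode is the cheapest scan)
SELECT  OBJECT_NAME(ips.object_id) AS table_name,
        i.name                     AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN    sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE   ips.page_count > 100
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Reorganize a lightly fragmented index, or rebuild a heavily fragmented one with a lower fill factor
ALTER INDEX I_MYINDEX ON dbo.MYTABLE REORGANIZE;
ALTER INDEX I_MYINDEX ON dbo.MYTABLE REBUILD WITH (FILLFACTOR = 80);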
How to permanently prevent index fragmentation to be an issue?
- Make sure the physical drives your data is on are not shared with other applications, mainly for HDD storage, as the other applications will use the HDD rotations as well.
- Use the scripts of Ola Hallengren to reduce index fragmentation without rebuilding all indexes. Set the minimum-pages parameter to 100 instead of 1000 for Dynamics Ax databases.
- Use SSDs for storage of data instead of HDDs, because SSDs don't have moving parts.
- Cache your database by assigning enough memory to SQL Server to cache the entire database, or at least the records that are frequently used, because cached data is way faster than any data stored on hard disks.
Is index fragmentation the root cause of performance issues?
- Poorly chosen indexes. (indexes on fields that are updated all the time)
- Changing of usage of the system. (Users are allowed to change id fields, which wasn’t in the original design)
- Poorly developed queries. (queries that cannot make full use of the indexes that are already there)
- and a thousand other reasons.
Missing index will cause slow performance: SQL can help you
Introduction
As a new member of the Kaya family I feel the urge to introduce myself. Hi, I'm Boye Walters, a 34-year-old developer with over 12 years of experience in Microsoft Dynamics Ax and Microsoft SQL Server. Having worked with all versions of Ax ever released in the Netherlands, I can truthfully say that I've already seen a lot in Ax. As for SQL Server, I've actively worked with SQL 2000 and above, using all features available, including integration, analysis and reporting.
Now that the introduction is done, let's start off with something I experienced last week at a customer.
A badly performing View
I was asked to build a view with a dozen or so data sources, with inner, outer and exists joins. To maximize performance I minimized the number of fields and put some relevant ranges in place to minimize the number of records. I also used the relations already defined on the tables. It should produce an ultrafast view, you would think. Too bad: it took 40 minutes to open the view in the Ax client. But after 40 minutes it showed a lot of records: 2. Because I didn't use any business logic, user interface or anything else, I figured it had to be SQL causing the delay. As I ran the query in SQL I believed to be proven wrong again, because it ran in 4 seconds. But SQL did suggest an index that could improve performance by 85%. After applying it, running the query in SQL took under a second. And in Ax it also took under a second. Sounds strange? Yes. Explainable? Absolutely: because of the missing index, the query requested a table lock on dbo.salesline, which was heavily queried, so it took a while to get that lock.
How did I identify the missing index?
Well, after synchronizing the view in Ax I started SQL Server Management Studio (SSMS) and went to the right database and view. There I right-clicked on it and selected 'Script View as > SELECT To > New Query Editor Window'.
This will generate a select script for the view.
Now, instead of running it, I went for the execution plan, using the 'Display Estimated Execution Plan' (Ctrl+L) button at the top of Management Studio. Look at the second query (the first is just there to select the right database); if SQL Server determines it needs an index to answer your query, it will say so. It does this by showing a create statement for the missing index in green, like it did with me:
Now I'm going to test it before I build it in Ax. First I right-clicked on the message and chose 'Missing Index Details'. This opened up a new query window with a create statement that is commented out, like this:
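The screenshot from the original post isn't reproduced here, but the commented suggestion SSMS generates typically looks something like this (the database, table and columns below are made up for illustration):

/*
Missing Index Details
The Query Processor estimates that implementing the following index could improve the query cost by 85%.
*/

/*
USE [AxDB]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[SALESLINE] ([DATAAREAID], [CUSTACCOUNT])
INCLUDE ([SALESID], [LINEAMOUNT])
GO
*/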
I uncommented the query and gave the index a name as requested.
Now I ran this script, and the index was created in SQL. I went back to the previous script and used the 'Display Estimated Execution Plan' (Ctrl+L) button again, to make sure SQL didn't suggest another missing index.
No more missing index, so I went to Ax and ran the view again. After recovering from the wow factor (and going through several emotions: impressed, not believing, not understanding, believing, grasping the understanding, and still wowed) I created the missing index in Ax, after removing it in SQL.
Why ask a program not a developer?
As many of you know, SQL Server works in a (for non-SQL developers) odd way. You tell SQL Server what you want and how it should be formatted, but SQL Server determines the best way to deliver what you asked for. That makes SQL Server the one that can tell you what it is missing to do its job properly. This also means you can use this method to determine which missing index would speed up a SQL query that you caught with the trace parser. There are even queries possible in SQL Server that show you all missing indexes at once. And that is exactly what we use in our Performance Dashboard.
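A sketch of such a query over the missing-index DMVs (SQL Server only keeps these statistics since the last restart, and the weighting below is just one common way of ranking the suggestions):

-- All missing indexes SQL Server has registered, roughly ordered by expected benefit
SELECT  mid.statement AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.user_seeks,
        migs.avg_user_impact,
        migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS weight
FROM    sys.dm_db_missing_index_details mid
JOIN    sys.dm_db_missing_index_groups mig ON mig.index_handle = mid.index_handle
JOIN    sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
ORDER BY weight DESC;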
Interesting links: