
Size does matter (or not?)
It is a general issue with D365FO development on those development VMs (virtual machines): when a VM is not working as planned, in the end we throw it away and start over. In this blog I will explain some tricks to solve disk space issues. Look at the next example: we use disks of 32 GB and need 20 of them, so we have 640 GB of disk space.

Of course, check the prices: Pricing – Managed Disks | Microsoft Azure.
Besides the disk size, I also use the DS13 v2 VM size, so I can upgrade to Managed Disks: Performance tuning development VM (Virtual Machine) – Kaya Consulting (kaya-consulting.com)
Why not 10 * 64 GB or 5 * 128 GB? It is all about IOPS!
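To make the IOPS argument concrete, here is a rough back-of-the-envelope calculation. It assumes each standard disk is capped at roughly 500 IOPS regardless of its size; verify the current limits for your storage type in the Azure documentation.
# Assumption: a standard data disk tops out around 500 IOPS, independent of its size.
$iopsPerDisk = 500
"20 x 32 GB  = 640 GB, ~$(20 * $iopsPerDisk) IOPS"   # striping over 20 small disks
"10 x 64 GB  = 640 GB, ~$(10 * $iopsPerDisk) IOPS"
" 5 x 128 GB = 640 GB, ~$( 5 * $iopsPerDisk) IOPS"
The capacity is the same in all three cases, but striping over more disks multiplies the available IOPS.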
Later, in Azure it will look like below.

When we look at the VM with Disk Management, we see a striped disk that has partitions for the database and the AOT (Application Object Tree) drive.

So far so good, but the developer needs a proper database, and when restoring one, scenarios like in the next picture may occur.

We are stuck: no space on the G drive, and plenty of space on the other drives. So, let us adapt; we start by shrinking the other volumes. NOTE: the K drive needs 15 GB of free space for installations started from LCS (Lifecycle Services).

After shrinking, we add the freed-up space to the G drive.
- In case you still run into disk space issues, use the next SQL script to compress the database:
EXECUTE sp_msForEachTable '
    PRINT ''?'';
    SET QUOTED_IDENTIFIER ON;
    IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = object_id(''?'') AND type = 6)
        ALTER INDEX ALL ON ? REBUILD
    ELSE
        ALTER INDEX ALL ON ? REBUILD WITH (DATA_COMPRESSION = PAGE, FILLFACTOR = 99)'
- Another trick is dropping the retail channel database objects; the script is part of any installation package and is named DropAllRetailChannelDbObjects.
- Setting the database recovery model to simple will reduce the size of the database log (see the sketch after this list).
- Check the SQL report “Disk Usage by Top Tables.”
- Add an additional disk.
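For the recovery model tip above, here is a minimal sketch using Invoke-Sqlcmd. The database name AxDB and the logical log file name AxDB_log are assumptions; check sys.database_files on your own VM first.
<# set simple recovery and shrink the log file - database and file names are assumptions #>
$server = "."
Invoke-Sqlcmd -ServerInstance $server -Query "ALTER DATABASE AxDB SET RECOVERY SIMPLE"
Invoke-Sqlcmd -ServerInstance $server -Database AxDB -Query "SELECT name, type_desc FROM sys.database_files"
Invoke-Sqlcmd -ServerInstance $server -Database AxDB -Query "DBCC SHRINKFILE (AxDB_log, 1024)"   # target size in MB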
Of course, I would love to give you a golden rule for the size of the bacpac file in relation to the needed disk space. I can give you a reference: a 9 GB bacpac file results in a 90 GB database MDF file. So multiply the bacpac file size by 10. It is still a guesstimate 🙁


Restore Azure database to Tier-one
In my current project there is a high demand for restoring Azure databases to Tier-1 environments. Out of the box there is no support for this from an LCS point of view, but that does not imply it is not possible.
At this moment we create an additional DEVOPS release pipeline that is manually triggered. The steps of that pipeline are shown in the next picture.

The Steps
In general, the steps we are taking are:
- Request access token to LCS
- Download the bacpac file from the LCS asset library
- Import the bacpac file into a new database
- Stop all services
- Swap the new database with the old database
- Synchronize the database (there could be new tables & fields)
- Reset Financial Reporter
- Start all services
With the current security on the MS-hosted Tier-1 boxes, it is getting more complex to complete all these steps. The complexity is related to reduced privileges: you no longer have administrator access.
When you want to perform actions that require administrator access, you can host a D365 environment on your own Azure subscription. There you can open Pandora’s box, but Microsoft does not take any responsibility for, or have any privileges on, that environment.
We prefer to have all PowerShell inline. In case you prefer to use local PowerShell files, the example below shows how to transfer the parameters to a PowerShell file.
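A minimal sketch of that pattern (the script name RestoreDatabase.ps1 and the parameter names are placeholders; the pipeline variables are the same ones used in the scripts below):
# In the pipeline task, call the script and pass the pipeline variables as arguments:
# .\RestoreDatabase.ps1 -ProjectId "$(LCSPRPOJID)" -Token "$(TOKEN)" -Days $(DAYS)

# RestoreDatabase.ps1 - receives the pipeline variables as ordinary parameters
param(
    [Parameter(Mandatory = $true)] [string] $ProjectId,
    [Parameter(Mandatory = $true)] [string] $Token,
    [int] $Days = 0
)
Write-Output "Running against LCS project $ProjectId, looking $Days day(s) back"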


Now let us go through the steps one by one.
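Request an Access Token
The scripts below use a $(TOKEN) pipeline variable, so the first pipeline step has to obtain an LCS access token. Below is a hedged sketch of one common way to do this: requesting an Azure AD token for the resource https://lcsapi.lcs.dynamics.com via the v1 token endpoint. The application id, user name and password are placeholders, and your tenant may require a different authentication flow.
<# request an LCS access token - client id, user and password are placeholders #>
$body = @{
    grant_type = "password"
    client_id  = "<your AAD application id>"
    username   = "<lcs user>"
    password   = "<lcs password>"
    resource   = "https://lcsapi.lcs.dynamics.com"
}
$tokenResponse = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/common/oauth2/token" -Body $body
Write-Host "##vso[task.setvariable variable=TOKEN;issecret=true]$($tokenResponse.access_token)"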
Let's Download the Database
Below is an example of downloading the bacpac file. For more details, please read https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/database/dbmovement-scenario-exportuat#import-the-database
cd C:\temp\
<# asset list #>
$refreshUrl = "https://lcsapi.lcs.dynamics.com/databasemovement/v1/databases/project/$(LCSPRPOJID)"
$refreshHeader = @{
    Authorization  = "Bearer $(TOKEN)"
    "x-ms-version" = '2017-09-15'
    "Content-Type" = "application/json"
}
$refreshResponse = Invoke-RestMethod $refreshUrl -Method 'GET' -Headers $refreshHeader
$DatabaseAssets = $refreshResponse.DatabaseAssets
<# find latest backup on LCS #>
$cstzone = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId((Get-Date), 'W. Europe Standard Time')
$filedate = Get-Date ($cstzone).AddDays($(DAYS)) -Format "yyyy-MM-dd"
$BackupName = "GOLD-$filedate"
Write-Output $BackupName
$url    = $DatabaseAssets | Where-Object {$_.Name -eq $BackupName} | Select-Object -Property FileLocation
$output = $DatabaseAssets | Where-Object {$_.Name -eq $BackupName} | Select-Object -Property FileName
Write-Output $url.FileLocation
Write-Output $output.FileName
<# remove old downloads #>
<# Remove-Item -Path C:\temp\* -Include *.bacpac -Force #>
<# start download #>
Import-Module BitsTransfer
Start-BitsTransfer -Source $url.FileLocation -Destination $output.FileName
$importFile = $output.FileName
Write-Host "##vso[task.setvariable variable=BACKUPNAME]$BackupName"
Write-Host "##vso[task.setvariable variable=FILENAME]$importFile"
The next step is importing the bacpac file into a new database:
<# import bacpac file #>
cd C:\temp\
$fileExe = "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe"
& $fileExe /a:import "/sf:$(FILENAME)" /tsn:localhost "/tdn:$(BACKUPNAME)" /p:CommandTimeout=1200
Stop environment
The steps for starting or stopping the services are not so complex; you can even run them inline in the release pipeline (for starting, change stop to start).
<# stop environment #>
net stop W3SVC
net stop DynamicsAxBatch
net stop Microsoft.Dynamics.AX.Framework.Tools.DMF.SSISHelperService.exe
net stop MR2012ProcessService
net stop ReportServer
Swap Databases
Now all services are down, so we can swap the databases.
<# swap databases #>
$server = "."
Write-Output "Drop AXDBOLD"
Invoke-Sqlcmd -ServerInstance $server -Query "DROP DATABASE IF EXISTS AXDBOLD"
Write-Output "Rename AXDB to AXDBOLD"
Invoke-Sqlcmd -ServerInstance $server -Query "ALTER DATABASE AXDB MODIFY NAME = AXDBOLD"
Write-Output "Rename new DB to AXDB"
Invoke-Sqlcmd -ServerInstance $server -Query "ALTER DATABASE [$(BACKUPNAME)] MODIFY NAME = AXDB"
Write-Output "Create technical users"
Invoke-Sqlcmd -ServerInstance $server -Database AXDB -Query "the latest scripts MS"
I will not distribute the latest MS script statements, because they can change depending on the latest updates from MS. In general, they should be in line with https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/database/dbmovement-scenario-exportuat#update-the-database
Let's Synchronize
The next step is synchronizing the database; there are several ways to do it:
- Start Visual Studio and synchronize manually
- Use scripts that need admin privileges that you do not have
- Use the next one 😊
<# sync database #>
$command = Join-Path "K:\AosService\PackagesLocalDirectory" "\Bin\SyncEngine.exe"
$connectionString = "Data Source=""."";Initial Catalog=AXDB;Integrated Security=True"
$arguments = "-syncmode=fullall -binfolder=K:\AosService\PackagesLocalDirectory -metadatabinaries=K:\AosService\PackagesLocalDirectory -connect=""$connectionString"" -fallbacktonative=False -verbosity=Diagnostic -continueOnError=false"
Start-Process -FilePath $command -ArgumentList $arguments -Wait -PassThru -RedirectStandardError "SortError.txt" -RedirectStandardOutput "log.txt"
Management Reporter
And finally, reset the data mart. Normally you would again run into the admin privilege issue, but by adding the option -Scope Local we bypass this requirement.
<# reset management reporter #>
K:
cd K:\MROneBox\MRInstallDirectory
Import-Module .\Server\MRDeploy\MRDeploy.psd1 -Scope Local
Reset-DatamartIntegration -Reason OTHER -ReasonDetail "" -Force
Please be aware that the bacpac-to-Tier-1 restore is still a work in progress; we must update the scripts regularly based on the requirements from MS.
Additionally, we have an RSAT provisioning step running in the release pipeline; for more details please read XXXX

Please read the other related articles on this link https://kaya-consulting.com/category/lcs/

Which Agile methodology to use during ERP implementations
Agile SCRUM vs KANBAN
Do not use Agile SCRUM on customer implementations. With this one-liner we start this blog. Why not, and what should you use instead?
In general, we see these days that all ERP implementations are done in an Agile way; the waterfall methodology has become an exception. But there are various Agile methodologies, the most common ones are SCRUM and KANBAN.
Customer projects
The method to use is important in customer projects in our role as implementation partner (ERP/CRM implementations), but also for the ISV products we maintain ourselves as Kaya. The leading factor in determining which methodology to pick is not the methodology itself but how an organization works and is organized. Besides that, the necessary skillset for building and releasing software plays a role too. Those skills are usually less present at a customer compared to a Microsoft (or other software) partner.
ISV products
In case you are in full control of your resources & requirements, planning fixed (release) dates works fine. We see this at the implementation of ISV products. So, for those, Agile SCRUM would be a good choice.
However, on customer implementations, we as a partner are not always in full control. Some examples are:
- Decisions that requirements are out of scope (read: not before go-live)
- The capacity of customer resources is hard to plan; a partner has no control over it.
- The customer must accept the solution, and they usually need support on testing as they are not used to this.
So, the planning of a customer implementation needs much more flexibility. This flexibility can be better arranged with KANBAN. To have a closer look at what KANBAN is, use the following link: https://www.scrum.org/resources/kanban-guide-scrum-teams.
Differences between Kanban and Scrum
Most important differences between KANBAN and SCRUM are:
 | Scrum | Kanban |
Cadence | Regular, fixed-length sprints | Continuous flow |
Release methodology | At the end of each sprint | Continuous delivery |
Roles | Product owner, scrum master, development team | Stakeholder |
Key metrics | Velocity | Lead time, cycle time, WIP |
Change philosophy | Avoid changes during the sprint | Change can happen |
Let us start with the cadence. The functional confirmation of D365 F&S has a higher throughput than the development of code extensions. Besides the code extensions, we also have the ISV solutions and the MS monthly updates.
This brings me to the next point, the release methodology: a code release is different from replicating a setup in a target environment.
Then we have the roles. There are several Scrum roles that customers usually are not accustomed to, and the theory is that Kanban has no roles. In my opinion, this is not the case: there is a stakeholder, who defines what he needs and signs off when his wishes are realized. To a certain extent this resembles the Product Owner role.
The change philosophy of KANBAN is more flexible. In case you have not started on a card, you can always put it back in the backlog. We see this happen when the implementation gets disrupted by day-to-day topics or the customer changes scope or priority.
The impact of using KANBAN at a customer implementation has an additional dimension. This dimension is the division between Development and Operations. In general, we can say the Partner is development and the customer is operations. This helps to determine who is responsible for what.
The next diagram narrows this down even more clearly for extensions to be delivered during the implementation. The Partner (DEV) delivers a combined build into the asset library; it is up to the Customer (OPS) to continue from that point onwards.
Backlog | DEV | TEST | Acceptance | Production |
OPS | DEV | DEV | OPS | OPS |
Customer | Partner | Partner | Customer | Customer |
 | Code changes | Code changes | LCS asset | LCS asset |
 | | Manual setup | Manual setup | Manual setup |
Additions to Kanban
Besides the concepts of KANBAN and DEVOPS, Kaya also uses the phrase "requested today, delivered tomorrow":
- When a Developer is finished, the solution is automatically available the next day in TEST
- When a Tester is finished, he in general has a release candidate feature for the Customer
- The customer can almost cherry-pick the release candidate features. The ones the customer selects will be available the next day in the combined build in the LCS asset library.
Are we in control?
So, are we in control? Yes! But be careful, there is still a risk: the so-called work in progress (WIP). The WIP on Acceptance is extremely important; all acceptance tasks must be completed before going to production. Who is responsible? The Customer (stakeholder) is responsible, but of course supported by the partner (which task should be completed first, dependencies, risks, etc.). How can the Customer control it? Simply throttle the requests that are picked from the backlog.
For other DEVOPS tips have a look at https://kaya-consulting.com/category/lcs/

Can you sell me Regression Suite Automation Tool (RSAT)?
With the current fast appearance of new versions of D365 FO (10 times a year), testing can become more time-consuming. So, you must decide:
- Skip updates, take for example only the even or odd platform update numbers
- Do not test and have confidence in MS. But please be aware that at MS they are also mere humans, and humans make mistakes.
- Increase your Test team so more tests can be performed in a shorter time.
- Start using the Regression Suite Automation Tool.
In the past we chose option 1. We still released every month, but the MS release was not included in the releases with ISV and tailored solutions for the customer.
How we use it
When we start talking about RSAT with a customer, they always reply that there is no budget. This is not correct; there are always costs for testing. It is just that the current test cost is hidden within the organization of that customer, in the form of internal tests and/or issues arising in production environments. What we see is that the customer has a budget for his IT projects; the hours that the customer/key user spends testing are not part of this budget.
Then we see another point: the current task recordings are most of the time outdated (see the blog "Why should your Business Process Modeler (BPM) be up to date?"). By using RSAT you are automatically forced to bring your BPM library up to date.
RSAT can become even more powerful by using it as a provisioning tool. Below is an example of how we use it in a release pipeline. The first example is when we restore a database on a Tier-1 box (MS-hosted or customer-hosted).

The current RSAT solution needs to be installed on an actual machine instead of running as a cloud service. To achieve this, we installed it on a cloud-hosted devbox and turned on auto-shutdown to manage costs. Therefore you see steps for both starting and stopping said environment in the pipeline.
The current RSAT in DevOps only runs on a real machine where it is installed. So we installed it on a cloud-hosted devbox; to keep the Azure cost low, we start that VM on demand. In the parameters of the RSAT provisioning step we select the RSAT settings file pointing towards the environment we just restored the database of.
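One way to drive this from a pipeline is the RSAT command line on the devbox. The sketch below is only an illustration: the executable path, the command names and the settings file are assumptions that may differ per RSAT version, so verify them with the CLI's built-in help first.
<# hedged sketch - verify exe path and commands against your RSAT version #>
$rsatCli = "C:\Program Files (x86)\Regression Suite Automation Tool\Microsoft.Dynamics365.Tools.CLI.exe"
& $rsatCli usesettings "C:\RSAT\Settings\Tier1-Restored.settings"   # point RSAT to the freshly restored environment
& $rsatCli playbacksuite "Provisioning"                             # run the provisioning test suite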

The tasks we use RSAT for are:
- Enable users
- Start batches
- Update exchange settings
- Any other integration options
But RSAT is also reusable when you deploy a new release with your latest tailored customer code. The reason I use RSAT this way is that testing on a Tier-1 does not represent the real world; I want to test the package in the operational area and not in the development area.

And the last case is for updates from Microsoft. It is similar to the above; you only run the RSAT task on the MS release candidate environment.
So now that the business case has been explained, let us have a short look at the building blocks.
For starting and stopping the VM we use Azure CLI
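A minimal sketch of the two calls (resource group and VM name are placeholders; deallocating, rather than just stopping, is what actually releases the compute billing):
# Start the RSAT devbox before the RSAT tasks run
az vm start --resource-group MyResourceGroup --name MyRsatDevbox

# Deallocate it again at the end of the pipeline to stop the compute billing
az vm deallocate --resource-group MyResourceGroup --name MyRsatDevbox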

Agent Pools
The steps we use need different agent pools. I always prefer to use only the Azure Pipelines agents, but those have a limitation: in case a step needs programs or files that are located on the VM, the VM also needs to become part of an agent pool. The list below shows the additional agent pools we need at a customer implementation.

And when you realize you need a build pool on an existing cloud-hosted VM, or even on a local VM, you can still install it. Look for information at https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops
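As a rough sketch of registering such a VM as a self-hosted agent (organization URL, PAT, pool name and agent name are placeholders; the exact options are described on the page above):
# Run from the extracted agent folder (for example C:\agent) on the VM
.\config.cmd --unattended `
    --url https://dev.azure.com/YourOrganization `
    --auth pat --token "<personal-access-token>" `
    --pool "D365-Devbox" `
    --agent $env:COMPUTERNAME `
    --runAsService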
And now to wrap up: RSAT does not need a sales story. If you know how to use it, it will start selling itself.
For other DEVOPS tips have a look at https://kaya-consulting.com/category/lcs/
Is your Business Process Modeler updated?
I see a lot of MS Dynamics 365 implementations where the BPM in Lifecycle Services (LCS) is not used. They are running fine (at least that is what the clients are telling me). But in general, their implementation partner did not tell them all the benefits LCS has in store for them. Why should a partner bother to mention all the benefits? Improvements in testing and the user help experience do not result in more billable hours. So, is your Business Process Modeler updated?
Using the BPM in combination with DevOps supports more streamlined and efficient implementation projects. By defining the processes in the BPM library, there is a generic structure for tracking the project requirements / user stories / tasks. The BPM hierarchy can be synchronized with DevOps; as a result, the DevOps backlog and boards are populated with structured work items ready to be used by the project team.
In addition to tracking the progress of a project, the BPM and DevOps processes and work items can be used for automated testing (RSAT) and user documentation (online help recordings). Read more on these topics in our other blogs!
When defining the processes and work items it is important to understand how LCS and DevOps share the information. There are various approaches one can take regarding the level of detail to track in BPM vs DevOps.
Business Process Modeler
The BPM is a necessary evil – I must admit, it is a strange thing. Currently it is the only way to bring business processes, task recordings, automatic testing, and user documentation of Dynamics 365 together and re-use the information between the different areas. At the same time, it is not extremely user friendly when compared to for example DevOps.
Recommendations
- The model name must be short; the name is reused everywhere later on in DevOps. I always call it simply BPM, so later the titles of my epics, features and tasks start with [BPM]
- The existing BPM models from MS are too big and complex; no key user will ever be able to put the task recording in the correct place.
- Create your own BPM
- Use 2 layers
- The 3rd layer is an additional point for adding the task recordings. A BPM could look like the picture below; the customer has 2 task recordings on how to create a customer

The BPM can then be synchronized with DevOps – the above hierarchy will generate DevOps work items with type “Epic” and/or “Feature” based on the synchronization parameters.
Once the epics and features have been synchronized, it is possible to add requirements per process node via the button “Add requirement”; the work item type of a requirement in DevOps is also defined in the synchronization settings of LCS and depends on the DevOps project process template.
All links to DevOps are visible in LCS on the BPM nodes, including the test cases. The considerations below are important when deciding which type of work items to track in which application.

BPM and DEVOPS
How does it integrate?
- The synchronization between LCS and DevOps is one-directional from LCS BPM to DevOps, as a result:
- Any updates made in DevOps will not be reflected in the BPM library
- All work items that need to be available in the BPM need to be created via BPM and synchronized to DevOps
- Business processes (epics, features) need to be created in BPM, not in DevOps
- The mapping between BPM and DevOps is done in LCS and the options available depend on the project template type used for the DevOps project:
- when “Agile”, a BPM requirement type will generate a work item with type “User story” in DevOps
- when “Scrum”, a BPM requirement type will generate a work item with the type “Task” in DevOps
- If the DevOps template is “Custom” (including when it is derived from Agile), a BPM requirement type will generate a work item with the type “Task” in DevOps
- Test cases are linked to BPM process nodes (epic/feature in DevOps), not to the requirements (user story/task in DevOps); a maximum of one test case can be linked to one process
- The one-directional synchronization rule applies also here
- If the test case should have a link to the requirement / user story / task, this can be added manually in DevOps
The DevOps process templates can be defined via DevOps Organization setup screen. The standard process templates are pre-created (Basic, Agile, Scrum, CMMI) and can be extended (KANBAN below).

The process template must be linked to the DevOps project via project settings screen:

The synchronization settings between LCS and DevOps are configured via LCS project settings tab “Visual Studio Team Services”. The options available in this screen depend on the process template used for the selected project.
In case of Agile process template, it is possible to select “User story” in the column “VSTS work item type” for LCS work item sub-type “Requirement”:

In case of Custom process template, it is not possible to select “User story” in the column “VSTS work item type” for LCS work item sub-type “Requirement”. The only available options are “Task” and “Bug”:

For additional details regarding the different DevOps process templates and how these are intended to be used, please refer to the Microsoft documentation.
Conclusion
Considering the usability of LCS BPM and DevOps and the synchronization limitations our recommendation is to use BPM for 2-3 levels of processes and use DevOps for tracking the user stories / requirements / tasks. With this approach, the end users and key users can add new user stories / requirements via DevOps and link them to the proper BPM nodes (features / epics) inside DevOps.
When a new process is added, this must first be created via LCS BPM and synchronized to DevOps to make it available as a parent on new user stories / requirements.
Good luck on your improved user experience!
Performance tuning development VM
It is a general complaint of any developer: why is it so SLOOOWWWWW? And yes, it is slow. But there are performance tuning tricks. Slowness is influenced by 3 things:
- Startup of the VM
  - This is also the startup of:
    - Batches
    - Management Reporter
    - Windows security
    - Windows update
    - Windows maintenance
- SQL
  - There is no SQL maintenance
- Disk
You could disable the batch service & Management Reporter; that will help, and a minimal way to do it is sketched right below. But the real issue is how the disks of that VM are configured. In general, it looks like the 16 lazy disks shown below.
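A minimal sketch of disabling those two services, assuming the default service names on a D365FO development VM (the same DynamicsAxBatch and MR2012ProcessService names used in the stop/start scripts of the Tier-1 restore post); set them back to Automatic when you need batches or Management Reporter again:
# Stop and disable the batch service and the Management Reporter process service
Stop-Service DynamicsAxBatch, MR2012ProcessService -Force
Set-Service DynamicsAxBatch -StartupType Disabled
Set-Service MR2012ProcessService -StartupType Disabled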

Now you can change these to faster disks, but please be aware of the prices. Also, these would be Managed Disks: they cost a fixed amount every month, so stopping the VM will not reduce the costs below.
Size | Premium SSD (€/month) | Standard SSD (€/month) | Standard HDD (€/month) |
64 GiB | 9.47 | 4.05 | 2.54 |
128 GiB | 18.29 | 8.10 | 4.97 |
256 GiB | 35.26 | 16.20 | 9.56 |
512 GiB | 67.92 | 32.39 | 18.36 |
How does the disk migration work? The first step is the migration to Managed Disks.
Managed Disks simplify disk management for VMs by managing the Storage accounts behind the scenes. Managed Disks also provide granular access control with RBAC and better reliability for VMs in an Availability Set. Learn more about the benefits of using Managed Disks
Source unmanaged disks are not deleted after the migration. Managed Disks are created by making a copy of the source disks. You can revert back to unmanaged disks by creating a new VM with the source disks. Configuration of the VMs is not changed after the migration. Learn more about migrating to Managed Disks
In order to complete migration, we will need to start the virtual machine. Once migration is complete, you may stop the virtual machine.
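You can do this from the portal, or script it. A hedged sketch with the Az PowerShell module (resource group and VM name are placeholders; as mentioned above, the conversion starts the VM again, and you can stop it afterwards):
# Deallocate the VM, then convert its unmanaged disks to managed disks
Stop-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyDevBox" -Force
ConvertTo-AzVMManagedDisk -ResourceGroupName "MyResourceGroup" -VMName "MyDevBox"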
Next, you can change all the disks.

But hold on: 16 disks multiplied by 18.29 makes about 292 euro per month on top of running the VM. That is very expensive in relation to how you use it. So, the next step is:
- Archive your code & database.
- Delete the VM
- Deploy a new one

And of course, do not forget to set up auto-shutdown (see the sketch below) and the Azure software cost reductions.
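A minimal sketch of enabling auto-shutdown from the Azure CLI (resource group, VM name and time are placeholders; the time is interpreted as UTC):
# Shut the devbox down automatically every day at 19:00 UTC
az vm auto-shutdown --resource-group MyResourceGroup --name MyDevBox --time 1900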
Dynamics 365 LCS Tricks – install Knowledge Base (KB) article
LCS is the place for getting all the information from Microsoft related to your Dynamics 365 for Operations projects. This Web portal also allows you to get your MS fixes to your environment and knows which Knowledge Base (KB) articles have been applied.
In the next image, for instance, we can see that there are 167 fixes applicable. So, let’s start cherry picking.
Dynamics 365 LCS tricks – platform update 4 experience
Dynamics 365 LCS Tricks – mass install packages
Deploying packages is a time-consuming thing. MS has improved the speed, but we as partners can also help, by merging the packages. Good merge combinations are binary updates and X++ updates, but your own code can also be combined with your customizations on the Retail (SDK) part.
Until now I have not been able to merge ISV packages, so those must be deployed separately. However, there is a quick and dirty trick for a local VM.