Sitecore on Azure PaaS File I/O errors

During the move to Azure PaaS there have been a few occasions where we've noticed a series of errors being logged about file I/O issues, which often end with instances dying behind the PaaS load balancer.

The error will look something like this:

Sitecore on Azure PaaS File I/O errors "Too many changes at once in directory:D:\home\site\wwwroot\"

Each time I've had to trawl through the Azure Kudu PowerShell file system view looking for where on earth files are being written, so I thought I'd share a couple of (seemingly obvious upon reflection) places where I've had to work around this.

Logging - ensure you're using Application Insights and that all logging is routed to AI and definitely not to the file system.
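
For reference, here's a rough sketch of what you want the effective log4net configuration to end up looking like, assuming the Application Insights appender that ships with the Sitecore Azure Toolkit (the appender type below is taken from the toolkit's Sitecore.Cloud.ApplicationInsights.config and may differ between versions), with no file-based appenders referenced from the root logger:

<log4net>
  <!-- Appender provided by the Sitecore Azure Toolkit; the type name may vary by version -->
  <appender name="ApplicationInsightsAppender"
            type="Sitecore.Cloud.ApplicationInsights.Logging.ApplicationInsightsAppender, Sitecore.Cloud.ApplicationInsights">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%4t %d{ABSOLUTE} %-5p %m%n" />
    </layout>
  </appender>
  <root>
    <priority value="INFO" />
    <!-- No file-based appender-refs (e.g. LogFileAppender) here: everything goes to Application Insights -->
    <appender-ref ref="ApplicationInsightsAppender" />
  </root>
</log4net>

Remember that on PaaS the data folder sits under the web root, so any file appender left in place ends up writing into the watched wwwroot tree. The same goes for any custom appenders or third-party logging in your solution.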

Diagnostic logging - disable this for production environments and only enable it if you have a need for it (it's enabled by default, which caught me out). See here: https://sitecore.stackexchange.com/questions/6384/filewatcher-error-internal-buffer-overflow

On a local instance these are the logs/artefacts in question (shown below); they're controlled by the \App_Config\Include\Sitecore.Diagnostics.config file, so rename this to ".disabled" by default for PaaS environments.

[Image: Azure PaaS Diagnostic logs folder]

Unicorn Serialisation - if you're using it, ensure that nothing being tracked by Unicorn in the content tree (or anywhere else) is something a content author might be updating (and therefore writing back to the Unicorn serialisation files on disk). Seems obvious, but a simple bit of configuration somewhere that's not set quite right can have you scratching your head...
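
As a sketch of the idea (the configuration name and item paths here are made up for illustration), keep the Unicorn predicate limited to developer-owned items and leave author-edited content out of it, so nothing under /sitecore/content gets written back to disk on a running PaaS instance:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <unicorn>
      <configurations>
        <configuration name="MyProject.Developer" description="Developer-owned items only">
          <predicate type="Unicorn.Predicates.SerializationPresetPredicate, Unicorn">
            <include name="Templates" database="master" path="/sitecore/templates/MyProject" />
            <include name="Renderings" database="master" path="/sitecore/layout/Renderings/MyProject" />
            <!-- Deliberately no include for author-edited content under /sitecore/content -->
          </predicate>
        </configuration>
      </configurations>
    </unicorn>
  </sitecore>
</configuration>

Many teams also avoid deploying Unicorn to production roles altogether, which sidesteps the problem entirely.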

It may seem like everything is OK, but as soon as your PaaS-hosted site is put under load you'll quickly find that file system writes cause things to go wrong, so it's worth confirming your configuration.