I support a system that uses third-party software. After a recent application upgrade, I began receiving sporadic 8623 errors. They started at just once every few days and quickly escalated to three or four per day. The error was 8623, Severity 16, State 1:

The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

I understood that a complex query was to blame, but I struggled to find its source. I was on SQL Server 2008 R2, so the extended events available in SQL Server 2012 were not an option for debugging. (If you are on that version, I recommend this article by Jason Brimhall: http://jasonbrimhall.info/2014/01/02/day-9-queries-going-boom/) My only real option was to capture the error in a trace.

The error was occurring multiple times per day, but not at consistent times, so I set up a server-side trace and waited for it to occur. I won't go into much detail on server-side traces, as there are plenty of examples on various websites. I created my Profiler trace and exported the definition, then made minor changes to the script in SSMS before starting the trace: I added a @filecount parameter (50) and a @stoptime parameter (8 hours in the future). This gave me 50 rolling trace files to review and kept the trace from consuming excessive space on my server. I then changed the sp_trace_create call to include these new parameters (see the first sketch at the end of this post).

I executed the script to start the trace and waited... and waited... and waited. Once the error occurred, I stopped the trace so I wouldn't lose the data I had collected. I first found my TraceID and then stopped the trace (second sketch below).

Next, I queried the trace files looking for the error (third sketch below). This showed me the error row but not much more information (hostname, login name, and database name were removed for security purposes). I looked at the surrounding rows, before and after, and still could not find the issue. Since I had the SPID, though, I was able to search through all the trace files for that SPID (fourth sketch below). I found the offending row and pulled its TextData. The query had run for almost two minutes and ended just after the 8623 error, and once I could see the full statement, I was able to spot the issue quickly in the WHERE clause.

At that point I had found the source of the error and had code to send to the vendor. For my final cleanup, I closed the trace and manually deleted the .trc files (last sketch below).

SQL Server doesn't provide an easy way to identify these rogue queries, but the data is there if we know how to extract it. Thank you for reading. Feel free to leave a comment below, and stay tuned for my next post in the coming weeks.
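First, a minimal sketch of how the header of an exported Profiler script can be modified to add the @stoptime and @filecount parameters. The file path, max file size, and error handling here are illustrative placeholders, not the actual values from my trace definition; only the @stoptime and @filecount additions are the point.

```sql
-- Sketch: modified header of the exported trace script.
DECLARE @rc INT;
DECLARE @TraceID INT;
DECLARE @maxfilesize BIGINT;
DECLARE @stoptime DATETIME;

SET @maxfilesize = 50;                          -- MB per rollover file (illustrative)
SET @stoptime = DATEADD(HOUR, 8, GETDATE());    -- stop the trace automatically in 8 hours

-- @options = 2 enables TRACE_FILE_ROLLOVER; the final argument is @filecount (50 rolling files)
EXEC @rc = sp_trace_create @TraceID OUTPUT, 2, N'D:\Traces\error_8623_capture',
           @maxfilesize, @stoptime, 50;
IF (@rc != 0) RAISERROR('sp_trace_create failed with return code %d.', 16, 1, @rc);

-- ...the exported sp_trace_setevent / sp_trace_setfilter calls follow here...

-- Start the trace (status 1 = start)
EXEC sp_trace_setstatus @traceid = @TraceID, @status = 1;
```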
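Second, finding and stopping the trace once the error had fired. This sketch assumes the trace file name used above; the trace ID comes back from sys.traces and is passed to sp_trace_setstatus.

```sql
-- Find the ID of the running trace (path assumes the file name used when creating it)
SELECT id, path, status, start_time
FROM sys.traces
WHERE path LIKE N'%error_8623_capture%';

-- Stop the trace; status 0 = stop (replace 2 with the id returned above)
EXEC sp_trace_setstatus @traceid = 2, @status = 0;
```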
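Third, reading the trace files back and looking for the 8623 error. A sketch under the same path assumption; passing DEFAULT as the second argument to fn_trace_gettable reads all of the rollover files.

```sql
-- Read all rollover files and pull the rows where the 8623 error was raised
SELECT StartTime, SPID, Error, HostName, LoginName, DatabaseName, TextData
FROM fn_trace_gettable(N'D:\Traces\error_8623_capture.trc', DEFAULT)
WHERE Error = 8623
ORDER BY StartTime;
```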
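Fourth, searching all of the trace files for the SPID taken from the error row. The SPID value below is a placeholder; the Duration column in trace files is stored in microseconds, so it is divided down to milliseconds for readability.

```sql
-- Pull everything the offending session did around the time of the error
SELECT StartTime, EndTime, EventClass, Duration / 1000 AS DurationMs, TextData
FROM fn_trace_gettable(N'D:\Traces\error_8623_capture.trc', DEFAULT)
WHERE SPID = 123          -- SPID taken from the 8623 error row
ORDER BY StartTime;
```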
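Finally, the cleanup. Status 2 closes the trace and removes its definition from the server; the .trc files themselves remain on disk and still have to be deleted by hand.

```sql
-- Close the trace and delete its definition (status 2); use the id found in sys.traces
EXEC sp_trace_setstatus @traceid = 2, @status = 2;

-- The rollover .trc files stay on disk and were deleted manually afterwards
```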