Debugging WCF – The underlying connection was closed: An unexpected error occurred on a receive.

This is another installment in the WCF vs. WTF article series. I have mentioned before that I am quite a fan of WCF, but having said that, I feel there are a few areas for improvement, such as sending method comments along with the service contract discovery and more streamlined debugging, to name a couple.

The truth is that WCF actually provides some decent tools for debugging service issues. However, the tools are a bit cumbersome to use, and the bigger issue is that the exception that bubbles up to the top is not necessarily informative about what actually caused the problem, so you have to dig deeper.

So… here is a pretty generic client error:

The underlying connection was closed: An unexpected error occurred on a receive. –    at System.Net.HttpWebRequest.GetResponse() at  System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
An error occurred while receiving the HTTP response to http://localhost:8731/Design_Time_Addresses/SomeService/. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details. 

So this basically says that something went wrong on the server side and the service closed the connection. I should also note that it is extremely useful to have the following option configured in the service behavior, even though it promises a bit more than it delivers:

 

<serviceBehaviors>
   <behavior name="SomeServiceBehavior">      
      <serviceDebug includeExceptionDetailInFaults="true"/>
   </behavior>
</serviceBehaviors>
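Note that a named behavior only takes effect once the service element references it. A minimal sketch of that hookup (the service and namespace names here are placeholders, not from the original config):

```xml
<services>
  <!-- behaviorConfiguration must match the behavior's name attribute -->
  <service name="SomeNamespace.SomeService"
           behaviorConfiguration="SomeServiceBehavior">
    <!-- endpoints defined as usual -->
  </service>
</services>
```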

I am generally pretty good at sticking an UnhandledException handler into my code, but was surprised to find that nothing was logged on the service to correspond to this client error. So I took the next step, attached Visual Studio to the service, and executed the client service call once again. Again to my surprise, the breakpoint in the call got hit, I stepped through the method right past the ‘return’ statement, and saw with my own eyes that a valid method response was being generated. Yet the client crashed again…

When this happens (i.e. you *know* the service call got executed, but the client crashed), it typically means something went wrong in the WCF plumbing. To figure out what exactly went wrong, WCF provides us with a plumbing stethoscope: service diagnostic listeners. As with anything in WCF, there seem to be many ways to configure them, but essentially you need to tell the service which things to listen for and where to log them. A couple of sections need to be added to the config file:

 

<system.diagnostics>
  <sources>
    <source name="System.ServiceModel" switchValue="Warning, ActivityTracing" 
            propagateActivity="true">
      <listeners>
        <add type="System.Diagnostics.DefaultTraceListener" name="Default">
          <filter type=""/>
        </add>
        <add name="ServiceModelTraceListener">
          <filter type=""/>
        </add>
      </listeners>
    </source>
    <source name="System.ServiceModel.MessageLogging" 
            switchValue="Warning, ActivityTracing">
      <listeners>
        <add type="System.Diagnostics.DefaultTraceListener" name="Default">
          <filter type=""/>
        </add>
        <add name="ServiceModelMessageLoggingListener">
          <filter type=""/>
        </add>
      </listeners>
    </source>
  </sources>
  <sharedListeners>
    <add initializeData="c:\wcfLogs\SomeService\app_tracelog.svclog" 
         type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, 
         Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ServiceModelTraceListener" 
         traceOutputOptions="Timestamp">
      <filter type=""/>
    </add>
    <add initializeData="c:\wcfLogs\SomeService\app_messages.svclog" 
         type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, 
         Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ServiceModelMessageLoggingListener" 
         traceOutputOptions="Timestamp">
      <filter type=""/>
    </add>
  </sharedListeners>
</system.diagnostics>

This snippet creates two listeners (message logging and service tracing) and specifies where their output should be stored. Then, inside the serviceModel configuration, we can also specify what types of messages to log. For example:

 

<system.serviceModel>
    <diagnostics>
      <messageLogging logMalformedMessages="true" logMessagesAtTransportLevel="true"/>
    </diagnostics>
</system.serviceModel>
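The messageLogging element accepts a few more knobs that can be handy during a heavier debugging session. A sketch with illustrative values (tune them to your own scenario):

```xml
<diagnostics>
  <messageLogging logMalformedMessages="true"
                  logMessagesAtTransportLevel="true"
                  logEntireMessage="true"
                  maxMessagesToLog="3000"
                  maxSizeOfMessageToLog="65536"/>
</diagnostics>
```

Be careful with logEntireMessage in production, since it writes message bodies (potentially sensitive data) to the log files.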

After adding the listeners, I ran the client again. Again it crashed, but this time I had some logs created in ‘c:\wcfLogs\SomeService’.

Note: The directory where the logs will go has to exist – it will not be automatically created!

Checking out the logs (in the Microsoft Service Trace Viewer), more information came to light. The Trace Viewer breaks the trace up into service activities (e.g. construction, the service open call, a service action request) and, nicely enough, highlights in red where the errors occur.

In my case, in the ‘Process action…’ activity I saw that the service was ‘Throwing an exception’, which presumably it was also catching, since my UnhandledException handler did not get hit. Clicking on the exception revealed these details:

Maximum number of items that can be serialized or deserialized in an object graph is ‘65536’. Change the object graph or increase the MaxItemsInObjectGraph quota.

Ok, now this seems more informative than ‘underlying connection was closed’… I do wonder why, instead of closing the connection, the service could not communicate this exception to the client, given the includeExceptionDetailInFaults configuration… But anyway, at least now I had something to work with.

The error above says that the number of objects in the data returned by the service call was too large to be serialized. I wonder why they had to limit it… In any case, I changed the configuration in the service behavior to resemble this:

 

<behaviors>
  <serviceBehaviors>
    <behavior name="SomeServiceBehavior">
      <serviceDebug includeExceptionDetailInFaults="true"/>
      <dataContractSerializer maxItemsInObjectGraph="2147483647"/>
    </behavior>
  </serviceBehaviors>
</behaviors>

No need to skimp on the maxItemsInObjectGraph, I say!

I restarted the service, ran the client… tun dun dun! It crashed again. Well, this time, the exception on the client looked like this:

Maximum number of items that can be serialized or deserialized in an object graph is ‘65536’. Change the object graph or increase the MaxItemsInObjectGraph quota.  –    at System.Runtime.Serialization.XmlObjectSerializerContext.IncrementItemCount(Int32 count) at ReadSisyphusUriRecordFromXml(XmlReaderDelegator , XmlObjectSerializerReadContext , XmlDictionaryString[] , XmlDictionaryString[] )…..

What? It looks the same as the exception above!? Yep, except this time it is on the client side and occurs around the same place where the “underlying connection was closed…” exception was happening before. So basically this says that the client deserializer also needs to be reconfigured. On the client side, this configuration needs to be specified on the behavior associated with the endpoint; on the server side, it was a behavior associated with the service (i.e. within serviceBehaviors). A little tricky. The new client configuration change looked something like this:

 

<behaviors>
  <endpointBehaviors>
    <behavior name="SomeServiceClientEndpointBehavior">
      <dataContractSerializer maxItemsInObjectGraph="2147483647"/>
    </behavior>
  </endpointBehaviors>
</behaviors>

This ‘behaviors’ definition was added within the <system.serviceModel> tag, and then for the endpoint itself I added a behaviorConfiguration="SomeServiceClientEndpointBehavior" attribute.
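Putting the client side together, the relevant section of the client config ends up looking something like this (the endpoint address, binding, and contract names here are placeholders, not from my actual project):

```xml
<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="SomeServiceClientEndpointBehavior">
        <dataContractSerializer maxItemsInObjectGraph="2147483647"/>
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <client>
    <!-- behaviorConfiguration ties the endpoint to the behavior above -->
    <endpoint address="http://localhost:8731/Design_Time_Addresses/SomeService/"
              binding="wsHttpBinding"
              contract="ISomeService"
              behaviorConfiguration="SomeServiceClientEndpointBehavior"/>
  </client>
</system.serviceModel>
```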

That fixed the problem for me. Probably more than you wanted to know, but I hope this helps.

Cheers!