SharePoint Troubleshooting Tools & Techniques
Learn about the tools and techniques that Microsoft Premier Support engineers use to gather data to troubleshoot and resolve issues. This session includes an overview of the troubleshooting process used to complete a Root Cause Analysis, and a review and demo of the tools available for different needs, including:
-- Diagnostic Logging
-- Data Collection
-- Data Analysis
-- Debugging



  • Event logs in SharePoint serve the same purpose as they do in any operating system or application. Careful monitoring of event logs can help you predict and identify the sources of a problem. Event logs such as the Application and System logs can help diagnose problems with SharePoint. Regardless of the source, each event log entry stores the following details: Date, Time, Type, User, Computer, Source, Category, and Event ID. For example, the SharePoint Health Analyzer detects a missing update.
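As a quick illustration (not part of the original session), an event log exported to CSV can be summarized with a short script. Python is used here purely for illustration; the sample rows and their values are hypothetical, mirroring the fields listed above:

```python
import csv
import io
from collections import Counter

def count_events_by_source(csv_text):
    """Tally exported event-log rows per Source column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["Source"] for row in reader)

# Hypothetical export using the columns listed above.
sample = (
    "Date,Time,Type,User,Computer,Source,Category,EventID\n"
    "2010-06-01,09:15:02,Error,CONTOSO\\svc_sp,SP01,SharePoint Health Analyzer,None,2137\n"
    "2010-06-01,09:15:07,Warning,CONTOSO\\svc_sp,SP01,W3SVC,None,1309\n"
    "2010-06-01,09:16:11,Error,CONTOSO\\svc_sp,SP01,SharePoint Health Analyzer,None,2137\n"
)

counts = count_events_by_source(sample)
```

A repeated Source with the same Event ID (here, the Health Analyzer) is often the first hint of where to dig.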
  • The IIS log file format is a fixed, ASCII text-based format that is configured by default to write data in the World Wide Web Consortium (W3C) extended log file format. The IIS log records the following data for a SharePoint web server. Demo: show which columns are not selected by default, show how to change the default log location, and show how to identify the logs for a particular web site based on its ID.
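To make the W3C extended format concrete, here is a hedged Python sketch that parses the #Fields header and finds the slowest .aspx request, roughly what the Log Parser queries in the demo compute; the sample log lines are invented:

```python
def parse_w3c_log(lines):
    """Yield dict rows from a W3C extended log, using its #Fields header."""
    fields = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # column names follow the directive
        elif line and not line.startswith("#"):
            yield dict(zip(fields, line.split()))

# Invented sample; real logs live under C:\inetpub\logs\LogFiles\W3SVC<siteID>\.
sample_log = [
    "#Software: Microsoft Internet Information Services 7.5",
    "#Fields: date time cs-uri-stem sc-status time-taken",
    "2010-06-01 09:15:02 /Pages/Default.aspx 200 1834",
    "2010-06-01 09:15:03 /style.css 200 12",
    "2010-06-01 09:15:09 /Pages/Default.aspx 200 412",
]

rows = list(parse_w3c_log(sample_log))
aspx = [r for r in rows if r["cs-uri-stem"].lower().endswith(".aspx")]
slowest = max(aspx, key=lambda r: int(r["time-taken"]))
```

This is a sketch only; for production analysis the Log Parser queries shown in the demo are the supported route.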
  • Demo steps: Open IIS Manager. Show how to check the IIS logs configuration. Show how to map an IIS web site ID to its IIS log folder. Start PowerShell as admin:
    cd "C:\Program Files (x86)\Log Parser 2.2\"
    Time taken:
    ./LOGPARSER.EXE "SELECT TOP 10 cs-uri-stem, MAX(time-taken) AS MaxTime, AVG(time-taken) AS AvgTime FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log WHERE EXTRACT_EXTENSION(TO_LOWERCASE(cs-uri-stem)) = 'aspx' GROUP BY cs-uri-stem ORDER BY MaxTime DESC"
    Top 10 images by size:
    ./LOGPARSER.EXE "SELECT TOP 10 STRCAT(EXTRACT_PATH(TO_LOWERCASE(cs-uri-stem)),'/') AS RequestedPath, EXTRACT_FILENAME(TO_LOWERCASE(cs-uri-stem)) AS RequestedFile, COUNT(*) AS Hits, MAX(time-taken) AS MaxTime, AVG(time-taken) AS AvgTime, MAX(sc-bytes) AS BytesSent FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log WHERE (EXTRACT_EXTENSION(TO_LOWERCASE(cs-uri-stem)) IN ('gif';'jpg';'png')) AND (sc-status = 200) GROUP BY TO_LOWERCASE(cs-uri-stem) ORDER BY BytesSent, Hits, MaxTime DESC"
    Top 10 pages by size:
    ./LOGPARSER.EXE "SELECT TOP 10 STRCAT(EXTRACT_PATH(TO_LOWERCASE(cs-uri-stem)),'/') AS RequestedPath, EXTRACT_FILENAME(TO_LOWERCASE(cs-uri-stem)) AS RequestedFile, COUNT(*) AS Hits, MAX(time-taken) AS MaxTime, AVG(time-taken) AS AvgTime, MAX(sc-bytes) AS BytesSent FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log WHERE (EXTRACT_EXTENSION(TO_LOWERCASE(cs-uri-stem)) IN ('aspx')) AND (sc-status = 200) GROUP BY TO_LOWERCASE(cs-uri-stem) ORDER BY BytesSent, Hits, MaxTime DESC"
    Average time per user:
    ./LOGPARSER.EXE "SELECT TOP 20 cs-username AS UserName, AVG(time-taken) AS AvgTime, COUNT(*) AS Hits FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log WHERE cs-username IS NOT NULL GROUP BY cs-username ORDER BY AvgTime DESC"
    Top 10 URLs:
    ./LOGPARSER.EXE "SELECT TOP 10 STRCAT(EXTRACT_PATH(cs-uri-stem),'/') AS RequestPath, EXTRACT_FILENAME(cs-uri-stem) AS RequestedFile, COUNT(*) AS TotalHits, MAX(time-taken) AS MaxTime, AVG(time-taken) AS AvgTime, AVG(sc-bytes) AS AvgBytesSent FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log GROUP BY cs-uri-stem ORDER BY TotalHits DESC"
  • Setup logs are usually stored in the %temp% folder of the user initiating the setup. There are three logs that you need to examine for setup-related issues: the SharePoint setup log, the SQL Server Express wrapper log, and the uninstall log.
    SharePoint setup log: You can use the SharePoint setup log to examine setup failures. This log is named in the format Microsoft SharePoint Foundation 2010 Setup (YYYYMMDD-HHMMSS-SSS).log, where YYYYMMDD is the date and HHMMSS-SSS is the time (hours in 24-hour clock format, minutes, seconds, and milliseconds). The SharePoint Server setup log follows the same naming standard, but the root name is slightly different in order to indicate the difference in product type: SharePoint Server Setup (YYYYMMDD-HHMMSS-SSS).log. It contains the list and status of the MSI packages installed for SharePoint. If any errors are encountered during the setup of these MSI packages, the SharePoint setup log file is the one that provides the most relevant information on how to proceed. If all prerequisite checks succeed, SharePoint setup continues to install the relevant MSI packages. The MSI packages are chained for both install and uninstall, one after the other, by a component known as the catalyst. The catalyst also logs the setup details of each MSI package in the setup log. The following MSI packages are recorded in the setup log: ACCSRV.en-us, DLC.en-us, IFS.en-us, LHPSRV.en-us, OSRV.en-us, PPSMA.en-us, Search.en-us, SPS.en-us, VisioServer.en-us, Wasrv.en-us, WDSRV.en-us, Wss.en-us, and XLSERVER.en-us.
  • Each MSI package is either listed or associated with the install packages available in the install folder of SharePoint. After each MSI is installed, you can view the next MSI queued for install and its install status. For example, an excerpt from a SharePoint setup log can indicate that the wssmui.msi package was installed successfully and that the next MSI, sts, is chained for installation by the catalyst. After installing the MSIs, setup writes the required registry keys and runs the post-setup configuration wizard, if that option was selected.
    SQL Server Express wrapper log: Depending on the type of installation performed, there may be a SQL Server Express wrapper log file. This log is generated when a SQL Server Express installation is performed as part of the installation of SharePoint, and only during a standalone installation. It is stored in the temp directory of the user who initiates setup (%USERPROFILE%\AppData\Local\Temp).
    Uninstall log: You can trigger a SharePoint uninstallation either by using the Add or Remove Programs icon in Control Panel or by running setup.exe from the install location and selecting Remove. When performing an uninstall, setup creates a new log, SetupExe(xxxxxxxxxx).log, under the %temp% folder. You can examine this uninstall log to verify that each MSI package chained by the catalyst is uninstalled completely.
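As an illustrative aside, the timestamped naming convention described above means the most recent setup log can be picked out programmatically. A Python sketch (the file names below are made up for the example):

```python
import re

def newest_setup_log(names):
    """Pick the most recent SharePoint setup log by the timestamp in its name."""
    pattern = re.compile(r"\((\d{8})-(\d{6})-(\d{3})\)\.log$")
    stamped = []
    for name in names:
        m = pattern.search(name)
        if m:
            # (YYYYMMDD, HHMMSS, SSS) tuples sort chronologically as strings.
            stamped.append((m.groups(), name))
    return max(stamped)[1] if stamped else None

logs = [
    "SharePoint Server Setup(20100601-091502-123).log",
    "SharePoint Server Setup(20100603-141500-007).log",
    "SetupExe(0000000001).log",  # uninstall log; different naming, skipped
]
```

Sorting by the embedded timestamp is more reliable than file modification time, which copy operations can reset.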
  • SharePoint 2010, just like MOSS 2007, has a mandatory post-install configuration. The Post-Setup Configuration tool is also known as PSConfig.exe and PSConfigUI.exe. The purpose of the tool is to create a configuration database and perform all first-time activities. Subsequently, the tool provides options to change or repair the existing configuration settings. The configuration logs are stored at %CommonProgramFiles%\Microsoft Shared\web server extensions\14\LOGS. The logs are named PSCDiagnostics_MM_DD_YYYY_HH_MM_SS_MS_XXX_XXXXXXXX.log. The log files contain information about what happens during the post-setup configuration process. Because of the amount of information presented in the log and its formatting, it is easier to review the logs through Microsoft Excel or a third-party log file analyzer such as Log Parser (which we will review later in this module). The logs are not created when the Windows PowerShell cmdlets are used.
  • You can search through the logs for lines that contain "ERR", as these indicate errors in the process. Each error is recorded in the log file, presented on the screen, and written to the event log. However, to get the most information about an error, you will need to review the configuration log itself.
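A minimal sketch of the "search for ERR" technique, in Python for illustration; the sample lines are hypothetical approximations of PSCDiagnostics output:

```python
def find_errors(log_lines):
    """Return (line number, text) pairs for lines containing 'ERR'."""
    return [(i, line) for i, line in enumerate(log_lines, start=1) if "ERR" in line]

# Invented sample lines; real files match PSCDiagnostics_*.log in the 14\LOGS folder.
sample = [
    "06/01/2010 09:15:02  INF  Entering function Common.BuildExceptionMessage",
    "06/01/2010 09:15:02  ERR  Failed to create the configuration database.",
    "06/01/2010 09:15:03  INF  Leaving function",
]

errors = find_errors(sample)
```

The same filter is what the Log Parser query later in this module (`WHERE Field3 LIKE 'ERR%'`) expresses in SQL form.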
  • Unified Logging Service (ULS) logs are a robust logging system first introduced in MOSS. By default, ULS logs contain events from all categories; there are over 266 categories of events in all. The following table lists some of the common events that you will use for troubleshooting SharePoint. The ULS logs, also referred to as trace logs in the UI, are created by the ULS. These logs are stored in the %CommonProgramFiles%\Microsoft Shared\web server extensions\14\LOGS folder with the name format ServerName-Date-Timestamp.log. You can change the location of these log files by using the http://CentralAdminSite:Port/_admin/metrics.aspx page. However, the location must be consistent across all servers in your farm. A ULS log file contains nine columns: Timestamp, PID, TID, Product, Category, EventID, Level, Event Message, and Correlation.
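Since ULS trace logs are tab-delimited with the nine columns listed above, grouping entries by Correlation ID (a common first step when chasing one failing request across services) can be sketched as follows. Python is used for illustration, and the sample rows are invented:

```python
from collections import defaultdict

ULS_COLUMNS = ["Timestamp", "PID", "TID", "Product", "Category",
               "EventID", "Level", "Message", "Correlation"]

def group_by_correlation(lines):
    """Group tab-separated ULS trace rows by their Correlation ID."""
    groups = defaultdict(list)
    for line in lines:
        row = dict(zip(ULS_COLUMNS, line.rstrip("\n").split("\t")))
        groups[row.get("Correlation", "")].append(row)
    return groups

# Two invented rows sharing one correlation ID.
sample = [
    "06/01/2010 09:15:02.11\tw3wp.exe (0x1A2B)\t0x0C3D\tSharePoint Foundation\tGeneral\t8nca\tVerbose\tApplication error\tc0de1234-0000-0000-0000-000000000001",
    "06/01/2010 09:15:02.12\tw3wp.exe (0x1A2B)\t0x0C3D\tSharePoint Foundation\tDatabase\t880i\tHigh\tSQL exception\tc0de1234-0000-0000-0000-000000000001",
]

groups = group_by_correlation(sample)
```

In practice you would take the correlation GUID from the error page or event log and pull only that group's rows.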
  • The SharePoint Server 2010 environment might require configuration of the diagnostic logging settings after initial deployment or upgrade, and possibly throughout the system's life cycle. The guidelines in the following list can help you form best practices for your specific environment.
    Change the drive to which the logs write: By default, diagnostic logging is configured to write logs to the same drive and partition that SharePoint Server 2010 was installed on. Because diagnostic logging can use lots of drive space, and writing to the logs can affect drive performance, you should configure logging to write to a drive that is different from the drive on which SharePoint Server 2010 was installed. You should also consider the connection speed to the drive to which logs are written. If verbose-level logging is configured, lots of log data is recorded, so a slow connection might result in poor log performance.
    Restrict log disk space usage: By default, the amount of disk space that diagnostic logging can use is not limited. Therefore, limit the disk space that logging uses to make sure that the disk does not fill up, especially if you configure logging to write verbose-level events. When the disk restriction is used up, the oldest logs are removed and new logging data is recorded.
    Use the Verbose setting sparingly: You can configure diagnostic logging to record verbose-level events. This means that the system will log every action that SharePoint Server 2010 takes. Verbose-level logging can quickly use drive space and affect drive and server performance. You can use verbose-level logging to record a greater level of detail when you are making critical changes, and then re-configure logging to record only higher-level events after you make the change.
    Regularly back up logs: The diagnostic logs contain important data, so back them up regularly to make sure that this data is preserved.
When you restrict log drive space usage, or if you keep logs for only a few days, log files are automatically deleted, starting with the oldest files first, when the threshold is met.
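The "delete oldest first once the threshold is met" behavior can be sketched as a small function. This is an illustrative model in Python, not SharePoint's actual implementation; (name, size, mtime) tuples stand in for real files:

```python
def trim_logs(files, max_total_bytes):
    """Given (name, size_bytes, mtime) tuples, return the names to delete,
    oldest first, until the remaining total fits under max_total_bytes."""
    oldest_first = sorted(files, key=lambda f: f[2])
    total = sum(f[1] for f in files)
    to_delete = []
    for name, size, _ in oldest_first:
        if total <= max_total_bytes:
            break
        to_delete.append(name)   # oldest file goes first
        total -= size
    return to_delete

# mtime here is a simple increasing counter: a.log is oldest.
logs = [("a.log", 400, 1), ("b.log", 300, 2), ("c.log", 500, 3)]
```

With a 800-byte cap, only the oldest file needs to go; with a cap at or above the total, nothing is deleted.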
  • The logging database is a new SharePoint 2010 feature that is used to store all SharePoint usage and health data. The database is part of the Usage and Health Data Collection service application. Data logged in this database includes: Unified Logging Service (ULS) trace log data, event log data, performance data, blocking SQL queries, crawl and query statistics, feature usage, page requests, timer job usage, site inventory, rating usage, import and export usage, and SQL DMV and memory queries. The logging database has a publicly documented schema, and it can be queried directly or written to by third-party applications; this includes the ability to create custom logging and diagnostic providers. All stored data has a customizable retention policy, with detailed data retained for 14 days by default.
  • The logging database is tuned to support a heavy load of simultaneous writes. It has been tested up to 5,000 transactions per second in parallel, and as such, where possible, this database should be put onto a separate disk spindle. The logging database can be used in a number of scenarios, most of which relate to troubleshooting, but it also supports usage reporting. These scenarios include: poor crawl or query performance, SQL queries that are causing blocking, timer jobs that are regularly failing, determining how widely a feature is actually used, and listing all site collections within the farm for reporting or billing purposes. Basic configuration settings for the Usage and Health Data Collection service application, and as such the logging database, can be set from within Central Administration. These can be divided into two categories: usage data and health data.
  • Due to table partitioning, it is easier to use the provided views, as shown in the following screenshot, or to create your own custom queries when viewing data in the logging database. You can use Excel with the Excel Web App to create quick custom reports that are hosted from within SharePoint. Some out-of-the-box reports make use of the logging database; these are discussed later in this section.
  • SPSFarmReport.exe: This is a 32-bit executable that relies on .NET Framework 3.0. This assembly can run on both x86 and x64 Windows operating systems where Windows SharePoint Services 3.0, Office SharePoint Server 2007, and/or Project Server 2007 is installed and configured using psconfig. It references microsoft.sharepoint.dll and, at runtime, loads additional assemblies if osearch is installed and configured.
    2010SPSFR.exe: This is a 64-bit executable that relies on .NET Framework 3.5. This assembly can only run on x64 Windows operating systems where SharePoint Foundation 2010, SharePoint Server 2010, and/or Project Server 2010 is installed and configured using psconfig. It references microsoft.sharepoint.dll and the PowerShell interface. It is recommended that you use IE to view the generated output and then optionally enable scripts.
    Optional Demo: Show SPSFarmReport running, and the report it creates.
  • Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, registry, and process/thread activity. It combines the features of two legacy Sysinternals utilities, Filemon and Regmon, and adds an extensive list of enhancements, including rich and non-destructive filtering, comprehensive event properties such as session IDs and user names, reliable process information, full thread stacks with integrated symbol support for each operation, simultaneous logging to a file, and more. Its powerful features make Process Monitor a core utility in any system troubleshooting and malware hunting toolkit.
    Use case: identify conflicting processes, identify processes contending for resources, identify registry access issues, identify process dependencies and their respective versions, and identify invalid runtime parameters.
    A free download of Process Monitor is available from the Windows Sysinternals site. Process Monitor includes powerful monitoring and filtering capabilities, including:
    More data captured for operation input and output parameters
    Non-destructive filters that allow you to set filters without losing data
    Capture of thread stacks for each operation, making it possible in many cases to identify the root cause of an operation
    Reliable capture of process details, including image path, command line, user, and session ID
    Configurable and moveable columns for any event property
    Filters that can be set for any data field, including fields not configured as columns
    Advanced logging architecture that scales to tens of millions of captured events and gigabytes of log data
  • Process tree tool that shows the relationship of all processes referenced in a trace
    Native log format that preserves all data for loading in a different Process Monitor instance
    Process tooltip for easy viewing of process image information
    Detail tooltip allowing convenient access to formatted data that does not fit in the column
    Cancellable search
    Boot-time logging of all operations
    The best way to become familiar with Process Monitor's features is to read through the help file and then visit each of its menu items and options on a live system.
  • Process Explorer shows information about which handles and DLLs processes have opened or loaded. The Process Explorer display consists of two sub-windows. The top window shows a list of the currently active processes, including the names of their owning accounts, whereas the information displayed in the bottom window depends on the mode that Process Explorer is in: if it is in handle mode, you will see the handles that the process selected in the top window has opened; if it is in DLL mode, you will see the DLLs and memory-mapped files that the process has loaded. Process Explorer also has a powerful search capability that will quickly show which processes have particular handles opened or DLLs loaded. The unique capabilities of Process Explorer make it useful for tracking down DLL-version problems or handle leaks, and provide insight into the way Windows and applications work.
    Use case: identify a single process's resource consumption (memory, CPU, network); identify single or multiple processes' dependencies and active network connections (only when TCPView is in the same folder as Process Explorer); helps identify memory leaks or application-level crashes.
  • Failed Request Tracing (FREB = Failed Request Event Buffering) is a great tool if your application is "mostly working", that is, the process is not crashing. If you are randomly seeing errors on a page, or one page repeatedly has errors, FREB tracing can give you valuable data with which to determine the cause of the error. Request-based tracing provides a great way to figure out what exactly is happening with your requests and why, provided you can reproduce the problem you are experiencing. Problems like poor performance on some requests, authentication-related failures on other requests, or even a server 500 error from ASP can often be incredibly difficult to troubleshoot unless you have captured a trace of the problem when it occurs. That's where failed-request tracing comes in. It is designed to buffer the trace events for a request and only flush them to disk if the request "fails", where you provide the definition of "failure".
    Limitations: When running .NET, it is sometimes not possible to inspect an error page to diagnose an error condition. This can happen if: you do not know which URL is experiencing an error; the error happens intermittently and you are not able to manually reproduce it (the error may depend on user input or external operating conditions that happen infrequently); or the error only happens in the production environment.
    Common uses of FREB: For these common problems, use failed-request tracing to determine the source of the issue. One example is when you receive a 404.2 message. Another fairly common application problem is code that hangs or enters into a resource-intensive loop. This can often happen because: a blocking I/O operation on a file or network takes a long time to complete, such as when accessing a remote web service or database; the code has a bug that causes it to enter into an endless (or long-running) loop, possibly also spinning the CPU or allocating memory; or the code hangs or deadlocks on a shared resource or lock. These conditions result in long wait times or timeouts for the user making the request, and they can also negatively impact the performance of the application and even the server as a whole. IIS 7 provides a quick way to determine which requests are hanging by inspecting the currently executing requests.
  • How to use FREB
    Step 1: Enable Failed-Request Tracing for the Site and Configure the Log File Directory. Open a command prompt with administrator user rights and launch inetmgr. In the Connections pane, expand the machine name, expand Sites, and then click (for example) Default Web Site. In the Actions pane, under Configure, click Failed Request Tracing. In the Edit Web Site Failed Request Tracing Settings dialog box, select the Enable check box, keep the defaults for the other settings, and click OK. Failed-request tracing logging is now enabled for the Default Web Site. Check the %windir%\system32\inetsrv\config\applicationHost.config file to confirm the configuration.
    Step 2: Configure Your Failure Definitions. In this step, you will configure the failure definitions for your URL, including what areas to trace. You will troubleshoot a 404.2 that is returned by IIS 7 for any requests to extensions that have not yet been enabled. This will help you determine which particular extensions you will need to enable. Open a command prompt with administrator user rights and launch inetmgr. In the Connections pane, expand the machine name, expand Sites, and then click Default Web Site. Double-click Failed Request Tracing Rules. In the Actions pane, click Add.... In the Add Failed Request Tracing Rule wizard, on the Specify Content to Trace page, select All content (*) and click Next. On the Define Trace Conditions page, select the Status code(s) check box, enter 404.2 as the status code to trace, and click Next. On the Select Trace Providers page, under Providers, select the WWW Server check box. Under Areas, select the Security check box and clear all other check boxes; the problem that you are generating causes a security error trace event to be thrown.
    In general, authentication and authorization problems (including ISAPI restriction list issues) can be diagnosed by using the WWW Server – Security area configuration for tracing. However, because the FREB.xsl style sheet helps highlight errors and warnings, you can still use the default configuration to log all events in all areas and providers. Under Verbosity, select Verbose. Click Finish. You should see the definition listed for the Default Web Site.
  • Performance Monitor is a simple yet powerful visualization tool for viewing performance data, both in real time and from log files. With it, you can examine performance data in a graph, histogram, or report. Some of the most common counters look at: CPU, disk utilization, memory, and network interface. There are also a number of SharePoint-specific counters: SharePoint Disk-Based Cache, SharePoint Foundation, SharePoint Foundation BDC Metadata, SharePoint Foundation BDC Online, SharePoint Foundation Search Gatherer, and SharePoint Publishing Cache.
  • Log Parser is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files, and CSV files, as well as key data sources on the Windows operating system such as the event log, the registry, the file system, and Active Directory®. You tell Log Parser what information you need and how you want it processed. The results of your query can be custom-formatted in text-based output, or they can be persisted to more specialized targets like SQL, SYSLOG, or a chart. It is a command-line tool. It was intended for use with the Windows operating system and was included with the IIS 6.0 Resource Kit Tools. The default behavior of Log Parser works like a "data processing pipeline": it takes a SQL expression on the command line and outputs the lines containing matches for that expression.
    Use case: helps automate the analysis of large amounts of log data for the purpose of root cause analysis.
  • Demo steps: Open IIS Manager. Show how to check the IIS logs configuration. Show how to map an IIS web site ID to its IIS log folder. Start PowerShell as admin:
    cd "C:\Program Files (x86)\Log Parser 2.2\"
    Time taken:
    ./LOGPARSER.EXE "SELECT TOP 10 cs-uri-stem, MAX(time-taken) AS MaxTime, AVG(time-taken) AS AvgTime FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log WHERE EXTRACT_EXTENSION(TO_LOWERCASE(cs-uri-stem)) = 'aspx' GROUP BY cs-uri-stem ORDER BY MaxTime DESC"
    Top 10 images by size:
    ./LOGPARSER.EXE "SELECT TOP 10 STRCAT(EXTRACT_PATH(TO_LOWERCASE(cs-uri-stem)),'/') AS RequestedPath, EXTRACT_FILENAME(TO_LOWERCASE(cs-uri-stem)) AS RequestedFile, COUNT(*) AS Hits, MAX(time-taken) AS MaxTime, AVG(time-taken) AS AvgTime, MAX(sc-bytes) AS BytesSent FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log WHERE (EXTRACT_EXTENSION(TO_LOWERCASE(cs-uri-stem)) IN ('gif';'jpg';'png')) AND (sc-status = 200) GROUP BY TO_LOWERCASE(cs-uri-stem) ORDER BY BytesSent, Hits, MaxTime DESC"
    Top 10 pages by size:
    ./LOGPARSER.EXE "SELECT TOP 10 STRCAT(EXTRACT_PATH(TO_LOWERCASE(cs-uri-stem)),'/') AS RequestedPath, EXTRACT_FILENAME(TO_LOWERCASE(cs-uri-stem)) AS RequestedFile, COUNT(*) AS Hits, MAX(time-taken) AS MaxTime, AVG(time-taken) AS AvgTime, MAX(sc-bytes) AS BytesSent FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log WHERE (EXTRACT_EXTENSION(TO_LOWERCASE(cs-uri-stem)) IN ('aspx')) AND (sc-status = 200) GROUP BY TO_LOWERCASE(cs-uri-stem) ORDER BY BytesSent, Hits, MaxTime DESC"
    Average time per user:
    ./LOGPARSER.EXE "SELECT TOP 20 cs-username AS UserName, AVG(time-taken) AS AvgTime, COUNT(*) AS Hits FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log WHERE cs-username IS NOT NULL GROUP BY cs-username ORDER BY AvgTime DESC"
    Top 10 URLs:
    ./LOGPARSER.EXE "SELECT TOP 10 STRCAT(EXTRACT_PATH(cs-uri-stem),'/') AS RequestPath, EXTRACT_FILENAME(cs-uri-stem) AS RequestedFile, COUNT(*) AS TotalHits, MAX(time-taken) AS MaxTime, AVG(time-taken) AS AvgTime, AVG(sc-bytes) AS AvgBytesSent FROM C:\inetpub\logs\LogFiles\W3SVC729378167\u_ex*.log GROUP BY cs-uri-stem ORDER BY TotalHits DESC"
    Querying diagnostics logs:
    $path = 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS\PSCDiagnostics*.log'
    ./LogParser -i:TSV "SELECT Field3 AS Status, Field4 AS Description FROM '$path' WHERE Field3 LIKE 'ERR%'" -nSkipLines:1 -headerRow:off -iSeparator:space -nSep:2
  • The following screenshots show an example of a PAL report. Here you can see a breakdown of how many issues occurred in each time slice. Shown below is the chronological list of alerts. And the following screenshot shows a breakdown of alerts for an individual counter.
  • Optional Demo: Open Fiddler, open Central Admin (may need a new IE window), analyze the results, compare to opening http://spts1/, clear the cache, and refresh.
    Fiddler is a web debugging proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect all HTTP(S) traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem and can be extended using any .NET language. Fiddler is a tool aimed at helping resolve simple or complex problems where the browser is unable to connect successfully to a website.
    Use case: identify slow rendering of web pages, identify malformed HTTP requests, and identify bad responses from a web server.
    The Web Sessions List: The Web Sessions list contains the list of HTTP requests that are sent by your computer. You can resize and reorder the columns in this list for your convenience. You can also sort this list by clicking on a column header. Certain key information is available in this list, including:
    # - An ID of the request, generated by Fiddler for your convenience.
    Result - The result code from the HTTP response.
    Protocol - The protocol (HTTP/HTTPS/FTP) used by this session.
    Host - The hostname of the server to which the request was sent.
    URL - The path and file requested from the server.
    Body - The number of bytes in the response body.
    Caching - Values from the response's Expires or Cache-Control headers.
    Process - The local Windows process from which the traffic originated.
    Content-Type - The Content-Type header from the response.
    Custom - A text field you can set via scripting.
    Comments - A text field you can set from scripting or the session's context menu.
    New columns can be added in newer versions of Fiddler.
    The default text coloring of the Session entries derives from the HTTP status (red for errors, yellow for authentication demands), traffic type (CONNECT appears in grey), or response type (CSS in purple, HTML in blue, script in green, images in grey). Fiddler's Filters tab allows you to filter and flag traffic displayed in the Fiddler UI.
    Trim: The Keep only the most recent # sessions box enables Fiddler to discard older sessions beyond the specified threshold. This reduces memory usage and helps improve performance.
  • Hosts: The Zone Filter dropdown at the top of the dialog allows you to show traffic only to your intranet (for example, dotless hostnames) or only to the Internet (for example, dotted hostnames). This is a useful option when debugging a site in one zone while referencing web-based documentation from the other zone. The Host Filter dropdown enables you to flag or exclude display of traffic to specified domain names. When configured to hide traffic to certain hosts, Fiddler will still proxy traffic to those hosts, but that traffic will be hidden from the Fiddler Session List.
    Client Process: The process filter allows you to control which processes' traffic is shown within Fiddler. The Hide traffic from Service Host option will hide traffic from svchost.exe, a system process that synchronizes RSS feeds and performs other background network activity. When configured to hide traffic from certain processes, Fiddler will still proxy their traffic, but that traffic will be hidden from the Fiddler Session List.
    Breakpoints: The breakpoints enable you to break requests or responses that contain the specified attributes.
    Request Headers: Using these options, you can add or remove HTTP request headers, and flag requests that contain certain headers. You can also filter displayed traffic down to specific URLs with the Show only if URL contains box. You can demand case-sensitivity with the EXACT: directive, or you can use regular expressions, for example:
    REGEX:(?insx).*\.(gif|png|jpg)$ #only show requests for img types
    Response Status Code: Using these options, you can filter display of responses based on the response status code. It also allows you to hide sessions whose response code matches target values (HTTP errors, redirects, authentication challenges, and cache reuse).
    Response Type and Size: Using these options, you can control what types of responses appear within the session list. The list of "Block" checkboxes enables blocking responses of the specified types, returning an HTTP/404 error to the client instead of the target resource.
    Response Headers: Using these options, you can add or remove HTTP response headers, and flag responses that contain certain headers.
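Fiddler's REGEX filters use .NET regular-expression syntax. Before pasting a pattern into the Filters tab, you can sanity-check the expression itself, for example in Python, dropping the .NET-only `n` (explicit-capture) flag, which Python's `re` module does not support:

```python
import re

# Python equivalent of Fiddler's REGEX:(?insx).*\.(gif|png|jpg)$ image filter;
# i/s/x map to IGNORECASE/DOTALL/VERBOSE, and the .NET 'n' flag is dropped.
img_filter = re.compile(r".*\.(?:gif|png|jpg)$",
                        re.IGNORECASE | re.DOTALL | re.VERBOSE)

urls = [
    "http://spts1/SiteAssets/logo.PNG",
    "http://spts1/Pages/Default.aspx",
]
matches = [u for u in urls if img_filter.match(u)]
```

Checking the pattern outside Fiddler first avoids silently filtering nothing because of a typo in the expression.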
  • Open Fiddler, open central admin (may need new IE window), analyze results, compare to opening http://spts1/, clear cache, refresh
  • Network Monitor is a protocol analyzer. It allows you to capture, view, and analyze network traffic. Version 3.4 is an update that replaces Network Monitor 3.3; Network Monitor 3.x is a complete overhaul of the previous Network Monitor 2.x version. The tool displays each frame found in the network traffic, including the header information. The information is displayed in a packet-by-packet listing at the top, followed by detailed information of the frame contents. Often the listing of captured frames can seem daunting; Network Monitor helps with that through its filtering and color-coding features. We can limit the frames displayed to just traffic between two machines, or even to types of traffic such as TCP or LDAP. If we are analyzing multiple machines or multiple types of traffic, the color coding can be very useful for quickly seeing the differences between the traffic listed.
    Use case: helps identify network-related errors and connectivity problems; helps identify performance-tuning opportunities for multi-tier applications and web sites. Note that installing Network Monitor affects the network interface (and network traffic).
  • Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the Database Engine or Analysis Services. You can capture and save data about each event to a file or table for later analysis; for example, you can monitor a production environment to see which stored procedures are affecting performance by executing too slowly. To run SQL Server Profiler, on the Start menu, point to All Programs > Microsoft SQL Server 2008 > Performance Tools > SQL Server Profiler. Use SQL Server Profiler to monitor only the events you are interested in. If traces become too large, filter them down to the information you want so that only a subset of the event data is collected. Monitoring too many events adds overhead to the server and the monitoring process, and can cause the trace file or trace table to grow very large, especially when monitoring runs over a long period of time. Using SQL Server Profiler, you can open a Microsoft Windows performance log, choose the counters you want to correlate with a trace, and display the selected performance counters alongside the trace in the Profiler user interface. When you select an event in the trace window, a vertical red bar in the System Monitor data pane indicates the performance log data that correlates with the selected trace event. To correlate a trace with performance counters, open a trace file or table that contains the StartTime and EndTime data columns, then click Import Performance Data on the SQL Server Profiler File menu. You can then open a performance log and select the System Monitor objects and counters that you want to correlate with the trace.
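Once a trace has been saved to a file or table, the "which procedures run too slowly" question can also be answered offline. A minimal sketch, assuming trace rows exported as dictionaries with TextData and Duration columns (Duration is recorded in microseconds when Profiler saves a trace to a file or table):

```python
def slow_events(trace_rows, threshold_ms=500):
    """Return (TextData, duration_ms) for trace events slower than
    threshold_ms, longest first. Duration is assumed to be in
    microseconds, the unit Profiler uses when saving traces."""
    hits = [(row["TextData"], row["Duration"] / 1000.0)
            for row in trace_rows
            if row["Duration"] / 1000.0 > threshold_ms]
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Hypothetical rows exported from a saved trace.
rows = [
    {"TextData": "exec proc_GetListItem", "Duration": 1_200_000},
    {"TextData": "exec proc_SecGetUser",  "Duration": 30_000},
    {"TextData": "exec proc_EnumLists",   "Duration": 750_000},
]
for text, ms in slow_events(rows):
    print(f"{ms:8.1f} ms  {text}")
```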
  • Replay a trace: replay is the ability to save a trace and play it back later, letting you reproduce the activity captured in the trace. When you create or edit a trace, you can save it to replay later. SQL Server Profiler features a multithreaded playback engine that can simulate user connections and SQL Server Authentication. Replay is useful for troubleshooting an application or process problem: once you have identified the problem and implemented corrections, run the trace that found the potential problem against the corrected application or process, then replay the original trace and compare the results. Trace replay supports debugging through the Toggle Breakpoint and Run to Cursor options on the SQL Server Profiler Replay menu. These options especially improve the analysis of long scripts because they break the replay of the trace into short segments that can be analyzed incrementally. By default, running SQL Server Profiler requires the same user permissions as the Transact-SQL stored procedures used to create traces; to run SQL Server Profiler, users must be granted the ALTER TRACE permission.
  • The SQLdiag utility is a general-purpose diagnostics collection utility that can be run as a console application or as a service. You can use SQLdiag to collect logs and data files from SQL Server and other types of servers, to monitor your servers over time, or to troubleshoot specific problems. SQLdiag is intended to expedite and simplify diagnostic information gathering for Microsoft Customer Support Services. You specify what types of information SQLdiag collects by editing the configuration file SQLDiag.xml. SQLdiag can collect the following types of diagnostic information: Windows performance logs, Windows event logs, SQL Server Profiler traces, SQL Server blocking information, and SQL Server configuration information. Security requirements: unless SQLdiag is run in generic mode (by specifying the /G command-line argument), the user who runs it must be a member of the Windows Administrators group and of the SQL Server sysadmin fixed server role. By default, SQLdiag connects to SQL Server by using Windows Authentication, but it also supports SQL Server Authentication. Use cases: diagnosing SQL Server configuration issues and identifying performance-related problems. Performance considerations: the performance effect of running SQLdiag depends on the type of diagnostic data you configure it to collect. For example, if you configure SQLdiag to collect SQL Server Profiler tracing information, the more event classes you choose to trace, the more your server performance is affected. The overall impact is approximately the sum of the costs of collecting the configured diagnostics separately; collecting a trace with SQLdiag incurs the same performance cost as collecting it with SQL Server Profiler, while the overhead of SQLdiag itself is negligible.
  • Required disk space: because SQLdiag can collect different types of diagnostic information, the free disk space required to run it varies. The amount of diagnostic information collected depends on the nature and volume of the workload the server is processing, and may range from a few megabytes to several gigabytes. Output folder: if you do not specify an output folder with the /O argument, SQLdiag creates a subfolder named SQLDIAG under the SQLdiag startup folder. For diagnostic collection that involves high-volume tracing, such as SQL Server Profiler, make sure the output folder is on a local drive with enough space to store the requested output. When SQLdiag is restarted, it overwrites the contents of the output folder; to avoid this, specify /N 2 on the command line. Data collection process: when SQLdiag starts, it performs the initialization checks necessary to collect the diagnostic data specified in SQLDiag.xml, which may take several seconds. When run as a console application, SQLdiag displays a message once collection has started, informing you that you can press CTRL+C to stop it; when run as a service, a similar message is written to the Windows event log. If you are using SQLdiag to diagnose a problem that you can reproduce, wait until you receive this message before reproducing the problem on your server. SQLdiag collects most diagnostic data in parallel. Except for Windows performance logs and event logs, all diagnostic information is collected by connecting to tools such as the SQL Server sqlcmd utility or the Windows command processor; SQLdiag uses one worker thread per computer to monitor the diagnostic data collection of these tools, often waiting for several tools to complete at once. During the collection process, SQLdiag routes the output from each diagnostic to the output folder. Stopping data collection: after SQLdiag starts collecting diagnostic data, it continues until you stop it or until a configured stop time, set either with the /E argument (a stop time) or the /X argument (snapshot mode). When SQLdiag stops, it stops all the diagnostics it started: SQL Server Profiler traces it was collecting, Transact-SQL scripts it was running, and any subprocesses it spawned during data collection. After diagnostic data collection is complete, SQLdiag exits.
  • Demo: using the SetSPN -x (find duplicate SPNs) and -a (add an SPN) commands. DelegConfig is an ASP.NET application meant to be called from Internet Explorer on an actual client machine. The tool (an .aspx page) examines all the common settings that contribute to successful Kerberos authentication and delegation. Features: supports IIS 7.0 (useKernelMode / useAppPoolCredentials); allows adding back-end servers of type UNC, HTTP, LDAP, OLAP, SQL, SSAS, and RDP; allows chaining multiple hops (versus only a single back end); performs a duplicate-SPN check against all trusted domains. Pages: /Set/SPNs.aspx adds and removes ServicePrincipalNames; /Set/Delegation.aspx changes Trust for Delegation settings; /Set/Providers.aspx corrects inadequate NTAuthenticationProviders settings; /Report.aspx gives a picture of what is right and what is wrong; /Wizard.aspx is a set of wizard steps for adding more tiers to /Report.aspx; /Test.aspx allows double-hop tests for webServer-to-Sql, webServer-to-fileServer, or webServer-to-webServer.
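The duplicate-SPN check that SetSPN -x and DelegConfig perform amounts to detecting the same service principal name registered on more than one account, a condition that breaks Kerberos because the KDC cannot pick a single identity for the service. A simplified sketch over a hypothetical account-to-SPN map:

```python
from collections import defaultdict

def find_duplicate_spns(account_spns):
    """account_spns: dict of account name -> list of SPN strings.
    Returns {spn: [accounts]} for each SPN registered on 2+ accounts."""
    owners = defaultdict(list)
    for account, spns in account_spns.items():
        for spn in spns:
            owners[spn.lower()].append(account)  # SPNs compare case-insensitively
    return {spn: accts for spn, accts in owners.items() if len(accts) > 1}

# Hypothetical accounts: the web app SPN is also on the machine account.
accounts = {
    "CONTOSO\\svcSPWeb": ["HTTP/spweb", "HTTP/spweb.contoso.com"],
    "CONTOSO\\spts1$":   ["HTTP/spweb.contoso.com", "HOST/spts1"],
}
print(find_duplicate_spns(accounts))
# {'http/spweb.contoso.com': ['CONTOSO\\svcSPWeb', 'CONTOSO\\spts1$']}
```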
  • Microsoft SharePoint Diagnostic Studio 2010 (SPDiag version 3.0) was created to simplify and standardize troubleshooting of Microsoft SharePoint 2010 Products, and to provide a unified view of collected data. Administrators of SharePoint 2010 Products can use SPDiag 3.0 to gather relevant information from a farm, display the results in a meaningful way, identify performance issues, and share or export the collected data and reports for analysis by Microsoft support personnel. Traditionally, troubleshooting SharePoint 2010 Products involves manually collecting a wide array of data from servers in the affected farm and then manually analyzing it to determine the source of the problem. This process can be complex and time-consuming, and data collection itself can place a significant load on the servers. SPDiag greatly simplifies the troubleshooting process by providing a single interface for data collection and presentation, with a series of preconfigured reports covering a wide range of data points commonly used to diagnose SharePoint performance and capacity-related issues. Although most common troubleshooting scenarios are addressed by SPDiag, some SharePoint issues might require analysis of additional data not collected by SPDiag. The tool itself can be downloaded from TechNet and is included as part of the SharePoint 2010 Administration Toolkit v2. When installed, it extends the built-in capabilities of the logging database. To run the tool remotely, several PowerShell commands need to be executed to allow remote reporting of the data, and the Usage and Health Data Collection service application is required for proper use of the tool. New features and changes in SPDiag 3.0 include: preconfigured reports, a selection of reports that aggregate data from the SharePoint farm and present useful views into common troubleshooting scenarios; snapshots, which aggregate report images, farm topology information, Unified Logging Service (ULS) logs, and usage database data, making it easy to consolidate key troubleshooting information about a farm, share it with other users, or preserve it for comparison and trend analysis; and improved integration with SharePoint Server, with enhanced data collection from more sources.
  • SPDisposeCheck is a tool that checks assemblies that use the SharePoint API so you can build better code: it helps you correctly dispose of certain SharePoint objects, following published best practice. The tool may not find every memory leak in your code. SPDisposeCheck takes the path to a managed .DLL or .EXE, or the path to a directory containing many managed assemblies, and recursively searches for and analyzes each managed module, attempting to detect coding patterns described in Microsoft's published guidance on disposing SharePoint objects. Several new checks have been added for when NOT to dispose objects instantiated internally by SharePoint; these newly reported "DO NOT DISPOSE" (DND) rules were unreported by SPDisposeCheck v1.3.x. Run the updated SPDisposeCheck tool on all customized SharePoint projects to help identify code that may lead to memory pressure and server stability issues. As a best practice, consider adding this tool to your SharePoint software development life cycle build process and review its output with a subject matter expert at regular intervals. SPDisposeCheck also integrates with Visual Studio 2008/2010 as an add-in that calls out to the SPDisposeCheck executable, and it can be added as a build step so that every SharePoint build goes through SPDisposeCheck.
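As a rough illustration of the kind of pattern SPDisposeCheck flags, the sketch below scans C# source text for an SPSite constructed with new outside a using statement. The real tool analyzes compiled assemblies and tracks data flow rather than scanning source lines, so this is only a conceptual approximation:

```python
import re

def flag_undisposed_spsite(csharp_source: str):
    """Flag lines that construct an SPSite with 'new' outside a 'using'
    statement. Real SPDisposeCheck works on compiled assemblies; this
    line-based scan is only an illustration of the idea."""
    findings = []
    for lineno, line in enumerate(csharp_source.splitlines(), start=1):
        if re.search(r"\bnew\s+SPSite\s*\(", line) and "using" not in line:
            findings.append((lineno, line.strip()))
    return findings

sample = (
    "using (SPSite site = new SPSite(url)) { }\n"
    "SPSite leaked = new SPSite(url);  // never disposed\n"
)
print(flag_undisposed_spsite(sample))
# [(2, 'SPSite leaked = new SPSite(url);  // never disposed')]
```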
  • Microsoft provides the Microsoft SharePoint Online Code Analysis Framework (MSOCAF) to customers for use in analyzing custom solutions, testing their deployment, and submitting them for installation in the SharePoint Online environment. The tool can also be used to analyze on-premises SharePoint solutions. Customers can obtain MSOCAF from the MSOCAF download site. MSOCAF is built on an extensible framework that runs a set of executable rules against a custom solution before it is submitted for approval and deployment into pre-production and production environments. (For details, see Appendix D: Rules Enforced by MSOCAF.) The SharePoint Online engineering team may continue to add rules and plug-ins according to the MSOCAF compliance policy, which is described in the SharePoint Online Dedicated Custom Solution Policies and Process document. MSOCAF includes a user's guide in a compiled Help file, and uses ClickOnce technology so that Microsoft can automatically deliver updates, new rules, and test cases according to the compliance policy without customers needing to reinstall the framework. MSOCAF must be used to validate all custom solutions developed for SharePoint Online, whether by customers or by independent software vendors (ISVs). Custom solutions developed for SharePoint Online must use solution package (WSP) files, and the deployment package must be organized into a specific directory structure as described in Required Directory Structure and Components in the Deployment Package later in this document. If a customer wants to use an ISV solution that contains unmanaged code, it must be submitted to Microsoft separately for analysis and review.
  • Debugging tools help solve crashes, slow performance, memory leaks, and similar problems. The following debugging tools are useful for tracking down problems with SharePoint. DebugDiag: the Debug Diagnostic tool (DebugDiag) is designed to assist in troubleshooting issues such as hangs (any process or application that stops responding), slow performance, memory leaks or fragmentation, and crashes or failures, in any Win32 user-mode process. The tool also includes additional debugging scripts focused on Internet Information Services (IIS) applications, web data access components, Component Services (COM+), and related Microsoft technologies. DebugDiag 1.2 is currently available as a standalone tool in both 32-bit and 64-bit versions. DebugDiag provides an extensible object model in the form of COM objects and a script host with a built-in reporting framework. It is composed of three components: a debugging service, a debugger host, and the user interface (UI). Built-in reports can be run against dump files to analyze memory pressure, crash/hang situations, and SharePoint-specific issues. Debugger service: the Debugger Service (DbgSvc.exe) performs the following tasks: attach or detach the host to processes; collect Performance Monitor data; implement HTTP ping to detect hangs; inject the leak monitor into running processes; collect debugging session state information; display the state of each rule defined in the debug engine. Debugger host: the Debugger Host (DbgHost.exe) hosts the Windows Symbolic Debugger Engine (dbgeng.dll) to attach to processes and generate memory dumps; it also hosts the main analyzer module for analyzing memory dumps. DbgHost.exe has no dependency on DbgSvc.exe and can be used separately. User interface: DebugDiag has two user interfaces, DebugDiag.exe and DebugDiagAnalysisOnly.exe, which help analyze memory dumps, automate the creation of control scripts, and show the status of running processes, including services. Note that on 64-bit Windows Server 2008 installations, only DebugDiagAnalysisOnly.exe is available for installation.
  • WinDbg is a Windows debugging tool that is well suited to troubleshooting issues with ASP.NET applications such as SharePoint. It is distributed as part of the Debugging Tools for Windows suite, which you can use to debug drivers, applications, and services on systems running Microsoft Windows NT 4.0, Windows 2000, Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008, as well as the operating system itself. Different versions of the suite are available for 32-bit x86 and x64 platforms. WinDbg lets a SharePoint engineer attach to a running process or analyze a memory dump; the dump can be captured either by using WinDbg through the command prompt or via the DebugDiag UI.

SharePoint Troubleshooting Tools & Techniques Presentation Transcript

  • 1. SharePoint Troubleshooting Tools and Techniques. Manuel Longo, Senior Manager, SharePoint Consultant, Sogeti US. Twitter: @SPSChicago, Hashtag #SPSChicago. SharePoint Saturday Chicago 2013.
  • 7. Correct / Gather / Analyze
  • 12. Software Boundaries and Limits
  • 16. An error from the SharePoint Health Analyzer showing the need for an update
  • 17. Keep environment clean
  • 22. Begin trace logging for SharePoint 2010 Products Configuration Wizard. Version 14.0.4762.1000
07/10/2010 12:51:44  1  INF  Entering function PsconfigUserInterfaceMain.Main
07/10/2010 12:51:44  1  INF  Entering function Common.SetCurrentThreadCultureToInstalledCulture
07/10/2010 12:51:44  1  INF  Entering function Common.SetThreadCultureToInstalledCulture
07/10/2010 12:51:44  1  INF  Current thread culture is English (United States), current thread ui culture is English (United States), installed culture is English (United States)
07/10/2010 12:51:44  1  INF  Leaving function Common.SetThreadCultureToInstalledCulture
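PSConfig log lines such as those above follow a fixed date / time / thread / level / message layout, which makes them easy to slice programmatically. A minimal parser sketch:

```python
import re

# One record per line: date, time, thread id, three-letter level, message.
LOG_LINE = re.compile(
    r"(?P<date>\d{2}/\d{2}/\d{4})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2})\s+"
    r"(?P<thread>\d+)\s+"
    r"(?P<level>[A-Z]{3})\s+"
    r"(?P<message>.*)")

def parse_psconfig_line(line: str):
    """Split one PSConfig log line into its fields, or None if it doesn't match."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

rec = parse_psconfig_line(
    "07/10/2010 12:51:44  1  INF  Entering function PsconfigUserInterfaceMain.Main")
print(rec["level"], rec["message"])
# INF Entering function PsconfigUserInterfaceMain.Main
```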
  • 37. Log Parser usage:
$ logparser <options> <SQL expression>
$ logparser -e:IISW3C -q "SELECT date, time, cs-username FROM *.log WHERE cs-uri-stem LIKE '%.aspx' ORDER BY date, time;"
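For log rows already split into W3C fields, the Log Parser query above can be approximated in a few lines. Field names follow the standard cs-uri-stem / cs-username convention, and the sample rows are hypothetical:

```python
def aspx_requests(rows):
    """Approximate: SELECT date, time, cs-username FROM *.log
       WHERE cs-uri-stem LIKE '%.aspx' ORDER BY date, time."""
    hits = ((r["date"], r["time"], r["cs-username"])
            for r in rows
            if r["cs-uri-stem"].lower().endswith(".aspx"))
    return sorted(hits)  # tuples sort by date, then time

# Hypothetical pre-split IIS W3C log rows.
rows = [
    {"date": "2013-06-01", "time": "10:02:11",
     "cs-username": "CONTOSO\\alice", "cs-uri-stem": "/default.aspx"},
    {"date": "2013-06-01", "time": "09:15:03",
     "cs-username": "CONTOSO\\bob", "cs-uri-stem": "/style.css"},
    {"date": "2013-06-01", "time": "08:00:07",
     "cs-username": "CONTOSO\\carol", "cs-uri-stem": "/Lists/AllItems.ASPX"},
]
for rec in aspx_requests(rows):
    print(*rec)
```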
  • 63. Thanks to Our Sponsors!