Data access best practices
  • Today we're going to talk about both data access best practices and the actions you can take to secure your data access code using preferred methods. While discussing best practices for accessing data, we'll review all of the popular ADO.NET data access API objects, such as connections, commands, and DataSets. We'll also discuss scenarios that commonly occur, like paging, and when to use a Command versus a CommandBuilder object. Then we'll examine the preferred ways of dealing with database-level optimizations as well as securing data access code. The discussion will end by assessing best practices for XML data access.
  • There are many considerations to take into account when connecting to a data source. We need to set up the connection string and connection information properly to use the built-in features of ADO.NET to our advantage.

    When dealing with connections, always open them as late as possible, and close them as quickly as possible once your data access operations are done. If you are using a DataAdapter to fill a DataSet, you can close the connection immediately after calling the Fill method; the DataSet works well in a disconnected state, so you don't need to wait until after you use it to close.

    Choosing the correct provider is important, as the ADO.NET data providers were built to accommodate their particular databases. If you are using SQL Server 7.0 or above, use SqlClient; if you are using MS Access or SQL Server 6.5, use OleDb. For non-Microsoft data stores, you're best off using ODBC, unless the vendor supplies its own provider geared for performance against its data store.

    Connection pooling can provide huge performance benefits and is therefore considered a best practice. To take advantage of connection pooling, use the exact same connection string for each connection. The exactness is important: even a single extra space in the connection string will cause that connection to use a different pool.

    It's always best to consider the roles that users fall into rather than thinking in terms of individual users. When we gear our application toward individual users, we end up with individual logins, which are hard to manage and make it hard to take advantage of technologies like connection pooling.

    There are also objects that manage connections for us, like the SqlDataAdapter. Normally you would control the opening and closing of the connection yourself, but some ADO.NET objects will do that for you. In the case of the SqlDataAdapter, if you open the connection yourself, it won't close it for you; either allow the SqlDataAdapter to do 100% of the work, or do 100% of the work yourself.
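The open-late/close-early pattern and the DataAdapter's self-managed connection can be sketched as follows. This is a hedged example: the connection string, table names, and queries are placeholders, not part of the original deck.

```csharp
using System.Data;
using System.Data.SqlClient;

class ConnectionDemo
{
    // Identical connection strings share one pool; even a stray space creates a new pool.
    const string ConnStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI;";

    static DataSet LoadEmployees()
    {
        var data = new DataSet();
        var adapter = new SqlDataAdapter("SELECT EmployeeID, LastName FROM Employees", ConnStr);

        // Fill opens the connection itself and closes it as soon as the fill completes,
        // so the DataSet is then used entirely in a disconnected state.
        adapter.Fill(data, "Employees");
        return data;
    }

    static int CountOrders()
    {
        // Open late, close early: the using blocks dispose (and close) the connection
        // the moment the query is done, returning it to the pool.
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```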
  • When using DataSets, developers want to employ the best practices for coding, performance, and maintenance. You can use strongly typed DataSets to ensure your data access code is the most efficient. Strongly typed DataSets are considered a best practice because they give us these benefits:
    - shorter code, since we can access fields as properties rather than using the ordinal syntax or string column syntax
    - more maintainable code, for the same reason
    - faster code, since the CLR can determine what data types we are working with at compile time rather than runtime

    When filling DataSets, we want to take care that we properly fill the DataSet and that our code meets performance standards. We should use strongly typed DataSets with specific Fill or FillBy methods (e.g., FillByCustomerId or FillByCategory) rather than the generic Fill or FillSchema methods. While the generic Fill method is easy to code, it doesn't provide a specific way to access strongly typed DataSet information. FillSchema takes extra processing steps because it retrieves the schema information from the database, which generally has a negative impact on performance.

    If we need to retrieve data from many data sources, we should use DataSets. DataSets provide a provider-agnostic way not only to store data locally in memory but also to manage that data via relationships and constraints that would otherwise not be available using DataReaders.

    When searching through DataSets you have some options: the Find method of the DataRow collection, the Select method of the DataTable, or a DataView. The Find method is the best practice for performance, since it uses the primary key to zero in on the only row that could match the search criteria. The Select method returns an array of DataRow objects, which is an efficient way to retrieve multiple rows if the search criteria warrant it. While a DataView could be used to filter records from a DataTable, both Select and Find perform better, because the DataView requires overhead when loading. Never loop through the DataRows collection, as this is the least efficient way to find data.

    DataViews are dynamic views of DataTable objects, and are not part of the DataSet's collections directly. DataViews do need to build an index when they are created, so if the underlying DataTable has no primary key, a DataView can enhance performance through its Find or FindRows methods. When using DataViews, always supply a filter via the parameterized constructor: when you build a DataView using the default constructor and set the RowFilter property later, two indexes are built rather than just one.
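The three search options compare as in this sketch (the table and column names are illustrative, not from the deck):

```csharp
using System;
using System.Data;

class DataSetSearchDemo
{
    static void Main()
    {
        var orders = new DataTable("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("Customer", typeof(string));
        orders.PrimaryKey = new[] { orders.Columns["OrderID"] };
        orders.Rows.Add(1, "ALFKI");
        orders.Rows.Add(2, "ANATR");
        orders.Rows.Add(3, "ALFKI");

        // Best: Find uses the primary-key index to locate exactly one row.
        DataRow row = orders.Rows.Find(2);
        Console.WriteLine(row["Customer"]);            // ANATR

        // Good for multiple matches: Select returns an array of rows.
        DataRow[] matches = orders.Select("Customer = 'ALFKI'");
        Console.WriteLine(matches.Length);             // 2

        // DataView: pass the filter and sort to the constructor so only one
        // index is built, instead of setting RowFilter after construction.
        var view = new DataView(orders, "Customer = 'ALFKI'", "OrderID ASC",
                                DataViewRowState.CurrentRows);
        Console.WriteLine(view.Count);                 // 2
    }
}
```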
  • DataReaders, also termed firehose cursors, are an extremely fast option for accessing data, as they provide only read-only, forward-only access to a resultset. If an important requirement of your application is speed, the best choice for accessing data is the DataReader. If intensive row-level calculations, optimizations, or processing are needed, a DataReader is also your best choice. DataReaders provide a lightweight, speedy mechanism that lets you loop quickly through resultsets. DataReaders are known for their speed, and since they are the objects used behind the scenes to fill the DataTables inside DataSets, it makes sense that they are the best choice when performance is key.

    Closing DataReaders is the most important technique when using them. DataReaders, unlike DataSets, are connected to the database the entire time they are in use. When DataReaders are left open, they continue to consume resources and block other objects from using the connection they are associated with. When ending a data read operation before you've iterated through all the records, it is important that you call the Cancel method of the DataReader's associated Command object. Calling Close causes the DataReader to retrieve the pending results and empty the stream before closing the cursor; calling Cancel on the Command discards the results on the server, so the DataReader does not have to read them when it closes. You're most likely to need to cancel a DataReader's operations when performing asynchronous calls, and you will need some help from threads to do that properly as well, but the key is to make sure you call the Cancel method of the DataReader's associated SqlCommand object.
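The close-early discipline above can be sketched as follows; this is an assumed example (connection string and query are placeholders), not code from the deck:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class DataReaderDemo
{
    static void DumpProducts(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT ProductName FROM Products", conn))
        {
            conn.Open();
            // CloseConnection ties the connection's lifetime to the reader:
            // disposing the reader also closes the underlying connection.
            using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));

                // If we stopped early instead of reading to the end, calling
                // cmd.Cancel() before the reader closes would discard the
                // remaining results on the server rather than draining them here.
            }
        }
    }
}
```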
  • When you need to consider best practices for choosing between DataSets and DataReaders, consider the following.

    The best choice is a DataReader when:
    - you need fast data access
    - you need to do many calculations at the row level
    - you are just "dumping out" data, such as in a reporting scenario or a read-only web page

    Your best choice is a DataSet when:
    - you need to access data from multiple data sources
    - you're writing a desktop app and don't have to worry about memory constraints
    - you need to navigate using relationships
    - you need disconnected data access, such as in offline applications

    Your primary choices when paging through data in ASP.NET applications are to use DataSets with default databinding, or DataReaders with a custom paging scheme. The best practice, although it is more complex, is to create a stored procedure or query that retrieves only the rows needed and manages the paging algorithm, rather than leaving the paging details to ADO.NET via default databinding.
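The custom-paging approach the deck points at (ROW_NUMBER() in SQL Server 2005+) might look like this hedged sketch; the table and column names are placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;

class PagingDemo
{
    // Returns one page of orders; only @PageSize rows ever cross the wire.
    static DataTable GetOrderPage(string connStr, int pageIndex, int pageSize)
    {
        const string sql = @"
            WITH Numbered AS (
                SELECT OrderID, OrderDate,
                       ROW_NUMBER() OVER (ORDER BY OrderDate DESC) AS RowNum
                FROM Orders)
            SELECT OrderID, OrderDate
            FROM Numbered
            WHERE RowNum BETWEEN @First AND @Last;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            // Typed parameters, per the Command-object slide.
            cmd.Parameters.Add("@First", SqlDbType.Int).Value = pageIndex * pageSize + 1;
            cmd.Parameters.Add("@Last", SqlDbType.Int).Value = (pageIndex + 1) * pageSize;

            var page = new DataTable("Orders");
            new SqlDataAdapter(cmd).Fill(page);   // adapter opens and closes the connection
            return page;
        }
    }
}
```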
  • When using Command objects there are many best practice techniques for querying data.

    Always set the CommandType property of the Command object, as it tells SQL Server up front whether it should call a stored procedure or execute SQL text, making the call more efficient and better performing.

    It is also best practice to set the data type of each parameter passed to a parameterized query or parameterized stored procedure, as this optimizes the T-SQL passed to SQL Server (or to any underlying data store). Typed parameters pass the correct information to SQL Server so that type inferencing doesn't occur. Likewise, to avoid type inferencing, it is best practice to use the Add method and supply the parameter data types yourself rather than using AddWithValue.

    However, there is one issue to note between the Add and AddWithValue methods. One particular Add overload, the one that accepts a name and a value, was deprecated in .NET 2.0 because it was too often confused with the overload that accepts a name and a data type. Both AddWithValue and Add with name/value or just name/type parameters should be avoided; in their place, use the Add method with the full overloaded set of parameters: name, type, and size.
  • When considering best practices for execution choices, consider the following:
    - If you are inserting, updating, or deleting, use the ExecuteNonQuery method.
    - If you are returning an aggregate such as SUM, MIN, MAX, COUNT, etc., use the ExecuteScalar method rather than ExecuteReader, as it is optimized for one row/one column.
    - If you are returning a result set of data for forward-only, read-only access, use ExecuteReader.

    When crafting code that performs actions such as insert, delete, or update, use the Command object, as it allows for finer control and better performance. While the CommandBuilder will create and execute all the SQL necessary to complete the task, it comes with the downside of generating unnecessary code, which can hinder performance.
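The three execution methods above can be sketched side by side; the queries and connection string are assumed placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;

class ExecutionDemo
{
    static void Run(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Action query: no resultset, so ExecuteNonQuery returns only rows affected.
            using (var cmd = new SqlCommand(
                "UPDATE Products SET Discontinued = 1 WHERE ProductID = @id", conn))
            {
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = 42;
                int affected = cmd.ExecuteNonQuery();
            }

            // Single value (aggregate): ExecuteScalar, optimized for one row, one column.
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Products", conn))
            {
                int total = (int)cmd.ExecuteScalar();
            }

            // Resultset: ExecuteReader for forward-only, read-only access.
            using (var cmd = new SqlCommand("SELECT ProductName FROM Products", conn))
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* consume reader.GetString(0) */ }
            }
        }
    }
}
```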
  • When dealing with transactions, keep in mind: the shorter, the better. Transactions hold locks on data, and if they run for lengthy periods of time they can cause timeouts and prevent locks from releasing quickly enough for others to access the necessary data.

    Transaction code (i.e., begin, commit, and rollback) should be located as close to the database as possible, reducing the surface of the transaction. This is for the same reason you keep transactions short: so they don't hold locks open for too long and consume too many resources.

    Unless you are developing applications that need very specific transaction options, you can use Read Committed. Read Committed as the transaction's isolation level in SQL stored procedures or ADO.NET Transaction objects will not allow access to a row being modified until it is committed or rolled back, thereby preventing dirty reads. The other isolation levels are:

    - Read Uncommitted: specifies that statements can read rows that have been modified by other transactions but not yet committed. This is usually not acceptable for mission-critical apps.
    - Repeatable Read: specifies that statements cannot read data that has been modified but not yet committed by other transactions, and that no other transaction can modify data that has been read by the current transaction until the current transaction completes. Repeatable Read uses a shared locking scheme and holds the locks until the transaction completes, often disallowing access from other queries, which many application requirements cannot accept.
    - Serializable: this places the most locks on SQL resources and therefore should only be used when absolutely necessary, as it will block other queries from resources for longer than the other isolation options. Use Serializable if you have a very high volume of transactions per second, such as in stock brokering systems or airline ticket sales. For general business data-in, data-out applications it is not necessary, and it will drain resources and performance.
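A short-and-sweet transaction with Read Committed, as a hedged sketch (the table, columns, and "transfer" scenario are illustrative assumptions):

```csharp
using System.Data;
using System.Data.SqlClient;

class TransactionDemo
{
    static void TransferStock(string connStr, int fromId, int toId, int qty)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            // Short and sweet: begin just before the work, commit immediately after.
            SqlTransaction tx = conn.BeginTransaction(IsolationLevel.ReadCommitted);
            try
            {
                using (var cmd = new SqlCommand(
                    "UPDATE Products SET UnitsInStock = UnitsInStock - @q WHERE ProductID = @id",
                    conn, tx))
                {
                    cmd.Parameters.Add("@q", SqlDbType.Int).Value = qty;
                    cmd.Parameters.Add("@id", SqlDbType.Int).Value = fromId;
                    cmd.ExecuteNonQuery();

                    cmd.Parameters["@id"].Value = toId;
                    cmd.Parameters["@q"].Value = -qty;   // negative delta adds stock here
                    cmd.ExecuteNonQuery();
                }
                tx.Commit();
            }
            catch
            {
                tx.Rollback();   // release locks as quickly as possible on failure
                throw;
            }
        }
    }
}
```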
  • Proper storage of data is crucial for building top-notch applications. We should always consider our choice of data types and take note of what each field will be used for before choosing its type. Ensuring that proper data types are used results in code that's easier to maintain and programs that execute more efficiently.

    Primary and foreign keys are important items when setting up database tables. Without primary keys, the uniqueness of rows cannot be guaranteed, and duplicate data can occur. Without foreign keys, relationships cannot be guaranteed and navigating what should be related data is difficult. Developers often write applications without relationships and constraints set up at the beginning of the project, only to find that much code needs to be reworked in later phases to accommodate them. Worse, developers may forgo constraints altogether and leave it to application code to do work that is already built into the database, tested, and working reliably. Recreating the wheel of constraints leads to maintenance, speed, and even potentially security issues, depending on what method of constraining data the developer has chosen.

    When dealing with integrity, you should always use the built-in constraints and foreign keys mentioned above. Built-in constraints act on the data before it is inserted, deleted, or updated, whereas triggers do not. Triggers are acceptable only if complex logic is needed that cannot be handled with built-in constraints or code in the business logic layers.

    Stored procedures give you a great way to encapsulate some business rules (though we don't want to do all of them there) and to encapsulate reusable queries. Stored procedures enhance performance by allowing SQL Server to create an optimized query plan that gets cached for quick retrieval. They also give the benefit of reusable queries and security (as discussed on the security slide).

    Indexes can greatly increase the performance of an application if used wisely. Consider the following points on how indexes optimize data access. There are two types of indexes, clustered and nonclustered. A clustered index is the actual physical arrangement of the data on disk; a nonclustered index is a series of pointers to the actual data, which is arranged a different way.

    We would use clustered indexes for:
    - data that is primarily contiguous or sequential in nature, without large gaps in it
    - data that will actually be queried: clustered indexes are put on primary keys by default, and if the key isn't frequently used in WHERE clauses, the index would be better placed on another column that is frequently queried

    We would use nonclustered indexes for:
    - columns involved in JOIN or GROUP BY clauses: create multiple nonclustered indexes on the columns involved in join and grouping operations, and a clustered index on any foreign key columns
    - queries that do not return large result sets: create filtered indexes to cover queries that return a well-defined subset of rows from a large table (see http://msdn.microsoft.com/en-us/library/ms179325.aspx)

    We want to index frequently queried columns: WHERE-clause queries or exact-match queries. We don't want many indexes on tables that are frequently updated, inserted into, or deleted from; indexes are for data retrieval, not data manipulation. Consider column uniqueness when choosing an index type (clustered vs. nonclustered); if a column can be unique, a UNIQUE index can be applied rather than other types. Also consider the index order, ASC or DESC, which will vary based on how the data is queried. Keep in mind that many dates are queried in descending order, so they are often best served with the sort order set to DESC.

    Writing efficient queries that take advantage of the indexes laid out is essential for best practice scenarios. The following tips outline best practices for writing great queries:
    - Do NOT query using SELECT *; select the exact columns you want.
    - Always limit with a WHERE clause; return only the data you need.
    - Avoid cursors in MS SQL. Cursors are resource intensive, and there is almost always an easy set-based alternative using SELECT.
    - Store query text in stored procedures rather than in the application's code base.
    - JOINs usually perform better than subqueries while returning the same results.
    - Avoid creating and using temporary tables (# and ## tables); use table variables in SQL 2005+ instead.
    - Avoid optimizer hints, such as NOLOCK or WITH for indexes, as the query engine does its job well.
    - Use SQL Query Analyzer to tune queries for performance.
  • Most people think of performance and coding optimizations as best practices; however, good security practices are extremely important too.

    Stored procedures give the developer a way to control security by allowing code and users to execute only the procedure itself, without direct access to the tables. When this type of security practice is applied, DB administration is limited to the procedures, and we do not need to manage the number of users times the number of tables, which can become unnecessarily complex and create a maintenance and security nightmare. Stored procedures also aid in reducing the attack surface of SQL injection attacks, which happen when untrusted input is concatenated into SQL text and executed.

    Connection strings should be stored securely. When using .NET 2.0 and above, we can do so by using the ASP.NET IIS Registration Tool (aspnet_regiis.exe) to encrypt the connection string in the configuration file. Should unauthorized access to the configuration file happen, the connection string cannot be viewed.

    Many applications pass an administrative account to the database as the single login. This leaves the database wide open to malicious access. You should always use a low-privilege account when developing your application, and use the lowest-privilege account possible as the final credentials for the application. If malicious attempts against your data store are made with a low-privilege account, the amount of damage that can be done is limited.

    Samples:
    To encrypt: aspnet_regiis -pe "connectionStrings" -app "/SampleApplication"
    To decrypt: aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"
    See: http://msdn.microsoft.com/en-us/library/dx0f3cf2.aspx
    aspnet_regiis switches/reference: http://msdn.microsoft.com/en-us/library/k6h9cz8h(VS.80).aspx
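To make the SQL injection point concrete, here is a hedged sketch contrasting unsafe string concatenation with a parameterized command (the table and column names are illustrative assumptions):

```csharp
using System.Data;
using System.Data.SqlClient;

class InjectionDemo
{
    // UNSAFE: input like  ' OR '1'='1  becomes part of the SQL text
    // and changes the meaning of the query.
    static SqlCommand Unsafe(SqlConnection conn, string userInput)
    {
        return new SqlCommand(
            "SELECT * FROM Users WHERE Name = '" + userInput + "'", conn);
    }

    // SAFE: the value travels as typed data, never as SQL text,
    // so it cannot alter the query's structure.
    static SqlCommand Parameterized(SqlConnection conn, string userInput)
    {
        var cmd = new SqlCommand("SELECT * FROM Users WHERE Name = @name", conn);
        cmd.Parameters.Add("@name", SqlDbType.NVarChar, 50).Value = userInput;
        return cmd;
    }
}
```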
  • There are many types of data access objects available for XML, and it can often be daunting choosing the right object for the right job. We'll break down the available objects from the System.Xml namespace into two general categories: documents and readers.

    Documents are generally used to load and manipulate data by traversing the XML nodes via parent, child, and siblings, or by searching nodes using XPath, XQuery, or a similar option. Documents are also used to manipulate and change data, then save it back into the document itself. You would use an XmlDocument in most cases for general XML data access and manipulation. The only reason to turn to the XmlDataDocument is if you need to sync your XML data with a DataSet object; then it's definitely the best way to go.

    Readers, on the other hand, are used for read-only, forward-only access to the data. Because of their limited set of operations, they can boost performance over documents when only those particular features are needed. The XmlValidatingReader was once recommended for reading XML data that needed validation; however, as of .NET 2.0 it is recommended that you use the XmlReader's Create method and pass the appropriate XmlReaderSettings values such as ProcessInlineSchema or ProcessSchemaLocation. Likewise, it is recommended that you use XmlReader instead of XmlTextReader, as the XmlTextReader makes a lightweight parser for verifying well-formedness, but that is all. (The XmlTextReader has not been obsoleted.)

    While you would continue using the standard XmlReaders and XmlDocuments to support legacy code, moving forward you'd want to consider LINQ to XML. LINQ to XML gives you a more robust and maintainable way to load XML documents into memory and then query those objects. A very easy sample loads all of the Customers elements from an XML document named Customers.xml into a variable named custs. We can then easily foreach through the custs variable or bind it to UI components.
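The recommended replacement for XmlValidatingReader might look like this sketch (the file path and handling are assumptions):

```csharp
using System;
using System.Xml;
using System.Xml.Schema;

class XmlReaderDemo
{
    static void ReadWithValidation(string path)
    {
        // From .NET 2.0 on, XmlReader.Create + XmlReaderSettings replaces
        // the XmlValidatingReader for schema-validated reading.
        var settings = new XmlReaderSettings
        {
            ValidationType = ValidationType.Schema,
            ValidationFlags = XmlSchemaValidationFlags.ProcessInlineSchema |
                              XmlSchemaValidationFlags.ProcessSchemaLocation
        };
        settings.ValidationEventHandler +=
            (sender, e) => Console.WriteLine("Validation: " + e.Message);

        using (XmlReader reader = XmlReader.Create(path, settings))
        {
            while (reader.Read())           // forward-only, read-only pass
            {
                if (reader.NodeType == XmlNodeType.Element)
                    Console.WriteLine(reader.Name);
            }
        }
    }
}
```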
    1. Data Access Best Practices
       Gieno Miao
    2. Session Objectives / Agenda
       - Connection Optimizations
       - Best Practices on DataSets & DataViews
       - Best Practices on DataReaders
       - DataSets vs. DataReaders
       - Using Command Objects Efficiently
       - Choosing the Right Command Execution
       - Transactions
       - Database Level Optimizations
       - Security Best Practices
       - Best Ways to Access XML Data
    3. 1. Connection Optimizations
       - Open Late & Close Early
       - Use the Correct Provider: SqlClient, OleDb, ODBC, Others
       - Take Advantage of Connection Pooling
       - Use Same Credentials in Connection Strings
       - Design connections for groups/roles, not users
       - Be aware of objects that manage connections
    4. 2. Best Practices when using DataSets & DataViews
       - DataSets
       - Using Strongly Typed DataSets
       - Filling Best Practices
       - Getting Data from Multiple Sources: Use DataSets
       - Most Efficient Ways to Search Through DataSets
       - Considerations for using DataViews

       // Untyped DataSet
       string empName = employeesData.Tables["Employees"].Rows[0]["EmployeeName"].ToString();
       // Strongly typed DataSet
       string empName = employeesData.Employees[0].EmployeeName;
    5. 3. Best Practices when using DataReaders
       - DataReaders
       - Row Level Optimizations
       - Speed/Efficiency Best Practices
       - When to Close DataReaders
       - What to do when cancelling data reads

       Thread myThread2 = new Thread(new ThreadStart(Thread_Cancel));
       myThread2.Start();
       myThread2.Join();
       reader.Read();
       readerLabel.Text = reader.FieldCount.ToString();
       reader.Close();

       public static void Thread_Cancel() {
           Command.Cancel();
       }
    6. 4. DataSets vs. DataReaders
       - When to use DataSets vs. DataReaders
       - Flexibility vs. Speed
       - Paging Through Data
       - Use ROW_NUMBER() in SQL 2005
    7. 5. Using Command Objects Efficiently
       - Query Performance Best Practice Techniques
       - CommandType
       - Parameter Data Types
       - Using Add Method (with full overloads) instead of AddWithValue

       // Preferred
       cmd.Parameters.Add("@EmployeeID", SqlDbType.Int, 4).Value = empId;
       // Don't use
       command.Parameters.AddWithValue("@EmployeeId", empId);
       // Deprecated
       cmd.Parameters.Add("@EmployeeID", empId);
    8. 5. Using Command Objects Efficiently
       - Command Object vs. CommandBuilder
       - Choosing the Right Execution Method
       - When to Use ExecuteNonQuery
       - When to Use ExecuteScalar
       - When to Use ExecuteReader
    9. 6. Transactions
       - Best Way to Design Transactions
       - Short and Sweet
       - As Close to DB as Possible
       - Use Read Committed Isolation Level for most Apps (SQL Default/ADO.NET Default)
    10. 7. Database Level Best Practices
        - Storing Data Correctly
        - Correct Data Types and Lengths
        - Correct Defaults and Constraints
        - Using Primary and Foreign Keys
        - Avoiding Triggers for Referential Integrity
        - Benefits of Using Stored Procedures: Performance, Security, Maintenance
    11. 8. Database Level Best Practices
        - Indexes
        - How Indexes Optimize Data Access
        - What Columns Should be Indexed?
        - Writing the Most Efficient Queries
    12. 9. Security Best Practices
        - Securing via Stored Procedures (and objects)
        - Storing Connection Strings Securely
        - Using Low Privilege Accounts
        - Why to Avoid Using System Admin Accounts
    13. 10. XML Data Retrieval Best Practices
        - XMLDocument vs XMLDataDocument
        - XMLReader vs XMLTextReader vs XMLValidatingReader
        - Using LINQ to XML

        var custs = from c in XElement.Load("Customers.xml").Elements("Customers")
                    select c;
        foreach (var customer in custs) {
            Console.WriteLine(customer);
        }

        Dim cust As XElement = _
            New XElement("Customers", _
                From cust In Customers _
                Select New XElement("Customer", _
                    New XAttribute("Name", cust.Name), _
                    New XAttribute("TotalSales", cust.TotalSales)))
