Best Practices for Content Lifecycle Management with MS SharePoint

Speaker Notes
  • Now we come to Connecting…
  • Going back to the importance of information architecture, the main considerations include determining the business goals you have for SharePoint: don't deploy SharePoint just to deploy it, but ask what you are going to accomplish with it that you weren't getting before. Then decide on your structure and taxonomy. Finally, you always want to make SharePoint your own, and a good way to do this, and to keep it manageable, is to standardize branding with templates and master pages.

    Other considerations include ensuring accountability for your published content using workflows and approvals.
  • MLM –
    This diagram maps out data growth over time in a collaboration environment:
    Electronic data will continue to grow year over year
    Inactive data growth outpaces the growth of active, operational data
    Most of the data is actually inactive or stale data, which is the area between the two lines
    In SharePoint:
    Consider your work on a design document
    Through drafts and edits, multiple versions are created. For example, at up to 32 versions of a document, with each version 2.5 MB, the full version history takes up 80 MB.
    When the document is approved, the earlier versions are no longer needed and can be archived away
    Consider project sites
    Project sites bring groups of people together to work on related documents, task lists, discussions, etc.
    Within each org, hundreds of projects may be completed each year, but the entire project sites still reside in SharePoint

    As inactive data continues to grow, the resources required for current, active data are saturated by inactive data.
    Users experience diminishing service levels (i.e., performance degradation)
    Additional hardware, servers, and processing power may be needed
    As databases continue to grow, this also impacts the SLAs for backup and recovery windows currently in place.
  • There are several things to take into consideration when planning how much storage you will need to allocate to SharePoint content.
  • There are a few basic ways to manage storage growth in SharePoint. First things first, set site quotas and alerts; we always recommend a 10 GB limit with an 8 GB alert. This lets you stay on top of which sites are growing more quickly so you can plan future structure accordingly. Next, monitor growth trends: pay attention to how quickly your sites are growing, and don't forget to monitor the overall content DB size (a minimal PowerShell sketch of these steps follows this note).
    Finally, depending on growth, there's a very good chance you'll need to split content DBs if they get too big. Now, what is "too big"? We'll get to that in a minute, as there are several recommendations based on your concerns… First, though, we'll look at a couple of ways to control growth of content DBs…
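
    A minimal sketch of the quota and monitoring steps above, assuming the SharePoint 2010 Management Shell; the site URL is a placeholder, and the 10 GB/8 GB values follow the recommendation in this note:

        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

        # 10 GB hard quota with a warning alert at 8 GB
        Set-SPSite -Identity "http://portal/sites/teamsite" -MaxSize 10GB -WarningSize 8GB

        # Per-site storage, largest first; capture this on a schedule and diff it to spot growth trends
        Get-SPSite -Limit All |
            Select-Object Url, @{n="StorageMB"; e={[math]::Round($_.Usage.Storage / 1MB, 1)}} |
            Sort-Object StorageMB -Descending

        # Overall content database sizes
        Get-SPContentDatabase |
            Select-Object Name, @{n="SizeGB"; e={[math]::Round($_.DiskSizeRequired / 1GB, 1)}}
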
  • By understanding how a site "chooses" a content DB, you can actually assign a site a content DB as it's created. The link here describes the content DB selection process, but essentially a site, upon creation, attaches to the content DB with the greatest remaining availability. Other factors come into play as well, such as whether a content DB is online or offline, so one option is to take all content DBs offline except the one you'd like your newly created site to land in (in SharePoint 2010 you can also target the database directly, as in the sketch below).
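    A minimal sketch of targeting a database directly, assuming SharePoint 2010's New-SPSite cmdlet; the URL, owner, template, and database name are placeholders:

        # Pin a new site collection to a specific content database rather than
        # relying on the automatic "greatest availability" selection
        New-SPSite -Url "http://portal/sites/finance" `
                   -OwnerAlias "DOMAIN\jsmith" `
                   -Template "STS#0" `
                   -ContentDatabase "WSS_Content_Finance"
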
  • Now, to the question we had earlier: how big is "too big" for content DBs? As previously mentioned, this depends on your concerns. If you're most concerned about backup and recovery operations and you're using a database-based backup solution, you really shouldn't go over 100 GB, and even then you're pushing it, as you need to make sure you can recover your database in time to meet various SLAs. However, there are other tools out there, like the one AvePoint offers, that can leverage snapshot technology to greatly enhance the speed of backups and restores. This may also depend on your hardware. If performance is your main concern, you might be able to let the size grow a fair amount larger. The thing with performance, though, is that it depends not just on database size, but also on the number and size of objects, how heavily nested your site structure is, and, once again, hardware. Obviously the goal for storage cost is to keep it as small as possible while allowing for proper backup and recovery windows and optimal performance. Those are the real considerations, but we still have the question: what is "too big"? That answer has to come from your own organization's policies and plans.

    Next we're going to take a look at what's actually being stored in content DBs and evaluate ways to further optimize storage beyond the few native tools already mentioned… which brings us to the subject of BLOBs.
  • Why are we concerned with BLOBs? If the size of content DBs affects critical aspects of SharePoint (backup and recovery windows, performance, cost), then let's evaluate what the content in the databases consists of, and what's necessary and what's not.
  • Here's a look at a very basic SharePoint architecture. We have the front end with the object model, and the SharePoint content (BLOBs and metadata) stored in SQL.
  • We've already talked about how the size of the databases affects these key areas…
  • MLM – (This data growth diagram appears again; the notes are the same as for the earlier data growth slide.)
  • Now we come to Connecting…
  • To optimize storage, we can essentially look at two major concepts. We already discussed how BLOBs don't contribute to SQL queries, so there's essentially no need to keep them in the database. The first option, then, is to move the BLOBs out of the database; the way to do this is to leverage BLOB services APIs. The second option is to archive content, for which there are currently no native tools, so you'd have to look at a 3rd party.
  • John
    Aside from the EBS API, there is no way to move BLOB content off of SQL; providers are not offered by SharePoint, so this requires 3rd party tools
  • Along the lines of the first option, moving BLOBs out of the database, we can extend content.
  • There are two available APIs for extending content out of SQL. The first is SharePoint specific: the EBS API. The second is SQL specific: the RBS API. So which one do we use?
  • Here’s a quick overview of the EBS and RBS APIs
  • So if we leverage EBS… this is how the SharePoint architecture would change. The provider sits with the SP object model, and gives SharePoint tokens or stubs so it knows how to retrieve the content and maintains the context of the content. The metadata is stored in SQL, BLOBs go to a storage location of your choosing. This is completely transparent to the end user.
  • However, there are some things to note about EBS. As it is implemented by SharePoint, there’s only 1 provider allowed per SharePoint farm. There’s a chance that you could run into orphaned BLOBs, and then there are also compliance concerns.
  • So now let's take a look at RBS… As RBS is SQL specific, it can be used across applications that leverage SQL, not just SharePoint, giving you more of an enterprise-wide storage architecture versus EBS. Here's how enabling an RBS provider would affect your SharePoint storage architecture: you can have an RBS provider per database. Because the provider sits below SharePoint, it has no SharePoint context and no ability to manage the object.
  • As with EBS, there are some things to note with RBS… one of the main benefits is the ability to manage RBS via PowerShell, which Microsoft is strongly encouraging over STSADM; I believe they're eventually doing away with STSADM.
  • Natively with SharePoint 2010, Microsoft offers an RBS provider, FILESTREAM. However, it is not recommended for very large databases in production. To leverage this feature you'd have to complete the four steps on the slide (enable the FILESTREAM provider on SQL, provision the data store, install RBS on all SharePoint Web and App servers, then enable RBS), so you would need admin privileges on both SQL and Windows Server. Note that the storage location is the file system only! (A minimal sketch follows this note.)
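    A minimal sketch of the SharePoint-side steps, assuming FILESTREAM has already been enabled and provisioned on SQL and the RBS client installed on each SharePoint server; the database name is a placeholder (the TechNet article in the resources covers the full procedure):

        $cdb  = Get-SPContentDatabase "WSS_Content"
        $rbss = $cdb.RemoteBlobStorageSettings

        $rbss.Installed()                        # verify the provider is installed for this DB
        $rbss.Enable()                           # enable RBS on this content database
        $rbss.SetActiveProviderName($rbss.GetProviderNames()[0])
        $rbss.MinimumBlobStorageSize = 1048576   # optionally externalize only BLOBs larger than 1 MB
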
  • So again, looking at the two BLOB services available, which is better? With EBS we have tighter application integration, allowing for more rules and settings to determine which BLOBs are offloaded, and then you have RBS…
  • …which is simpler, and allows for a more unified storage architecture across applications; it's not SharePoint specific…
  • At this point, it looks like RBS is the way to go. As EBS is SharePoint specific, there's been no guarantee that future releases will continue to support the API. With RBS, we know Microsoft has bought into the API, as it's telling customers it will provide a PowerShell solution to migrate from EBS to RBS; hence, any solution leveraging these APIs should account for this.
  • So once we leverage BLOB services APIs to offload BLOBs out of SQL, these are the impacts we'll see relative to our previous concerns regarding backup and recovery, performance, and storage.
  • If you understand this, you understand EBS as you talk to customers, and how it integrates with other systems:
    Examples – search providers like Google, Coveo, and FAST Search
    - Workflow providers like Nintex, K2, etc.
  • We like the concept of incorporating cloud storage as a sort of "overdraft protection." Typically, cloud storage is cheap to set up but more expensive to use. Compare this with overdraft protection for a checking account: it's typically free to set up, but if you overdraw and your bank covers your charges, they'll slap you with a fee that's often not tiny. So think of this scenario: I've got my content DB limit set at 100 GB. I get an alert at 80 GB, so I start planning how to rearrange the site structure and split the DB so I won't hit the limit. But what if, in the meantime, I hit my limit? Nothing happens at 100 GB, but as I go over, if I still haven't had time to split my DB, cloud storage is leveraged, allowing all operations to continue; nothing fails, and you can continue to split up your DB as usual. OK, so that's a scenario of leveraging BLOB services APIs to extend content… let's look at some other options for storing BLOBs out of SQL.
  • Now we come to Connecting…
  • John
  • So how do we go about not creating the BLOB issue in the first place? And why would you want to connect something to SharePoint instead of migrating it in and extending it later? Several considerations come into play when deciding whether or not to migrate data into SharePoint. First, the value add of the legacy system: are you getting some functionality or capability with your legacy system that you CAN'T get with SharePoint? Then maintenance costs: the hardware that goes with the legacy system, and the cost not only of licensing and support but also of personnel on staff to maintain the system. Then you have to look at migration costs: will the process cause major interruptions to business processes? What tools will you need to migrate? And then there's training your users on how to use the new system.
  • There are multiple architectures for geographically dispersed farms, but at a high level we can narrow them down to the most-used configurations:
  • Tlanni: Scratch the cube

    Different user experiences
  • Centralized architecture, but we allow local content and sites in addition to the main farm;
    It will increase infrastructure complexity and the governance process;

    Mobile users = PAIN
  • A fully distributed global architecture will provide quick access to local SharePoint content with a good user experience;
    You want to replicate only the relevant/global content;
    You also want to handle special remote locations (like the Alaska oil rig in this slide) via local infrastructure and replication;
    Requires a 3rd party tool
  • Another flavor of a distributed architecture, with centralized backup and cloud storage;
    We can back up locally or to an alternate site;
    We can back up to the cloud;
    All available options build high redundancy for our SharePoint farms;
  • Now for option 2, archiving. Option 1, not storing BLOBs in the database, is the most basic, or "good," scenario out of good/better/best for SharePoint storage optimization, but you still keep that content on tier 1 storage. To put complete lifecycle management onto content, you need to add archiving to offload content to lower-tiered storage.
  • MLM – (This data growth diagram appears again; the notes are the same as for the earlier data growth slide.)
  • Natively, SharePoint offers the Records Center. If you're leveraging the Records Center, be aware that it is essentially just another location, still in SQL, to store content. The best practice here is to put the Records Center on its own database and leverage RBS to offload content.

    Now, archiving… natively, I mentioned there are no tools, but in reality you could essentially just create backup files of the content you want to "archive" and then delete it out of SharePoint (a minimal sketch of that approach follows). Or you can look at 3rd parties, like AvePoint's DocAve Archiver, to build business rules into your archival plans.
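    A minimal "backup and delete" sketch of that native approach, assuming Export-SPWeb in the SharePoint 2010 Management Shell; the URL and path are placeholders, and the export package should be verified before anything is deleted:

        # Export the stale site with version history and security intact
        Export-SPWeb -Identity "http://portal/sites/projects/oldproject" `
                     -Path "\\archive\sharepoint\oldproject.cmp" `
                     -IncludeVersions All -IncludeUserSecurity

        # Once the package is verified, remove the stale site from SharePoint
        Remove-SPWeb -Identity "http://portal/sites/projects/oldproject" -Confirm:$false
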
  • If I had to take away four main points from today, these would be them.
  • I’d also like to briefly introduce some applicable AvePoint tools to help you better monitor and manage growth of your Content DBs.
  • Now we come to Connecting…
  • Thank you for participating, and don't forget to fill in the evaluation;
    Now we will open the floor for Q&A;
  • Here are additional resources: whitepapers on how to set up FILESTREAM, a whitepaper on storage optimization sponsored by AvePoint, etc.
  • 1. Implement the ISystemUtility, IConnectionManager, and ITypeReflector interfaces (only ISystemUtility is mandatory). Potentially, one can also override the default connection manager and EntityInstance interfaces. In addition, implementing IAdministrableSystem provides Administration UI property management support, and implementing ISystemPropertyValidator provides import-time validation of LobSystem properties (not on the Microsoft Office client).
    2. Compile the code in step 1 into a DLL and place it in the global assembly cache on the server and clients.
    3. Author the model XML for the custom data source (SharePoint Designer 2010 does not support a model authoring experience for custom connectors).

    At run time, when a user executes a BDC operation, this invokes the Execute method in the ISystemUtility class. Thus, the responsibility of executing the actual back-end method is given to the Execute method (as implemented by the custom connector). A minimal deployment sketch for steps 2 and 3 follows.
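
    A minimal deployment sketch for steps 2 and 3, as a PowerShell session on the server; the gacutil path varies by SDK version, and the DLL, model file, and site URL are placeholders:

        # Step 2: place the compiled connector assembly in the global assembly cache
        & "$env:ProgramFiles\Microsoft SDKs\Windows\v7.0\Bin\gacutil.exe" /i "C:\Deploy\Contoso.CustomConnector.dll"

        # Step 3: import the hand-authored model XML into the BDC service
        Import-SPBusinessDataCatalogModel -Path "C:\Deploy\ContosoModel.bdcm" `
                                          -ServiceContext "http://portal"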

  • Transcript

    • 1. SharePoint Content Lifecycle Management Presented by: Mary Leigh Mackie
    • 2. Content Lifecycle Management Organization Workflow Creation Repository Versioning Publishing Archives Sites Workflow Office Doc Library Version Publishing Site Record Center SharePoint
    • 3. Agenda Content Organization & Storage Storage Optimization Content Access Archiving
    • 4. Content Organization & Storage
    • 5. Information Architecture • Accountability of published content using workflows or approvals • Managing search scopes, security trimming, federation • Isolate intranet content from extranets • Testing for consistency and performance • Training your site/content owners and end users http://technet.microsoft.com/en-us/library/cc262873.aspx#section2 • Determine the business goals • What will your site structure and taxonomy look like? • Standardize branding with templates and master pages Other considerations Source: Governance Resource Centre on Microsoft TechNet
    • 6. [Chart: Storage in a Content Repository; increase in % of inactive data over time (x-axis: time in years, 0-4; y-axis: data in SQL; series: Active Data, Total Data)]
    • 7. Planning for SharePoint Storage • Recycle bin • Versioning • Search and index information • Growth Good rule of thumb for initial planning is: 3.5 x file system
    • 8. Basic Storage Management Methods • Set site quotas and alerts! – 10 GB quota, 8 GB alert is my favorite • Monitor growth trends – Sites: slow over time or large jump in size? – Overall content DB size • Split Content DBs if they get “too big”
    • 9. How SharePoint “chooses” a Content DB for a site • Highest remaining allotment rule – Content DB 1: 100 sites max – Content DB 2: 100 sites max – Content DB 1: 100 sites max – Content DB 2: 200 sites max SharePoint Site Content DB selection process: http://blog.jesskim.com/kb/293
    • 10. Optimal Content DB Sizing • Backup & Recovery operations (<50-100 GB) • Performance (<500 GB… nervous at 300 GB) – # of objects – size of objects – Hardware (servers and storage) • Storage Cost (as small as possible!) So what is too big?
    • 11. BLOBs-- What’s the Issue? • BLOBs = Binary Large Objects • SharePoint Content = BLOB + Metadata • Content DB = database of … BLOBs + Metadata • SQL DB storage needs high IOPS (input/output operations per second) and low latency • High IOPS + low latency storage = $$$$ • BLOBs do not participate in query operations, so no real reason to have BLOBs in a DB • DB full of BLOBs = wasted $$$
    • 12. SharePoint WFE SharePoint Object Model SQL Server BLOBs & Metadata Content DB Config DB Default SharePoint Storage
    • 13. Database Size Implications BLOBs increase DB size, creating issues with: • Backup & Recovery operations • Performance • Storage Costs
    • 14. [Chart: Issues with BLOBs get much worse over time; increase in % of inactive data over time. Inactive sites, documents, lists, and libraries take up SQL storage, hindering performance (x-axis: time in years, 0-4; y-axis: data in SQL; series: Active Data, Total Data)]
    • 15. Storage Optimization
    • 16. SharePoint Storage Optimization Methods • Move the BLOBs out of the database • Archive content
    • 17. Planning for Data Use & Growth What does SharePoint 2010 offer OOTB? • No native archiving tools • EBS extended to include RBS – Available only in SQL Server 2008 SP2 – Only accessible via API • BCS (BDC in 2007) extended to allow for easier connectivity with legacy data systems
    • 18. Storage Optimization Extending BLOBs out of the database
    • 19. Available APIs for Extending SQL Remote BLOB Service (RBS) SharePoint External BLOB Service (EBS)
    • 20. EBS/RBS Overview BLOB Services to change BLOB storage locations • EBS = External BLOB Service – SharePoint 2007 SP1+ API • RBS = Remote BLOB Service – SQL Server 2008 R2 Feature Pack API, with SharePoint 2010 support • Both are interface specifications – Need a provider to actually work • Cannot have both providers
    • 21. EBS • EBS provider can take ownership of the BLOB • Provider gives SharePoint a token or a stub so SharePoint knows how to retrieve the object (context) • Transparent to the end-user SharePoint WFE EBS Provider BLOB Metadata SharePoint Object Model SQL Server Content DB Config DB BLOB Store
    • 22. EBS • Implemented by SharePoint • Only 1 EBS Provider per SharePoint farm • Orphaned BLOBs- no direct method to compare BLOB store and Content DB • Compliance- what if I don’t want to allow SharePoint to delete the object?
    • 23. RBS • Not unique to SharePoint, available to any application • A Provider Library can be associated with each database SharePoint WFE SharePoint Object Model Content DB X Content DB Y Relational Access Provider Library X Provider Library Y BLOB Store RBS Client Library BLOB Store BLOB Metadata BLOB & Metadata SQL Server
    • 24. RBS • Implemented by SQL • Only 1 RBS Provider per Content DB • Orphaned BLOBs much less of an issue • Can lock down operations, from a unified storage perspective • Can be managed via Powershell
    • 25. RBS: SQL Server 2008 Feature Pack API Handled natively by database Default Provider: FILESTREAM 1. Enable FILESTREAM provider on SQL 2. Provision data store and set storage location 3. Install RBS on all SP Web and App servers 4. Enable RBS
    • 26. RBS versus SQL Filestream • Filestream storage must be file system locally attached to the SQL server • RBS is an API set that allows storage on external stores - physically separate machines that may be running custom storage code, for instance EMC Centera
    • 27. EBS Tighter integration with application, allows for more rules and settings EBS versus RBS, which is better?
    • 28. EBS Tighter integration with application, allows for more rules and settings RBS Simpler, allows unified storage architecture across applications http://www.codeplex.com/sqlrbs EBS versus RBS, which is better?
    • 29. It looks like RBS has won… SQL Remote BLOB Service (RBS) SharePoint External BLOB Service (EBS) SharePoint 2007 SharePoint 2010 Future SharePoint Release (SPS 5?) SQL Server 2005 Future SQL Releases SQL Server 2008 Microsoft will provide a PowerShell solution to migrate from EBS to RBS
    • 30. Benefits of Extending BLOBs • Backup & Recovery operations – Databases are 60-80% smaller – Need a method to back up BLOBs synchronously • Performance – Databases are 60-80% smaller – Performance improvement increases as the file/BLOB size increases. Microsoft research indicates: • <256kb, SQL better • 256kb to 1mb, SQL and file system comparable • >1mb, file system better • Storage Cost – “Not as expensive” storage – Archiving still needed for true savings
    • 31. RBS is Completely Seamless for Users • Users can access contents by: – Clicking and downloading directly through SharePoint – Opening the file using their Office client – Referencing the URL – Searching for contents natively in SharePoint • Users can interact with contents by: – Modifying metadata and content types – Modifying permissions – Applying alerts – Using workflows or publishing templates – Using site Quotas and Locks
    • 32. Cloud Storage Use Case SharePoint “Overdraft Protection” [Diagram: DB alert set at 80 GB, limit at 100 GB; at 80 GB an alert is sent to the admin, no action taken until 100 GB, beyond which cloud storage absorbs the overflow] • Could be any storage • Cloud is ideal “insurance”--cheap to setup, expensive to use
    • 33. Content Access
    • 34.  Where is it in its lifecycle?  Do you want to expose it in SharePoint? • BCS is intended for connecting LOBs (Databases, Windows Communication Foundation (WCF) or Web services, .NET connectivity assemblies, Custom data sources) into SharePoint, without migrating the data • No OOTB solutions for getting content out of users' desktops, file shares, or other ECM systems Connecting Legacy Data SharePoint 2010 Support
    • 35. Options for Exposing Legacy Data (File Shares, Notes, Exchange Public Folders, eRoom Documentum, LiveLink… etc?) • Migrate – Manually download/upload, losing author, time, security, history, other metadata – 3rd Party Tool • Connect – BCS Mechanisms – Most major ECM Vendors – AvePoint’s DocAve Connector EBS/RBS APIs preferred
    • 36. Which option is better? Connecting vs. Migrating – Value add of legacy system – Maintenance costs • Hardware • Licensing and support • Knowledge – Migration costs • Migration process • Tools • Training
    • 37. Migrating vs. Connecting Migrating • Data is available in SharePoint • Data is moved into SharePoint • SharePoint replaced legacy system • Burden of storage is on SharePoint • Changes saved in SharePoint • Migrate and decommission Connecting • Data is available through SharePoint • Data is left in source (legacy) system • Give legacy system second life by increasing its value • Burden of storage is on legacy system • Changes propagate to source • Connect and forget
    • 38. Connect to SharePoint: BCS Mechanisms • .NET Assembly Connector – Provided with Microsoft Business Connectivity Services (BCS) – Each .NET connectivity assembly is specific to an external content type – Provides no Administration interface integration • Custom Connector – Connect to external systems not directly supported by Business Connectivity Services – Agnostic of external content types that connect to a kind of external system (all databases or all Web services) – Provides an Administration UI integration http://msdn.microsoft.com/en-us/library/ee554911.aspx
    • 39. Which BCS Mechanism Should I Use? • The .NET Assembly Connector approach is recommended if the external system is static. Otherwise, for every change in the back end, you must make changes to the .NET connectivity assembly DLL. This, in turn, requires recompilation and redeployment of the assembly and the models. • Custom connector approach is recommended if the back-end interfaces frequently change. By using this approach, only changes to the model are required. http://msdn.microsoft.com/en-us/library/ee554911.aspx
    • 40. Connecting: 3rd Parties (File Shares, Notes, Exchange Public Folders, eRoom Documentum, LiveLink… etc?) • Most major ECM Vendors • Other 3rd Parties EBS/RBS APIs preferred
    • 41. Options for Exposing Legacy Data: Migration How much content needs to be migrated? How long will this take? How much downtime can you tolerate? How much customization do you have? Is this a “big bang” migration or can you migrate in a scaled/phased approach? Can you accept loss of metadata and security settings? Can you engage other members to assist in the process and arrange for proper training? What minimal requirements do you have for this migration? Can you properly map non-SharePoint related assets into SharePoint? Questions to ask yourself… etc…
    • 42. SharePoint Migration Strategies: User-Powered Manual Migration • SharePoint Administrator installs the new version on separate hardware or a separate farm and allows Power Users to manually recreate content • Best For: Environments retaining ample amounts of outdated information; Moving to new hardware or new architecture • Pros: Puts Power Users in charge to recreate and manage sites; Migrate relevant content to avoid import of old data; Completely retains old environment; Virtually no downtime – requires user switch to new environment • Cons: Manual process, very resource intensive; Requires willing participants and intensive training; Requires additional steps to retain original URLs; Requires new server farm and additional SQL Server storage space for new content
    • 43. SharePoint Migration Strategies: Migration via 3rd Party Tool • SharePoint Administrator installs the new version on separate hardware or a separate farm, and migrates content and users using a 3rd Party Tool • Best For: Any size environment, from single server environments to large, distributed farms • Pros: Granular migration; Retains all metadata; Virtually no downtime; Applicable to non-SharePoint repositories • Cons: Costs associated with purchasing additional software; Requires new server farm
    • 44. What About Access for Geo-Dispersed Users? • Centralized environment, accessed globally • Centralized environment plus local content (sites, etc) • Fully distributed, replicated architecture accessed locally – Centralized or cloud storage backup for high redundancy
    • 45. • Out of the box SharePoint • Lowest complexity, least costly • Varied User Experience • Evaluate bandwidth and usage patterns Global Architectures Single Centralized Environment
    • 46. • Local services and sites, in addition to main farm • Increased infrastructure complexity • Governance can be an issue • Relocating teams/users is a pain Global Architectures Centralized plus local content
    • 47. • Fast local access to SharePoint content • Replicate only what is relevant • Ability to handle remote locations Global Architectures Fully distributed
    • 48. • Backup locally or to alternative sites • Consider cloud storage • Can be used for high redundancy Cloud Storage Global Architectures Distributed w/ Centralized Backup
    • 49. Archiving Adding Lifecycle Management to the picture
    • 50. [Chart: Lifecycle of a Typical Item; access/SLA requirements over time, high at initial content creation, then moderate content retrieval, declining toward low]
    • 51. [Chart: Storage in a Content Repository; increase in % of inactive data over time (x-axis: time in years, 0-4; y-axis: data in SQL; series: Active Data, Total Data)]
    • 52. Data Lifecycle Management • Records Center – Another SharePoint site – Higher % inactive content – Consider separate Content DB, with an RBS provider implemented for this DB • Archiving – Backup and delete – Workflow (Expirations) – 3rd Party tools solutions
    • 53. 3rd Party Archiving Tools • What rules are available? – Last modified time – Author – Versions • What scope can I apply rules to? (farm to item) • Does it use RBS/EBS APIs? • Does it integrate with other infrastructure management tools? (backup, replication, etc.)
    • 54. Summary 1. Think carefully about organization and storage: consider where content will be stored and how it will grow over time 2. Leverage BLOB Services APIs to optimize SharePoint storage: EBS/RBS APIs can be leveraged to store BLOBs outside of SQL with little impact on end-users, to save $$ and optimize storage 3. Content access is key: develop strategies to handle access to legacy data and content access from remote locations 4. Archive content: plan for long-term growth and optimal system performance
    • 55. AvePoint – Who we are Global Leader in SharePoint Infrastructure Management Backup & Recovery, Administration, Replication, Migration, Compliance, Storage Optimization • Founded in 2001 • Headquartered in Jersey City, NJ, with global offices in: – USA: Chicago, San Jose, Houston, Washington D.C., Redmond – International: UK, Germany, Australia, Japan, Singapore, Canada • R&D team of 350+  Largest SharePoint team outside of Microsoft • Winner of 2008 Best of Tech Ed Award for Best SharePoint Product • Exclusive OEM relationships with IBM and NetApp • A Depth Managed Microsoft Gold Certified ISV Partner – MTC Alliance Member; Notes Transition Partner; Office TAP 14 Member; BPOS TAP Member
    • 56. Applicable Features of AvePoint Tools • DocAve Report Center – Storage growth and trending – Server performance and monitoring • DocAve Administrator – Manage site quotas and alerts – Move sites between Content DBs • DocAve Replicator – Fully mapped, live or scheduled replication of all SharePoint contents
    • 57. Applicable Features of AvePoint Tools Connecting • DocAve Connectors – Leverage EBS/RBS APIs to expose File Share Content as fully functional SharePoint object – Content works with Office Applications, alerts, workflows, 3rd party application, etc… Migrating • DocAve Migrators for SharePoint – From previous versions of SharePoint, File Shares, Exchange Public Folders, Lotus Notes, Documentum eRoom, EMC Documentum, Livelink, Oracle/Stellant, Vignette – Offers granular selection of content, full graphical user/domain/properties mapping • DocAve Content Manager – Consolidates existing SharePoint instances (other sites or farms that are the same SharePoint version) into a single SharePoint instance, while maintaining all metadata – Offers granular selection of content, full graphical user/domain/properties mapping
    • 58. Demo?
    • 59. Thank You! Q&A
    • 60. Resources - www.AvePoint.com Visit us: http://www.AvePoint.com Email us: sales@avepoint.com maryleigh.mackie@avepoint.com Follow us: @AvePoint_Inc @mlmackie Download a FREE, fully-enabled 30 Day trial of DocAve at www.avepoint.com/download
    • 61. Additional Resources • Storage Optimization for SharePoint Whitepaper: http://www.avepoint.com/assets/pdf/sharepoint_whitepapers/Storage_Optimization_Technical_Advisor.pdf • Configure Content Database for RBS: http://technet.microsoft.com/en-us/library/ee748641(office.14).aspx • FILESTREAM RBS: http://blogs.msdn.com/opal/archive/2009/12/07/sharepoint-2010-beta-with-filestream-rbs-provider.aspx • Whitepaper about FILESTREAM: http://msdn.microsoft.com/en-us/library/cc949109.aspx
    • 62. Backup Slides
    • 63. SharePoint Migration Strategies Engage Power Users In Content Migration: • Create a dedicated Power Users group - have a Power Users SharePoint Site so that all the power users can share best practices and lessons learned with one another • Provide extensive training on SharePoint to all Power Users • Request Power Users to Migrate Content – they should be empowered and proactive about content migration and administration • Request Power Users to train new SharePoint users to properly use their specific sites – provide training materials, videos, etc. to new users to lower TCO for IT training TIP: A Power User should be very familiar with SharePoint and have either Full Control or Design permissions (or their equivalent) for the site they will manage. (Restrict Site Deletion Permission)
    • 64. Connecting to SharePoint: .NET Assembly • Write code as Microsoft .NET Framework classes and compile the classes into a primary DLL and multiple dependent DLLs. • Publish the DLLs into the Business Data Connectivity (BDC) service database. • Use Microsoft SharePoint Designer to discover the .NET Connectivity Assembly and create a model. • Map each entity to a class in the DLL, and map each BDC operation in that entity to a method inside that class.  At run time, when a user executes a BDC operation, the corresponding method in the primary DLL is executed. http://msdn.microsoft.com/en-us/library/ee554911.aspx
    • 65. Connecting to SharePoint: Custom • Implement ISystemUtility, IConnectionManager, and ITypeReflector interfaces. • Implementing IAdministrableSystem provides Administration UI property management support and implementing ISystemPropertyValidator provides import time validation of LobSystem properties (not on the Microsoft Office client). • Compile the code into a DLL and place it in the global assembly cache (GAC) on the server and clients. • Author the model XML for the custom data source (SharePoint Designer 2010 does not support a model authoring experience for custom connectors).  At run time when a user executes a BDC operation, this invokes the Execute method in the ISystemUtility class. The responsibility of executing the back-end method is given to the Execute method. http://msdn.microsoft.com/en-us/library/ee554911.aspx
