
Infowise Solutions Ltd, SharePoint web parts and solutions
Category: Administration
 

Microsoft DPM - The best backup/restore solution for SharePoint 2010

We’ve all been in this situation: after a long SharePoint portal implementation, we start scratching our heads over the best way to back up the farm, drowning in the multitude of options – SharePoint native backup, stsadm, or third-party products such as AvePoint or EMC.
Now Microsoft makes our life much easier by announcing Data Protection Manager (DPM) as its main backup/restore application for SharePoint 2010.
You might think it’s just another one of the many methods to protect SharePoint 2010, but DPM hides some pretty cool features under its hood that make it the #1 backup/restore application for SharePoint 2010. Here are some of them:

  • Application Awareness – probably the most important feature of DPM: it knows it is backing up SharePoint, not just another set of databases. DPM works hand in hand with SharePoint, so the backed-up data is more consistent; DPM makes sure that all transactions occurring during the backup are taken into account.
  • Item Level Recovery – DPM can recover a single object from a full farm backup, across a wide variety of SharePoint objects: documents, lists, document libraries, sites and site collections.
  • Protects SharePoint Search – in a large farm the search index can get rather big. Backing up search is crucial in these situations, because after a disaster it could take a very long time, even days, to index the farm again.
  • Better Performance – DPM runs a very light agent that barely affects the production farm, with very low I/O, as it scans memory blocks to track changes in real time.
  • Self-Healing Backup Process – if one of the backed-up objects, such as a server or database, goes offline during the backup, DPM proceeds with the backup as planned and tries to self-heal by adding the offline object back during the next scheduled backup.
  • Reduce Storage Costs – DPM uses disks as the main backup storage and tapes for archiving. This makes sense, as disks are cheaper and more common than tapes, and restoring data from them is quick and easy.
    Furthermore, DPM minimizes the amount of backed-up data by backing up only the changed blocks, so every backup after the first full backup is approximately 5% (the average daily data change on a SharePoint farm) of the whole farm’s data. That saves about 75% of storage space compared with a differential SharePoint native backup over a one-month period.
  • Search for Content (across Recovery Points) – DPM lets you search the backed-up data. This is very helpful when performing an item-level recovery: DPM search makes it easy to locate the item to recover when time is critical.
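The storage-savings claim above can be sketched with a back-of-the-envelope model. This is purely illustrative: the 5% daily change rate comes from the article, but the weekly-full schedule assumed for the native differential backup is a hypothetical scenario, not a measured configuration.

```python
def backup_storage_over_month(farm_size_gb=1000.0, daily_change=0.05, days=28):
    """Illustrative model (assumptions, not measured data):
    - DPM: one full backup, then block-level incrementals of ~5% of the farm per day.
    - Native differential: a full backup every 7 days, plus daily differentials
      that accumulate all changes since the last full backup."""
    # DPM: full on day 1, then one small incremental every following day
    dpm = farm_size_gb + (days - 1) * daily_change * farm_size_gb

    native = 0.0
    days_since_full = 0
    for day in range(days):
        if day % 7 == 0:
            native += farm_size_gb                 # weekly full backup
            days_since_full = 0
        else:
            days_since_full += 1
            # each differential holds everything changed since the last full
            native += days_since_full * daily_change * farm_size_gb
    return dpm, native

dpm, native = backup_storage_over_month()
savings = 1 - dpm / native
print(f"DPM: {dpm:.0f} GB, native differential: {native:.0f} GB, savings: {savings:.0%}")
```

Under these assumptions the model lands in the same ballpark as the article's "about 75%" figure; the exact number depends heavily on the differential schedule you assume.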

 

DPM is a great SharePoint 2010 backup/restore solution, and the fact that it comes from Microsoft is a big advantage over other solutions, as it integrates well with SharePoint throughout the backup/restore process.
It is also important to mention that DPM is Microsoft’s main backup/restore solution for other Microsoft products, such as Active Directory, Exchange, SQL Server, Virtual Server, file shares, System State and more. So it’s a great new backup/restore product family from Microsoft.

For more information – http://www.microsoft.com/dpm
Try it! There's also a 180-day evaluation download.

 

What’s New in SharePoint 2010 Capacity Planning

Originally posted on "http://www.sharepointjoel.com/"

Whether you are upgrading or deploying a new SharePoint 2010 farm, the capacity boundaries are among the most important things you need to understand for a successful deployment.
When designing a farm, people need to be educated about how best to scale. The links below point to a lot of Word documents, so it may take some effort to find the guidance you’re looking for, but there is actually far more information published at release than there was for the earlier version.
New TechNet documentation and whitepapers have just become available:
  • Capacity management and sizing for SharePoint Server 2010 – Kelley Vice (Thanks for gathering this, there’s guidance here that is applicable even to 2007)
  • SharePoint Server 2010 performance and capacity technical case studies (Includes one example of Collab and one of Publishing)
  • SharePoint Server 2010 performance and capacity test results and recommendations (various documents, including: Access Services, large lists max performance, Divisional Portal, InfoPath, PerformancePoint, Search, Web Application Services, Web Content Management, Workflow)



    • Estimate performance and capacity requirements for Access Services – Provides guidance on how using Access Services impacts topologies running SharePoint Server 2010.
    • Capacity Planning and Sizing for SharePoint Server 2010-based Divisional Portal – Provides guidance on performance and capacity planning for a SharePoint 2010-based divisional portal.
    • Designing Large Lists and Maximizing List Performance – Provides guidance on performance of large document libraries and lists. This document is specific to SharePoint Server 2010, although the throttles and limits that are discussed also apply to Microsoft SharePoint Foundation 2010.
    • SharePoint Server 2010 Capacity Management for InfoPath Solutions – Provides guidance on the footprint that usage of InfoPath Forms Services has on topologies running SharePoint Server 2010.
    • Estimate performance and capacity requirements for PerformancePoint Services – Provides guidance on the footprint that usage of PerformancePoint Services has on topologies running SharePoint Server 2010.
    • Estimate performance and capacity requirements for Search in SharePoint Server 2010 – Provides capacity planning information for different deployments of Search in SharePoint Server 2010, including small, medium, and large farms.
    • SharePoint Server 2010 Capacity Management for Web Content Management Deployments – Provides guidance on performance and capacity planning for a Web Content Management solution.
    • Word Automation Services 2010 Capacity Planning Guidance – Provides capacity planning guidance for Word Automation Services in SharePoint Server 2010.
    • Estimate Performance and Capacity Requirements for Workflow in SharePoint Server 2010 – Provides guidance on the footprint that usage of Workflow has on topologies running SharePoint Server 2010.
  • Databases That Support SharePoint 2010 Products
  • Topologies for SharePoint Server 2010
Note: I highly recommend digging into these docs to better understand what these various “limits” are. There is much more information on the new types of limits: there are now boundaries, thresholds, and supported limits. Many of these limits come down to testing. “The capacity planning information in this document provides guidelines for you to use in your planning. It is based on testing performed at Microsoft, on live properties. However, your results are likely to vary based on the equipment you use and the features and functionality you implement for your sites.” My personal recommendation is to be cautious. There are very few of these limits I’d recommend passing, or even getting close to passing.
I was interested in comparing how things have changed between the SharePoint 2007 and SharePoint 2010 capacity boundaries documents. Here are some of the differences worth noting. When it comes to capacity planning, more is the same than different, so don’t throw out everything you know about SharePoint because of what’s new; a lot of the best practices are unchanged. Here’s my list of SharePoint 2007 key capacity planning blogs and articles.
The biggest key changes (recommended maximums, SharePoint 2007 → SharePoint 2010):
  • Items per view: 2,000 → 5,000
  • Documents per library: 5 million → 10 million
  • Database size: 100 GB → 200 GB (up to 1 TB for certain workloads)
  • Simultaneous document editors: 1 (no multi-user editing of Word, Excel, PowerPoint) → 10 (maximum of 99)
  • Columns: 2,000 per document library, 4,096 per list (rowOrdinal) → new row wrapping (8,000 bytes)
  • Content databases per web application: 100 → 300
  • App pools per web server: 8 → 10
  • Indexed items (crawl count): 50 million per SSP → 100 million per Search service application
  • Site collections per web application: 50,000 → 500,000
Items per view
If there was one focus area, it was list scale. Not only is the list itself much more scalable and better designed, but the actual rendering of lists is also more flexible and manageable. I’ve discussed the list throttling limits, where you can control how much is returned in a view. In 2007 we had a recommended limit of 2,000, and that number became a key design principle. Now the default is 5,000, which is also the recommended maximum for a view, more than doubling the previous limit. I’ve seen examples where this was increased and the list rendering still performed much better than in 2007. … “Maximum number of list or library items that a database operation, such as a query, can process at one time, outside of the daily time window set by the administrator during which queries are unrestricted.”
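The throttling rule quoted above can be sketched as a small decision function. This is a hypothetical illustration of the logic, not SharePoint's actual implementation; the function name and parameters are invented for the example.

```python
DEFAULT_LIST_VIEW_THRESHOLD = 5000  # SharePoint 2010 default (and recommended max per view)

def check_query_allowed(items_in_query: int,
                        threshold: int = DEFAULT_LIST_VIEW_THRESHOLD,
                        in_daily_time_window: bool = False) -> bool:
    """Sketch of the rule: a database operation touching more items than the
    threshold is blocked, except during the administrator-defined daily time
    window, in which queries are unrestricted."""
    if in_daily_time_window:
        return True  # unrestricted during the admin-defined window
    return items_in_query <= threshold

print(check_query_allowed(2000))                              # under the threshold
print(check_query_allowed(12000))                             # throttled
print(check_query_allowed(12000, in_daily_time_window=True))  # allowed in the window
```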
Documents per list
The recommended maximum here doubles as well: from 5 million documents per library in 2007 to 10 million in SharePoint 2010.
Column Limits
Row wrapping looks like a solid investment. SharePoint Server 2010 data is stored in SQL Server tables. To allow for the maximum possible number of columns in a SharePoint list, SharePoint Server creates several rows in the database when the data will not fit on a single row. This is called row wrapping. Six rows are recommended as a maximum.
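The row-wrapping limit can be illustrated with a simplified packing model. Assumptions are labeled in the code: real SQL row wrapping groups columns by type, so this greedy byte-packing sketch only demonstrates how the 8,000-byte and six-row figures interact, nothing more.

```python
ROW_BYTES = 8000      # bytes available per SQL row (figure from the text above)
MAX_WRAP_ROWS = 6     # recommended maximum number of wrapped rows

def rows_needed(column_bytes):
    """Simplified, illustrative model: pack column sizes greedily into
    8,000-byte rows. SharePoint's actual schema wraps by column type
    groups, so treat this only as a demonstration of the limit."""
    rows, used = 1, 0
    for size in column_bytes:
        if used + size > ROW_BYTES:
            rows += 1          # column does not fit; wrap to a new row
            used = 0
        used += size
    return rows

cols = [30] * 800  # e.g. 800 single-line text columns of ~30 bytes each
n = rows_needed(cols)
print(n, "rows;", "within" if n <= MAX_WRAP_ROWS else "exceeds", "the recommended max")
```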
Site Collection Sizing
The general guidance is 25 to 100 GB per site collection: “A site collection should not exceed 100 GB unless it is the only site collection in the database. Certain site collection actions, such as site collection backup/restore or Move-SPSite, cause large Microsoft SQL Server® operations which can have performance impact or fail if other site collections are active in the same database.”
Database
Published recommendations move from 100 GB content databases in 2007 to 200 GB in SharePoint 2010, with up to 1 TB possible for carefully managed non-collaborative workloads such as document and records repositories. “We strongly recommended limiting the size of content databases to 200 GB to help ensure system performance. Content database sizes up to 1 terabyte are supported only for large, single-site repositories and archives with non-collaborative I/O and usage patterns, such as Records Centers.”
Storage
While blobs are structurally better managed and more scalable, the fundamental storage story has changed a bit. We had EBS support in SharePoint 2007 SP1, but implementing anything was complex, and many cautioned against doing anything unless you had vendor support, and even then to proceed with extreme caution. With SharePoint 2010 and SQL Server 2008 R2 we have a better story with RBS (Remote Blob Storage), handled essentially transparently by SQL Server. SharePoint 2010 is happy to have SQL Server store its blobs on external storage using a provider mechanism.
Databases per Web App & App Pools per Server
Since people are now using 64-bit hardware, you’ll see a few recommendations raised simply because SQL Server and IIS scale better. Examples are the number of content databases per web application and the number of app pools per server, which depends largely on the RAM in the box. Note that SharePoint 2010 does not, by default, use less memory than SharePoint 2007. For example, with app pools: “This limit is dependent largely upon: 1) The amount of RAM allocated to the front end server 2) The workload that the farm is serving – the user base and the usage characteristics (the process of a single highly active application pool can reach 10 GB or more)”
Index Sizing
“SharePoint Search supports index partitions, which each contain a subset of the entire search index. The recommended maximum is 10 million items for a given partition. The overall recommended maximum number of items: e.g., people, list items, documents, Web pages, etc., is 100 million.”
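The partition figures quoted above suggest a simple back-of-the-envelope sizing calculation. This is an assumption-based sketch, not an official Microsoft formula; the function name is invented for illustration.

```python
from math import ceil

ITEMS_PER_PARTITION = 10_000_000   # recommended max items per index partition
MAX_ITEMS_TOTAL = 100_000_000      # overall recommended max per search application

def partitions_needed(total_items: int) -> int:
    """Rough sizing sketch from the quoted figures: each index partition
    holds up to 10 million items, capped at 100 million items overall."""
    if total_items > MAX_ITEMS_TOTAL:
        raise ValueError("exceeds the recommended maximum for one search application")
    return ceil(total_items / ITEMS_PER_PARTITION)

# A corpus of 35 million items would need 4 index partitions under this model.
print(partitions_needed(35_000_000))
```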