Exchange Costs : Cloud vs. On-Prem

Moving your Exchange services to Office 365 seems to be a fairly simple decision for smaller companies; it just makes sense. Most of my larger customers, however, remain on-prem due to security concerns and higher estimated costs. In addition, there are often political challenges around eliminating local servers and the IT jobs that go with them, and sometimes irrational fears of a global cloud outage. (The Blackberry/RIM outages come to mind.)

The slower adoption rate of Office 365 among larger entities is a huge topic and I would need some help covering it completely, but I did find an interesting article today with price comparisons for small shops. It’s the first time I have seen calculations and a nice chart comparing the costs of the two options for various small company sizes. The article suggests that the price advantage of Office 365 starts to wither once you reach 1,000 mailboxes.

Anyway, check it out.

Comparing Cost for Exchange Online to On Premise for Small to Midsized Businesses





Blueprint for an Exchange Service Level Agreement


This article is intended to detail the mechanics behind a Service Level Agreement between a fictitious Company ABC and an outsourcing/hosting vendor. You may not have the luxury of getting an SLA with the vendor you have chosen, but you may want to think about asking for one. Time to recover, time to restore, and uptime guarantees are just a few of the critical facets of IT we need to manage.

Several specific requirements have been assumed and documented in this article. Feel free to copy this content and edit it for your own use. My goal was to produce a template that other companies could use to begin work on their own SLA.

There are two major parts to an SLA: the governing document and the process.

  1. The SLA Document is usually legally binding between a company and an outsourcing vendor(s). The document describes the exact services and service levels, with details about all agreements.
  2. The SLA Process represents the methods that the outsourcing vendor will use to support the SLA document. The methods of supporting the SLA document are usually left to the outsourcing vendor to identify. These processes should be discussed and possibly identified during SLA contract negotiation. It is important that both parties understand the processes and methods of support as well as the management and reporting tools.

The SLA process represents only a third of the total solution: it is up to the hosting vendor and your company to ultimately choose the right people to manage the systems and the best technology for the implementation. The people managing the process must also manage the technologies and understand the importance of reporting and monitoring across the entire system.

System management and service desk automation technology can provide a supporting environment for tracking, escalation, and management of service metrics. End user satisfaction surveys can also provide input that will help target appropriate service levels and cost controls.

Service Level Agreements are often categorized in the following manner:

  • Basic: A single-level service agreement is in place. Metrics are established and measured, possibly requiring manual data collection for management reporting. The objective is to justify the technical support operation.
  • Medium: Automated metrics collection enables more comprehensive, less labor-intensive reporting of service level achievement. Cost recovery is introduced that maps to market rates and is supported by service level reporting, possibly with multi-level service agreements priced per service rendered. The objective is to match service and cost levels, with the long-term goal of increasing service levels while decreasing costs.
  • Advanced: Service levels are embedded in the overall service desk processes, enabling dynamic allocation of internal or external resources to meet changing business conditions. The goal is to provide a seamless mix of services, costs, and service providers at better-than-competitive rates. Enterprises at this level are often ready to extend services to the open market.

The Scenario

For purposes of discussion, the remainder of this paper examines the considerations of a company evaluator who must complete an SLA document of the support requirements for Exchange/Outlook systems. We will call this company “ABC Company.” The evaluator works with one or more outsourcing companies to negotiate the final agreements recorded in the document. Recommendations and suggestions, which are based upon industry standards and project management experience, are provided throughout.


The primary objective of the SLA document is to correctly identify Company ABC’s requirements for supporting the Outlook/Exchange infrastructure.

The ABC Company evaluator alone cannot determine the appropriate details for the SLA. The outsourcing vendor’s industry experience and project management capabilities will provide required information and guidance. In many cases, the evaluator and management within Company ABC will need to conduct workshops on the issues to determine specific objectives.

Moreover, we should all use our best judgment in collecting ideas and suggestions from the appropriate people. For example, for specific questions regarding helpdesk requirements, the outsourcing vendor may need to be involved in order to correctly identify a requirement unknown to the Evaluator.

Service Level Agreement Document

The process of creating the SLA is broken down into stages to ease management of the project. The first four sections require the input of ABC Company management and, in some cases, end-user surveys. The next group of tasks may require the input of the current outsourcing company in order to ensure all requirements have been identified. Next, the evaluator assembles the data into a document that can be easily read and understood.


The last sections take place during the negotiations with outsourcing vendors. Usually, a legal instrument will be created to bind both parties to a final Service Level Agreement. While the final SLA will be based upon the evaluator’s SLA document, it is likely that sections will be added or removed as negotiations dictate.

Contract Specifics and Context

Management of the SLA is a critical part of supporting end users. Before we can determine if objectives have been met, we must first identify metrics and the specifics of the contract.

Contacts and Role assignment

First, name the key contact to the Service Level Agreements and delegate SLA management tasks to others. Other contacts for the SLA include:

ABC Company:

  • Exchange connection into other corporate systems
  • Management of 3rd Party Outlook/Exchange Development
  • Application Development
  • Remote and Mobile Access

The frequency and detail of reports must be identified as well. Reporting can then be further broken down into two techniques:

  • Automated system reporting should be implemented in order to provide current and historical data. This data should be made available to the above-named contacts on a regular basis; delivery methods may include a secured website or email attachments, with hard copies available on request. The reports for these contacts should be fully detailed, with data analysis and a trend summary for the month, and should probably include historical data.
  • It may be necessary for regional and divisional managers to receive a monthly summary report or graphic depicting uptime and overall system performance.

ABC Company may also require that an automatic mechanism be put into place to notify the named contacts when critical performance thresholds are met. Specific thresholds are discussed later in the document.

Questionnaires and end-user canvassing methods should also be performed by ABC Company and/or the Outsourcing Company as part of an overall customer service initiative.


Payment terms and contract length are negotiated with the outsourcing vendor. ABC Company may prefer a contract length of six months but will consider contracts as long as one year. Renewals can be handled in many ways, including automatic six-month extensions. Both ABC Company and the outsourcing vendor should be able to request a formal renewal meeting to update the SLA with riders and to negotiate new terms.

There are two types of terminations possible:

  1. Contract Termination – either ABC Company or the outsourcing company elects to terminate the contract. A “Technology Transfer” and associated fee would probably be required in order to shift maintenance and support to another group.
  2. Technology Termination – occurs when the support is no longer required due to a shift in ABC Company technologies. This form of termination may or may not require a formal “Technology Transfer.”

Termination Options are described as follows:

  • ABC Company may reserve the right to cancel the contract under either termination option with 60 days’ notice to the outsourcing company. ABC Company understands that there may be financial penalties for “Contract Termination” if the SLA objectives were being met by the outsourcing vendor. These penalties often equal the fee for one month of support.
  • The outsourcing vendor may reserve the right to terminate the contract with 180 days’ notice to ABC Company. A “Technology Transfer” fee would be charged to cover labor costs associated with transferring the knowledge and technology to another group.

Review Process

There should be a formal review to evaluate performance and customer service levels, as well as staff reviews. A quarterly review is sometimes formalized in order to include discussions of SLA fulfillment, staffing, and future projects that may affect the SLA.

Change Management

Service Level Management is accomplished by negotiating a change or addendum to an existing Service Level Agreement. Out-of-scope or new projects need not be discouraged. A change process occurs during every review cycle and can also be initiated as needed. Several things could require a change or addendum to the existing SLA:

  • A change in the process workflow
  • Additional services
  • Missed performance or customer service thresholds
  • Additional third-party applications

Changes are not made directly to the SLA. Instead, contract riders are appended to the SLA until the SLA is rewritten to incorporate the addenda. The SLA can only be rewritten during a renewal cycle with both parties present.

Financial Incentive Plan

Most groups believe that the total cost of ownership (TCO) is more a function of cost of service and support of the system than a function of the cost of hardware and software. SLAs can drive down TCO by identifying damages for missed service levels.

In the case of ABC Company, a third party may be asked to provide evaluations to determine if service level objectives have been met. The costs associated with the third-party evaluations are the responsibility of the party requesting the evaluations.

Penalties and bonuses for SLA performance guidelines could be “paid” quarterly. Performance objectives are judged against a +10/−10 percent allowance. Penalties are paid as a deduction from regular costs for the pay period immediately following the review cycle. Bonuses are paid within four weeks of the review cycle and do not require a separate purchase order from ABC Company.
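To make the settlement mechanics concrete, here is a hypothetical helper sketched in PowerShell. The ±10 percent band and the one-month penalty/bonus amounts are illustrative assumptions drawn from the paragraph above, not real contract terms.

```powershell
# Hypothetical settlement helper; the +/-10 percent band and the
# one-month penalty/bonus amounts are illustrative assumptions.
function Get-PeriodSettlement([double]$TargetPct, [double]$MeasuredPct, [double]$MonthlyFee) {
    if ($MeasuredPct -lt $TargetPct * 0.90) { return @{ Penalty = $MonthlyFee; Bonus = 0 } }  # missed objective
    if ($MeasuredPct -gt $TargetPct * 1.10) { return @{ Penalty = 0; Bonus = $MonthlyFee } }  # exceeded objective
    return @{ Penalty = 0; Bonus = 0 }   # inside the allowance band, no money moves
}
```

A penalty returned by the helper would then be deducted from the next pay period’s regular costs, per the terms above.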

Performance Level Guidelines

Inter-site Message Transfers

Because the outsourcing vendor may have little control over the stability of the hub servers, ABC Company may not require guaranteed delivery times for mail originating from, or addressed to, any mailbox outside of the ABC Company’s site.

However, inbound Internet email with legitimate addresses should not be returned as undeliverable by the Exchange systems within the supported environment. The outsourcing vendor should remedy any internal Exchange process that bounces legitimate mail.

Intra-site Message Transfers

ABC Company requires that intra-site Exchange mail be delivered to the recipient’s server-based mailbox within 15 minutes of delivery to a server within the supported site.

Remote Synchronization Performance

Offline Address Book

Remote users who replicate the Offline Address Book should never wait more than thirty minutes for a complete refresh to transfer, even over slow connections.
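To gauge whether a thirty-minute budget is realistic, you can estimate the transfer time from the OAB size and the line speed. The 25 MB size and 128 kbps link below are assumptions for illustration, and protocol overhead is ignored.

```powershell
# Rough estimate of a full OAB download time; sizes and link speed
# are hypothetical, and protocol overhead is ignored.
function Get-OabTransferMinutes([double]$OabMB, [double]$LinkKbps) {
    $bits = $OabMB * 8 * 1024 * 1024        # OAB size in bits
    return $bits / ($LinkKbps * 1000) / 60  # seconds -> minutes
}
Get-OabTransferMinutes 25 128   # ~27 minutes, just inside the budget
```

At those assumed numbers the transfer lands just under thirty minutes, which suggests the budget is tight but workable for dial-up-class links.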

Mailbox Replication

You should probably define the mailbox limits. In many cases, mailboxes are classified into two or more categories. For example:

  • Class A users have a 1GB limit on mailbox size
  • Class B users have a 5GB limit on mailbox size

Directory update frequency

Many companies configure directory replication so that the directory is current within a forty-eight hour time period. For example, a mailbox that is added at 3:00 p.m. on Tuesday must appear in the directory and Offline Address Book before 3:00 p.m. on Thursday. This should be defined in the SLA.
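The forty-eight-hour check is trivial to express; the timestamps below are hypothetical, matching the Tuesday/Thursday example above.

```powershell
# True if a mailbox added at $AddedAt appeared in the directory and
# Offline Address Book within the agreed window (48 hours by default).
function Test-DirectorySla([datetime]$AddedAt, [datetime]$VisibleAt, [int]$WindowHours = 48) {
    return ($VisibleAt - $AddedAt).TotalHours -le $WindowHours
}
Test-DirectorySla '2015-06-02 15:00' '2015-06-04 14:00'   # True (47 hours)
```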

System Changes

Administrative tasks, such as Exchange/NT username add/remove/change, should be completed within one business day. Primary and backup responsibility may be divided between the server support team and the helpdesk.

A matrix of administrative task groups and responsibilities should be created in order to identify the turnaround expected of each group, including:

  • Add/Remove/Change of mailboxes and distribution lists
    • One business day or less
  • Lync Account Creation
    • One business day or less
  • Exchange Connector settings
    • Two business days per request
  • Updating permissions and security settings on a Group Mailbox
    • One business day or less
  • Assigning permissions to a SharePoint Site
    • One business day or less
  • Distribution List creation
    • Two business days or less
  • Distribution List modification
    • One business day or less
  • Mailbox restoration (from backup or snapshot)
    • Three business days or less

The outsourcing company may want to define the maximum number of one-day requests that can be filled per business day. Additional requests will roll to the next business day and will take priority over new requests.
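The rollover rule can be sketched as a simple queue, where rolled-over requests are served before new arrivals. The cap of two and the request names are illustrative only.

```powershell
# Sketch of the rollover rule: requests beyond the daily cap queue up
# and are served before new arrivals on the next business day.
$cap = 2
$backlog = New-Object System.Collections.Queue
foreach ($day in @( @('a','b','c'), @('d') )) {
    $day | ForEach-Object { $backlog.Enqueue($_) }    # new requests queue behind rollovers
    $served = for ($i = 0; $i -lt $cap -and $backlog.Count -gt 0; $i++) { $backlog.Dequeue() }
    "Served today: $($served -join ', ')"
}
```

With three requests arriving on day one and one on day two, request “c” rolls over and is served ahead of “d” on the second day.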

Note: Public Folder applications that require or use automation, such as routing or scripting, are considered a separate project. New projects, which are likely to incur extra costs, are not part of the SLA.

Uptime Requirements

System availability can be an expensive requirement. It is important that we identify the specific requirements from a resource-access standpoint and not necessarily on a server-by-server basis. Those specifics then dictate the required availability of the servers.

Network and remote access

Network connectivity between sites and for users should be defined clearly as to the required uptime. Redundant links may be required based on the connectivity requirements.

Mailbox Access

This specification details the maximum amount of time a user may be unable to access his/her mailbox on an Exchange server in the supported site. Many companies define at least two mailbox classifications:

  • Class A users can be without access to their mailboxes for no more than six business hours. This group usually contains managers and key people within ABC Company.
  • Class B users can be without access to their mailboxes for no more than 24 business hours. This group represents the bulk of the ABC Company Exchange users.

Service Availability

Information gathered from the previous specifications dictates the level of availability that is required. The services are then classified using the following availability classes.



Availability Class    Level    Approx. Downtime (min/year)    Uptime
Unmanaged               1               50,000                 90%
Managed                 2                5,000                 99.0%
Well-Managed            3                  500                 99.9%
Fault-Resilient         4                   50                 99.99%
High-Availability       5                    5                 99.999%

While the table represents system availability, it is important to note that the figures represent unscheduled down time. It is critical that “windows” be allowed for scheduled maintenance and upgrades. Down time should always be scheduled on the same day every week, over the weekend. Many companies detail acceptable times during the weekend, such as 11 p.m. Saturday to 2 p.m. Sunday. The specific time needs to be negotiated.
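The downtime figures follow directly from the availability percentages (the table rounds them down to tidy numbers), which makes a quick sanity check easy:

```powershell
# Unscheduled downtime budget implied by each availability figure.
# Note the table rounds: 99.9% actually works out to ~526 minutes.
function Get-AnnualDowntimeMinutes([double]$AvailabilityPct) {
    $minutesPerYear = 365 * 24 * 60   # 525,600 minutes in a non-leap year
    return $minutesPerYear * (1 - $AvailabilityPct / 100)
}
Get-AnnualDowntimeMinutes 99.9   # ~525.6 minutes per year
```

Running the function for each class reproduces the table’s column to within rounding, which is a useful cross-check when negotiating a target.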

The maximum allowable scheduled down time per week for the Exchange systems should also be defined. For example, you could specify an eight-hour maximum window during the week and 25 hours one weekend per month.

You may want to send out a user survey in order to determine the best time for scheduled maintenance.

The outsourcing vendor must balance the uptime requirements with the inevitable cost. The foundation for a mission-critical architecture has specifications for server availability, data accessibility, data protection and disaster tolerance.

Equipment Support Requirements

Access and Security

ABC Company should require that named contacts be permitted physical access to the equipment at any given time. Moreover, overall access to the equipment must be secured and restricted. Access to the equipment must be available 24 hours a day, 7 days per week for the vendor’s support personnel.

Disaster Recovery Preparation


Many companies require that clients be able to request a “recovery of deleted items” for items deleted within the past 30 days. Moreover, backups and snapshots must be maintained, and retired, according to company retention policies.

In some cases, there may be a need to recover items from backup. The outsourcing vendor should honor such a request from any of the named contacts on the SLA. The outsourcing vendor may accept requests from the user community for restores, but should then verify the request with the named ABC Company contacts.


ABC Company does not have any specific requirements in regards to the types of systems (hardware and/or software) used to monitor the equipment.


Certification and Experience

ABC Company requires that at least one person supporting the systems have current MCSE status. At least two of the support personnel must be certified on the current version of Microsoft Exchange Server.

Exclusive/Nonexclusive use

Your company may have specific requirements with respect to security and privacy which could mandate exclusive use of equipment. You may therefore need to add a line specifying that the resources supporting the ABC Company Exchange systems be exclusive to ABC Company and not used for non-ABC Company projects or tasks.



While ABC Company has no requirements as to the brands or types of equipment used for the Exchange Server environment, ABC Company does require that the equipment be included on the Hardware Compatibility List for the current version of Microsoft BackOffice.

The outsourcing vendor is responsible for the requisitions and costs associated with all equipment necessary to support the ABC Company Exchange Systems.

Spares for testing/recovery

ABC Company requires that at least two entire servers be allocated as spare equipment for testing/recovery. The servers must match the current servers in production so that parts can be swapped and/or replaced. ABC Company further requires that the test equipment be updated in step with the production equipment and remain online at all times for testing.


There are not many templates like this out there, since most hosting companies dictate the terms to the customer. However, there are Exchange shops that balk at a commodity hosting solution due to privacy and security concerns. My intention with this article was to lay out some of that groundwork and save you some research and headache in SLA planning.

Using Veeam for Exchange 2013 snapshots

Does the thought of VMware snapshots for Exchange 2013 make you cringe? If so, we are much alike, and your concern probably stems from unpleasant experiences you’ve had in the past. Exchange 2010 SP1 began the “healing” process between Exchange and VMware, but much of the stigma remains implanted in the heads of Exchange administrators.

Fear not! As the technologies continue to bridge together, they grow more compatible and can thrive together with a small amount of work and monitoring. First, let’s talk about the problems Veeam can cause; you will likely see these errors:

FailoverClustering – Event ID: 1135 – Cluster node ‘SERVER’ was removed from the active failover cluster membership. The Cluster service on this node may have stopped. This could also be due to the node having lost communication with other active nodes in the failover cluster.


MSExchangeRepl – Event ID: 4087 – Failed to move active database ‘NAME’ from server ‘SERVER’. Move comment: None specified.

Error: An error occurred while attempting a cluster operation. Error: Cluster API failed: “ClusterRegSetValue() failed with 0x6be. Error: The remote procedure call failed”


The problem is the very way that Veeam operates, since it must “freeze” the guest node to complete the snapshot. This isn’t to say that Veeam isn’t following Microsoft best practices: Veeam DOES in fact initialize VSS to take an Exchange-aware snapshot, so all is well with the backups and the logs are being correctly truncated. However, during the snapshot period the cluster will detect a short outage and attempt to fail over the databases, which can set off the other failures shown above.

What we have to do is change the cluster settings to be more forgiving of these short “freezes.” The end result is an error-free backup and failover detection that is a little more tolerant of slight network outages or slower server responses. From any server in the DAG, open a Command Prompt and enter the following:

cluster /cluster:<DAGNAME> /prop SameSubnetDelay=2000:DWORD
cluster /cluster:<DAGNAME> /prop CrossSubnetDelay=4000:DWORD
cluster /cluster:<DAGNAME> /prop CrossSubnetThreshold=10:DWORD
cluster /cluster:<DAGNAME> /prop SameSubnetThreshold=10:DWORD

These settings are recommended by Veeam and allow the cluster twice the default delay for its heartbeat. The numbers look high, but they really aren’t: 4,000 milliseconds is 4 seconds, and for most companies a 4-second heartbeat tolerance will probably be just fine. I personally think the default of 1,000 milliseconds is too low anyway. The …SubnetThreshold settings control the failed-heartbeat tolerance; by increasing them you increase the number of missed heartbeats allowed before a failover automatically occurs. The default setting is 5, so by doubling it we decrease the potential for an unplanned failover due to “glitches” with the network or short “freezes” like those instigated by products like Veeam.
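Note that cluster.exe is deprecated on newer Windows Server releases. Assuming the FailoverClusters PowerShell module is available, the same four properties can be set like this (DAGNAME is a placeholder for your DAG’s cluster name):

```powershell
# Same four settings via the FailoverClusters module instead of cluster.exe.
Import-Module FailoverClusters
$cluster = Get-Cluster -Name "DAGNAME"
$cluster.SameSubnetDelay      = 2000   # ms between same-subnet heartbeats
$cluster.CrossSubnetDelay     = 4000   # ms between cross-subnet heartbeats
$cluster.SameSubnetThreshold  = 10     # missed heartbeats before a node is considered down
$cluster.CrossSubnetThreshold = 10
```

Reading the properties back with `Get-Cluster -Name "DAGNAME" | Format-List *Subnet*` confirms the change took effect.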

These changes should instantly quiet down the Event Logs on your servers if Veeam or another product forces your Exchange servers to “pause” momentarily for whatever reason. Moreover, if your Exchange environment is not being closely monitored, you may want to make these changes in order to make your DAG more stable during short unplanned problems with the network or perhaps the host machines.

Alive and Kicking!


Microsoft Exchange Server is alive and well and Office 365 is here to stay.

Roughly 70% of my customers and partners are EXCELLENT candidates for Office 365 and I am very vocal about my support for the move. How many of us have witnessed a crashed database due to excessive logs? How many times have you fought with the load-balancer to figure out why connections are being dropped? How many days a year do you spend debating, fighting, monitoring or discussing Exchange backups and disaster recovery? With thinning IT departments and greater messaging loads, it is more difficult and costly to maintain a healthy Exchange environment than ever before. BUT, if BPOS was re-branded as “Office 365” in 2011 and it was the third iteration of Microsoft’s hosted messaging, then why were 80% of mailboxes still On-Premises in 2015? Why is there such a gap between hosted and local mail populations?



The Radicati Group has indicated that in 2014, Office 365 accounted for less than 20% of Exchange mailboxes worldwide (“Microsoft Office 365, Exchange Server and Outlook Market Analysis, 2014 – 2018”). The paper goes on to project that Microsoft’s On-Premises market share will increase from 64% to 76% by 2018 “as it continues to gain market share away from its competitors”.


Confused yet? The explanation is pretty easy when you remember that we are human and not machines.

We are in a transitional gap right now with Microsoft Exchange. The industry wants us to be in the cloud, Microsoft wants us in the cloud and most of us want to be there but we all move at different paces. It took me three years to ramp up on Office 365 due to my subdued interest and the complete lack of interest by most of my larger customers. I have since made the transition but very few of my larger customers have made the same mental shift. THIS is the reason for the gap and the reason Microsoft will continue offering the On-Premises version of Exchange until 2018 or even later.

We are creatures of habit and most resist change. Fears about security, privacy and resilience have slowed the adoption of Office 365 but eventually we will all be there. When the On-Premises mailbox population decreases to a number Microsoft is willing to sacrifice then the Exchange Server product will be forever retired.  Until then, I will continue to close the gap with bridges or catapults depending on the need.

Exchange 2010 SP1: Under the Hood

Windows IT Pro has asked me to put together three technical classes for Exchange, to be presented in January 2011.

“During this one-day, free online conference, five-time Exchange MVP Steve Bryant will teach you how to:

  • Master the Exchange control panel – In this session, you’ll learn the administrative differences between the Exchange Management Console (EMC) and the Exchange Control Panel (ECP), as well as the benefits and available features for end-users, group administrators, and enterprise administrators. Most importantly, we’ll cover the setup scenarios to help introduce you to Role Based Access Control (RBAC) and how it can help you help others to help themselves.
  • Improve Exchange archiving – In this session, we’ll dive deep into the Exchange 2010 SP1 archive functions with an end-to-end scenario for creating a separate archive store (with different HA), adding and managing user archives, managing the auto-archive feature, searching the archive, and directly importing PST files into the archive. Having your cake and eating it too is now possible with SP1.
  • Accomplish high availability databases with Exchange – In this session, we’ll discuss compatibility with Hyper-V, overall storage planning, WAN implications, and end-to-end scenarios for planning, creating, monitoring, and managing DAGs in your environment. We’ll focus on UI and PowerShell cmdlets available with SP1, so even those experienced with this new feature should learn something new”

EDITED: The PPT files are available should anyone like them.


The Exchange 2010 SP1 Archive Solution

For those who don’t know me, let me say that I have a terrible poker face. I am not much for suspense or grandeur, so I will now spoil the ending: unless you already use an archiving program like Mimosa NearPoint, Symantec Enterprise Vault or Zantaz EAS, you should definitely read this article and seriously consider configuring the Exchange 2010 SP1 archiving options. In this article, I will show you how to control database growth, eliminate PSTs and allow users to access both current and archived items from Outlook 2007, Outlook 2010 and Outlook Web Access.

Archiving Principles

Over the years, I have worked and partnered very closely with Mimosa Systems and I have helped to implement email archiving solutions for Fortune 500 companies. Moreover, I have worked directly to prepare for both Federal and State court litigation with email archiving tools. So, before I go any further, let’s talk about what archiving means. Wikipedia defines an archive as “…a collection of historical records, as well as the place they are located. Archives contain primary source documents that have accumulated over the course of an individual or organization’s lifetime.

In general, archives consist of records that have been selected for permanent or long-term preservation on grounds of their enduring cultural, historical, or evidentiary value.“

Based on this definition alone, I would say that an Archive (as it relates to email) is a collection of emails that have been preserved for a set amount of time dictated by the entity that owns the records. The benefit of having an email archive is that it provides fault tolerance to the messages and the ability to globally search and export messages as needed for whatever purpose necessary. This is common for companies that are tightly regulated or under orders (either by the court or internal requirements) to preserve messages.

The Exchange 2010 archive functions do NOT provide tools that match this definition of “archive” and I want to make that perfectly clear. Exchange 2010 does provide Journaling tools to collect, protect and store emails but Microsoft does not label that as Archiving. Even though the tools I describe in this paper have the word Archive plastered all over them, they are in fact designed to manage mailbox and database sizes. Yes, there are some excellent global search tools and yes searches can be delegated and content exported but the user never loses the ability to delete items from their mailbox or archive. I absolutely adore these new features and strongly recommend every Exchange shop use them, but because the data is not protected against user deletion, I have a hard time labeling it as an archive solution.

Exchange 2010 Archive Components

Now that I have said my piece, let’s move on, shall we? There are several components that I want to describe before we get into the meat of this. The components are:


  • Exchange 2010 Mailbox Server- Yes, this is obvious, but I want to make sure you understand that these features are only available on Exchange Server 2010 RTM and SP1.
  • Exchange 2010 Mailbox Databases – Yes, another obvious point, but I wanted to emphasize the fact that Exchange 2010 RTM automatically places the user’s archive within the same Mailbox Store as the user’s mailbox. My original excitement about the archive features somewhat dissipated when I learned about this during the beta. In this scenario, the archive is forced to participate in the same High-Availability (HA) plan as the live mailbox, so if your Service Level Agreement (SLA) requires several copies of the Mailbox Stores, your archive must follow along and chew up valuable drive space. Fortunately, SP1 allows you to specify a separate Mailbox Store for the user’s archive, so you can tier your Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO) separately.
  • Retention Policy Tags and Retention Policies allow you to control when items are moved to the user’s archive and to whom the policies should apply. While these features are available in RTM, managing them requires the use of the Exchange Management Shell, and the documentation around this is pretty thin; I would recommend considerable lab testing to perfect the management process with the RTM. Exchange Server 2010 SP1 changes how these (and all Mailbox Policies) features are managed and applied. In fact, SP1 makes the application of Retention Policy Tags exceptionally easy and intuitive.
  • Outlook Client – There are three ways to access the archives:
    • Outlook Web Access (OWA) provides direct access to both the user’s mailbox and the user’s server-based archive. Unfortunately, the search tools do not span both repositories, so if a user would like to search everything for a specific email, they will need to perform two searches: one in the mailbox and one in the archive.
    • Outlook 2010 also provides direct access to the user’s mailbox and the user’s archive. It too currently suffers from the two-search problem. While this is not a show-stopper, it will certainly cause some user-confusion as they will need to know to search twice or they will need to know which repository contains the items they require. It is also important to note that the archive is not cached in Outlook’s Offline Store (OST) and so you can only access the archive when you are connected to the Exchange environment.
    • Outlook 2007 support is added with Exchange 2010 SP1. As of this writing, I have not had the pleasure to test this since it will most certainly require a patch to Outlook 2007 and I was unable to acquire those bits. The expectation is that it will function as the Outlook 2010 client does. I am hoping that Microsoft will figure out a way to provide a unified search, but I am not holding my breath since even OWA 2010 SP1 does not have that functionality.
  • Exchange Control Panel (ECP) is the web-based management interface that, among other things, allows those assigned the appropriate role to perform a Multi-Mailbox search. While this is not specifically an archive function, it will automatically search both user mailboxes and user archives simultaneously, so I wanted to spend a little time on the subject.


Archive Management

Exchange 2010 archives are user-specific, so the attributes of an archive are maintained on the User Mailbox object and can easily be accessed by PowerShell cmdlets. You can add an archive to an existing mailbox by using the Enable-Mailbox cmdlet with the –Archive switch. Additional parameters allow you to specify a different database (new with SP1), and quotas can be set with Set-Mailbox, so the archive settings for the mailbox may look a little like this:

ArchiveDatabase     : NY Archives
ArchiveGuid         : d2a0d37c-3a05-4a88-b196-3f71f291fde8
ArchiveName         : {Online Archive – Kendall Bryant}
ArchiveQuota        : 50 GB (53,687,091,200 bytes)
ArchiveWarningQuota : 45 GB (48,318,382,080 bytes)
ArchiveDomain       :
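For reference, here is a sketch of how those settings might be produced from the Exchange Management Shell, assuming SP1; the user name, database name and quota values are placeholders of my own:

```powershell
# Enable a personal archive on a separate, lower-tier database
# (the -ArchiveDatabase parameter requires SP1).
Enable-Mailbox -Identity "Kendall Bryant" -Archive -ArchiveDatabase "NY Archives"

# Set the archive quotas so the warning fires before the hard limit.
Set-Mailbox -Identity "Kendall Bryant" -ArchiveQuota 50GB -ArchiveWarningQuota 45GB

# Review the resulting archive attributes, as shown above.
Get-Mailbox -Identity "Kendall Bryant" | Format-List Archive*
```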

You can also use the Exchange Management Console (EMC) to enable or disable an archive for the selected user. To enable the archive for a user, simply right-click the mailbox name and choose Enable Archive.


The Enable Archive option provides you the ability to select the specific database that should host the archive. With this feature, items that exist on your Tier1/High-Availability mailbox databases can be manually or automatically moved onto a database with lower availability. Those who are a Microsoft Online Business Suite tenant can enter their domain name to identify a remote hosted archive location. These features became available with SP1 and represent a significant change to the archive architecture.

Also, the GUI does have a nice little icon it uses to denote who has an archive and who does not: a clever little folder-drawer icon! I am sometimes embarrassed as to how easily I can be impressed or amused.



Automating the Archive through Retention Policies

So if that was not enough, SP1 completely changed the policy tabs in the EMC. Gone are the tabs known as “Manage Custom Folders” and “Manage Default Folders.” Instead, we now see Retention Policy Tags and Retention Policies. This provides a much clearer definition and easier management for those new to Exchange Server administration.


The first thing you will need to do is define your Policy Tags. The Default Archive Policy is now exposed in the EMC. Hooray! You will probably want to create a new one, though, if you want some granular configurations. Creating a new retention policy tag is just a right-click away, or you can click the New Retention Policy Tag selection from the Action menu.

At first glance, this wizard looks the same as the old Mailbox Manager rules, but there are two major differences with SP1. First, under the Age limit section of the Action drop-down, you can now select “Move to Archive.” Second, when you want to see or modify the mailboxes that should receive the policy, you can edit the policy and then click the Mailboxes tab. From there you can add or remove mailboxes at will.
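The same configuration can also be sketched from the shell; the tag and policy names below are examples of my own:

```powershell
# Tag that moves items older than one year into the archive.
New-RetentionPolicyTag -Name "Move to archive after 1 year" -Type All `
    -AgeLimitForRetention 365 -RetentionAction MoveToArchive

# Bundle the tag into a policy and apply it to a mailbox.
New-RetentionPolicy -Name "Standard Archive Policy" `
    -RetentionPolicyTagLinks "Move to archive after 1 year"
Set-Mailbox -Identity "Kendall Bryant" -RetentionPolicy "Standard Archive Policy"
```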


There are a few more changes that are a little more subtle. As I mentioned before, archive settings for the user are actually User-Mailbox attributes. Litigation Hold and Retention settings can be found under MRM (Messaging Records Management) on the Mailbox Settings tab.


On this same tab, you can select Archive Quota to set rules on the archive size.

Accessing the Archive

There are three ways to access the user archive; Outlook 2007 (with SP1), Outlook 2010 and OWA 2010. Once the user archive is enabled using the EMC or EMS, clients will see it as another level in their Outlook. In fact, it is very similar to what you would expect if your Outlook was configured to open more than one mailbox.


If you think of it as a separate mailbox, then the limitations I am about to mention make sense.

  • No offline Access – You must be connected to the Exchange environment to get to the user archive. In fact, the Outlook client even shows it as “Online Archive.”
  • Two searches – My original testing suggested that neither Outlook nor OWA could search both the user archive and the mailbox simultaneously. Note: this turned out not to be correct; a single search will work across both, but it relies on the Microsoft Search service.

Multi-Mailbox Searches

Interestingly enough, Exchange 2010 does provide the ability to search both the user archive and mailbox simultaneously, but not with Outlook clients and not with tools designed for the general population. Exchange 2010 now supports a robust Role Based Access Control (RBAC) permissions model. In this model, the role group named Discovery Management provides the assigned person the ability to perform Multi-Mailbox searches, which have access to both mailboxes and archives.

Using the Exchange Control Panel, a member of the Discovery Management role group can select Reporting and then New to perform a new Multi-Mailbox search.


The searches can be fairly complex, as you can select the search strings and the types of messages to search. You can also limit the search to specific senders/recipients, date ranges, and the specific mailbox(es) you want to search. Lastly, you determine where you want the results stored.

The search runs on the server, and when the job is complete the assigned Discovery Search Mailbox will receive an email that summarizes the search results. This message also contains an attachment that lists the items found in the search. If, on the New Mailbox Search page, you selected Copy Results to the Selected Mailbox, the Discovery Search Mailbox will also contain a copy of all the items that met your search criteria. These items will be located in a folder named for the search itself.
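With SP1, the same kind of search can also be kicked off from the shell. This is only a sketch; the search name, source mailbox, query and dates are my own examples:

```powershell
# Create and start a discovery search that spans both mailboxes and
# archives, copying the hits to the Discovery Search Mailbox.
New-MailboxSearch -Name "Project Falcon Discovery" `
    -SourceMailboxes "Kendall Bryant" `
    -SearchQuery "falcon" `
    -TargetMailbox "Discovery Search Mailbox" `
    -StartDate "01/01/2010" -EndDate "12/31/2010" `
    -LogLevel Full
Start-MailboxSearch -Identity "Project Falcon Discovery"
```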

Exporting the Search Items

It’s fairly safe to speculate that those who would require a global Multi-Mailbox (and Multi-Archive) search would need to present the items to someone, right? Getting to the data takes a little more work. For starters, you need to find the Discovery Search Mailbox in the EMC and give yourself (or the auditor you have assigned) Full Permissions. Now you can simply open Outlook Web Access and see all the items that matched the search.

But what if you need to transport the items out of Exchange, perhaps for litigation? Well, you really need an Outlook client for that, so you have to jump through a few more hoops. With Exchange 2010 RTM, the Discovery Search Mailbox(es) could not be opened with Outlook. Fortunately, that changed with SP1, so you can open it like any other (additional) mailbox by using the Microsoft Exchange account settings in Outlook.

Since we can see the folder from Outlook, we can now export it.



The export feature in Outlook 2010 is a little more difficult to find, however: first click File from the Outlook menu bar and then select Open in the left pane.

Now, in order to export, we click Import (ironic, huh?). Believe it or not, this is how we access both the import and export tools! Choose Export to a file, then select Outlook Data File (.pst) and click Next again.

From this screen, you can select the parent folder you wish to export and make sure the “Include subfolders” option is chosen. Continue through the wizard to export the data to a PST file.


The Exchange 2010 Archiving tools (especially those that ship with SP1) have features that every Exchange 2010 shop can use. Tailored specifically to help control mailbox sizes, the Retention Policy Tags, Multi-Mailbox search and the separation of the Archive from the Mailbox database provide you the tools needed to better shape your databases and eliminate the need for PSTs. In fact, to make the transition easier, SP1 provides the means of importing PSTs directly into a person’s archive. One last thing I will point out is timing. You would be better served by waiting for SP1 before jumping head-first into Exchange 2010 archiving. Some things will need to be undone in order to do them right with SP1.
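As a sketch of that SP1 PST import, something like the following should do it; the UNC path and user are placeholders of my own, and the account running it needs the Mailbox Import Export role assigned:

```powershell
# Import a PST directly into the user's archive rather than the
# primary mailbox (-IsArchive requires SP1).
New-MailboxImportRequest -Mailbox "Kendall Bryant" `
    -FilePath "\\FILESERVER\PSTShare\kbryant.pst" -IsArchive

# Check on the progress of outstanding import requests.
Get-MailboxImportRequest | Get-MailboxImportRequestStatistics
```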


Debunking the top 5 Myths concerning Cross-Forest Exchange Migrations

Exchange Cross-Forest migrations are not as impossible, expensive or complex as you may think. If you are considering merging an Exchange organization into another organization, you should know that it can be done and you can do it.

Cross-organizational moves are complex; on my last large cross-org project we had nearly 100 Exchange 2007/2010 servers and over thirty locations with multiple SMTP paths. Moreover, we were dealing with two separate AD forests with absolutely no automated directory synchronization. Even with these challenges, plus WAN-link migrations, we established a process to successfully migrate roughly 600 people in a six-hour window with minimal personnel and an exceptionally low failure rate. If you do the math, you will see that we built the capability to migrate 2,400 people a day, or 16,800 people per week. Since we were not running four shifts a day, seven days a week, I have a few moments to talk about how you can do it too.

As you read this, you will notice that I have included code samples, a few tips and some overall ideas to reinforce my conviction that this can be done without expensive tools and to illustrate my points. You should not take this article as a complete migration guide but as a confidence builder. There are far more technical strategies that are better described elsewhere, such as sizing, migration throughput, error handling, WAN moves, server centralization, scheduling and the overall technical aspects of the scripting and process. I have tried, however, to give you enough information so you can understand how manageable this process really is.

So let’s set the stage. You are tasked with planning the migration of thousands of Exchange users from one company/organization to another. You have trusts in place and accounts in each Forest with rights and you have read very little documentation that would suggest you can accomplish this on your own. Moreover, you have a quote for $500,000 worth of migration software and have no idea how you will maintain your budget or if the software is even worth it.

Myth 1:

Migrating Exchange mailboxes from one org to another without 3rd Party tools is suicide


In my last large cross-org migration project, we moved roughly 30,000 mailboxes using the standard Exchange 2007 “Move-Mailbox” PowerShell command. The syntax is described here:

Having said that, let me point out that you should augment that command with additional scripts that provide error handling and account management. In the end, the Move-Mailbox command is the only tool we use to migrate terabytes of Exchange information from one organization to another. I will show you the command in a moment, but first let’s talk about how we use it:

  • For bulk moves, we script the command against a text file that contains the names we wish to migrate. I prefer this to using an AD group to list the migration candidates, since a text file allows us to “lock” the group and easily manipulate the names if so desired.
  • Perform your AD work ahead of schedule. Create Mail-enabled user objects in the target domain and instruct the user community in advance as to how to change passwords and logon. You should avoid using AD Contacts and focus on Mail-enabled users in order to maintain passwords, groups and other attributes before, during and after the moves. This part of the project is critical and deserves its own section as you must maintain all X500, SMTP attributes. Moreover, it is important to cross-pollinate the LegacyExchangeDN value in one directory as an X500 address in the opposite directory for each mailbox. This will dramatically reduce and possibly eliminate reply failures and meeting ownerships.




  • Use the Move-Mailbox command to Mailbox-enable the target object and move the mail, but use an outside process to handle all account changes in the source domain. This will give you more control over, and reliability of, the source objects. The Move-Mailbox command can perform these functions, but there is little in the way of error handling, so if the AD is not responsive or there is a connection failure during object modifications, the Move-Mailbox command does not always recover. It is super reliable as a mail migration tool and semi-reliable with its AD changes, so focus on its benefits and shore up its weaknesses.
  • Execute a series of post scripts to perform any additional cleanup you may require for the accounts and mailboxes. There will be plenty. You will need a script to disconnect (do not delete, in case you need to reconnect later) the mailbox on the source object and to turn it into a Mail-enabled object with all the previous addresses and mail attributes. You will need another script to compare the object to make sure it is correct.
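To illustrate the cross-pollination step from the second bullet, here is a rough sketch that stamps each source mailbox’s LegacyExchangeDN onto the matching target mail-enabled user as an X500 address. The file name and GC variables are placeholders of my own, and in practice you would point each half at the appropriate forest:

```powershell
foreach ($name in (Get-Content "C:\Group1.txt"))
{
    # Read the LegacyExchangeDN from the source forest...
    $source = Get-Mailbox -Identity $name -DomainController $sourceGC
    $x500   = "X500:" + $source.LegacyExchangeDN

    # ...and add it as a proxy address on the target mail-enabled user.
    $target = Get-MailUser -Identity $name -DomainController $targetGC
    if ($target.EmailAddresses -notcontains $x500)
    {
        $target.EmailAddresses += $x500
        Set-MailUser -Identity $name -EmailAddresses $target.EmailAddresses -DomainController $targetGC
    }
}
```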

Just to make sure my point is perfectly clear, this is the exact code we use for every migration:


param($textfile, $database)

$import = Get-Content $textfile

$SourceCredential = Get-Credential

$TargetCredential = Get-Credential

$targetGC = ""

$sourceGC = ""

foreach ($item in $import)
{
    #move the migrated user's mailbox
    $report = "g:\migrations\results\MailboxMove-$(Get-Date -format 'yyyy-MM-dd hh-mm-ss').xml"
    Move-Mailbox -Identity $item -TargetDatabase $database -GlobalCatalog $targetGC -SourceForestGlobalCatalog $sourceGC -SourceForestCredential $SourceCredential -TargetForestCredential $TargetCredential -confirm:$False -RetryInterval 00:00:30 -BadItemLimit 50000 -IgnorePolicyMatch -AllowMerge -ReportFile $report
}


It is very, very simple. We create variables for the credentials and the Domain Controllers and allow the target database to be entered as a string so the execution of the migration looks something like this:

./migrationscript -textfile "C:\Group1.txt" -database "SERVERA\Storage Group 01\Database-01"

So let me explain a few of the details in this script. First, we force the retry interval to 30 seconds instead of the default 60. This is important since there is a delay between the time you write the object in AD and the time the target Exchange server acknowledges the write. There is also a bug in the Move-Mailbox script related to this delay; you are more likely to see this message when performing cross-forest migrations:

“Failed to set basic mailbox information, will retry in 60 seconds”

Microsoft should rename this function to “Waiting” instead of “Failed,” and you should just consider these 30 seconds part of the migration and move on!

Second, we set the BadItemLimit to a high number, but we have NEVER seen a SINGLE item get dropped. Lastly, we added the -IgnorePolicyMatch and -AllowMerge switches in order to meet our own goals.

TimeSaver – Make sure all of the target Exchange 2007 servers have only one Storage Group and one Mail Store. In bulk migrations, we found that roughly 5% of the migrations resulted in a (complete) disconnected mailbox in the designated target store and an empty connected mailbox in a completely different store. It seems that at some point near the end of a mailbox migration, the target server cannot enable the mailbox and Move-Mailbox creates a new empty mailbox on the same server in a different store. No error is flagged, and the only way to detect this was to write a script:

Get-MailboxServer | where {$_.Name -like "SERVERNAME*"} | Get-MailboxStatistics | where {$_.DisconnectDate -notlike "" -and $_.DisplayName -notlike "*test*"} | sort LastLogonTime | ft DisplayName, LastLogonTime, Database -wrap

This script is pretty simple, as it is only looking to see if there are mailboxes in a disconnected state. This will be the case if the mailbox has been moved to another database or server, or if the mailbox suffered the “split” problem described above. However, by targeting a server with only a single database, you eliminate this problem and have no need for my clever script.

Myth 2:

You must use 3rd party tools to automate the Outlook profile changes


This is even more false than the first item since Outlook 2007 will correct itself automatically! Yes, you read that correctly. Outlook 2007 will sense the change and use the Autodiscover feature to find the target AD and automatically reset the Exchange server connection settings. For Outlook 2003 clients you can use Microsoft’s Exchange Server Profile Redirector Tool which for us has a 90-95% success rate.

The profile redirector can be easily deployed from a logon script. You can place the redirector files in the netlogon share and execute it from a logon script like this:

%logonserver%\netlogon\exprofre.exe /targetgc= /n /v

You may notice that we are not using many of the switches here. That is by design. By not adding the /F switch, we are removing Outlook Favorites. By omitting the /A switch, Outlook must download a new copy of the address book. Since we omitted /O, the OST file will be renamed instead of deleted. If you have strange problems with Outlook after test migrations, you may want to add the /O switch in order to nuke OST files, as they can be a problem. We left the /N switch in place to clear the nickname cache.

ExProfre is pretty sophisticated since it only makes changes when it detects the original mailbox is gone (converted to a Mail-Enabled object for example) and there is an entry in the target domain for the user.

Here is the link to the tool:

TimeSaver- Outlook problems will represent 5-10% of your help desk calls and the default fix for nearly all Outlook problems is to create a new fresh profile.

Myth 3:

Delegates and customized Mailbox Permissions are lost- FALSE


This is false since the source rights on the mailbox will come over as part of the Move-Mailbox process. In cross-forest migrations, the original AD accounts in ForestA can be used to access the mailbox in ForestB. This behavior is supported by default by Move-Mailbox but not always desired. If, for example, you plan for the users to begin using accounts in ForestB to access mailboxes in ForestB, then the old Access Control Entries for ForestA could create some problems. We found that the legacy credentials may work for accessing the mailbox in ForestB, but other Exchange functions in the new forest did not work with the old credentials. This problem can be overcome by nesting certain forest groups into each other for a true forest trust, or you can simply write a script to remove the old ACLs from the new mailbox. Here is an example of that code:

Get-Mailbox -Server "TARGETSERVER" | Get-ADPermission | where { ($_.IsInherited -eq $false) -and ($_.User -like "LEGACYDOMAIN\*") } | Remove-ADPermission -confirm:$false

Get-Mailbox -Server "TARGETSERVER" | Get-MailboxPermission | where { ($_.IsInherited -eq $false) -and ($_.User -like "LEGACYDOMAIN\*") } | Remove-MailboxPermission -confirm:$false

The permissions are split between ADPermission and MailboxPermission, so you must run two commands to remove them completely. Moreover, there is no -Server option on Get-ADPermission or Get-MailboxPermission, so you first have to enumerate the objects using Get-Mailbox. Once you have the users for a particular server, you can pipe the results to get the permissions, limited to the ACLs that contain the domain name you wish to remove. You can then pipe the results to remove the permissions.

It is also important to note that Delegates will also come over but remember that with Outlook, the X.500 address is used behind the scenes to link users with mailboxes. So for this to work, you need to copy the LegacyExchangeDN value from each mailbox in the source domain and populate the migrated target object with a matching X500 proxy address. This will ensure that the delegate remains linked with the appropriate user. Here is a Microsoft article that explains the process in a little more detail:

The Move-Mailbox command should take care of this by itself, but it would be a good idea to write a script to collect the attribute and then report on it after a group has been migrated, just to make sure things are set as they should be. We don’t want end-user complaints to be our indicator that a directory entry is wrong!
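That verification script can be as simple as this sketch, which flags migrated users missing an X500 proxy address; the file name is an example of my own:

```powershell
# Report any migrated mailbox that lost its X500 address.
Get-Content "C:\Group1.txt" | ForEach-Object {
    $mbx = Get-Mailbox -Identity $_
    if (-not ($mbx.EmailAddresses | Where-Object { $_ -like "X500:*" }))
    {
        Write-Warning "$($mbx.DisplayName) is missing its X500 address"
    }
}
```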

Myth 4:

Cross-Forest Migrations are too complex and time-consuming-FALSE


Well, I say False but let me clarify. Yes, they are complex but they are manageable. Yes they are time-consuming but you can spend most of the time upfront in preparation and keep the actual migrations to a minimum. Here are some of the things you can do to make the process easier.

1)      Try to minimize expectations for the migrations. I usually send an email to the migration team and management that sets the expectations a little lower than we can deliver. For the most part, the migration will go far smoother than this message suggests but it sets the expectations to something we know we can deliver:

  • For the first week please choose recipients from the Global Address List instead of typing their name or using reply.
  • PSTs should be identified before the migration, as the Outlook profile may “forget” about them even though they have not been moved or deleted.
  • The migration cannot move corrupt or damaged Outlook items. Our target is to move 99.9% of the mailbox items and provide a report when a corrupt or otherwise unmovable item is found.
  • Outlook may take a long time after the migration to recreate its offline cache (OST)
  • Many customized settings in Outlook may be gone
  • Delegates will need to be set up again
  • Any customized Outlook rules will need to be set up again
  • If they have SmartPhones configured, those devices will no longer work until they are reconfigured
  • You may get notifications for meetings that have already passed or ones you have already dismissed.

2)      To make the transition smoother, I would highly recommend the installation of the Microsoft Exchange Server Inter-Organization Replication tool. This tool will provide Free/Busy information across the two organizations, and it will set the environment up to replicate other Public Folders if necessary. This tool is probably the easiest tool to set up and will provide the most value with the least amount of overhead. I usually install the tool on a Public Folder server in the target organization and the publisher on a Public Folder server in the source organization. The link to this free tool is here: Download the tool and expand it to get the setup instructions. Once set up, this tool has never failed me.

3)      Move workgroups at a time. Coexistence is by far the biggest point of confusion: “Have I been moved?” “Why does his/her email look different than mine?” Moreover, when you move a workgroup together, they become a support system for each other in the event that something does not go smoothly. When choosing whom to move when, if you focus on business groups as the primary differentiator, you will reduce helpdesk calls and overall confusion.

4)      Once you begin the migration, you should drive it to a conclusion. Every day you maintain a split organization, you run an overly complex organization. Moreover, if your organization is not using automation to keep the directories synchronized, every day that passes opens the door for more directory conflicts as people are added, removed or changed. You must minimize the amount of time you are coexisting on multiple platforms, or in this case multiple Exchange organizations.

Myth 5:  

You must hire expensive Subject Matter Experts for your migration-False


This is absolutely false if you have some bright folks on your team who understand PowerShell and directory updates. Having been hired to do many of these types of projects, I can say that I am usually only involved in the first 20-30% of the moves. So someone like me is often involved in the beginning phases to get the migration teams quickly ramped up and the process defined and refined so it is easy to repeat.

Most organizations just don’t have the time to (self) ramp up and continue to perform their day to day operations so bringing in an outside person/group to kick things off is pretty common but certainly not a requirement.

For example, unless you have done a considerable number of cross-forest migrations, you may not truly appreciate the negative impact WAN links and remote Domain Controllers have on the process. Intra-org moves, say between two sites in the same organization, work perfectly and rather fast. In a cross-organizational move, however, the performance of the migration can be painfully slow, and AD replication will introduce considerable delays and even potential problems with conflicts. Moreover, targeting Exchange servers in various sites and locations means you are never really sure how fast the process will be, and your projections will likely be WAY off.

Understanding those potential obstacles up front means you can plan around it and put into place a very consistent, reliable and predictable process. Here is one example of a process we have refined with experience:

One thing we have learned with this model is that we do not have to change the migration scripts to target different DCs and servers, and most importantly, we know exactly what our migration capacity is and can hit our projected numbers every single time with no surprises. As I mentioned before, it is also important that the target Exchange servers have only one storage group and one mail store. This will eliminate a potential problem with mailboxes that may “split” across stores.

To move people to remote servers after this move, you would just use the normal Move-Mailbox command, or even the GUI, to distribute the mailboxes. This process is reliable and a simple “fire and forget” operation, since you can just queue the moves up before you go to bed at night!
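Queuing that final fan-out might look like this sketch; the file and database names are placeholders of my own:

```powershell
# Ordinary intra-org moves to redistribute the centralized mailboxes;
# queue them up at night and check the report files in the morning.
Get-Content "C:\RemoteSiteUsers.txt" | ForEach-Object {
    Move-Mailbox -Identity $_ -TargetDatabase "REMOTESERVER\First Storage Group\Mailbox Database" -Confirm:$false
}
```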