Hybrid Mode in Tabular BI Semantic Model – Part 2

This is Part 2 of the Hybrid Mode in Tabular BI Semantic Model series, where we will learn more about design tips, a few factors to consider when implementing Hybrid mode, and a summary of the pros and cons of Hybrid Mode.

Part 1 of the Hybrid Mode in Tabular BI Semantic Model series is located here.


Partitioning for Hybrid Mode

DirectQuery only supports one partition. In Hybrid Mode, however, you can additionally define a set of mutually exclusive In-Memory partitions to serve In-Memory data access. The DirectQuery partition and the In-Memory partitions can overlap; however, the Processing Option of the DirectQuery partition must be set to “Never process this partition”, otherwise the “A duplicate attribute key has been found when processing” error will be returned, as shown below.

Processing Hybrid Tabular Model with Duplicate Key Attribute Error

More information about Partitioning and DirectQuery mode: http://msdn.microsoft.com/en-us/library/hh230965.aspx

Note: the results of the DirectQuery partition and the In-Memory partitions cannot be combined automatically. A separate connection must be made for each data access type, and query results can potentially be combined manually or programmatically. For example, SSRS reports can potentially use both the DirectQuery and In-Memory data access types of the same Tabular database to retrieve real-time and in-memory data. More on this in another post.


Design Tips:

1. Only expose a subset of the data that is required for real-time access.

Requirement example: when performing Lead Analysis, detailed analysis of the past month is required in real-time using PowerView.

Design solution: Configure the DirectQuery partition with a date filter so that only the past month’s data is retrieved.

Analysts can then retrieve the required up-to-date data for the past month via PowerView to perform accurate analysis.
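The date filter lives in the DirectQuery partition’s source query. A sketch, assuming a hypothetical FactLead source table with a LeadDate column:

```sql
-- DirectQuery partition query: only the past month of lead data
SELECT LeadKey, DateKey, LeadDate, Amount
FROM dbo.FactLead
WHERE LeadDate >= DATEADD(MONTH, -1, CAST(GETDATE() AS DATE));
```

Because DirectQuery translates DAX to SQL against this query at run time, each PowerView request sees the latest rows within the window.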


2. Only expose a subset of data that is required for in-memory analysis

By default, the In-Memory partition is defined the same as the DirectQuery partition, i.e. it’s an exact copy. If there is a requirement to make only a subset of the source data generally available via In-Memory (i.e. accessed via Excel), then that subset of source data is a good candidate for an In-Memory partition. Multiple partitions can be defined to efficiently process the data into the In-Memory part of the Hybrid Tabular Model.


3. Set DirectQuery partition to “Never process this partition”

This is to avoid processing error during design/development in SSDT.


4. The larger the combined size of the In-Memory partitions, the more memory the server needs.

Although the compression in the Tabular Model is great, the source data still needs to be processed and loaded into memory, which can be quite heavy on resources. If the data in the In-Memory partitions does not fit in memory, VertipaqPagingPolicy can be tweaked to make use of virtual memory, paging data out to the system pagefile. Marco Russo has a brilliant article on the VertipaqPagingPolicy modes and Memory properties.


5. Optimise at source level first, i.e. SQL Server database.

Use Columnstore indexes and partitioning techniques where necessary, especially on the tables that are sources to the Tabular model. (This tip applies to any Tabular Model mode.)
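For example, a nonclustered columnstore index on a large source fact table can speed up the SQL queries that DirectQuery generates. A sketch with hypothetical table and column names:

```sql
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSales
ON dbo.FactSales (DateKey, ProductKey, CustomerKey, Amount);
```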


6. Partitioning in Tabular Model is a means of organising the data to be loaded and processed into In-Memory.

Unlike partitions in the SQL Server relational database engine, which can improve performance at query time, partitions in the Tabular Model are strictly for optimising processing performance. Having said that, as the combined total size of the tabular model partitions grows, query performance may decline a little – but not in a linear manner; remember that the xVelocity engine uses an outstanding compression algorithm. More about partition and processing strategy can be found below. Further reading on partitions: http://msdn.microsoft.com/en-us/library/hh230976.aspx.



Factors to consider when adopting Hybrid Mode

There are a few factors to consider when adopting Hybrid Mode. It is a clever way of attacking the “real time” and “flexibility” issues, but it is not suitable for every scenario.


Although Hybrid Mode can offer the “real-time” feature of DirectQuery, the semantic differences between the xVelocity In-Memory Analytics Engine and the SQL Server engine should be carefully considered and investigated. The results when accessing data via the DirectQuery mode and the In-Memory mode of a Hybrid tabular model may differ due to these semantic differences. See more information on semantic differences here.

In Part 1, I mentioned that DAX queries issued via the DirectQuery data access are translated into equivalent SQL queries. DirectQuery relies on the SQL Server Database Engine, whilst In-Memory uses the xVelocity In-Memory Analytics Engine; the two are semantically different and may return different results.


One of the reasons for choosing a DirectQuery design is scalability, i.e. relying on optimisation performed at the SQL Server level, with minimal additional resources required for DirectQuery. Enabling Hybrid data access mode means enabling both DirectQuery mode and In-Memory mode. Since In-Memory mode requires resources, particularly memory and CPU, enabling Hybrid Mode requires consideration of memory and CPU allocation in a similar way to In-Memory. Using Hybrid mode without careful consideration of partitioning, and of how much data is to be processed into the In-Memory Tabular Model, may defeat the purpose of switching to a DirectQuery design (exposed as a Hybrid Mode Tabular Model).


Securing data when it is accessed through both the DirectQuery mode and the In-Memory mode of a Hybrid tabular model is not trivial. Please note that Row/Dynamic level security is not offered in Hybrid mode, as it follows the DirectQuery design. One of the benefits of using DirectQuery in the first place may be to take advantage of existing permission definitions in the SQL Server data source.

Most complex permissions defined in the SQL Server data source cannot be preserved when data is accessed through the In-Memory mode of a Hybrid Tabular Model. As an example of a “complex” permission (not so complex in this case) defined in the SQL Server data source: user A only has read access to transaction records dated 2012 and newer, while user B has read access to transaction records dated 2010 and older.

In this example, In-Memory will not be able to capture user A’s and user B’s permissions, because:

1. The Impersonation setting for In-Memory data processing of a Hybrid Tabular Model is specific to one credential (a Windows account or service account); all data processed into In-Memory is based on the supplied credential.

2. Calculated columns are not available in Hybrid mode (as it follows the DirectQuery design), so it is impossible to define row-by-row permissions that mimic the SQL Server data source’s.

This means uniformity of complex permission definitions, regardless of the data access mode of a Hybrid tabular model, is not possible.

Partitioning Strategy and Processing Frequency

Partitioning strategy and the processing of the In-Memory mode can be aligned to business requirements. For example, if the business requires only the past 12 months of data to be accessed via Excel connecting via In-Memory, the In-Memory partition(s) can be defined such that only the past 12 months’ worth of data is processed into the Hybrid tabular model. Please note that the DirectQuery partition can overlap the In-Memory partitions, so the DirectQuery partition may contain all of the data, some of the data, or a superset/subset of the In-Memory partition(s).
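For example, the past 12 months could be processed as one In-Memory partition per month. A sketch of a single monthly partition query (table and column names are hypothetical):

```sql
-- In-Memory partition covering June 2012 only
SELECT SalesKey, DateKey, ProductKey, Amount
FROM dbo.FactSales
WHERE DateKey >= 20120601 AND DateKey < 20120701;
```

Keeping the monthly partitions mutually exclusive means each processing run only touches the month that changed.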

An example of Partition Design in Hybrid Mode




As outlined above, the architecture of Hybrid mode is quite clever, but it will not solve every real-time vs flexibility issue. The advantages and limitations of Hybrid mode can be summarised as follows.

Hybrid Mode Advantages

1. Greater choice of client tools (compared to pure DirectQuery mode), i.e. Excel and other MDX client tools can access the In-Memory partition(s)

2. When diligently partitioned, fewer resources are required for caching, processing and querying the In-Memory partitions.

3. Flexibility in accessing real-time data using the DirectQuery data access, i.e. from PowerView and SSRS

Hybrid Mode Limitations

Inherited design constraints of DirectQuery:

1. Restricted DAX functions (as it must follow DirectQuery design constraints). For example, TOTALYTD and SAMEPERIODLASTYEAR are not available in DirectQuery and, as such, are not available in Hybrid Mode either.

2. No Row Level security. Even if it is possible to implement this at the source database level, there will be inconsistency in the data returned between In-Memory and DirectQuery

3. No Calculated Columns
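On the first limitation: many time-intelligence calculations can still be expressed with basic DAX functions that DirectQuery does support. A sketch of a year-to-date measure written without TOTALYTD (table and column names are hypothetical, and function availability should be verified against the DirectQuery formula compatibility list):

```dax
Sales YTD :=
CALCULATE (
    SUM ( FactSales[Amount] ),
    FILTER (
        ALL ( 'Date' ),
        'Date'[CalendarYear] = MAX ( 'Date'[CalendarYear] )
            && 'Date'[Date] <= MAX ( 'Date'[Date] )
    )
)
```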

Inherited In-Memory drawbacks:

1. Memory requirements to store the compressed In-Memory data.

2. Processor requirements to process source data into the tabular database and memory.

Data inconsistency limitations:

1. Stale data in In-Memory versus up-to-date data in DirectQuery, which may confuse users

2. Semantic differences between SQL Database Engine (returned by the DirectQuery mode) and the xVelocity In-Memory Analytics Engine (returned by the In-Memory mode)


Other Options

Hybrid mode still carries the same memory and processor burden as the default In-Memory mode. When designed and configured carefully, the default In-Memory mode may be able to achieve near real-time results in the same manner as Hybrid mode, with less complexity. That, of course, is a topic for another discussion. So, stay tuned!


Wrap Up

Hybrid Mode mainly offers both real-time access and client tool flexibility. However, it comes at a price. This Part 2 of the Hybrid Mode in Tabular BI Semantic Model series suggests including only the relevant data in the tabular model for both In-Memory and DirectQuery access. Consistency, scalability, security, partitioning strategy and processing frequency are important factors to consider when implementing Hybrid Mode. The limitations of Hybrid Mode listed here are valid as at the time of writing.

If you are implementing a Tabular Model database, I’d love to hear your thoughts. Please leave comments regarding your experience with the Tabular Model or the Hybrid Mode posts.


Edit - 25 July 2012: 

Added more explanation on VertipaqPagingPolicy reference.

Added more explanation in Section 6. Partitioning in Tabular Model is a means of organising the data to be loaded and processed into In-Memory.

Hybrid Mode in Tabular BI Semantic Model – Part 1

During my presentation on “DirectQuery vs Vertipaq Modes in SSAS Tabular Model” at PASS BI/DW VC and SQL Rally in Dallas this year, I briefly explained the Hybrid Mode in Tabular BI Semantic Model (BISM). I would like to discuss Hybrid Mode further in two parts. Here is the first part, which contains a basic walkthrough of Hybrid Mode in Tabular BISM.

On a side note, DirectQuery vs Vertipaq Modes in SSAS Tabular Model slide deck can be downloaded from here: http://www.mssqlgirl.com/slide-deck-directquery-vs-vertipaq-for-pass-dwbi-vc.html



The default mode of the SSAS Tabular Model is In-Memory (also known as Vertipaq Mode), where data from various types of data sources is processed and loaded into the in-memory Tabular databases. Any queries executed on the Tabular database are served from the data in memory (cache). Due to its state-of-the-art compression algorithm and multi-threaded query processor, the xVelocity in-memory analytics engine provides fast access to tabular model objects and data. In-Memory mode supports both DAX and MDX query types.

Another mode available for the SSAS Tabular Model is DirectQuery. DirectQuery translates all DAX queries at run time to SQL statements, allowing real-time access to the SQL Server source database. Unlike In-Memory mode, DirectQuery only works with a single SQL Server data source. The main advantages of DirectQuery are real-time access and scalability. This comes at the price of restrictions on a number of DAX functions and the absence of the Calculated Column feature. Only client tools that issue DAX queries can access a Tabular Model in DirectQuery mode.

Hybrid Mode combines the design aspect of DirectQuery with the client tool flexibility of In-Memory. Essentially, a Tabular Model database with Hybrid Mode is designed/developed with DirectQuery enabled, and is published with both the DirectQuery access mode and the In-Memory access mode. When published, the metadata is deployed and data is processed into memory for In-Memory access. Hence, Hybrid Mode also requires the In-Memory processing mechanism, and supports both the In-Memory partition type and the DirectQuery partition type.

Querying in Hybrid Mode

There are two options for querying a Tabular database with Hybrid mode enabled.

The following diagram shows how Hybrid Mode can serve DAX issuing client tools (PowerView, SSRS) and MDX issuing client tools (Excel, Tableau, SSRS).

SSAS_Tabular_Hybrid Mode_Query Flow


When a client tool issues a DAX query via the DirectQuery access mode, the DAX query is passed through and converted into an equivalent SQL query that accesses the source SQL Server database directly. The result returned to the DAX-issuing client tool comes straight from the source SQL Server data source.

When executing an MDX query via In-Memory access mode, the query is served from the cache (the tabular database). There is no conversion and the results returned to the MDX issuing client tool will be based on the data that is in the cache.

Note: PowerView is a DAX-issuing client tool that can work with both DirectQuery and In-Memory. If the primary/default Query Mode of a hybrid Tabular database is DirectQuery, the result is served straight from the source SQL Server data source. If the primary/default mode is In-Memory, the result is served from the cache. More about the two hybrid modes later.


Enabling Hybrid Mode – the Basics

1. Design phase

When creating a Tabular Model solution using SSDT, ensure that the DirectQueryMode value is set to On. This ensures that the solution conforms to DirectQuery design features.

Model.bim Properties on SSDT

Enable DirectQueryMode on tabular model via SSDT

2. Deployment phase

Prior to deploying the solution, change the Query Mode to “In-Memory with DirectQuery” or “DirectQuery with In-Memory”. These are the two available Hybrid modes.

Project Properties of Tabular Model in SSDT

Change Query Mode of Tabular Project in SSDT



Query Mode: In-Memory with DirectQuery

The In-Memory with DirectQuery option means that In-Memory is the primary (or default) connection. However, when needed and if the client tool supports it, the secondary Query Mode, i.e. DirectQuery, can be used instead.

This query mode is ideal for the following scenarios:

  1. Users are mainly using Excel to perform analysis on the tabular model.
  2. The processed data in memory will be used to serve Excel queries.
  3. The processed data in memory will be used to serve PowerView report.
  4. Only occasional real-time queries required for accessing the real time data, using SSRS as an example.

Query Mode: DirectQuery with In-Memory

The DirectQuery with In-Memory option means that DirectQuery is the primary (or default) connection. However, when needed and if the client tool supports it, the secondary Query Mode, i.e. In-Memory, can be used instead.

This query mode is ideal for the following scenarios:

  1. Users are mainly using PowerView (or DAX issuing Client Tool) to perform analysis on the tabular model.
  2. By default, always returns real time data.
  3. Processed in-memory data only needs to be retrieved occasionally, e.g. via Excel.


Connecting to Hybrid Mode tabular database

Connecting to a tabular database with Hybrid Mode is the same as connecting to either In-Memory or DirectQuery – that is, if you would like to use the primary mode. For example, if a tabular database is published with a Query Mode of “DirectQuery with In-Memory”, the default connection via a client tool will always be made through DirectQuery, with no extra steps required. Similarly, if the Query Mode is “In-Memory with DirectQuery”, the default connection will use In-Memory.

When using the “DirectQuery with In-Memory” Query Mode, a client tool can connect to the tabular database using In-Memory mode by specifying it in the connection. Below is an example of connecting, via SQL Server Management Studio, to the In-Memory part of a tabular database that has been published with “DirectQuery with In-Memory”.

SQL Server Management Studio Additional Connection Parameters

Specify "DirectQueryMode" parameter in SSMS
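For reference, the value entered in the Additional Connection Parameters tab (or appended to a connection string) is the DirectQueryMode property. A sketch, assuming the database was published as “DirectQuery with In-Memory” and you want the secondary In-Memory access:

```text
DirectQueryMode=InMemory
```

Conversely, DirectQueryMode=DirectQuery forces real-time access on a database published as “In-Memory with DirectQuery”.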


Below is a sample of specifying the DirectQueryMode in Excel:

Excel Connection String Dialog Box

Specifying DirectQueryMode in Excel Connection String Property


Note: As at the time of writing, the DirectQueryMode connection property cannot be specified in PowerView. So, PowerView will use the primary (default) Query Mode of the hybrid tabular database.

Wrap Up

Above is a basic walk-through of Hybrid Mode in SSAS Tabular. Hybrid Mode tabular databases come in two Query Modes: “DirectQuery with In-Memory” and “In-Memory with DirectQuery”. The first Query Mode in the name is the “primary” Query Mode of the tabular database. Tabular databases with Hybrid mode are designed in DirectQuery, but published with either of the two Query Modes. Some client tools provide the option to switch to the secondary Query Mode by specifying the “DirectQueryMode” connection parameter.

Stay tuned for Part 2 of this series for more information on the design tips and important factors to consider when implementing Tabular solution with Hybrid Mode.

Process Full on Tabular Model Database via SSMS

The past few months I have been working with the SSAS Tabular Model introduced in SQL Server 2012. Generally, I automate my Tabular Model processing using XMLA via a SQL Server job. However, I have recently come across a SQL Server Management Studio (SSMS) bug whereby performing “Process Full” on a Tabular Model database incorrectly shows “Process Recalc” instead. If it had actually performed a Process Recalc, the data in the tabular model would not be as expected. The underlying behaviour, however, seems to be correct.

Steps To Reproduce The Issue

I run Process Full on the Tabular Model database by right-clicking the tabular model database in SSMS and choosing “Process Database”, as shown below.

Process Database Menu Item

This brings up the “Process Database” dialog box, in which I then choose “Process Full” from the Mode drop-down list, as illustrated below.

Process Database dialog box in SSMS

Choose Process Full mode

The “Data Processing” dialog box will appear showing the progress. Once it’s finished you can click on the “Details” link which will show “Processing Details” with an unexpected “Type” in the Process XMLA command.

Data Processing Dialog Box


Processing Details dialog box showing ProcessRecalc instead of ProcessFull


The Underlying Behaviour

Process Full and Process Recalc on Tabular Model database are different. As mentioned by Cathy Dumas in “Processing tabular models 101” on her MSDN blog:

Process Full – Loads data into all selected partitions or tables. Any affected calculated columns, relationships, user hierarchies, or internal engine structures (except table dictionaries) are recalculated.

Process Recalc – For all tables in the database, recalculates calculated columns, rebuilds relationships, rebuilds user hierarchies, and rebuilds other internal engine structures. Table dictionaries are not affected.

This means that if data is modified in the source database(s), Process Recalc will not pick up the modified data. Process Full, on the other hand, will.

Not convinced that invoking Process Full on the Tabular Model database via SSMS would indeed invoke Process Recalc as shown on the dialog box, I decided to run SQL Profiler.

According to SQL Profiler – at line 61 or so of my trace – the command executed was a Process XMLA command, which I copied and pasted into Notepad.
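For reference, a minimal sketch of what a ProcessFull XMLA command looks like (the database ID here is hypothetical; the actual command captured in the trace also carries additional processing options):

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Type>ProcessFull</Type>
  <Object>
    <DatabaseID>MyTabularDatabase</DatabaseID>
  </Object>
</Process>
```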

After running a couple of tests, it seems that Process Full on a database via SSMS in the SQL Server 2012 RTM version does actually perform a Process Full, despite reporting it as Process Recalc. My tabular model database is refreshed with the new data.

Related Posts

Looking at Microsoft Connect after I encountered this issue, I found a fairly old post that did not directly report this issue. However, in the response, Cathy Dumas mentioned that she had found the above bug in a pre-RTM version of SQL Server 2012. Check her comment in the link below.


Wrap Up

The SQL Server Management Studio user interface invokes Process Full properly on a Tabular Model database in the SQL Server 2012 RTM version, despite the misleading “Process Recalc” info in the Details. This is confirmed by using SQL Profiler and checking the outcome of the Process Database execution. When in doubt, always inspect the underlying behaviour.



Upcoming DirectQuery vs Vertipaq Presentation

How exciting is it for us SQL professionals to have so many SQL Server events since the beginning of the year? We’ve had 12 Hours of SQL, 24 Hours of PASS, 24 Hours of PASS Russian edition, the SQL Server 2012 Virtual Launch, SQL Saturday (the ANZ tour is currently running) and plenty of other PASS virtual chapter sessions.
I’m quite honoured to have been selected to present at the SQL Server User Group in Sydney, 24 Hours of PASS, and SQL Saturday #138, as well as being picked by the Community to present at SQL Rally Dallas this year.

I’d like to focus a bit more on my upcoming SQL Saturday #138 and SQL Rally Dallas sessions. The title is “DirectQuery vs Vertipaq Mode in SSAS Tabular Model”. This session takes you a second step, to see what’s beyond the default option (In-Memory / Vertipaq).

Since the time I wrote the abstract for SQL Rally, Microsoft has rebranded Vertipaq to “xVelocity in-memory analytics engine (VertiPaq)”. Some of the project settings / options have also been changed to refer to “In-Memory”, instead of Vertipaq; while some remain as Vertipaq such as in the Tabular Model Analysis Server Properties. Despite the name changes, they mean the same thing in SSAS. 

The DirectQuery vs Vertipaq Mode in SSAS Tabular Model session brings a tiny step beyond your first leap to deciding/considering In-Memory Tabular mode. It concentrates on introducing DirectQuery and how different it is to In-Memory. The demo will also show how the two modes differ in query execution, design and maintenance aspects; giving you enough information to make an informed decision on which mode to use based on your business case. 

If you are new to Tabular Model or PowerPivot and would like to know more about it, there are quite a number of great resources to get you up to speed, as listed at the end of this post. Hopefully by then, you’d be comfortable learning more in my upcoming “DirectQuery vs Vertipaq Mode in SSAS Tabular Model” session. Having said that, if you’re completely new to Tabular Model but want to know what the fuss is about, come to the session – and take your first leap into learning Tabular Model with me.

I have been having great joy using Tabular Model in my current work, and am continuously extending my knowledge by preparing for this presentation. So I do hope that you can attend SQL Saturday #138 in Sydney or SQL Rally in Dallas and join me at the session.


Reading/Watching List

Welcome to Tabular Projects



Building your first Analysis Services Tabular BI Semantic model with SQL Server 2012

Speaker: Frederik Vandeputte



Building the Perfect BI Semantic Tabular Models for Power View

Speaker: Kasper De Jonge



Developing and Managing a Business Intelligence Semantic Model (BISM) in SQL Server Code Name “Denali” Analysis Services [BIA-316-M]

Speaker: Cathy Dumas


Note: if you have the PASS Summit 2011 DVDs, pull this session out and start watching it. Cathy was superb in this presentation and I would consider this as an energetic presentation where I “ooo… aaaa…”-ed a few times!

SQLRally Dallas 2012 Website



Undo Bad Tabular Mode Analysis Server Properties

Warning: This post contains some advice that may not be ideal for Production / critical environments. Please read thoroughly and proceed with any changes to your environment with caution.

Late last week, I decided to change a few settings on my development Tabular Model server. The first change related to the VertipaqPagingPolicy value, in an attempt to understand the setting a little better. The second related to changing the DataDir path. Changing values of the Analysis Server properties can have adverse effects. It is best done only if you understand the implications AND have thoroughly tested the change in a dev environment prior to making it in Production.

The changes I’m outlining below were done through the Analysis Server Properties dialog box, opened by right-clicking the Tabular Model server and choosing Properties, as shown below.


Accessing Analysis Server Properties

This will give you a nice dialog box to edit the Analysis Server Properties:

Tabular Model Analysis Server Properties

Changing Analysis Server Properties usually requires a server restart. Fingers crossed, the Analysis Server should restart properly if the configuration is OK.

Let’s talk about the changes I’ve made, why and how I got stuck!

Vertipaq Paging Policy

I have hit a number of errors relating to Memory Allocation Failure on my dev machine. So I thought I’d start playing around with the VertipaqPagingPolicy values to see if they could help solve the problem.

The exact error message on my dev box was:

The following system error occurred:  Insufficient quota to complete the requested service.

Memory error: Allocation failure. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.

The current operation was cancelled because another operation in the transaction failed.


MSDN Books Online describes three acceptable values for VertipaqPagingPolicy.


  • Zero (0) is the default. No paging is allowed. If memory is insufficient, processing fails with an out-of-memory error.
  • 1 enables paging to disk using the operating system page file (pagefile.sys).
  • 2 enables paging to disk using memory-mapped files.


As BOL mentions, when VertipaqPagingPolicy is set to 1 or 2, processing is less likely to fail due to memory constraints because the server will try to page to disk using the specified method. So I decided to change VertipaqPagingPolicy to 1 and then restarted the server. Performing Process Full on the partitions one at a time worked perfectly, without error, with VertipaqPagingPolicy = 1.

Setting VertipaqPagingPolicy to 1
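The same setting can also be changed by editing msmdsrv.ini directly. A sketch of the relevant fragment, assuming the default configuration layout (surrounding elements omitted):

```xml
<ConfigurationSettings>
  <Memory>
    <!-- 0 = no paging (default), 1 = page via pagefile, 2 = memory-mapped files -->
    <VertiPaqPagingPolicy>1</VertiPaqPagingPolicy>
  </Memory>
</ConfigurationSettings>
```

As with any manual edit to msmdsrv.ini, back the file up first and restart the server afterwards.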

Marco Russo has published more information on VertipaqPagingPolicy in his article titled “Memory Settings in Tabular Instances of Analysis Services”. I strongly recommend reading this article – maybe even a few times.


DataDir

By default, the DataDir location for a Tabular Model server is:

C:\Program Files\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Data

This folder is typically home to files/folders that I consider “system” oriented, and it may also hold the data files and folders of users’ Tabular Model databases. The “system” files/folders that you typically find in the DataDir folder are:

  1. ExcelMDX.0.asm.xml
  2. master.vmp
  3. VBAMDX.0.asm.xml
  4. VBAMDXINTERNAL.0.asm.xml
  5. ExcelMDX.0.asm
  6. VBAMDX.0.asm
  7. CryptKey.bin

If you have accidentally changed the DataDir location and restarted the Analysis Services (Tabular Mode) server, it will come up with an error.

Error when Starting Analysis Server

As long as you know the correct DataDir path and the files are still intact, don’t panic. The fix is fairly simple.

In the Config folder of the server (by default installed on C:\Program Files\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Config\), there is a file called “msmdsrv.ini”. This file contains the settings that will be read and used when the Analysis Server is started. So when the server doesn’t start due to incorrect settings, the easy approach is to correct the configuration settings in msmdsrv.ini file. After the settings are corrected accordingly in this file, the server can be restarted via SQL Server Configuration Manager and it should be able to start properly.

There are two options to correct the msmdsrv.ini file.

Option 1 – Manually fixing it, if you know what you’re doing.

In regards to my earlier case with accidental DataDir change, I changed the <DataDir> element on the second line of “msmdsrv.ini” back to the previous value, i.e.

  <DataDir>C:\Program Files\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Data</DataDir>


As previously stated, I would not recommend changing settings without understanding the implications and without prior testing. I would also suggest making a copy/backup of the msmdsrv.ini file BEFORE making the changes manually.

Another tip: when you try opening the “msmdsrv.ini” file, it may come up with the following error message.

Access Is Denied

This is fine; treat it as a reminder to create a backup of the file you are about to change. So copy the file, naming the copy “msmdsrv_MessedUp.txt”, rename the original file to “msmdsrv.txt”, and make the required correction. Then rename “msmdsrv.txt” back to “msmdsrv.ini”. Restart the server – and voila! – the Analysis Server should start if the new settings are good.


Option 2 – Restoring the “msmdsrv.ini” file from the “msmdsrv.bak” file.

In the same folder as “msmdsrv.ini”, there is also “msmdsrv.bak”. This .bak file is a backup copy of the last known good configuration file for the Analysis Services server. So you may be able to use it to restore everything back to a previous good configuration.


Wrap up

As I’ve warned earlier, changes to msmdsrv.ini can result in an unstable state and/or non-startable Analysis Server. So, please explore this with caution. I am a firm believer of giving things a go and exploring options that are available, but always with caution.

Changing VertipaqPagingPolicy has helped me work around the Memory Allocation Error on my dev machine temporarily (the next step would be to acquire a more powerful machine!). When mistakes are made in configuring the Analysis Server properties, the Analysis Server may become non-startable. In that case, the solution may simply be reverting the “msmdsrv.ini” file back to its previous working state.



SQL Server 2012, The Good and The Bad


As a SQL Server specialist, hearing new updates on SQL Server always excites me. SQL Server 2012 provides plenty of features, such as Columnstore Indexes and AlwaysOn, that would have immensely helped my previous projects. The PowerView feature and the integration of SQL Server 2012’s BI capabilities with Sharepoint 2010 are definitely not something to dismiss easily. The areas below are the new features in SQL Server 2012 that I consider important for my past and upcoming projects, and I will discuss a few of the goods and the bads.


1. SQL Server Data Tools (SSDT)

The Good: This will replace BIDS. SSDT uses the Visual Studio 2010 Shell and has revamped the look and feel of SSIS to some degree.

The Bad: After installing SQL Server 2012 RC0, I am unsure how to add third-party SSIS data flow components. This means that if you are using the Balanced Data Distributor SSIS Data Flow component, some testing is required first.

Further reading: http://msdn.microsoft.com/en-us/data/tools.aspx


2. ColumnStore Indexes

The Good: This will immensely help Data Warehouse databases that are large (or getting larger and larger). Fact tables with millions of records will definitely benefit from this feature. There are some caveats; one in particular is that it only comes in the Enterprise Edition of SQL Server 2012.

The Bad: A table with a ColumnStore index cannot be updated directly, as at the SQL Server 2012 release. This may not be critical, as there is a workaround: http://msdn.microsoft.com/en-us/library/gg492088(v=SQL.110).aspx#Update
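One documented workaround is to disable the columnstore index, load the data, then rebuild it. A sketch with hypothetical table names:

```sql
ALTER INDEX ncci_FactSales ON dbo.FactSales DISABLE;

-- Load new rows while the columnstore index is disabled
INSERT INTO dbo.FactSales (DateKey, ProductKey, Amount)
SELECT DateKey, ProductKey, Amount
FROM staging.FactSales;

ALTER INDEX ncci_FactSales ON dbo.FactSales REBUILD;
```

Partition switching is another documented approach for large fact tables.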


3. Rapid Data Analysis through PowerView and PowerPivot

The Good: Business Analysts and Data Analysts who are exploring the data, can start with simple, powerful and presentation ready reports using PowerView and PowerPivot.

The Bad: PowerView is only available with Sharepoint 2010 BI integration – as at SQL Server 2012 RC0. This may change in the final version.

Further reading: http://msdn.microsoft.com/en-us/library/hh213579(v=sql.110).aspx


4. Tabular Model SSAS

The Good: This feature helps leverage data warehouse design with less effort than building overly complex cubes, for powerful and high-performing data analysis and business reporting.

The Bad: Self joins are not permitted, composite keys are not supported, unique lookup columns are required, only a single active relationship is supported, and only one relationship per source column is allowed. DirectQuery mode is not fully compatible with Vertipaq mode, so careful consideration is required before switching all the way to DirectQuery.

Further reading: http://msdn.microsoft.com/en-us/library/hh212945(v=sql.110).aspx


5. Licensing / editions

The Good: If you have Software Assurance Agreement with Microsoft, you may not be affected greatly by the costs incurred due to the new pricing model.

The Bad: Planning is required whether to choose Standard, BI or Enterprise edition appropriately and it has been reported that the new licensing model will incur more costs in most cases.

Further Reading: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-editions.aspx, http://redmondmag.com/articles/2011/11/05/microsoft-unveils-sql-server-2012-licensing-and-pricing.aspx


The above is my take on SQL Server 2012; I strongly urge you to use it and test it out. You may agree or disagree (or come up with different points to consider). Please share your thoughts by leaving comments.


Last but not least, don’t forget to attend SQL Server 2012 Virtual Launch Event on March 7th: http://www.sqlserverlaunch.com/ww/Home