I understand the very basics of MDX. I know how to, for example, get
some set of measures as columns with some dimension on the Rows.
However, I need to do something more complex and I just can't get my
head around it.
What I'm trying to do is compare some given measure for various periods.
For example, say you have a measure [Sales] and a dimension called
[Company Regions], the query to get the sales numbers for the company
regions basically looks like this:
with member [Date Dim].[Date Range] as
'Aggregate( {[Date Dim].[2004].[M05]:[Date Dim].[2004].[M06]} )'
select [Measures].[Sales] ON COLUMNS,
Descendants([Company Regions], [Lowest Level], SELF_AND_BEFORE) ON ROWS
from myCube
where ([Date Dim].[Date Range])
That gives me the sales numbers for the months May and June.
What I need to do is run that same basic query, but compare the Date Range
with the same range last year. I just can't grasp how to make it show up in
the columns. I've read about Cousin() and ParallelPeriod(), but all the
examples return the period itself, not some measure in that period.
Can someone point me in the right direction?
Zach
Try it with a calculated member:
For example:
WITH MEMBER [Measures].[Sales_LastYear] as
'Sum( [Date Dim].[2003], [Measures].[Sales] )'
MEMBER [Measures].[Sales_ThisYear] as
'Sum( {[Date Dim].[2004].[M05], [Date Dim].[2004].[M06]}, [Measures].[Sales] )'
MEMBER [Measures].[Sales_Difference] as
'[Measures].[Sales_ThisYear] - [Measures].[Sales_LastYear]'
SELECT
{[Measures].[Sales_LastYear], [Measures].[Sales_ThisYear], [Measures].[Sales_Difference]} ON COLUMNS,
Descendants([Company Regions], [Lowest Level], SELF_AND_BEFORE) ON ROWS
FROM myCube
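If you want to derive last year's range from the current one instead of hard-coding 2003, ParallelPeriod() can do that as well. Something along these lines should work (a sketch only; it assumes the year level of [Date Dim] is actually named [Date Dim].[Year], so adjust to your hierarchy):
WITH MEMBER [Measures].[Sales_ThisYear] as
'Sum( {[Date Dim].[2004].[M05]:[Date Dim].[2004].[M06]}, [Measures].[Sales] )'
MEMBER [Measures].[Sales_LastYear] as
'Sum( {ParallelPeriod( [Date Dim].[Year], 1, [Date Dim].[2004].[M05] ) :
       ParallelPeriod( [Date Dim].[Year], 1, [Date Dim].[2004].[M06] )},
      [Measures].[Sales] )'
SELECT
{[Measures].[Sales_LastYear], [Measures].[Sales_ThisYear]} ON COLUMNS,
Descendants([Company Regions], [Lowest Level], SELF_AND_BEFORE) ON ROWS
FROM myCube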
Michael
"Zach Wells" <no_zwells_spam@.ain1.com> schrieb im Newsbeitrag
news:%2366CkJyUEHA.3336@.TK2MSFTNGP11.phx.gbl...
> I understand the very basics of MDX. I know how to, for example, get
> some set of measures as columns with some dimension on the Rows.
> However, I need to do something more complex and I just can't get my
> head around it.
> What I'm trying to do is compare some given measure for various periods.
> For example, say you have a measure [Sales] and a dimenion called
> [Company Regions], the query to get the sales numbers for the company
> regions basically looks like this:
> with member [Date Dim].[Date Range] as
> 'Aggregate( {[Date Dim].[2004].[M05]:[Date Dim].[2004].[M06]}'
> select [Measures].[Sales] ON COLUMNS,
> Descendants([Company Regions], [Lowest Level], SELF_AND_BEFORE)
> from myCube
> where ([Date Dim].[Date Range])
> That gives me the sales numbers for the months May and June.
> What I need to do is that same basic query but I want to compare the
> Date Range with last year. I just can't grasp how to make it show up in
> the columns. I've read about cousin() and parallelperiods() but all the
> examples return the period itself, not some measure in that period.
> Can someone point me in the right direction?
> Zach
Thursday, March 22, 2012
Bar Charts - possible to set max widths of columns?
Hi,
Is it possible to fix the max width of the columns? Sometimes the report only generates 1 or 2 columns and it looks a bit odd, as they're stretched to the width of the page.
Also, when we have a large number of columns, say 20 odd, the legend text gets really squashed up and unreadable. Are there any tips/tricks so I can have more flexibility over what's displayed?
thanks
Steve
I have the same issue. I would like to be able to set the width of the bars in the bar chart so that I don't end up with 2 fat bars or 20 razor-thin bars. Is there a way to do this?
Tuesday, March 20, 2012
bad table design
A developer has just given me a few tables to put into production, with all character columns defined as varchar(255), null.
I am absolutely positive that the majority of these columns will not be storing this amount of data, as some of them are code columns, types, and one description field.
After asking him to review all of the columns and the maximum estimated data stored in each one, he said that it shouldn't make any difference because A) they're varchar and B) they're defined as null.
What can I say to him to make him do the needful?
Thanks,
Marcus
Marcus,
Well, variable-length characters (as in your case) are stored 1 byte per character, and declared but unused characters don't consume storage.
On the other hand, I'd be concerned about using NULLs when you will be querying the tables.
Using an IS NULL/IS NOT NULL clause will not allow the Query Optimizer to use an index, so your query will suffer a performance hit.
Also consider using DEFAULT values in those columns.
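For instance, a right-sized version of one of those tables could look roughly like this (only a sketch; the table and column names are made up, and the real lengths have to come from the actual data):
-- hypothetical import table, sized to the data instead of a blanket varchar(255) NULL
CREATE TABLE dbo.ImportStaging
(
    ItemCode    varchar(20)  NOT NULL,              -- code columns rarely need 255 characters
    ItemType    varchar(10)  NOT NULL DEFAULT (''), -- empty-string default instead of NULL
    Description varchar(255) NOT NULL DEFAULT ('')  -- the one genuinely long field
)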
"Marcus" <anonymous@.discussions.microsoft.com> wrote in message
news:8b7101c404e1$04e07c70$a401280a@.phx.gbl...
> A developer has just gave me a few tables table to put
> into production, with all character columns defined as
> varchar(255), null.
> As I am absolutely positive that the majority of these
> columns will not be storing this amount of data in each
> of the columns, as some of them are code columns, types,
> and one description field.
> After asking him to review all of the columns and the
> maximum estimated data stored in each one, he said that
> it shouldn't make any difference because A) they're
> varchar and B) they're defined as null.
> What can I say to him to make him do the needful?
> Thanks,
> Marcus|||Pick up one of Joe Celko's post from here regarding how important it is to
get the database design right and where he state that he almost never have
to use NULL. If he question Joe's credibility, search for instance Amazon of
the books he has been writing (or go to www.celko.com).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
"Marcus" <anonymous@.discussions.microsoft.com> wrote in message
news:8b7101c404e1$04e07c70$a401280a@.phx.gbl...
> A developer has just gave me a few tables table to put
> into production, with all character columns defined as
> varchar(255), null.
> As I am absolutely positive that the majority of these
> columns will not be storing this amount of data in each
> of the columns, as some of them are code columns, types,
> and one description field.
> After asking him to review all of the columns and the
> maximum estimated data stored in each one, he said that
> it shouldn't make any difference because A) they're
> varchar and B) they're defined as null.
> What can I say to him to make him do the needful?
> Thanks,
> Marcus|||Right. I've brought that up also and he's come back and
said that the table is for importing data only and it
shouldn't matter.
You're thoughts?
Thanks!
Marcus
Marcus,
Look, we can discuss a lot on this topic.
In outer-join operations you should carefully account for the NULL values that are generated to preserve rows that don't have a match in the table being joined.
Dealing with NULLs also adds complexity to the storage engine, because SQL Server keeps a special bitmap in every row to indicate which nullable columns actually are NULL, and it must decode that bitmap for every row accessed.
Again, I'd recommend that you/your opponent make all of those nullable columns NOT NULL and give them DEFAULT values.
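If the tables already exist, converting one of the nullable columns would go roughly like this (again only a sketch, with made-up table, column and constraint names):
-- back-fill existing NULLs, then disallow them and add a default
UPDATE dbo.ImportStaging SET ItemType = '' WHERE ItemType IS NULL
ALTER TABLE dbo.ImportStaging ALTER COLUMN ItemType varchar(10) NOT NULL
ALTER TABLE dbo.ImportStaging
    ADD CONSTRAINT DF_ImportStaging_ItemType DEFAULT ('') FOR ItemType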
"Marcus" <anonymous@.discussions.microsoft.com> wrote in message
news:8c2301c404f2$255e24a0$a501280a@.phx.gbl...
> Right. I've brought that up also and he's come back and
> said that the table is for importing data only and it
> shouldn't matter.
> You're thoughts?
> Thanks!
> Marcus
> stored 1 byte per
> storage.
> when you will be
> Optimizer to use an index
> hit.
> message
> types,|||"Marcus" <anonymous@.discussions.microsoft.com> wrote in message
news:8b7101c404e1$04e07c70$a401280a@.phx.gbl...
> A developer has just gave me a few tables table to put
> into production, with all character columns defined as
> varchar(255), null.
Two aspects:
varchar(255): there is nothing wrong with defining a varchar type with a number of characters. varchar(20) and varchar(255) both occupy the same number of bytes if the strings are the same. 255 doesn't look like a 'natural' number; it is typically a number that comes out of a 'computer' oriented mind (80, 100, 200, 400 are more natural). But if the data comes from another 'computer' system, 255 could be the right number.
Null:
Never preferred, and it should be avoided. But for a table used only for importing data (on the way to another table), and for text-holding fields on which no selection and no join is done, it is a 'logical' choice. If the text is human generated and 'no text' is one of the possibilities, I think that null is OK. (When indexed, joined, or used in a WHERE clause, one should avoid a null.)
Also, concatenating with null strings should be avoided, because this is handled differently in different databases. (One could just as well choose an empty string instead of a null; this is supported in SQL Server, but be aware there are RDBMSes which consider an empty string equal to a null string.)
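A quick illustration of the concatenation point, as SQL Server behaves with the default CONCAT_NULL_YIELDS_NULL setting ON (other products can differ):
SELECT 'abc' + NULL   -- returns NULL with the default CONCAT_NULL_YIELDS_NULL ON
SELECT 'abc' + ''     -- returns 'abc'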
I would not know of a default for a text string that is a good replacement for a null string. (Considering other factors such as usage by programs, utilities and other databases: an empty string is not supported in all databases, and using a single character as a default might conflict with a 'real' single character, even if it is a space.)
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
Does anybody know of a good 'default' for character strings instead of using a null?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ben brugman
Friday, February 24, 2012
Backup-Restore on Encrypted Columns
I have some columns that are encrypted using SQL Server 2005 built-in encryption. What happens when I back up and restore this database on another server? Does SQL Server have to re-encrypt these columns on the new server? The Master Key is different on the new server, right?
--IKE
The Service Master Key is different, yes. The Database Master Key and all other keys in the database were carried over with your data.
So the only thing that you may need to do is re-encrypt the Database Master Key with the Service Master Key of the new server, if you had such SMK encryption on the old server as well. To do this, run the following statements while connected to the database:
-- open the database master key
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'whatever_password'
-- re-encrypt with the current server's SMK
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
For more info about these statements, see:
http://msdn2.microsoft.com/en-us/library/ms130214.aspx
http://msdn2.microsoft.com/en-us/library/ms186937.aspx
This scenario was also discussed here:
http://blogs.msdn.com/lcris/archive.../30/475822.aspx
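Put together, the whole move would look roughly like this (a sketch only; the database name, file path and password are placeholders):
-- on the old server
BACKUP DATABASE MyDatabase TO DISK = 'C:\Backups\MyDatabase.bak'
-- on the new server
RESTORE DATABASE MyDatabase FROM DISK = 'C:\Backups\MyDatabase.bak'
USE MyDatabase
-- open the database master key with its password
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'whatever_password'
-- re-encrypt it with the new server's Service Master Key
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY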
Thanks
Laurentiu Cristofor [MSFT]
Software Design Engineer
SQL Server Engine
http://blogs.msdn.com/lcris/
This posting is provided "AS IS" with no warranties, and confers no rights.
"Ike the Potato" wrote:
> I have some columns that are encrypted using SQL Server 2005 built in
> encryption. What happens when I backup and restore this database on anoth
er
> server? Does SQL Server have to re-encrypt these columns on this new
> server? The Master Key is different on the new server, right?
> --IKE
>
>