Showing posts with label perform. Show all posts

Friday, March 30, 2012

how to simplify what i'm doing (AS the solution?)

I need to create reports that perform queries where the results are
percentages: the total count divided by counts that come from subqueries
with different WHERE values.
a la ...
select 100 * (totalCount / (select count(*) ... where columnA = x)) as perc1,
       100 * (totalCount / (select count(*) ... where columnA = y)) as perc2
       -- etc.
from myTable where (..bunch of where stuff..)
I have simplified the above example just for brevity here.
So, I hate these kinds of nested queries - and they get incredibly
complex to read. I could break them up into longer scripts to simplify
- but I am wondering if there's another way.
The data I am using is from a warehouse on SQL 2005. I have very little
experience with warehouses - and know little about Analysis Services
and OLAP.
My question is whether it is possible to use Analysis Services to
create new tables that make this data far easier to query. That's the
path I am considering venturing down, but I don't want to hit a dead
end... I just need to know if this makes sense: using Analysis Services
to crunch out some new tables from the tables I am using, with the
results neatly packaged in a table or two that is fast and simple to
query. Is that the idea of AS?
Thanks for helping a learner here. Let me know if I am making enough
sense in the question.
Hi,
This looks like you are trying to do a pivot, in which case the best
solutions will be at the front end, although with SQL 2005 you have the new
PIVOT function; check out the topic "Using PIVOT and UNPIVOT" in Books
Online.
John
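For reference, here is a minimal sketch of that PIVOT approach using the simplified myTable/columnA names from the question; the x/y value list and the commented-out WHERE clause are placeholders to fill in:

-- one row comes back, with one count column per pivoted value
select p.[x] as partialCount1, p.[y] as partialCount2
from (select columnA as pivotCol, columnA as countCol
      from myTable
      /* ..bunch of where stuff.. */) as src
pivot (count(countCol) for pivotCol in ([x], [y])) as p;

The percentage arithmetic (100 * totalCount / partialCountN) can then be done in one outer SELECT over p.[x] and p.[y] instead of repeating a correlated subquery per value.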

How to silently uninstall SQL Server 2005 Express

Hi.

I'm trying to perform a silent install/uninstall of SQL Server 2005 Express.
I manage to install silently, but when trying to uninstall with the following command line:

sqlexpr.exe /qb REMOVE=ALL INSTANCENAME=<INSTANCENAME>

there is one item left behind - "Microsoft SQL Server Native Client"

Does anyone know how to uninstall this as well?

Also, if MSXML 6.0 is already installed when installing SQL Server 2005 Express, when performing an uninstall with the above command line, MSXML 6.0 is also uninstalled, which may not be the desired outcome.

SQL Server Native Client remains on the box after SQL uninstall if another program is relying on it. If you know no other apps are using it, you can use msiexec to uninstall sqlncli.msi (the native client installer/uninstaller).

Thanks,
Sam Lester (MSFT)
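If no other application needs it, here is a hedged sketch of the msiexec removal Sam describes (the path is an assumption - point it at wherever sqlncli.msi lives in your extracted setup files):

rem remove SQL Server Native Client silently via its Windows Installer package (path is an assumption)
msiexec /x "C:\SQLEXPR\Setup\sqlncli.msi" /qn

rem or, if the .msi is no longer on disk, uninstall by product display name via WMI
wmic product where "name='Microsoft SQL Server Native Client'" call uninstall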

Hm, I did a full search for "sqlncli.msi" after uninstalling SQL Server
2005 Express, including hidden files, but I cannot find it.


Friday, March 23, 2012

how to set up LOCKS in ATL OLEDB Consumer?

In SQL Server, we can use TABLOCK to lock a table, but how can I do this in ATL, without using a stored procedure?

I worry that if two users perform db.Update() at the same time and refer to the same row, it may cause a problem.

I found this line in the OLEDB reference:

Locking can occur with any of the methods that interact with rows on the data source. These include IRowsetLocate::GetRowsAt, IRowsetLocate::GetRowsByBookmark, IRowsetUpdate::Update, and others. These operations take locks and can wait for locks to be released by other transactions.

Does this mean the locks are performed automatically?

And in deferred update mode, if two users insert the same row into the DB - say both of them insert 2 rows, where 1 is the same and the other is different - what would happen? What will the server return?

Locks are managed on the server side by the database engine.

Different cursor types (static, keyset or dynamic) and isolation levels can affect the server locking behavior. On the client, you can control cursor types and isolation level through OLEDB.

So the answer to the question regarding inserting the same row depends on the cursor type and isolation level, etc. If the table has an index that does not allow duplicate keys, the 2nd insert could even fail. Basically, the server will ensure transactional integrity is maintained for the data.
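For what it's worth, the server-side behavior being described can be exercised directly in T-SQL (the table name is an assumption); the same statements can be sent from an ATL OLEDB consumer, and the engine takes and holds the locks:

-- raise the isolation level for this connection, then take an exclusive table lock
set transaction isolation level serializable;
begin transaction;
    select * from myTable with (tablockx);  -- whole table locked until commit/rollback
    -- reads and updates here are protected from the second user
commit transaction;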


So I have learned about the isolation levels, but what is a cursor type in OLEDB? I have only heard of that in ADO - do they mean the same thing? And how can I set the cursor type in OLEDB?

Cursor definition is the same across all APIs. For ADO and ODBC, you can request a particular cursor type directly. OLEDB does this a bit differently: you request the behaviors of the cursor, such as updatability and scrollability, through the corresponding OLEDB properties, and OLEDB will find the most suitable cursor for you under the covers.

Monday, March 19, 2012

How to set up related tables

I have a bunch of data related to a property. On that property, we have
several measurement requirements to perform several calculations. For
instance, I may have data that I am currently planning to set up as follows:
PropertyInformation table:
PropertyAddressID, ResidentialPropertyTypeID, PropertyOwner,
CorporatePropertyTypeID
PropertyMeasurement table
PropertyInformationID
Measurement 1 (related to Corporate property only)
Measurement 2 (related to Corporate property only)
Measurement 3 (related to Residential property only)
Measurement 4 (related to Residential property only)
Measurement 5 (related to both Corporate and Residential)
Measurement 6 (related to both Corporate and Residential)
Calculation table:
PropertyMeasurementID (FK)
Total (result of Measurements 2, 4, 5, 6)
Date (date calculation done)
In order to perform the final calculations, we must have one residential
property type and one corporate type (which come from other lookup tables).
I originally was going to base the 2 latter tables on the
PropertyInformation; however, we may not have any property information
whatsoever when the measurements are recorded, all we know is we have one
ResidentialPropertyType and one CorporatePropertyType, but we may not even
know what that type is. My thought is to force an "Unknown" property
type for the ResidentialPropertyTypeID (the CorporatePropertyTypeID is the
only information we collect on the property; all other fields in that table
relate to the ResidentialProperty, but the combination of the Corporate and
Residential measurements is required for the calculations). And we may
have Property data but no calculations being done at this time.
My question is not really what to do with the PropertyInformation table
(although I'm open to suggestions), it's what to do with the Measurement
information -- should I leave all measurement information in one table,
requiring 2 fields for Measurements 5 and 6 because there will be
information on both, or should I split out the CorporateMeasurements and the
ResidentialMeasurements? I'm concerned that down the road the measurement
information will change, and it will become more important what property
type it came from. If I split it out, then my calculations become more
complicated -- do I relate each calculation to the ID of both tables, or do
I fall back on the PropertyInformation to begin with? Or should I split out
only the duplicated fields, then the calculation table would still need to
pull information from both measurement tables...?
I hope this all makes sense, I suspect I may simply have to choose one way
and then stick with it. Any thoughts?
I'd set up the PropertyMeasurement table like this:
CREATE TABLE PropertyMeasurements (
PropertyInformationID INTEGER NOT NULL
REFERENCES PropertyInformation ,
MeasureTypeID TINYINT NOT NULL
REFERENCES MeasureTypes (MeasureTypeID) ,
Measure SMALLINT NOT NULL ,
CONSTRAINT PK_PM PRIMARY KEY (PropertyInformationID, MeasureTypeID)
)
I'm not sure where the PropertyInformationID came from, but guessed the
PropertyInformation table. The PK allows only one row per MeasureType for
each PropertyInformationID. If you want to allow more than one measurement
of the same MeasureType per PropertyInformationID, you will have to add
another attribute to the table, like a date column.
You didn't say what type of metric unit would be used for the Measure so
I just put SMALLINT. Change to whatever data type you require.
To allow the further expansion of measurement types you should have a
table of MeasureTypes:
CREATE TABLE MeasureTypes (
MeasureTypeID TINYINT IDENTITY(1,1) UNIQUE ,
MeasureTypeName VARCHAR(20) NOT NULL PRIMARY KEY
)
I wouldn't have a calculation table; rather, I'd have a view or stored
procedure to return those calculations.
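For illustration, a minimal sketch of such a view against the tables above (which measure types feed the Total, and what they are named, are assumptions):

CREATE VIEW PropertyTotals AS
SELECT pm.PropertyInformationID ,
       SUM(pm.Measure) AS Total
FROM PropertyMeasurements AS pm
JOIN MeasureTypes AS mt
  ON mt.MeasureTypeID = pm.MeasureTypeID
WHERE mt.MeasureTypeName IN ('Measurement 2', 'Measurement 4',
                             'Measurement 5', 'Measurement 6')
GROUP BY pm.PropertyInformationID

Because the measurements live in rows rather than columns, adding a new measurement type later is just a new row in MeasureTypes instead of a schema change.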
MGFoster:::mgf00 <at> earthlink <decimal-point> net
Oakland, CA (USA)