Friday, March 30, 2012

How to simulate Nothing from a deep link

Hello-

I'm working on an app that will create an HTTP connection to the SSRS server, gather a PDF generated by SSRS, and place it on the LAN. The URL looks something like this:

http://server/ReportServer?%2fDEV%2frpt_Sheet&rs%3aCommand=Render&rc:Toolbar=false&userID=NULL&StoreID=23&rs:Format=PDF

We have a few reports where 1 or more parameters can be null, but I'm having one heck of a time trying to get this to work with a null value. Does anyone know what value is actually passed to SSRS when a parameter is NULL?

I have tried Nothing, but to no avail.

Thanks in advance,

Tristan

Either leave that parameter out of the link and set its default value to null, or send something like this:

parameter name = "param1"

&param1%3aisnull=True

http://reportserver/?%2fReport1Test&param1%3aisnull=True&rs%3aParameterLanguage=&rc%3aParameters=Collapsed
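Applied to the URL in the original post, and assuming userID is the parameter that should be NULL, the link would look something like:

http://server/ReportServer?%2fDEV%2frpt_Sheet&rs%3aCommand=Render&rc:Toolbar=false&userID%3aisnull=True&StoreID=23&rs:Format=PDF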

Maciej

How to simulate an array in SQL Server

Just thought some might be interested in how to identify individual
items in an 'array' by declaring a local variable as a table
----
declare @MyTable table (MyRowID int identity, MyData varchar(255))
declare @MyCurrentRow as int
declare @MyRowCount as int
insert into @MyTable select Name from sysobjects -- (or whatever)
set @MyRowCount = @@ROWCOUNT
-- @@ROWCOUNT holds the number of rows affected by the last statement
-- you can then use a statement like this to access the nth row of the 'array'
select MyData from @MyTable where MyRowID = 5 -- as an example
--
-- or step through the table using @MyCurrentRow
set @MyCurrentRow = 1
while @MyCurrentRow < @MyRowCount + 1
begin
select MyData from @MyTable where MyRowID = @MyCurrentRow
set @MyCurrentRow = @MyCurrentRow + 1
end
Just thought it might help. It isn't rocket science, but I saw a few
people with problems, so I thought I'd post.
I needed it to obtain individual values from a table and use those
values to call a stored procedure repeatedly.
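For that last use, a minimal sketch of the pattern, reusing the variables declared above (dbo.MyProc is a hypothetical procedure that takes one value):

declare @MyData varchar(255)
set @MyCurrentRow = 1
while @MyCurrentRow <= @MyRowCount
begin
select @MyData = MyData from @MyTable where MyRowID = @MyCurrentRow
exec dbo.MyProc @MyData -- call the stored procedure once per 'array' element
set @MyCurrentRow = @MyCurrentRow + 1
end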
Cheers
See if these help:
http://www.aspfaq.com/show.asp?id=2248 Arrays & Lists
http://www.sommarskog.se/dynamic_sql.html Dynamic SQL
http://www.users.drew.edu/skass/sql...bleProc.sql.txt List to
Table script
Andrew J. Kelly SQL MVP

how to simulate a block

Could someone provide some simple code to simulate a block? I want to test
a blocking script I want to implement on prod.
TIA
C
Execute this on one connection:
create table ##foo (SomeColumn int)
go
begin transaction
insert into ##foo (SomeColumn) values (1)
waitfor delay '0:00:40' /* hold the transaction open for 40 seconds */
go
commit transaction
/*****************************/
Execute this on another connection:
select * from ##foo
Keith
"CD" <mcdye1@.hotmail.REMOVETHIS.com> wrote in message
news:%23PbFfLMeFHA.1920@.tk2msftngp13.phx.gbl...
> could someone provide some simple code to simulate a block. I want to
> test a blocking script I want to implement on prod.
> TIA
> C
>
|||Try,
-- conn 1
use northwind
go
begin transaction
update orders
set orderdate = orderdate
-- conn 2
use northwind
go
select * from orders
Connection 2 will be blocked until you commit or rollback the transaction in
connection 1.
AMB
"CD" wrote:

> could someone provide some simple code to simulate a block. I want to test
> a blocking script I want to implement on prod.
> TIA
> C
>
>
|||Look at the TABLOCKX and HOLDLOCK keywords.
These will hold locks on tables within a block of code...
i.e.
create procedure ...
select @nextid = id from foo with (tablockx, holdlock)
/* tablockx forces an exclusive table lock
tablock takes a non-exclusive lock instead
holdlock tells SQL Server to hold locks for the duration of the transaction */
update foo set [id] = [id] + @block
return @nextid
...
this will prevent n users from getting the same id
GO
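To confirm the block while you test, something along these lines can be run from a third connection (works on SQL 2000 and 2005):

-- sessions that are currently blocked, and the spid blocking them
select spid, blocked as blocking_spid, waittime, lastwaittype
from master..sysprocesses
where blocked <> 0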
"CD" wrote:

> could someone provide some simple code to simulate a block. I want to test
> a blocking script I want to implement on prod.
> TIA
> C
>
>


How to simplify/optimize this query?

Hi,
Could the following query be simplified/optimized? The database has a table
that maps users to locations, locations to chapters, and sections to
chapters. I need to issue a query that would return a list of sections that
can be assigned to a user (excluding the ones that have already been
assigned).
@UserId int
[...]
SELECT SectionID, [Name]
FROM Section
WHERE SectionID NOT IN (SELECT SectionID FROM UserSection WHERE UserID = @UserID) AND
SectionID IN (SELECT DISTINCT SectionID FROM ChapterSection
WHERE ChapterID IN
(SELECT DISTINCT ChapterID FROM ChapterLocation
WHERE LocationID IN
(SELECT LocationID FROM UserLocation WHERE UserID = @UserID)
)
);
I realize that this might not be as clear as it should be but I'd appreciate
_any_ suggestions.
Thanks,
Dan
> Could the following query be simplified/optimized?
I think the query below is equivalent. Indexes on the columns in
the WHERE/JOIN clauses may help performance (e.g. a composite index on
UserSection(UserID, SectionID)).
SELECT SectionID, [Name]
FROM dbo.Section s
WHERE
NOT EXISTS(
SELECT * FROM dbo.UserSection us
WHERE
us.SectionID = s.SectionID
AND UserID = @UserID
) AND
EXISTS(
SELECT * FROM dbo.ChapterSection cs
JOIN dbo.ChapterLocation cl ON
cl.ChapterID = cs.ChapterID
JOIN dbo.UserLocation ul ON
ul.LocationID = cl.LocationID
AND UserID = @UserID)
--
Hope this helps.
Dan Guzman
SQL Server MVP
http://weblogs.sqlteam.com/dang/
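For reference, the composite indexes Dan mentions might be created along these lines (the index names are illustrative only):

CREATE INDEX IX_UserSection_User_Section ON dbo.UserSection(UserID, SectionID);
CREATE INDEX IX_UserLocation_User_Location ON dbo.UserLocation(UserID, LocationID);
CREATE INDEX IX_ChapterLocation_Location_Chapter ON dbo.ChapterLocation(LocationID, ChapterID);
CREATE INDEX IX_ChapterSection_Chapter_Section ON dbo.ChapterSection(ChapterID, SectionID);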
"dan" <dan@.company.com> wrote in message
news:uVUBR7mqIHA.3804@.TK2MSFTNGP02.phx.gbl...
> Hi,
> Could the following query be simplified/optimized? The database has a
> table that maps users to locations, locations to chapters, and sections to
> chapters. I need to issue a query that would return a list of sections
> that can be assigned to a user (excluding the ones that have already been
> assigned) .
> @.UserId int
> [...]
> SELECT SectionID, [Name],
> FROM Section
> WHERE SectionID NOT IN (SELECT SectionID FROM UserSection WHERE UserID => @.UserID) AND
> SectionID IN (SELECT DISTINCT SectionID FROM ChapterSection
> WHERE ChapterID IN
> (SELECT DISTINCT ChapterID FROM ChapterLocation
> WHERE LocationID IN
> (SELECT LocationID FROM UserLocation WHERE UserID = @.UserID)
> )
> );
> I realize that this might not be as clear as it should be but I'd
> appreciate _any_ suggestions.
> Thanks,
> Dan
>|||Thanks for the reply. The query looks clean but there seems to be a problem
with EXISTS() in the WHERE clause. Some of the sections included in the
result set are not assigned to the user's location(s) (implicitly through
chapters). I.e. the query selects chapters that are not assigned to the user's
locations, in addition to the chapters that are assigned.
Any suggestions?
Thanks,
Dan
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:A400C6C3-53B6-4A01-B0DD-182ACF421236@.microsoft.com...
>> Could the following query be simplified/optimized?
> I think the query below is equivalent. I think indexes on the columns in
> the WHERE/JOIN clauses may help performance(e.g. a composite index on
> UserSection UserID, SectionID).
> SELECT SectionID, [Name]
> FROM dbo.Section s
> WHERE
> NOT EXISTS(
> SELECT * FROM dbo.UserSection us
> WHERE
> us.SectionID = s.SectionID
> AND UserID = @.UserID
> ) AND
> EXISTS(
> SELECT * FROM dbo.ChapterSection cs
> JOIN dbo.ChapterLocation cl ON
> cl.ChapterID = cs.ChapterID
> JOIN dbo.UserLocation ul ON
> ul.LocationID = cl.LocationID
> AND UserID = @.UserID)
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> http://weblogs.sqlteam.com/dang/
> "dan" <dan@.company.com> wrote in message
> news:uVUBR7mqIHA.3804@.TK2MSFTNGP02.phx.gbl...
>> Hi,
>> Could the following query be simplified/optimized? The database has a
>> table that maps users to locations, locations to chapters, and sections
>> to chapters. I need to issue a query that would return a list of sections
>> that can be assigned to a user (excluding the ones that have already been
>> assigned) .
>> @.UserId int
>> [...]
>> SELECT SectionID, [Name],
>> FROM Section
>> WHERE SectionID NOT IN (SELECT SectionID FROM UserSection WHERE UserID =>> @.UserID) AND
>> SectionID IN (SELECT DISTINCT SectionID FROM ChapterSection
>> WHERE ChapterID IN
>> (SELECT DISTINCT ChapterID FROM ChapterLocation
>> WHERE LocationID IN
>> (SELECT LocationID FROM UserLocation WHERE UserID = @.UserID)
>> )
>> );
>> I realize that this might not be as clear as it should be but I'd
>> appreciate _any_ suggestions.
>> Thanks,
>> Dan
>>
>|||> Thanks for the reply. The query looks clean but there seems to be a
> problem
> with EXISTS() in the WHERE clause. Some of the sections included in the
> result set are not assigned to the user's location(s) (implicitly through
> chapters). I.e. the query selects chapters that are not assigned to
> user's locations in addition to the chapters that are assigned.
> Any suggestions?
I missed the join from ChapterSection.SectionID back to Section.SectionID.
See the corrected query below. If this still doesn't work for you, please
post the table DDL and sample data so that I can test the solution.
SELECT SectionID, [Name]
FROM dbo.Section s
WHERE
NOT EXISTS(
SELECT * FROM dbo.UserSection us
WHERE
us.SectionID = s.SectionID
AND UserID = @UserID
)
AND EXISTS(
SELECT * FROM dbo.ChapterSection cs
JOIN dbo.ChapterLocation cl ON
cl.ChapterID = cs.ChapterID
JOIN dbo.UserLocation ul ON
ul.LocationID = cl.LocationID
AND UserID = @UserID
AND cs.SectionID = s.SectionID
)
--
Hope this helps.
Dan Guzman
SQL Server MVP
http://weblogs.sqlteam.com/dang/
"dan" <dan@.company.com> wrote in message
news:eB2qg%23sqIHA.2492@.TK2MSFTNGP06.phx.gbl...
> Thanks for the reply. The query looks clean but there seems to be a
> problem with EXISTS() in the WHERE clause. Some of the sections included
> in the result set are not assigned to the user's location(s) (implicitly
> through chapters). I.e. the query selects chapters that are not assigned
> to user's locations in addition to the chapters that are assigned.
> Any suggestions?
> Thanks,
> Dan
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:A400C6C3-53B6-4A01-B0DD-182ACF421236@.microsoft.com...
>> Could the following query be simplified/optimized?
>> I think the query below is equivalent. I think indexes on the columns in
>> the WHERE/JOIN clauses may help performance(e.g. a composite index on
>> UserSection UserID, SectionID).
>> SELECT SectionID, [Name]
>> FROM dbo.Section s
>> WHERE
>> NOT EXISTS(
>> SELECT * FROM dbo.UserSection us
>> WHERE
>> us.SectionID = s.SectionID
>> AND UserID = @.UserID
>> ) AND
>> EXISTS(
>> SELECT * FROM dbo.ChapterSection cs
>> JOIN dbo.ChapterLocation cl ON
>> cl.ChapterID = cs.ChapterID
>> JOIN dbo.UserLocation ul ON
>> ul.LocationID = cl.LocationID
>> AND UserID = @.UserID)
>> --
>> Hope this helps.
>> Dan Guzman
>> SQL Server MVP
>> http://weblogs.sqlteam.com/dang/
>> "dan" <dan@.company.com> wrote in message
>> news:uVUBR7mqIHA.3804@.TK2MSFTNGP02.phx.gbl...
>> Hi,
>> Could the following query be simplified/optimized? The database has a
>> table that maps users to locations, locations to chapters, and sections
>> to chapters. I need to issue a query that would return a list of
>> sections that can be assigned to a user (excluding the ones that have
>> already been assigned) .
>> @.UserId int
>> [...]
>> SELECT SectionID, [Name],
>> FROM Section
>> WHERE SectionID NOT IN (SELECT SectionID FROM UserSection WHERE UserID =>> @.UserID) AND
>> SectionID IN (SELECT DISTINCT SectionID FROM ChapterSection
>> WHERE ChapterID IN
>> (SELECT DISTINCT ChapterID FROM ChapterLocation
>> WHERE LocationID IN
>> (SELECT LocationID FROM UserLocation WHERE UserID = @.UserID)
>> )
>> );
>> I realize that this might not be as clear as it should be but I'd
>> appreciate _any_ suggestions.
>> Thanks,
>> Dan
>>
>|||Thanks. It works now.
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:5F09A613-CEF3-45CA-B2EA-6A3D39CB69D5@.microsoft.com...
>> Thanks for the reply. The query looks clean but there seems to be a
>> problem
>> with EXISTS() in the WHERE clause. Some of the sections included in the
>> result set are not assigned to the user's location(s) (implicitly through
>> chapters). I.e. the query selects chapters that are not assigned to
>> user's locations in addition to the chapters that are assigned.
>> Any suggestions?
> I missed the join from ChapterSection.SectionID back to Section.SectionID.
> See the corrected query below. If this still doesn't work for you, please
> post the table DDL and sample data so that I can test the solution.
> SELECT SectionID, [Name]
> FROM dbo.Section s
> WHERE
> NOT EXISTS(
> SELECT * FROM dbo.UserSection us
> WHERE
> us.SectionID = s.SectionID
> AND UserID = @.UserID
> )
> AND EXISTS(
> SELECT * FROM dbo.ChapterSection cs
> JOIN dbo.ChapterLocation cl ON
> cl.ChapterID = cs.ChapterID
> JOIN dbo.UserLocation ul ON
> ul.LocationID = cl.LocationID
> AND UserID = @.UserID
> AND cs.SectionID = s.SectionID
> )
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> http://weblogs.sqlteam.com/dang/
> "dan" <dan@.company.com> wrote in message
> news:eB2qg%23sqIHA.2492@.TK2MSFTNGP06.phx.gbl...
>> Thanks for the reply. The query looks clean but there seems to be a
>> problem with EXISTS() in the WHERE clause. Some of the sections included
>> in the result set are not assigned to the user's location(s) (implicitly
>> through chapters). I.e. the query selects chapters that are not assigned
>> to user's locations in addition to the chapters that are assigned.
>> Any suggestions?
>> Thanks,
>> Dan
>> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
>> news:A400C6C3-53B6-4A01-B0DD-182ACF421236@.microsoft.com...
>> Could the following query be simplified/optimized?
>> I think the query below is equivalent. I think indexes on the columns
>> in the WHERE/JOIN clauses may help performance(e.g. a composite index on
>> UserSection UserID, SectionID).
>> SELECT SectionID, [Name]
>> FROM dbo.Section s
>> WHERE
>> NOT EXISTS(
>> SELECT * FROM dbo.UserSection us
>> WHERE
>> us.SectionID = s.SectionID
>> AND UserID = @.UserID
>> ) AND
>> EXISTS(
>> SELECT * FROM dbo.ChapterSection cs
>> JOIN dbo.ChapterLocation cl ON
>> cl.ChapterID = cs.ChapterID
>> JOIN dbo.UserLocation ul ON
>> ul.LocationID = cl.LocationID
>> AND UserID = @.UserID)
>> --
>> Hope this helps.
>> Dan Guzman
>> SQL Server MVP
>> http://weblogs.sqlteam.com/dang/
>> "dan" <dan@.company.com> wrote in message
>> news:uVUBR7mqIHA.3804@.TK2MSFTNGP02.phx.gbl...
>> Hi,
>> Could the following query be simplified/optimized? The database has a
>> table that maps users to locations, locations to chapters, and sections
>> to chapters. I need to issue a query that would return a list of
>> sections that can be assigned to a user (excluding the ones that have
>> already been assigned) .
>> @.UserId int
>> [...]
>> SELECT SectionID, [Name],
>> FROM Section
>> WHERE SectionID NOT IN (SELECT SectionID FROM UserSection WHERE UserID
>> = @.UserID) AND
>> SectionID IN (SELECT DISTINCT SectionID FROM ChapterSection
>> WHERE ChapterID IN
>> (SELECT DISTINCT ChapterID FROM ChapterLocation
>> WHERE LocationID IN
>> (SELECT LocationID FROM UserLocation WHERE UserID =>> @.UserID)
>> )
>> );
>> I realize that this might not be as clear as it should be but I'd
>> appreciate _any_ suggestions.
>> Thanks,
>> Dan
>>
>>
>

How to simplify XML Query? (SQL2000)

Hi, I am trying to write a query in SQL 2000 to produce a slightly more than
just basic XML schema. I can do the query using FOR XML EXPLICIT, but as the
schema that I want to produce is quite large (with a reasonable amount of
repetition) this seems unnecessarily complicated. A simplified version of the
schema I want to produce is:
<ArrayOfEntity>
<Entity Key=123>
<Name>...</Name>
<Prop1>...</Prop1>
<Prop2>...</Prop2>
<Prop3>...</Prop3>
<Node1 Key=32>text</Node1>
<Node2 Key=43>text</Node2>
<Node3 Key=54>text</Node3>
<Node4 Key=65>text</Node4>
</Entity>
<Entity Key=234>
...
</Entity>
</ArrayOfEntity>
The thing is, there are quite a few of these child "Node"s, which will only
contain a key and text value. There are also more complicated child nodes
(not shown here).
I was quite excited to find the [Element!Num!Attr!xml] explicit column name,
because I could then write a function to generate the child node. But this
does not seem to allow me to specify the Node Keys!! :(
Is there any way to write a reasonably simple query to get the above schema?
Thanks
Greg
Hi Greg
You will need to use the FOR XML EXPLICIT option if you wish to retrieve the
data in this format. A different approach would be to apply a transform after
the data has been extracted. You could use XSLT to do this, possibly using the
document()
function to include and export the data in fragments.
http://groups.google.com/group/microsoft.public.xml/msg/24b2c92baf8fbde6?hl=en&
John
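For the simple parts of the schema, the universal-table shape that FOR XML EXPLICIT expects looks roughly like this (the table Entities and its columns are hypothetical stand-ins, not from the original post):

-- hypothetical source table: Entities(EntityKey int, EntityName varchar(50))
SELECT 1 AS Tag, NULL AS Parent,
EntityKey AS [Entity!1!Key],
NULL AS [Name!2!!element]
FROM Entities
UNION ALL
SELECT 2, 1, EntityKey, EntityName
FROM Entities
ORDER BY [Entity!1!Key], Tag
FOR XML EXPLICIT

Every additional child node is another UNION ALL arm with its own tag number, which is exactly where the query balloons; hence the suggestion to transform outside the database.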
"Greg Bacchus" wrote:
> Hi, I am trying to write a query in SQL 2000 to produce a slightly more than
> just basic xml schema. I can do the query using FOR XML EXPLICIT, but as the
> schema that I want to produce is quite large (with a reasonable amount of
> repitition) this seems unnecessarily complicated. A simplified version of the
> schema I want to produce is:
> <ArrayOfEntity>
> <Entity Key=123>
> <Name>...</Name>
> <Prop1>...</Prop1>
> <Prop2>...</Prop2>
> <Prop3>...</Prop3>
> <Node1 Key=32>text</Node1>
> <Node2 Key=43>text</Node2>
> <Node3 Key=54>text</Node3>
> <Node4 Key=65>text</Node4>
> </Entity>
> <Entity Key=234>
> ...
> </Entity>
> </ArrayOfEntity>
> The thing is, there are quite a few of these child "Node"s, which will only
> contain a key and text value. There are also more complicated child nodes
> (not shown here).
> I was quite excited to find the [Element!Num!Attr!xml] explicit column name,
> because I could then write a function to generate the child node. But this
> does not seem to allow me to specify the Node Key's!! :(
> Is there any way to write a reasonably simple query to get the above schema'
> Thanks
> Greg|||Thanks for your reply John.
Can you do XSL Transforms in SQL 2000?
Greg
"John Bell" wrote:
> Hi Greg
> You will need to use the FOR XML EXPLICIT option if you wish to retrieve the
> data in this format. A different approach you be to apply a transform after
> the data has been extracted. You could use XSLT to do this, possibly use the
> document()
> function to include and export the data in fragments.
> http://groups.google.com/group/microsoft.public.xml/msg/24b2c92baf8fbde6?hl=en&
> John
> "Greg Bacchus" wrote:
> > Hi, I am trying to write a query in SQL 2000 to produce a slightly more than
> > just basic xml schema. I can do the query using FOR XML EXPLICIT, but as the
> > schema that I want to produce is quite large (with a reasonable amount of
> > repitition) this seems unnecessarily complicated. A simplified version of the
> > schema I want to produce is:
> >
> > <ArrayOfEntity>
> > <Entity Key=123>
> > <Name>...</Name>
> > <Prop1>...</Prop1>
> > <Prop2>...</Prop2>
> > <Prop3>...</Prop3>
> > <Node1 Key=32>text</Node1>
> > <Node2 Key=43>text</Node2>
> > <Node3 Key=54>text</Node3>
> > <Node4 Key=65>text</Node4>
> > </Entity>
> > <Entity Key=234>
> > ...
> > </Entity>
> > </ArrayOfEntity>
> >
> > The thing is, there are quite a few of these child "Node"s, which will only
> > contain a key and text value. There are also more complicated child nodes
> > (not shown here).
> >
> > I was quite excited to find the [Element!Num!Attr!xml] explicit column name,
> > because I could then write a function to generate the child node. But this
> > does not seem to allow me to specify the Node Key's!! :(
> >
> > Is there any way to write a reasonably simple query to get the above schema'
> > Thanks
> > Greg|||Hi Greg
It should be possible to do this using the SP_OA... procedures but really I
would expect it to be done externally.
You may want to look at SQL2005 as this could all be done a lot easier using
.NET
John
"Greg Bacchus" wrote:
> Thanks for your reply John.
> Can you do XSL Transforms in SQL 2000?
> Greg
> "John Bell" wrote:
> > Hi Greg
> >
> > You will need to use the FOR XML EXPLICIT option if you wish to retrieve the
> > data in this format. A different approach you be to apply a transform after
> > the data has been extracted. You could use XSLT to do this, possibly use the
> > document()
> > function to include and export the data in fragments.
> >
> > http://groups.google.com/group/microsoft.public.xml/msg/24b2c92baf8fbde6?hl=en&
> >
> > John
> >
> > "Greg Bacchus" wrote:
> >
> > > Hi, I am trying to write a query in SQL 2000 to produce a slightly more than
> > > just basic xml schema. I can do the query using FOR XML EXPLICIT, but as the
> > > schema that I want to produce is quite large (with a reasonable amount of
> > > repitition) this seems unnecessarily complicated. A simplified version of the
> > > schema I want to produce is:
> > >
> > > <ArrayOfEntity>
> > > <Entity Key=123>
> > > <Name>...</Name>
> > > <Prop1>...</Prop1>
> > > <Prop2>...</Prop2>
> > > <Prop3>...</Prop3>
> > > <Node1 Key=32>text</Node1>
> > > <Node2 Key=43>text</Node2>
> > > <Node3 Key=54>text</Node3>
> > > <Node4 Key=65>text</Node4>
> > > </Entity>
> > > <Entity Key=234>
> > > ...
> > > </Entity>
> > > </ArrayOfEntity>
> > >
> > > The thing is, there are quite a few of these child "Node"s, which will only
> > > contain a key and text value. There are also more complicated child nodes
> > > (not shown here).
> > >
> > > I was quite excited to find the [Element!Num!Attr!xml] explicit column name,
> > > because I could then write a function to generate the child node. But this
> > > does not seem to allow me to specify the Node Key's!! :(
> > >
> > > Is there any way to write a reasonably simple query to get the above schema'
> > > Thanks
> > > Greg


how to simplify what i'm doing (AS the solution?)

I need to create reports that perform queries where the results are
percentages: a total count divided by counts based on different
subquery WHERE values.
a la ...
select 100 * (totalCount/ (select count where columnA value is x) as
partialCount1) as perc1,
100 * (totalCount/ (select count where columnA value is y) as
partialCount2) as perc2
etc
from myTable where (..bunch of where stuff..)
I have simplified the above example just for brevity here.
So, I hate these kinds of nested queries - and they get incredibly
complex to read. I could break them up into longer scripts to simplify
- but I am wondering if there's another way.
The data I am using is from a warehouse on SQL 2005. I have very little
experience with warehouses - and know little about analysis services
and OLAP.
My question is whether it is possible to use Analysis services to
create new tables that make this data far easier to query? That's the
path I am considering venturing down, but I don't want to hit a dead
end... and need just to know if this makes sense. That is to use
Analysis services to crunch out some new tables from the tables I am
using which would have the results neatly packaged in a table or 2 that
is fast and simple to query. Is that the idea of AS?
thx for helping a learner here. let me know if i am making enough sense
in the question.
Hi
This looks like you are trying to do a pivot, in which case the best
solutions will be at the front end, although with SQL 2005 you have the new
PIVOT function; check out the topic "Using PIVOT and UNPIVOT" in Books
Online.
John
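For reference, a minimal sketch of what the SQL 2005 PIVOT form could look like here, reusing the hypothetical myTable/columnA names and the values x and y from the pseudo-query above:

-- counts of each columnA value become columns, then percentages of their total
SELECT 100.0 * [x] / NULLIF([x] + [y], 0) AS perc1,
       100.0 * [y] / NULLIF([x] + [y], 0) AS perc2
FROM (SELECT columnA, columnA AS val
      FROM myTable /* ..bunch of where stuff.. */) AS src
PIVOT (COUNT(val) FOR columnA IN ([x], [y])) AS p

Note that columnA is selected twice under an alias because the pivoted column cannot also be the aggregated one.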


How to simplify this slow Stored Procedure

Dear Reader(s),

Is there any way to write the following stored procedure without the loop so that it goes much faster? :confused:

-----------------------
use MJ_ReportBase
go
if exists(select 1 from sysobjects where type='P' and name='sp_Periode')
begin
drop procedure sp_Periode
end
go
create procedure sp_Periode
@start int
, @stop int
as
declare @x int

set @x = @start

delete from tbl_periode

while (@x >= @stop)
begin

-- --
-- --
-- Create table tbl_inout
if exists(select 1 from sysobjects where type='U' and name='tbl_inout')
begin
drop table tbl_inout
end

select datetimestamp,accname,badgeid,personname,inoutreg into tbl_inout from WinXS..x18 where convert(varchar,datetimestamp,120)+' '+ltrim(str(id))+' '+ltrim(str(badgeid)) in
(select convert(varchar,max(datetimestamp),120)+' '+ltrim(str(max(id)))+' '+ltrim(str(badgeid)) as datetimestamp from WinXS..x18 where (accname='Kelder -1' or accname='Tnk Entree') and convert(varchar,datetimestamp,105)=convert(varchar,getdate()-abs(@x),105) group by badgeid)
and badgeid>0
order by personname

-- --
-- --
-- Create table tbl_result

if exists(select 1 from sysobjects where type='U' and name='tbl_result')
begin
drop table tbl_result
end

-- --
-- --

select
convert(varchar,datetimestamp,105) 'DATUM'
, badgeid 'PAS'
, initials 'VOORNAAM'
, personname 'NAAM'
, convert(varchar,min(datetimestamp),108) 'MIN'
, convert(varchar,max(datetimestamp),108) 'MAX'
into
tbl_result
from
WinXS..x18
where
convert(varchar,datetimestamp,105)=convert(varchar,getdate()-abs(@x),105)
and
accname in ('Kelder -1','Tnk Entree')
and badgeid>0
group by
convert(varchar,WinXS..x18.datetimestamp,105)
, badgeid
, initials
, personname
order by
initials
, personname asc
, convert(varchar,datetimestamp,105) asc

-- --
-- --
-- Rapportage tabel

insert into
tbl_periode
select
tbl_result.datum as DATUM
, ltrim(ltrim(rtrim(tbl_result.naam))+' '+ltrim(rtrim(isnull(tbl_result.voornaam,' ')))) as NAAM
, tbl_result.min as MIN
, tbl_result.max as MAX
, case tbl_inout.inoutreg when 1 then 'in' when 2 then 'out' else 'err' end as [IN/OUT]
, substring('00000',1,5-len(tbl_result.pas))+ltrim(str(tbl_result.pas)) as PAS
from
tbl_inout,tbl_result
where
tbl_result.datum+' '+tbl_result.max+' '+ltrim(str(tbl_result.pas))
= convert(varchar,tbl_inout.datetimestamp,105)+' '+convert(varchar,tbl_inout.datetimestamp,108)+' '+ltrim(str(badgeid))
order by
tbl_result.naam asc

-- --
-- --
--

set @x = @x - 1
end
go

print 'Klaar!'
-----------------------

What it does is determine the minimum entry time and the maximum exit time per day of people going through the main entrance of a building.

Many thanks in advance.
:)
First... collapse the queries into 1.

Second, what's the first select for?

Third

Lose the loop and do

WHERE datetimestamp > GetDate()-@Start
AND datetimestamp <= GetDate()-@Stop
|||I don't even want to try to figure out your code. Don't use permanent tables for temporary storage (your tbl_inout table). That will get you into trouble in a multi-user environment.

Post the relevant DDL for your tables and somebody here can probably show you a more efficient set-based algorithm.
|||Hi to all,

I have found the solution after a long time puzzeling. :)

Please check the sql below:
-----------------------
select
tbl_INOUT.*
, CASE B.inoutreg WHEN 1 THEN 'IN' WHEN 2 THEN 'UIT' ELSE 'Fout' END 'STATUS'
from
(
select
convert(char(10),datetimestamp,120) 'DATUM'
, badgeid 'PAS'
, initials 'VOORNAAM'
, personname 'NAAM'
, convert(varchar,min(datetimestamp),108) 'MIN'
, convert(varchar,max(datetimestamp),108) 'MAX'
from
WinXS..x18
where
(convert(varchar(10),datetimestamp,120)>=convert(varchar(10),'2005-01-01',120)
and
convert(varchar(10),datetimestamp,120)<=convert(varchar(10),'2005-02-28',120))
and
accname in ('Kelder -1','Tnk Entree')
and
badgeid>0
group by
convert(char(10),datetimestamp,120)
, badgeid
, initials
, personname
) tbl_INOUT, WinXS..x18 B
where
tbl_INOUT.pas=B.badgeid
and
convert(datetime,tbl_INOUT.datum+' '+tbl_INOUT.max,120)=B.datetimestamp
and
badgeid=81
order by
tbl_INOUT.naam asc
, tbl_INOUT.datum DESC

Greetz,
DePrins
:D

How to simplify my stored procedure

Hello, if I have more than two conditions (City, State), how do I simplify the
following stored procedure (without too much [if ... else] and duplicate
code)? Thanks.
CREATE PROCEDURE selectAuthors @City Varchar(20), @State Varchar(2)
AS
IF @City<>''
IF @State<>''
SELECT *
FROM authors where city=@City AND state=@State
Else
SELECT *
FROM authors where city=@City
Else
IF @State<>''
SELECT *
FROM authors where state=@State
Else
SELECT *
FROM authors
Does this help?
=====
DECLARE @City VARCHAR(30)
DECLARE @State VARCHAR(10)
SET @City = 'Menlo Park'
SET @State = ''
SELECT * FROM authors WHERE
city = CASE WHEN @City = '' OR @City IS NULL THEN city ELSE @City END
AND
state = CASE WHEN @State = '' OR @State IS NULL THEN state ELSE @State END
=====
--
HTH,
SriSamp
Email: srisamp@gmail.com
Blog: http://blogs.sqlxml.org/srinivassampath
URL: http://www32.brinkster.com/srisamp
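The same optional-filter idea can also be written without CASE; a sketch:

SELECT * FROM authors
WHERE (@City = '' OR @City IS NULL OR city = @City)
AND (@State = '' OR @State IS NULL OR state = @State)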
"Simon" <w007@.seed.net.tw> wrote in message
news:OwuB%23X%23RGHA.1236@.TK2MSFTNGP11.phx.gbl...
> Hello,if have more than two condition(City,State), how to simplify
> following storeprocedure.(without Too much [if ... else] and duplicat
> code),thanks.
>
> CREATE PROCEDURE selectAuthors @.City Varchar(20),@.State Varchar(2)
> AS
> IF @.City<>''
> IF @.State<>''
> SELECT *
> FROM authors where city=@.City AND state=@.State
> Else
> SELECT *
> FROM authors where city=@.City
> Else
> IF @.State<>''
> SELECT *
> FROM authors where state=@.State
> Else
> SELECT *
> FROM authors
>
>|||Thank you for your help
"SriSamp" <ssampath@.sct.co.in> bl
news:eqxiiI$RGHA.5780@.TK2MSFTNGP10.phx.gbl g...
> Does this help?
> =====
> DECLARE @.City VARCHAR(30)
> DECLARE @.State VARCHAR(10)
> SET @.City = 'Menlo Park'
> SET @.State = ''
> SELECT * FROM authors WHERE
> city = CASE WHEN @.City = '' OR @.City IS NULL THEN city ELSE @.City END
> AND
> state = CASE WHEN @.State = '' OR @.State IS NULL THEN state ELSE @.State
> END
> =====
> --
> HTH,
> SriSamp
> Email: srisamp@.gmail.com
> Blog: http://blogs.sqlxml.org/srinivassampath
> URL: http://www32.brinkster.com/srisamp
> "Simon" <w007@.seed.net.tw> wrote in message
> news:OwuB%23X%23RGHA.1236@.TK2MSFTNGP11.phx.gbl...
>sql

How to simplify my endless query. Thanks.

Hello group!

I am having a problem with simplifying my query...

I would like to get customers' balance info based on how many months
since they opened their accounts. The tricky part here is that accounts
starting with '28' are treated differently than other accounts: they
are given a 3-month grace period. In other words, for all other
accounts, their month0 balance is the balance of their open_month, and
month1 balance is the balance after the account is opened 1 month, and
so on. But accounts starting with '28' month0 balance would be the
balance after the account is opened 3 months, and month1 balance would
be the balance after the account is opened 4 months, and so on.

My query below works, but since some customers are more than 10 years
old (more than 120 months), my query is endless! Does anyone know a
better way to do the same job? Many thanks!

create table a
(person_id int
,account int
,open_date datetime)

insert into a values(1,200001,'11/15/2004')
insert into a values(2,280001,'8/20/2004')

create table b
(account int
,balance_date datetime
,balance money)

insert into b values(200001,'11/30/2004',700)
insert into b values(200001,'12/31/2004',800)
insert into b values(200001,'1/31/2005',900)
insert into b values(200001,'2/28/2005',1000)
insert into b values(280001,'8/30/2004',7000)
insert into b values(280001,'9/30/2004',8000)
insert into b values(280001,'10/31/2004',9000)
insert into b values(280001,'11/30/2004',10000)
insert into b values(280001,'12/31/2004',15000)
insert into b values(280001,'1/31/2005',20000)
insert into b values(280001,'2/28/2005',30000)

--Ideal output--

person_id  acc_no   month0_balance  month1_balance  month2_balance  month3_balance
1          200001   700             800             900             1000
2          280001   10000           15000           20000           30000

select a.person_id
, a.account
, month0_balance = case
    when a.account like '2%' and a.account not like '28%'
    then sum(case datediff(mm, a.open_date, balance_date) when 0 then b.balance else 0 end)
    else sum(case datediff(mm, a.open_date, balance_date) when 3 then b.balance else 0 end)
  end
, month1_balance = case
    when a.account like '2%' and a.account not like '28%'
    then sum(case datediff(mm, a.open_date, balance_date) when 1 then b.balance else 0 end)
    else sum(case datediff(mm, a.open_date, balance_date) when 4 then b.balance else 0 end)
  end
from a as a
join b as b
on a.account = b.account
group by a.person_id, a.account
(rong.guo@gmail.com) writes:
> My query below works, but since some customers are more than 10 years
> old (more than 120 months), my query is endless! Does anyone know a
> better way to do the same job? Many thanks!

Hm, first I read endless as "query runs forever", but now I realize
that you want one column for each month.

That's a dynamic cross-tab, which you cannot do in SQL out of the box,
since a query always produces a table with a well-defined set of
columns.

You could use dynamic SQL to create the beast; you can read about
dynamic SQL on my web site: http://www.sommarskog.se/dynamic_sql.html.
You may also consider the third-party tool RAC, which is very good
for this sort of things. (They say. I have never used it myself, but
it's a standard recommendation.) It's at http://www.rac4sql.net/.
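As a rough sketch of the dynamic SQL route for this particular query (the upper bound is hard-coded here, and the '28' grace-period branch is omitted for brevity):

DECLARE @sql varchar(8000), @m int
SELECT @sql = 'select a.person_id, a.account', @m = 0
WHILE @m <= 3 -- in practice, derive this bound from max(datediff(mm, open_date, balance_date))
BEGIN
SET @sql = @sql + ', sum(case datediff(mm, a.open_date, b.balance_date) when '
+ ltrim(str(@m)) + ' then b.balance else 0 end) as month' + ltrim(str(@m)) + '_balance'
SET @m = @m + 1
END
SET @sql = @sql + ' from a join b on a.account = b.account group by a.person_id, a.account'
EXEC(@sql)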

Since I first misread your question, I looked at improving the query.
Since this is a simple join, good indexing is probably the best for
performance. But this maybe somewhat more effective:

select a.person_id, a.account,
month0_balance = SUM(CASE datediff(mm, a.open_date, balance_date)
WHEN CASE WHEN a.account LIKE '2%' AND
a.account NOT LIKE '28%'
THEN 0
ELSE 3
END THEN b.balance
ELSE 0
END),
month1_balance = SUM(CASE datediff(mm, a.open_date, balance_date)
WHEN CASE WHEN a.account LIKE '2%' AND
a.account NOT LIKE '28%'
THEN 1
ELSE 4
END THEN b.balance
ELSE 0
END)
from a as a
join b as b on a.account=b.account
group by a.person_id, a.account

(Egads! I have never had a CASE in the WHEN part of a CASE expression
before!)

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
|||Thanks, Erland!
Sorry for the misleading word 'endless'... And you are right, I've
never had a CASE in the WHEN part of a CASE expression :-)) but it is so
exciting to learn! Thanks again!
|||On Fri, 18 Mar 2005 23:20:17 +0000 (UTC), Erland Sommarskog wrote:


I would start with this (perhaps as a view) to collect the monthly balances
and do the 3 month correction for 28s:

select a.person_id, a.account,
sum(b.balance) AS MonthBalance,
'Month' + convert(varchar(10),
datediff(mm, a.open_date, balance_date) -
case when a.account LIKE '2%' AND a.account NOT like '28%' then 0
else 3 END)
AS MonthNum
FROM a as a
JOIN b as b on a.account=b.account
GROUP BY a.person_id, a.account,
datediff(mm, a.open_date, balance_date) -
case when a.account LIKE '2%' AND a.account NOT like '28%' then 0
else 3 END

Then I'd put a crosstab tool on the client to consume this.

How to simplify the Deployment process of a SQL Server Report

Dear all,
I have 5 SQL Servers and in total around 100 databases (1 database for 1
client).
Whenever I finish developing a Report on my development server, will I need
to repeat the same task on these 100 databases 100 times?
Although I can copy the Report Definition File from the development server
to these 100 databases' folders, I still need to do the following tasks:
(1)Create 100 new Visual Studio projects
(2)Enter "Start Item" and "Target Server URL" 100 times
(3)Create 100 new Shared Data Sources
(4)Build and Deploy Visual Studio projects 100 times.
Are there any better or more efficient ways to deploy a new Report in a
multiple-server or multiple-database environment?
Please help me.
Thanks a lot."cpchan" wrote:
> Dear all,
>
> I have 5 SQL Servers and totally around 100 databases (1 database for 1
> client).
>
> Whenever I finish a Report development on my development server, I will need
> to repeat the same task on these 100 databases 100 times ?
> Although I can copy the Report Definition File from the development server
> to these 100 databases' folders, I still need to do the following tasks :
>
> (1)Create 100 new Visual Studio projects
> (2)Enter "Start Item" and "Target Server URL" 100 times
> (3)Create 100 new Shared Data Sources
> (4)Build and Deploy Visual Studio projects 100 times.
>
> Are there any better or more efficient ways to deploy a new Report in a
> multiple servers or multiple databases environment ?
>
> Please help me.
> Thanks a lot.
>
Hi
I am not a reporting services expert; you may find more help in the
reporting services newsgroup microsoft.public.sqlserver.reportingsvcs
You can probably do what you require using multiple configurations for a
single project that deploys onto the different servers:
http://msdn2.microsoft.com/en-us/library/aa179464.aspx
http://msdn2.microsoft.com/en-us/library/aa237242(SQL.80).aspx
You could probably use the command prompt utilities and write a
parameterised script that will deploy to a server.
John
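One way to script the deployment is the rs.exe utility that ships with Reporting Services; a batch file looping over the target servers might look roughly like this (the .rss script name is hypothetical, and the script itself would call the ReportingService web service, e.g. CreateReport, for each report definition):

for %%S in (server1 server2 server3 server4 server5) do (
rs -i DeployReport.rss -s http://%%S/ReportServer
)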
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:679E4454-427D-4734-B5BB-7817C3C11BCB@.microsoft.com...
> "cpchan" wrote:
> > Dear all,
> >
> >
> > I have 5 SQL Servers and totally around 100 databases (1 database for 1
> > client).
> >
> >
> >
> > Whenever I finish a Report development on my development server, I will
need
> > to repeat the same task on these 100 databases 100 times ?
> > Although I can copy the Report Definition File from the development
server
> > to these 100 databases' folders, I still need to do the following tasks
:
> >
> >
> > (1)Create 100 new Visual Studio projects
> > (2)Enter "Start Item" and "Target Server URL" 100 times
> > (3)Create 100 new Shared Data Sources
> > (4)Build and Deploy Visual Studio projects 100 times.
> >
> >
> >
> > Are there any better or more efficient ways to deploy a new Report in a
> > multiple servers or multiple databases environment ?
> >
> >
> >
> > Please help me.
> > Thanks a lot.
> >
> Hi
> I am not a reporting services expert, you may find more help in the
> reporting services newsgroup microsoft.public.sqlserver.reportingsvcs
> You can probably do what you require using multiple configurations for a
> single project that deploys onto the different servers:
> http://msdn2.microsoft.com/en-us/library/aa179464.aspx
> http://msdn2.microsoft.com/en-us/library/aa237242(SQL.80).aspx
> You could probably use the command prompt utilities and write a
> parameterised script that will deploy to a server
> John
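To flesh out John's command-prompt suggestion: Reporting Services ships with the rs.exe scripting utility, which runs a .rss script against the report server URL given with -s. A sketch only - deploy.rss (a hypothetical script that would call the SOAP publishing methods such as CreateReport and CreateDataSource), servers.txt (a hypothetical file listing one Target Server URL per line), and the targetFolder variable are all assumptions, not real files:
for /f "delims=" %%s in (servers.txt) do rs -i deploy.rss -s %%s -v targetFolder="/ClientReports"
rem (use a single % instead of %% if you type the line directly at a command prompt)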

How to silently uninstall SQL Server 2005 Express

Hi.

I'm trying to perform a silent install/uninstall of SQL Server 2005 Express.
I manage to install silently, but when trying to uninstall with the following command line:

sqlexpr.exe /qb REMOVE=ALL INSTANCENAME=<INSTANCENAME>

there is one item left behind - "Microsoft SQL Server Native Client"

Does anyone know how to uninstall this as well?

Also, if MSXML 6.0 was already installed before SQL Server 2005 Express, an uninstall with the above command line removes MSXML 6.0 as well, which may not be the desired outcome.

SQL Server Native Client remains on the box after SQL uninstall if another program is relying on it. If you know no other apps are using it, you can use msiexec to uninstall sqlncli.msi (the native client installer/uninstaller).

Thanks,
Sam Lester (MSFT)

|||Hm,
I did a full search for "sqlncli.msi" after uninstalling SQL Server 2005 Express - including hidden files - but I cannot find it.
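One possible answer to the follow-up: when the MSI itself is no longer on disk, Windows Installer can still remove the product by its product code. A sketch, run at a command prompt - the product name filter is a guess and may vary by build:
rem list matching products and their product codes
wmic product where "Name like 'Microsoft SQL Server Native Client%'" get Name, IdentifyingNumber
rem then uninstall silently, passing the GUID reported as IdentifyingNumber
msiexec /x {product-code-guid-from-above} /qn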

how to shrink Trx Log

Shrink Log file does not seem to make the file size any smaller.
The file size remains the same even after shrinking it a few times.
The GUI seems ez to use but I may be doing it wrongly.
How do I shrink the Log file?
"TBoon" <allblacks15@.hotmail.com> wrote in message
news:esbf$KHIHHA.3676@.TK2MSFTNGP03.phx.gbl...
> Shrink Log file does not seem to make the file size any smaller.
> The file size remains the same even after shrinking it a few times.
> The GUI seems ez to use but I may be doing it wrongly.
> How do I shrink the Log file?
>
You can't shrink it if it's full of transactions. Perform a log backup to
copy the transactions out to a backup device, then you should be able to
shrink the transaction log. Then set up a maintenance plan to perform
periodic transaction log backups to control the growth of the log file in
the future.
David|||... and some details at http://www.karaszi.com/SQLServer/info_dont_shrink.asp
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in message
news:Ouy%239MHIHHA.924@.TK2MSFTNGP02.phx.gbl...
>
> "TBoon" <allblacks15@.hotmail.com> wrote in message news:esbf$KHIHHA.3676@.TK2MSFTNGP03.phx.gbl...
>> Shrink Log file does not seem to make the file size any smaller.
>> The file size remains the same even after shrinking it a few times.
>> The GUI seems ez to use but I may be doing it wrongly.
>> How do I shrink the Log file?
> You can't shrink it if it's full of transactions. Perform a log backup to copy the transactions
> out to a backup device, then you should be able to shrink the transaction log. Then set up a
> maintenance plan to perform periodic transaction log backups to control the growth of the log file
> in the future.
> David
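For reference, David's sequence in T-SQL - a minimal sketch with invented names (MyDb, MyDb_log); run sp_helpfile in the database to find the logical log file name to pass to DBCC SHRINKFILE:
BACKUP LOG MyDb TO DISK = 'C:\Backup\MyDb_log.trn'
GO
USE MyDb
GO
DBCC SHRINKFILE (MyDb_log, 100)  -- logical log file name, target size in MB
GO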

How to shrink transaction log

I have an application which is using SQL 2000 as
database. However, the transaction log is getting larger
and larger, and I can't reduce the size. The database
size is 300MB w/ used space 278MB and the transaction log
size is 6803MB w/ used space 290MB.
Is there any idea how to reduce the size of the transaction
log?
Quoted from Vishal:
You have various options to keep the transaction log size manageable:
1. Take transaction log backups at regular intervals; this keeps the
transaction log at an optimal size. When SQL Server finishes backing up
the transaction log, it automatically truncates the inactive portion of the
transaction log.
2. Keep transactions as small as possible.
Refer to the following URLs:
http://www.support.microsoft.com/?id=256650 INF: How to Shrink the SQL
Server 7.0 Tran Log
http://www.support.microsoft.com/?id=317375 Log File Grows too big
http://www.support.microsoft.com/?id=110139 Log file filling up
http://www.mssqlserver.com/faq/logs-shrinklog.asp Shrink File
http://www.support.microsoft.com/?id=315512 Considerations for Autogrow
and AutoShrink
http://www.support.microsoft.com/?id=272318 INF: Shrinking Log in SQL
Server 2000 with DBCC SHRINKFILE
HTH, Jens Süßmeyer.
"Ling" <Ling.Ngai@.hk.standardchartered.com> schrieb im Newsbeitrag
news:07c001c37b78$e8e54f60$a101280a@.phx.gbl...
> I have an application which is using SQL 2000 as
> database. However, the transaction log is getting larger
> and larger, and I can't reduce the size. The database
> size is 300MB w/ used space 278MB and the transaction log
> size is 6803MB w/ used space 290MB.
> Is there any idea how to reduce the size of transaction
log?|||I refer you to a useful post by Andrew Kelly:
Either change to Simple Mode or issue regular log backups. If you are
issuing timely log backups and it still continues to grow, you most likely
have an open transaction stopping the truncation. Use DBCC OPENTRAN to see.
http://www.support.microsoft.com/?id=317375 Log File Grows too big
http://www.support.microsoft.com/?id=110139 Log file filling up
http://www.mssqlserver.com/faq/logs-shrinklog.asp Shrink File
http://www.support.microsoft.com/?id=315512 Considerations for Autogrow
and AutoShrink
http://www.support.microsoft.com/?id=256650 INF: How to Shrink the SQL
Server 7.0 Tran Log
http://www.support.microsoft.com/?id=272318 INF: Shrinking Log in SQL
Server 2000 with DBCC SHRINKFILE
--
Allan Mitchell (Microsoft SQL Server MVP)
MCSE,MCDBA
www.SQLDTS.com
I support PASS - the definitive, global community
for SQL Server professionals - http://www.sqlpass.org
"Ling" <Ling.Ngai@.hk.standardchartered.com> wrote in message
news:07c001c37b78$e8e54f60$a101280a@.phx.gbl...
> I have an application which is using SQL 2000 as
> database. However, the transaction log is getting larger
> and larger, and I can't reduce the size. The database
> size is 300MB w/ used space 278MB and the transaction log
> size is 6803MB w/ used space 290MB.
> Is there any idea how to reduce the size of transaction
log?|||Try using this code.
Replace <logname> and <databasename> with your transaction log's logical
name and the database name. <desired_final_size_in_MB> is an optional
parameter. This script works great and I use it regularly.
DBCC SHRINKFILE (<logname>, <desired_final_size_in_MB>, TRUNCATEONLY)
go
BACKUP LOG <databasename> WITH TRUNCATE_ONLY
go
-- Drop the scratch table if it is left over from a previous run.
IF OBJECT_ID('MyTable') IS NOT NULL
    DROP TABLE MyTable
go
CREATE TABLE MyTable (MyField VARCHAR(10), PK INT)
INSERT MyTable (PK) VALUES (1)
GO
-- The dummy updates below generate log activity so the active portion of
-- the log wraps back to the start of the file; the shrink can then release
-- the unused space at the end.
SET NOCOUNT ON
DECLARE @Index INT
SELECT @Index = 0
WHILE (@Index < 20000)
BEGIN
    UPDATE MyTable
    SET MyField = MyField
    WHERE PK = 1 /* Some criteria to restrict to one row. */
    SELECT @Index = @Index + 1
END
SET NOCOUNT OFF
go
DBCC SHRINKFILE (<logname>, <desired_final_size_in_MB>, TRUNCATEONLY)
go
BACKUP LOG <databasename> WITH TRUNCATE_ONLY
go
"Ling" <Ling.Ngai@.hk.standardchartered.com> wrote in message
news:07c001c37b78$e8e54f60$a101280a@.phx.gbl...
> I have an application which is using SQL 2000 as
> database. However, the transaction log is getting larger
> and larger, and I can't reduce the size. The database
> size is 300MB w/ used space 278MB and the transaction log
> size is 6803MB w/ used space 290MB.
> Is there any idea how to reduce the size of transaction
> log?
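To check for the open transaction the reply above mentions - a minimal sketch, with MyDb as a placeholder name:
USE MyDb
GO
DBCC OPENTRAN  -- reports the oldest active transaction in the database, if any
GO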

how to shrink tran-logs of databases

hi,
work on sql server 2000.
I have several dbs that are either 'full' or 'simple';
their tran-logs have lots of empty space in them due to one-
time big processes, hence I want to shrink their physical
size down.
I used the below query on several dbs; some log sizes got
shrunk, some didn't shrink at all. How come? I didn't
perform a backup log operation for any of the dbs before I
ran this query:
use db1
DBCC SHRINKFILE (db1_log1_logical_name)
many thanks.
JJ
Hi,
You need to perform a log backup if the database recovery model is FULL or
BULK_LOGGED. Otherwise the inactive transaction log records will not be
removed, and your shrinkfile command will have nothing to shrink. In case
you do not need the transactions, you can truncate the log:
backup log <dbname> with truncate_only
go
DBCC SHRINKFILE (db1_log1_logical_name, TRUNCATEONLY)
Now execute the below command to see log file size and usage:
DBCC SQLPERF(LOGSPACE)
Thanks
Hari
MCDBA
"JJ Wang" <anonymous@.discussions.microsoft.com> wrote in message
news:28e4a01c46551$c5a68f00$a601280a@.phx.gbl...
> hi,
> work on sql server 2000.
> I have several dbs that are either 'full' or 'simple';
> their tran-logs have lots of empty space in them due to one-
> time big processes, hence I want to shrink their physical
> size down.
> I used the below query on several dbs; some log sizes got
> shrunk, some didn't shrink at all. How come? I didn't
> perform a backup log operation for any of the dbs before I
> ran this query:
> use db1
> DBCC SHRINKFILE (db1_log1_logical_name)
> many thanks.
> JJ
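As a quick aside (not from the thread): you can check which recovery model each database is actually using, and therefore which logs need a log backup before they will shrink. db1 is a placeholder name:
SELECT DATABASEPROPERTYEX('db1', 'Recovery')  -- returns FULL, SIMPLE, or BULK_LOGGED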
|||hey Hari, it's you again! thanks for all the help. I
think you are definitely right.
one more question: I also want to shrink this db's data
file, which is 21 gig total, with over 9 gig free. I want
to shrink off those 9 gigs. I started by shrinking down
25 MB only, and it took 11 minutes! my God, for 9 gig, it
will take 66 hrs then!
how come so slow?
thanks
JJ
>--Original Message--
>Hi,
>You need to perform a log backup if the database recovery model is FULL or
>BULK_LOGGED. Otherwise the inactive transaction log records will not be
>removed, and your shrinkfile command will have nothing to shrink. In case
>you do not need the transactions, you can truncate the log:
>backup log <dbname> with truncate_only
>go
>DBCC SHRINKFILE (db1_log1_logical_name, TRUNCATEONLY)
>Now execute the below command to see log file size and usage:
>DBCC SQLPERF(LOGSPACE)
>Thanks
>Hari
>MCDBA
>
>"JJ Wang" <anonymous@.discussions.microsoft.com> wrote in message
>news:28e4a01c46551$c5a68f00$a601280a@.phx.gbl...
>.
>
|||JJ,
9GB of free space in a 21GB DB is not too much. You need free space for a
database to operate properly anyway for things such as reindexing and you
always want room to add more data without growing. In any case the reason
it takes so long is that it shrinks from the end of the file and must move
any data that is near the end to a free spot at the beginning. This process
is resource intensive and can take a while depending on your hardware and if
the log files are on their own raid array or not. 11 minutes for only 25MB
does seem a little excessive but if your disks are poorly configured it's
not out of the question. Shrinking the file will also fragment your data
files and you may want to reindex afterwards. But if you shrink it too much
that will cause it to grow again<g>.
Andrew J. Kelly SQL MVP
"JJ Wang" <anonymous@.discussions.microsoft.com> wrote in message
news:2a18601c4655a$809f44c0$a301280a@.phx.gbl...[vbcol=seagreen]
> hey Hari, it's you again! thanks for all the help. I
> think you are definately right.
> one more question, I also want to shrink this db's data
> file which is total 21 gig, with over 9 gig free. I want
> to shrink off those 9 gigs. I started by shrinking down
> 25 mg only, it took 11 minutes! my God, for 9 gig, it
> will take 66 hrs then!
> how come so slow?
> thanks
> JJ
> model is FULL or
> not be removed and
> you donot need the
> usage.
> message
> one-
> physical
> I
|||Hi,
Adding on to Andrew's detailed description: shrink the MDF file in single-user
mode, and during reindexing set the database recovery model to BULK_LOGGED.
Alter database <dbname> set single_user with rollback immediate
go
dbcc shrinkfile('logical_mdf_name', size_in_mb)
go
-- Now reindex (for example, DBCC DBREINDEX on each large table)
go
-- Back to multi user
Alter database <dbname> set multi_user
Thanks
Hari
MCDBA
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:#e9ZYzVZEHA.212@.TK2MSFTNGP12.phx.gbl...
> JJ,
> 9GB of free space in a 21GB DB is not too much. You need free space for a
> database to operate properly anyway for things such as reindexing and you
> always want room to add more data without growing. In any case the reason
> it takes so long is that it shrinks from the end of the file and must move
> any data that is near the end to a free spot at the beginning. This process
> is resource intensive and can take a while depending on your hardware and if
> the log files are on their own raid array or not. 11 minutes for only 25MB
> does seem a little excessive but if your disks are poorly configured it's
> not out of the question. Shrinking the file will also fragment your data
> files and you may want to reindex afterwards. But if you shrink it too much
> that will cause it to grow again<g>.
> --
> Andrew J. Kelly SQL MVP
>
> "JJ Wang" <anonymous@.discussions.microsoft.com> wrote in message
> news:2a18601c4655a$809f44c0$a301280a@.phx.gbl...
>
|||Don't EVER leave the transaction log on 'full' recovery - it will absolutely eat your disks. ALWAYS have the default for ALL databases to be on 'simple'.
"JJ Wang" wrote:

> hi,
> work on sql server 2000.
> I have several dbs that are either 'full' or 'simple';
> their tran-logs have lots of empty space in them due to one-
> time big processes, hence I want to shrink their physical
> size down.
> I used the below query on several dbs; some log sizes got
> shrunk, some didn't shrink at all. How come? I didn't
> perform a backup log operation for any of the dbs before I
> ran this query:
> use db1
> DBCC SHRINKFILE (db1_log1_logical_name)
> many thanks.
> JJ
>
|||That is absolutely wrong advice. The reason it eats up your disks is that
you are not doing regular log backups to keep it in check. If you need to
have the ability to recover up to the minute there is no choice but to use
Full recovery mode.
Andrew J. Kelly SQL MVP
"Beeeeeeeeeeeeves" <Beeeeeeeeeeeeves@.discussions.microsoft.com> wrote in
message news:EF5BCB8B-0C14-4728-A2B3-D3E8907D7171@.microsoft.com...
> Don't EVER leave the transaction log on 'full' recovery - it will
> absolutely eat your disks. ALWAYS have the default for ALL databases to be
> on 'simple'.
> "JJ Wang" wrote:
|||hi Andrew,
you sure always have good advice. The reason I want to
shrink these files was that we had a one-time temporary
data expansion; this db's size should stay pretty constant
because we have a history archive db for it. and this is
just one of the files that we need to shrink, there are
more files that have more unneeded space.
we have the tran-log disk (RAID 1+0) separate from our data disk
(RAID 5). what do you mean by 'if the disks are poorly
configured'? how do we judge that?
also, you mentioned: 'it takes so long is that it shrinks
from the end of the file and must move any data that is
near the end to a free spot at the beginning.' - is there
an option that we don't move the data around when
shrinking?
many thanks.
JJ

>--Original Message--
>JJ,
>9GB of free space in a 21GB DB is not too much. You need free space for a
>database to operate properly anyway for things such as reindexing and you
>always want room to add more data without growing. In any case the reason
>it takes so long is that it shrinks from the end of the file and must move
>any data that is near the end to a free spot at the beginning. This process
>is resource intensive and can take a while depending on your hardware and if
>the log files are on their own raid array or not. 11 minutes for only 25MB
>does seem a little excessive but if your disks are poorly configured it's
>not out of the question. Shrinking the file will also fragment your data
>files and you may want to reindex afterwards. But if you shrink it too much
>that will cause it to grow again<g>.
>--
>Andrew J. Kelly SQL MVP
>
>"JJ Wang" <anonymous@.discussions.microsoft.com> wrote in message
>news:2a18601c4655a$809f44c0$a301280a@.phx.gbl...
>.
>
|||thanks Andrew.
backing up the log doesn't shrink its physical size though.
is the best way to shrink a log to do it right after
the backup log job (maybe as the next step in the backup
log job)?
JJ
>--Original Message--
>That is absolutely wrong advice. The reason it eats up your disks is that
>you are not doing regular log backups to keep it in check. If you need to
>have the ability to recover up to the minute there is no choice but to use
>Full recovery mode.
>--
>Andrew J. Kelly SQL MVP
>
>"Beeeeeeeeeeeeves" <Beeeeeeeeeeeeves@.discussions.microsoft.com> wrote in
>message news:EF5BCB8B-0C14-4728-A2B3-D3E8907D7171@.microsoft.com...
>> Don't EVER leave the transaction log on 'full' recovery - it will
>> absolutely eat your disks. ALWAYS have the default for ALL databases to be
>> on 'simple'.
>.
>
|||I totally agree with Andrew. but thanks anyway. :-)
JJ
>--Original Message--
>Don't EVER leave the transaction log on 'full' recovery - it will
>absolutely eat your disks. ALWAYS have the default for ALL databases
>to be on 'simple'.
>"JJ Wang" wrote:
>.
>

how to shrink the size of tempdb other than reboot the server

Hi,
I work with sql server 2000.
Wondering what's the best practice to shrink the size of
tempdb other than reboot the server. I don't want to
interrupt productions db running on the server.
JJ
try dbcc shrinkdatabase, dbcc shrinkfile
Richard
"JJ Wang" <jwang@.leapwireless.com> wrote in message
news:019401c39283$bc0aeeb0$a401280a@.phx.gbl...
> Hi,
> I work with sql server 2000.
> Wondering what's the best practice to shrink the size of
> tempdb other than reboot the server. I don't want to
> interrupt productions db running on the server.
> JJ|||Howdy
You can reduce the tempdb size, BUT if tempdb was created with,
say, a 1000 MB initial size, 1000 MB is the smallest you can
shrink it to.
Cheers
SG
Posted via http://dbforums.com
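A minimal sketch of the shrink itself, assuming the default logical file names tempdev and templog (check with sp_helpfile inside tempdb). As SG notes, you cannot go below the size tempdb was created with, and a busy tempdb may not shrink much while in use:
USE tempdb
GO
DBCC SHRINKFILE (tempdev, 500)  -- data file, target size in MB
DBCC SHRINKFILE (templog, 100)  -- log file, target size in MB
GO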

how to shrink the logfile

hi,
My log file is more than 2GB,
I want to shrink it without taking my server offline.
I've tried
DBCC SHRINKFILE ( 2, EMPTYFILE )
still no use.
This is the info of my database
Status=ONLINE, Updateability=READ_WRITE, UserAccess=MULTI_USER,
Recovery=FULL, Version=539, Collation=SQL_Latin1_General_CP1_CI_AS,
SQLSortOrder=52, IsTornPageDetectionEnabled, IsAutoCreateStatistics,
IsAutoUpdateStatistics, IsFullTextEnabled
thanks in advance
regards
Lara
Hi
Check out
http://msdn.microsoft.com/library/de...r_da2_1uzr.asp
John
"Lara" wrote:

> hi,
> My log file is more than 2GB,
> I want to shrink it without taking my server offline.
> I've tried
> DBCC SHRINKFILE ( 2, EMPTYFILE )
> still no use.
> This is the info of my database
> Status=ONLINE, Updateability=READ_WRITE, UserAccess=MULTI_USER,
> Recovery=FULL, Version=539, Collation=SQL_Latin1_General_CP1_CI_AS,
> SQLSortOrder=52, IsTornPageDetectionEnabled, IsAutoCreateStatistics,
> IsAutoUpdateStatistics, IsFullTextEnabled
>
> thanks in advance
> regards
> Lara
>
>
|||Hi,
How to reduce the transaction log size:
1. Backup the transaction log (use BACKUP LOG dbname TO
DISK='c:\backup\dbname.tr1', or use Enterprise Manager),
or, since you don't have the disk space, truncate the transaction log:
backup log <dbname> with truncate_only
2. Shrink the transaction log file:
DBCC SHRINKFILE('logical_transaction_log_file_name', TRUNCATEONLY)
Have a look into the below link.
Shrinking the Transaction Log in SQL Server 2000 with DBCC
http://www.support.microsoft.com/?id=272318
Thanks
Hari
SQL Server MVP
"Lara" <lara@.hotmail.com> wrote in message
news:%23uQQ0KKqFHA.3404@.TK2MSFTNGP11.phx.gbl...
> hi,
> My log file is more than 2GB,
> I want to shrink it without taking my server offline.
> I've tried
> DBCC SHRINKFILE ( 2, EMPTYFILE )
> still no use.
> This is the info of my database
> Status=ONLINE, Updateability=READ_WRITE, UserAccess=MULTI_USER,
> Recovery=FULL, Version=539, Collation=SQL_Latin1_General_CP1_CI_AS,
> SQLSortOrder=52, IsTornPageDetectionEnabled, IsAutoCreateStatistics,
> IsAutoUpdateStatistics, IsFullTextEnabled
>
> thanks in advance
> regards
> Lara
>
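One note on the DBCC SHRINKFILE ( 2, EMPTYFILE ) attempt in the question: EMPTYFILE migrates a file's contents to the other files in its filegroup so the file can be dropped; it does not reclaim log space. After the log backup (or truncate) above, a minimal sketch with a placeholder name:
DBCC SHRINKFILE ('MyDb_log', 500)  -- shrink to a 500 MB target
-- or release just the free space at the end of the file:
DBCC SHRINKFILE ('MyDb_log', TRUNCATEONLY)
GO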
