
Friday, March 30, 2012

how to shrink tran-logs of databases

hi,
work on sql server 2000.
I have several dbs that are either 'full' or 'simple'. their tran-logs have lots of empty space in them due to one-time big processes, hence I want to shrink their physical size down.
I used the below query on several dbs; some log sizes shrank, some didn't shrink at all. how come? I didn't perform a backup log operation for any of the dbs before I ran this query:
use db1
DBCC SHRINKFILE (db1_log1_logical_name)
many thanks.
JJ
HI,
You need to perform a backup log if the database recovery model is FULL or BULK_LOGGED. Otherwise the inactive transaction log records will not be removed, and even though your SHRINKFILE command runs, the file will not shrink. In case you do not need the transactions, you could truncate the log:
backup log <dbname> with truncate_only
go
DBCC SHRINKFILE (db1_log1_logical_name, truncateonly)
Now execute the below command to see log file size and usage:
DBCC SQLPERF(LOGSPACE)
Thanks
Hari
MCDBA
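Since whether the log backup is needed depends on the recovery model, a quick way to check it per database on SQL Server 2000 is sketched below ('db1' is a placeholder name, not from the thread):

```sql
-- Check the recovery model of a database on SQL Server 2000.
-- 'db1' is a placeholder; substitute your database name.
-- Returns FULL, SIMPLE, or BULK_LOGGED.
SELECT DATABASEPROPERTYEX('db1', 'Recovery') AS recovery_model;

-- Then inspect log file size and usage for all databases.
DBCC SQLPERF(LOGSPACE);
```

Databases that report SIMPLE truncate their own log at checkpoints, which is why some of JJ's logs shrank without a log backup and others did not.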
"JJ Wang" <anonymous@.discussions.microsoft.com> wrote in message
news:28e4a01c46551$c5a68f00$a601280a@.phx.gbl...
|||hey Hari, it's you again! thanks for all the help. I think you are definitely right.
one more question, I also want to shrink this db's data file, which is 21 gig total, with over 9 gig free. I want to shrink off those 9 gigs. I started by shrinking down 25 MB only, and it took 11 minutes! my God, for 9 gig it will take 66 hrs then!
how come so slow?
thanks
JJ
|||JJ,
9GB of free space in a 21GB DB is not too much. You need free space for a database to operate properly anyway, for things such as reindexing, and you always want room to add more data without growing. In any case, the reason it takes so long is that it shrinks from the end of the file and must move any data that is near the end to a free spot at the beginning. This process is resource intensive and can take a while depending on your hardware and whether the log files are on their own RAID array or not. 11 minutes for only 25MB does seem a little excessive, but if your disks are poorly configured it's not out of the question. Shrinking the file will also fragment your data files, and you may want to reindex afterwards. But if you shrink it too much, that will cause it to grow again<g>.
Andrew J. Kelly SQL MVP
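Because the shrink has to relocate pages toward the front of the file, it is usually gentler to take it down in steps rather than in one 9GB operation. A minimal sketch (the logical file name 'db1_data' and the target sizes are placeholder assumptions, not values from the thread):

```sql
-- Shrink the data file in increments instead of all at once.
-- 'db1_data' is a placeholder logical file name; sizes are in MB.
USE db1
GO
-- File is ~21000 MB with ~9000 MB free; step down in chunks.
DBCC SHRINKFILE (db1_data, 20000)
GO
DBCC SHRINKFILE (db1_data, 19000)
GO
-- ...repeat toward the desired size, then verify what was reclaimed:
EXEC sp_spaceused
GO
```

Each step can be run during a quiet window, so a failed or cancelled step wastes far less work than one giant shrink.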
"JJ Wang" <anonymous@.discussions.microsoft.com> wrote in message
news:2a18601c4655a$809f44c0$a301280a@.phx.gbl...
|||Hi,
To add on to Andrew's detailed description: please shrink the MDF file in single-user mode, and during reindexing switch the database recovery model to BULK_LOGGED.
Alter database <dbname> set single_user with rollback immediate
go
dbcc shrinkfile('logical_mdf_name',size_in_mb)
go
Now reindex
go
-- Multi user
Alter database <dbname> set multi_user
Thanks
Hari
MCDBA
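For the reindex step on SQL Server 2000, DBCC DBREINDEX is the usual tool. A minimal sketch ('MyTable' is a placeholder table name and 90 an example fillfactor, not values from the thread):

```sql
-- Rebuild all indexes on one table after the shrink has
-- fragmented them. Empty string '' means all indexes on the table;
-- 90 is an example fillfactor ('MyTable' is a placeholder name).
DBCC DBREINDEX ('MyTable', '', 90)
GO
```

Run this per table (or from a cursor over sysobjects) after the shrink completes, while the database is still in BULK_LOGGED recovery as Hari suggests.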
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:#e9ZYzVZEHA.212@.TK2MSFTNGP12.phx.gbl...
|||Don't EVER leave the transaction log on 'full' recovery - it will absolutely eat your disks. ALWAYS have the default for ALL databases to be on 'simple'.
"JJ Wang" wrote:

|||That is absolutely wrong advice. The reason it eats up your disks is that you are not doing regular log backups to keep it in check. If you need the ability to recover up to the minute, there is no choice but to use Full recovery mode.
Andrew J. Kelly SQL MVP
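The regular log backups Andrew refers to can be sketched as below (the database name and disk path are placeholder assumptions; in practice this would be a scheduled job, e.g. every 15-30 minutes):

```sql
-- Back up the transaction log on a schedule to keep it from
-- growing unchecked while staying in FULL recovery.
-- 'db1' and the path are placeholders.
BACKUP LOG db1 TO DISK = 'D:\Backups\db1_log.trn'
GO
```

With this in place the log reuses its own space between backups, so it stops "eating your disks" without giving up point-in-time recovery.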
"Beeeeeeeeeeeeves" <Beeeeeeeeeeeeves@.discussions.microsoft.com> wrote in
message news:EF5BCB8B-0C14-4728-A2B3-D3E8907D7171@.microsoft.com...
|||hi Andrew,
you sure always have good advice. The reason I want to shrink these files is that we had a one-time temporary data expansion; this db's size should stay pretty constant because we have a history archive db for it. and this is just one of the files that we need to shrink, there are more files that have more unneeded space.
we have the tran-log disk (RAID 1+0) separate from our data disk (RAID 5). what do you mean by 'if the disk is poorly designed'? how to judge that?
also, you mentioned: 'it takes so long is that it shrinks from the end of the file and must move any data that is near the end to a free spot at the beginning.' is there an option so that we don't move the data around when shrinking?
many thanks.
JJ

|||thanks Andrew.
backing up the log doesn't shrink its physical size, though.
is the best way to shrink a log to do it right after the backup log job (maybe as the next step in the backup log job)?
JJ
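A job step along the lines JJ describes might look like the sketch below (database name, path, and logical log file name are placeholders; this shows the ordering, not a recommended routine, since shrinking on every backup would just make the log regrow):

```sql
-- Back up the log first so its inactive portion can be reused,
-- then shrink the physical file as a following job step.
-- 'db1', the path, and 'db1_log1_logical_name' are placeholders.
BACKUP LOG db1 TO DISK = 'D:\Backups\db1_log.trn'
GO
USE db1
GO
DBCC SHRINKFILE (db1_log1_logical_name)
GO
```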
|||I totally agree with Andrew. but thanks anyway. :-)
JJ

Wednesday, March 28, 2012

How to shrink 38 gig .ldf file

I have Disk Xtender 2000, which was made by OTG Software, then Legato, and now EMC. I have an NT 4.0 PC with Microsoft SQL 2000. I have a drive space problem and need to shrink a 38 gig .ldf file called OTG03.ldf. I also have a 2 gig .mdf file called OTG03.mdf. How can I shrink this .ldf file? I'm not a DBA, so being specific is greatly appreciated.|||I got this from a download a while back. It says 7.0 in the header, but it has been modified for SQL 2000. It has worked for me many, many times.

1. Create the proc in Query Analyzer (QA).
2. Change to the database in which you want to shrink the log file, then execute the proc.

** WARNING **
Use at your own risk ... no warranty applies ... test it first on a database you can afford to trash and restore (the old liability lawyerese)


use master
go
if object_id( 'sp_force_shrink_log' ) is not null drop proc sp_force_shrink_log
go
create proc sp_force_shrink_log
------------------------
-- Purpose: Shrink transaction log of the current database in SQL Server 7.0.
-- Author: Andrew Zanevsky, AZ Databases, Inc., 12/25/1999, v5 - 08/18/2000
-- zanevsky@azdatabases.com
------------------------
    @target_percent tinyint = 0,
    @target_size_MB int = 10,
    @max_iterations int = 1000,
    @backup_log_opt nvarchar(1000) = 'with truncate_only'
as
set nocount on
declare @db sysname,
    @last_row int,
    @log_size decimal(15,2),
    @unused1 decimal(15,2),
    @unused decimal(15,2),
    @shrinkable decimal(15,2),
    @iteration int,
    @file_max int,
    @file int,
    @fileid varchar(5)
select @db = db_name(),
    @iteration = 0
/* Sample output of DBCC LOGINFO:
FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
------  --------  -----------  ------  ------  ------  ---------
2       1245184   8192         925963  0       128     0
*/
create table #loginfo (
    id int identity,
    FileId int,
    FileSize numeric(22,0),
    StartOffset numeric(22,0),
    FSeqNo int,
    Status int,
    Parity smallint,
    CreateLSN varchar(32)
)
create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo, StartOffset )
create table #logfiles ( id int identity(1,1), fileid varchar(5) not null )
insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles where status & 0x40 = 0x40
select @file_max = @@rowcount
if object_id( 'table_to_force_shrink_log' ) is null
    exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )
insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateLSN ) exec ( 'dbcc loginfo' )
select @last_row = @@rowcount
select @log_size = sum( FileSize ) / 1048576.00,
    @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
    @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
from #loginfo
select @unused1 = @unused -- save for later
select 'iteration' = @iteration,
    'log size, MB' = @log_size,
    'unused log, MB' = @unused,
    'shrinkable log, MB' = @shrinkable,
    'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )
while @shrinkable * 100 / @log_size > @target_percent
    and @shrinkable > @target_size_MB
    and @iteration < @max_iterations begin
    select @iteration = @iteration + 1 -- this is just a precaution
    exec( 'insert table_to_force_shrink_log select name from sysobjects
        delete table_to_force_shrink_log')
    select @file = 0
    while @file < @file_max begin
        select @file = @file + 1
        select @fileid = fileid from #logfiles where id = @file
        exec( 'dbcc shrinkfile( ' + @fileid + ' )' )
    end
    exec( 'backup log [' + @db + '] ' + @backup_log_opt )
    truncate table #loginfo
    insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateLSN ) exec ( 'dbcc loginfo' )
    select @last_row = @@rowcount
    select @log_size = sum( FileSize ) / 1048576.00,
        @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
        @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
    from #loginfo
    select 'iteration' = @iteration,
        'log size, MB' = @log_size,
        'unused log, MB' = @unused,
        'shrinkable log, MB' = @shrinkable,
        'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )
end
if @unused1 < @unused
    select 'After ' + convert( varchar, @iteration ) +
        ' iterations the unused portion of the log has grown from ' +
        convert( varchar, @unused1 ) + ' MB to ' +
        convert( varchar, @unused ) + ' MB.'
    union all
    select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10
    union all
    select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10
    union all
    select 'Sometimes the log would not shrink to a size smaller than several Megabytes.' where @unused <= 10
else
    select 'It took ' + convert( varchar, @iteration ) +
        ' iterations to shrink the unused portion of the log from ' +
        convert( varchar, @unused1 ) + ' MB to ' +
        convert( varchar, @unused ) + ' MB'
exec( 'drop table table_to_force_shrink_log' )
go|||Mike,

Backing up your database should shrink your transaction log for you, but if you would like to force a shrink of just the .ldf file you can use the following:

DBCC SHRINKFILE (OTG03.ldf)

Hope this helps.|||Mike,

Just a clarification: you should replace "OTG03.ldf" with the SQL logical file name. You can find this by right-clicking on the database, going to Properties, and looking on the "Transaction Log" tab.|||I tried DBCC SHRINKFILE (OTG03Log) and got a syntax error.
and also I tried DBCC SHRINKFILE (OTG03.ldf) and got a syntax error.|||I tried DBCC SHRINKFILE (OTG03Log) and got a syntax error.
and also I tried DBCC SHRINKFILE (OTG03.ldf) and got a syntax error.

Use exec sp_helpfile in the database to find the correct value to use with
DBCC SHRINKFILE (?)

Tim S|||Mike,

DBCC SHRINKFILE (OTG03Log) should do the trick for you. Make sure you are running it on your OTG03 database.

Try this:

USE OTG03
GO
DBCC SHRINKFILE (OTG03Log)
GO|||Well ..
If you have no need for the transaction log, then you can set the recovery model to simple and use the following commands
------------
backup log dbname with no_log
go
use OTG03
go
DBCC SHRINKFILE (OTG03Log)
GO

------------

This should do the trick ...
Or for another option ... open the EM ... right click on database name ... select all tasks ... shrink database and then explicitly shrink the log file from there ...

In case you get any error .. revert back with the error text ...

Wednesday, March 7, 2012

How to set empty a space from a derived column expression?

hi,

This field comes from a flat file and sometimes holds a zero and sometimes an empty position (after 'EUR'):

2006053000499236000005307700108287457080200408287452006052953990000000010000EUR
2006053000499236004414989200101423426004400501423422006052953990000000010000EUR0

[Column 12] == "0" ? [Column 12] : ?

TIA

What is the data type of [Column 12]?

-Jamie

|||

At destination?

[char](1) COLLATE Modern_Spanish_CI_AS NULL,

|||

No. In the pipeline.

If it is DT_STR then your expression will be:

[Column 12] == "0" ? [Column 12] : NULL(DT_STR,1,1252)

If it is DT_WSTR then it'll be

[Column 12] == "0" ? [Column 12] : NULL(DT_WSTR,1)

-Jamie

|||

I've got string [DT_STR] as the data type in my flat file definition, and string [DT_WSTR] as the data type of the column in the 'Expression' field of the Derived Column transformation.

I've entered this line as you said:

[Column 12] == "0" ? [Column 12] : NULL(DT_WSTR,1)

And it remains in black (ok), but when I launch the package this appears:

Error at Camara Recibida [OLE DB Destination [9902]]: Columns "Derived Column 1_12" and "Rein" cannot convert between unicode and non-unicode string data types.

Error at Camara Recibida [DTS.Pipeline]: "component "OLE DB Destination" (9902)" failed validation and returned validation status "VS_ISBROKEN".

Error at Camara Recibida [DTS.Pipeline]: One or more component failed validation.

Error at Camara Recibida: There were errors during task validation.

|||

So you are inputting a DT_STR but outputting a DT_WSTR? Is that correct? That seems like a strange thing to do to me but of course, it is entirely up to you. If it were me I would use the same type on the output as on the input - but it is your choice.

If you want to output a DT_STR then the expression should be:

[Column 12] == "0" ? [Column 12] : NULL(DT_STR,1,1252)

If you want to output a DT_WSTR then the expression should be:

[Column 12] == "0" ? (DT_WSTR, 1)[Column 12] : NULL(DT_WSTR,1)

-Jamie

P.S. Please try and include all relevant information when posting a problem. If you are trying to do some data conversion then one of the most obvious things to state is the types of the input columns and what types you require the output to have.

|||

That's fine now.

Thanks a lot indeed for your help.