Showing posts with label tables. Show all posts

Friday, March 30, 2012

how to shrink a file fast

I had a 100GB database and I truncated a few tables; now the database
used is around 20GB. I ran DBCC SHRINKFILE to shrink the file to around
40GB and it has been over an hour now and it is still running. It is running and not
being blocked. Is there any way to make this run faster?
Using SQL 2K
Hassan,
If you use the TRUNCATEONLY clause with the DBCC command, it will free up
the space but won't move any data around. The drawback is that you might
not meet your shrink target. Worth a try anyway.
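
For illustration, a minimal sketch of that suggestion (database and file names are placeholders):

USE MyDatabase
GO
-- Release unused space at the end of the file without moving any pages.
-- Fast because no data is relocated, but the file can only shrink back
-- to the last allocated extent, so the target size may not be reached.
DBCC SHRINKFILE (MyDataFile, TRUNCATEONLY)
GO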
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Hassan wrote:
> I had a 100GB database and I truncated a few tables; now the database
> used is around 20GB. I ran DBCC SHRINKFILE to shrink the file to around
> 40GB and it has been over an hour now and it is still running. It is running and not
> being blocked. Is there any way to make this run faster?
> Using SQL 2K
>

how to show records with JOIN?

Hi ,

I've got two tables. The first table carries a ProductID and, amongst other things, a TradePrice.

The other table carries a ProductID, an IndivPrice and a CustomerID.

The second table lists prices for products for individual customers.

My query needs to bring back ALL the products from the first table...

It also needs to show the TradePrice for that product.

I need to join my query to the second table...

And finally, if the second table has a price for that product AND the CustomerID is the same as the one I pass into the query, show that price also...

So here's my first query:

SELECT dbo.Products.ProductID, ProductName, ProductTradePrice, IndivPrice,
       dbo.Trade_PriceLists.CustomerID AS PLCustomerID
FROM dbo.Products
LEFT OUTER JOIN dbo.Trade_PriceLists
  ON dbo.Products.ProductID = dbo.Trade_PriceLists.ProductID
WHERE (ProductType = 'Trade' OR ProductType = 'Both')
  AND (Replace(Lower(ProductBrand), ' ', '') = 'brandname')
  AND (CustomerID IS NULL OR CustomerID = 'teste' OR CustomerID = '')
ORDER BY TradeOrder

I thought that would work, but what happens is that, if a particular customer has no individual prices set, the query only returns the products that have no records at all in the second table.

So unless there is a record for a particular product in the second table and it has no CustomerID assigned to it (which would never happen, as that table only ever holds individual customer prices), the product doesn't show.

Examples:

First table:

ProductID  Name    TradePrice
1          Jumper  £1.00
2          Jeans   £3.00
3          Shoes   £5.00
4          Hat     £2.00

Second table:

ProductID  CustomerID     IndivPrice
1          teste          £0.50
2          othercustomer  £2.50
3          teste          £4.50

What I want in the results is:

ProductID  ProductName  TradePrice  IndivPrice  CustomerID (PLCustomerID)
1          Jumper       £1.00       £0.50       teste
2          Jeans        £3.00
3          Shoes        £5.00       £4.50       teste
4          Hat          £2.00

See? - Product 2 should not get an individual price: although it's in the second table, the CustomerID assigned to it is different. Product 4 should not get an individual price as it's not in the second table at all.

However, with my query above I'd only get products 1 and 3... and if I ran the query for a customer with no individual prices I'd only get product 4, as it's not in the second table at all...

HELP!!!!!

Give a look to the SELECT article in Books Online. Maybe something like:

declare @firstTbl table
( ProductId integer,
  Name varchar(10),
  Price money
)
insert into @firstTbl
select 1, 'Jumper', $1.0 union all
select 2, 'Jeans', $3.0 union all
select 3, 'Shoes', $5.0 union all
select 4, 'Hat', $2.0
--select * from @firstTbl

declare @secondTbl table
( ProductId integer,
  CustomerId varchar(15),
  IndivPrice money
)
insert into @secondTbl
select 1, 'teste', $0.5 union all
select 2, 'othercustomer', $2.5 union all
select 3, 'teste', $4.5
--select * from @secondTbl

select a.productId,
  a.name as [ProductName],
  a.Price as [Trade Price],
  coalesce(convert(varchar(21), b.IndivPrice), '') as IndivPrice,
  coalesce(convert(varchar(21), b.CustomerId), '') as [CustomerId (PlCustomerId)]
from @firstTbl a
left join @secondTbl b
  on a.productId = b.productId

/*
productId  ProductName  Trade Price  IndivPrice  CustomerId (PlCustomerId)
1          Jumper       1.0000       0.50        teste
2          Jeans        3.0000       2.50        othercustomer
3          Shoes        5.0000       4.50        teste
4          Hat          2.0000
*/

|||

Hi Kent,

Thanks for the reply..

Your select statement looks virtually the same as mine!

However, yours is missing the WHERE clause - and this is what is causing the problem.

My WHERE clause needs to bring back records that either have no CustomerID, or the specific CustomerID I ask for in the WHERE statement.

However, the WHERE clause I have inserted (see my first post) simply does not do it. If I put a CustomerID into my WHERE clause that doesn't have any individual price records, then it simply returns nothing...

whereas it should return all of the records... but obviously these would not have an IndivPrice for them

|||

Is this better:

declare @firstTbl table
( ProductId integer,
  Name varchar(10),
  Price money
)
insert into @firstTbl
select 1, 'Jumper', $1.0 union all
select 2, 'Jeans', $3.0 union all
select 3, 'Shoes', $5.0 union all
select 4, 'Hat', $2.0
--select * from @firstTbl

declare @secondTbl table
( ProductId integer,
  CustomerId varchar(15),
  IndivPrice money
)
insert into @secondTbl
select 1, 'teste', $0.5 union all
select 2, 'othercustomer', $2.5 union all
select 3, 'teste', $4.5
--select * from @secondTbl

select a.productId,
  a.name as [ProductName],
  a.Price as [Trade Price],
  coalesce(convert(varchar(21), b.IndivPrice), '') as IndivPrice,
  coalesce(convert(varchar(21), b.CustomerId), '') as [CustomerId (PlCustomerId)]
from @firstTbl a
left join @secondTbl b
  on a.productId = b.productId
  and b.CustomerId = 'teste'

/*
productId  ProductName  Trade Price  IndivPrice  CustomerId (PlCustomerId)
1          Jumper       1.0000       0.50        teste
2          Jeans        3.0000
3          Shoes        5.0000       4.50        teste
4          Hat          2.0000
*/

I have a question: Are you wanting results 2 and 4 removed?

|||

thanks!! - that's perfect!!

Can you let me know what is different about your query? I see that IndivPrice has a CONVERT on it and there is a COALESCE statement...

What are these for?

|||

Hang on and I'll give it a go.

The COALESCE / VARCHAR business is to replace an output that would otherwise say NULL with blank space.

After I reworked your original query and used your table names what I got was:

SELECT dbo.Products.ProductID,
ProductName,
ProductTradePrice,
coalesce(convert(varchar(21), IndivPrice), '') as IndivPrice,
coalesce(dbo.Trade_PriceLists.CustomerID, '') AS PLCustomerID
FROM dbo.Products
LEFT OUTER JOIN dbo.Trade_PriceLists
ON dbo.Products.ProductID = dbo.Trade_PriceLists.ProductID
WHERE (ProductType = 'Trade'
OR ProductType = 'Both')
AND (Replace(Lower(ProductBrand),' ','') = 'brandname')
AND (CustomerID IS NULL OR CustomerID = 'teste' OR CustomerID = '')

/*
ProductID  ProductName  ProductTradePrice  IndivPrice  PLCustomerID
1          Jumper       1.0000             0.50        teste
3          Shoes        5.0000             4.50        teste
4          Hat          2.0000
*/

|||

- excellent - thank you very much...

|||

one final question.

If I use your query, then the IndivPrice is now converted into a varchar and is no longer a money field.

I need this to be a money field for my code (it's used in a shopping basket)...

How would I do this?

(I've tried converting it in my server-side ASP using CLng, but it won't do it...)

|||

Then don't use the VARCHAR / COALESCE on this particular field -- provided that your application can handle the situation when the field is a NULL output. I just realized that when I reworked your query I dropped a row; is that wanted?

|||

just noticed that myself... no I need all results back..

why did it drop a record?

|||

To avoid dropping that record, one of the lines needs to be moved up and become part of the JOIN instead of part of the WHERE clause. Also, I removed the conversion of the price field:

SELECT dbo.Products.ProductID,
ProductName,
ProductTradePrice,
IndivPrice,
coalesce(dbo.Trade_PriceLists.CustomerID, '') AS PLCustomerID
FROM dbo.Products
LEFT OUTER JOIN dbo.Trade_PriceLists
ON dbo.Products.ProductID = dbo.Trade_PriceLists.ProductID
AND (CustomerID IS NULL OR CustomerID = 'teste' OR CustomerID = '')
WHERE (ProductType = 'Trade'
OR ProductType = 'Both')
AND (Replace(Lower(ProductBrand),' ','') = 'brandname')

/*
ProductID  ProductName  ProductTradePrice  IndivPrice  PLCustomerID
1          Jumper       1.0000             .5000       teste
2          Jeans        3.0000             NULL
3          Shoes        5.0000             4.5000      teste
4          Hat          2.0000             NULL
*/

how to show data from multiple tables in one report

Hi,
How do I show data from multiple tables, which are linked, in one report? I will
elaborate the problem a little bit, please bear with me -
I have a db which contains three tables - ServerInfo, ServiceInfo, TaskInfo.
The relation among them is:
One server can have multiple services.
One server can have multiple tasks.
There is no relation between tasks and services. There could be 'n' services
and 'm' tasks running on the same server.
Moreover, there could be n servers in the system.
I want to create an all-server health report which displays the status of the
services and tasks per server. Each server's info comes on a separate page.
I tried to do this using a data region. But the problem is that data regions are
bound to one dataset, which doesn't work in my case.
Please help me out in creating this report.

Read up on subreports. What you want is perfect to solve this.
A subreport is just a regular report with a parameter that you drag and drop
onto the main report. Then do a right mouse click on the subreport and map
the parameter to the appropriate field. In your case you will have two
subreports. First develop and test the reports individually.
When you deploy the reports you can go to Report Manager, properties for the
subreport, and hide them in list view so that your users don't see
unnecessary reports.
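
For illustration, the datasets could be split something like this (table and column names are assumptions based on the post; the subreport's @ServerID parameter is mapped from the main report's field as described above):

-- Main report dataset: one row per server; group on ServerID with a
-- page break so each server starts a new page.
SELECT ServerID, ServerName FROM ServerInfo

-- Subreport dataset for services (a second subreport would do the
-- same against TaskInfo):
SELECT ServiceName, ServiceStatus FROM ServiceInfo WHERE ServerID = @ServerID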
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Sachin Laddha" <SachinLaddha@.discussions.microsoft.com> wrote in message
news:CB5A3CB1-48D8-4844-A71B-ACBC2C0E261B@.microsoft.com...
> Hi,
> How to show data from multiple tables which are linked in one report. I
> will
> elaborate the problem little bit, please bear with me -
> I have a db which contains three tables - ServerInfo, ServiceInfo,
> TaskInfo
> The relation among them is
> One server can have multiple services
> One server can have multiple task
> There is no relation betwen task and services. There could be 'n' services
> and 'm' tasks running on the same server.
> Moreover, there could be n servers in the system.
> I want to create a all server health reports which displays status of the
> services and tasks per server. Each server info comes on a separate page.
> I tried to do this using data region. But the problem is data region are
> bound to one dataset, which doesnt work in my case.
> Please help me out in creating this report

|||

Hi Bruce:
I have the same problem with this case!
I tried to use a subreport to show (for this example) Server Info (1), Service
Info (n) and Tasks (m). It looks very good so far. However, when I try to
export it to Excel, none of the subreport contents can be shown. (Note that
preview and export to PDF format are normal!)
Please help!
Tony
"Bruce L-C [MVP]" wrote:
> Read up on subreports. What you want is perfect to solve this.
> A subreport is just a regular report with a parameter that you drag and drop
> onto the main report. Then do a right mouse click on the subreport and map
> the parameter to the appropriate field. In your case you will have two
> subreport. First develop and test the report individually.
> When you deploy the reports you can go to report manager, properties for the
> subreport and hide them in list view so that your users don't see
> unnecessary reports.
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
> "Sachin Laddha" <SachinLaddha@.discussions.microsoft.com> wrote in message
> news:CB5A3CB1-48D8-4844-A71B-ACBC2C0E261B@.microsoft.com...
> > Hi,
> >
> > How to show data from multiple tables which are linked in one report. I
> > will
> > elaborate the problem little bit, please bear with me -
> >
> > I have a db which contains three tables - ServerInfo, ServiceInfo,
> > TaskInfo
> > The relation among them is
> > One server can have multiple services
> > One server can have multiple task
> > There is no relation betwen task and services. There could be 'n' services
> > and 'm' tasks running on the same server.
> > Moreover, there could be n servers in the system.
> >
> > I want to create a all server health reports which displays status of the
> > services and tasks per server. Each server info comes on a separate page.
> >
> > I tried to do this using data region. But the problem is data region are
> > bound to one dataset, which doesnt work in my case.
> >
> > Please help me out in creating this report
>
>

|||

Thanks Bruce!!!
I want to show information about all servers in one report, so this report
does not take any parameters. Moreover, I want to group related servers' info
per page.
Can I do this using subreports?
What I understood is that I will create one master report which will have 'n'
subreports in it. Each subreport will show information about one server.
If the above understanding is correct, I have one more question:
how will the subreport know the server name for which information is to be
retrieved?
regards,
Sachin.
"Bruce L-C [MVP]" wrote:
> Read up on subreports. What you want is perfect to solve this.
> A subreport is just a regular report with a parameter that you drag and drop
> onto the main report. Then do a right mouse click on the subreport and map
> the parameter to the appropriate field. In your case you will have two
> subreport. First develop and test the report individually.
> When you deploy the reports you can go to report manager, properties for the
> subreport and hide them in list view so that your users don't see
> unnecessary reports.
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
> "Sachin Laddha" <SachinLaddha@.discussions.microsoft.com> wrote in message
> news:CB5A3CB1-48D8-4844-A71B-ACBC2C0E261B@.microsoft.com...
> > Hi,
> >
> > How to show data from multiple tables which are linked in one report. I
> > will
> > elaborate the problem little bit, please bear with me -
> >
> > I have a db which contains three tables - ServerInfo, ServiceInfo,
> > TaskInfo
> > The relation among them is
> > One server can have multiple services
> > One server can have multiple task
> > There is no relation betwen task and services. There could be 'n' services
> > and 'm' tasks running on the same server.
> > Moreover, there could be n servers in the system.
> >
> > I want to create a all server health reports which displays status of the
> > services and tasks per server. Each server info comes on a separate page.
> >
> > I tried to do this using data region. But the problem is data region are
> > bound to one dataset, which doesnt work in my case.
> >
> > Please help me out in creating this report
>
>

|||

Hi Sachin Laddha,
The subreport works, but I get the following problems:
1. When exporting to Excel, I get an error for the subreport.
2. When exporting to PDF, I get a layout problem. If you are interested in this
case, please read the question I asked Bruce. The question is "Question want to
ask Bruce L-C for report layout!".
Tony
"Sachin Laddha" wrote:
> Thanks Bruce !!!
> I want to show information about all servers in one report. So this report
> does not take any parameter. Moreover I want to group related servers info
> per page.
> Can I do this using subreports?
> What I understood is I will create one master report which will have 'n'
> subreports in it. Each subreport will show information about one server.
> If above understanding is correct. I have one more question
> How the subreport will know the server name for which information is to be
> retrieved.
> regards,
> Sachin.
> "Bruce L-C [MVP]" wrote:
> > Read up on subreports. What you want is perfect to solve this.
> >
> > A subreport is just a regular report with a parameter that you drag and drop
> > onto the main report. Then do a right mouse click on the subreport and map
> > the parameter to the appropriate field. In your case you will have two
> > subreport. First develop and test the report individually.
> >
> > When you deploy the reports you can go to report manager, properties for the
> > subreport and hide them in list view so that your users don't see
> > unnecessary reports.
> >
> >
> > --
> > Bruce Loehle-Conger
> > MVP SQL Server Reporting Services
> >
> > "Sachin Laddha" <SachinLaddha@.discussions.microsoft.com> wrote in message
> > news:CB5A3CB1-48D8-4844-A71B-ACBC2C0E261B@.microsoft.com...
> > > Hi,
> > >
> > > How to show data from multiple tables which are linked in one report. I
> > > will
> > > elaborate the problem little bit, please bear with me -
> > >
> > > I have a db which contains three tables - ServerInfo, ServiceInfo,
> > > TaskInfo
> > > The relation among them is
> > > One server can have multiple services
> > > One server can have multiple task
> > > There is no relation betwen task and services. There could be 'n' services
> > > and 'm' tasks running on the same server.
> > > Moreover, there could be n servers in the system.
> > >
> > > I want to create a all server health reports which displays status of the
> > > services and tasks per server. Each server info comes on a separate page.
> > >
> > > I tried to do this using data region. But the problem is data region are
> > > bound to one dataset, which doesnt work in my case.
> > >
> > > Please help me out in creating this report
> >
> >
> >

Monday, March 26, 2012

How to show all tables info in Task Pad?

In the table view in the Task Pad view, it lists the number of rows and the size of each table, which is great; however, it only lists the first 25 tables or so and there is no scroll function.

1. Does anyone know how I can see this info in Task Pad for all tables, without having to use the search function and look up 200+ tables one-by-one?

2. Does anyone know of another utility or statement to run against the DB which will return this info all at once for all the tables?

Thanks.

Task Pad should show Next and Last options at the bottom of the page.
You can also get all user table info by executing this SQL:

select * from information_schema.tables where table_type like 'BASE TABLE'

|||

First, in Task Pad, there is no Next or Last button. Second, that line of code you gave:

select * from information_schema.tables where table_type like 'BASE TABLE'

Did not return the # of rows and KB size of all of my tables.

This is what I am looking for.

Anyone else know?

I ran the "SP_help" and "SP_tables" stored procedures, but they don't return the table row count or size.|||I'd suggest:SELECT CAST(Coalesce(Sum(si.reserved) / 128.0, 0) AS DECIMAL(5, 2)) AS total_mb
, CAST(Coalesce(Sum(CASE WHEN si.indid IN (0, 1) THEN si.reserved END)
/ 128.0, 0) AS DECIMAL(5, 2)) AS data_mb
, CAST(Coalesce(Sum(CASE WHEN si.indid = 255 THEN si.reserved END)
/ 128.0, 0) AS DECIMAL(5, 2)) AS blob_mb
, CAST(Coalesce(Sum(CASE WHEN si.indid NOT IN (0, 1, 255) THEN
si.reserved END) / 128.0, 0) AS DECIMAL(5, 2)) AS index_mb
, Object_Name(si.id)
FROM dbo.sysindexes AS si
GROUP BY si.id-PatP|||Pat,

What does that code do. Here was my output:

(20 row(s) affected)

Server: Msg 8115, Level 16, State 8, Line 1
Arithmetic overflow error converting numeric to data type numeric.
Warning: Null value is eliminated by an aggregate or other SET operation.

|||

sp_spaceused?

EDIT: Found this...

USE Northwind
GO

SET NOCOUNT ON
GO

CREATE TABLE #SpaceUsed (
  [name] varchar(255)
  , [rows] varchar(25)
  , [reserved] varchar(25)
  , [data] varchar(25)
  , [index_size] varchar(25)
  , [unused] varchar(25)
)
GO

DECLARE @tablename nvarchar(128)
  , @maxtablename nvarchar(128)
  , @cmd nvarchar(1000)
SELECT @tablename = ''
  , @maxtablename = MAX(name)
FROM sysobjects
WHERE xtype = 'u'

WHILE @tablename < @maxtablename
BEGIN
  SELECT @tablename = MIN(name)
  FROM sysobjects
  WHERE xtype = 'u' and name > @tablename

  SET @cmd = 'exec sp_spaceused [' + @tablename + ']'
  INSERT INTO #SpaceUsed EXEC sp_executesql @cmd
END

SET NOCOUNT OFF
GO

SELECT * FROM #SpaceUsed
GO

DROP TABLE #SpaceUsed
GO

|||

Pat,

What does that code do? Here was my output:

(20 row(s) affected)

Server: Msg 8115, Level 16, State 8, Line 1
Arithmetic overflow error converting numeric to data type numeric.
Warning: Null value is eliminated by an aggregate or other SET operation.

Change the 5s to 15s and try again.

It shows some interesting space observations, by table.

-PatP

|||

Pat,

That returned data, but the data_mb figures seem to be close to half of the actual size. For example, the size of a table from TaskPad is 79656 KB and your query generates 38.94 MB.

Is this what is expected? Is the data_mb column the table size?

Thanks for the help, it is greatly appreciated.

|||

Brett,

Wonderful!!!!!!!!!!

That was it!!!!!!

Thanks a million!!!!!!

|||

Does my total match the Task Pad total?

-PatP

Friday, March 23, 2012

How to setup profile, roles, etc in my hosted sqlserver?

I've been working with godaddy.com, and when I set up the SQL Server databases there, it asks me if I want to create the tables and stored procedures that make up the typical profile, roles and users features... the same as if I executed aspnet_regsql.exe.

Now I'm working with another hoster (Aruba) and I've set up the database OK, but I didn't find any way to set up the profile, roles, users, etc. features.
Aruba uses the MS SQL Server web admin.

How can I set up these features? Of course, Aruba doesn't let me execute aspnet_regsql.exe...

thanks

No idea??

I think it's an important problem, isn't it??

ASP.NET 2.0 without the specific tables and SPs for users, roles, profiles, etc. isn't as powerful!!!

Please, an answer!!!

|||

I would ask Aruba to do it for you. I'm sure they have a way if they support hosting ASP.NET 2.0 apps.

|||

Yes Motley, I'm doing it... the problem is that Aruba isn't good enough at solving things like this :)

On the other hand, from your post I see that there is no way to do it by myself, is there?

thanks!
|||

Can you run scripts against the database? All the SQL scripts for the ASP.NET application services are in the %windir%\microsoft.net\framework\v2.0.50727 directory.

|||

Wow, I didn't realize they gave us the .SQL files. I guess I just assumed they were encoded/encrypted in the aspnet_regsql.exe file. Very nice.

|||

OK, I've found those .sql files and they can be run against the database.

They are:
- InstallCommon.sql
- InstallMembership.sql
- InstallPersistSqlState.sql
- InstallPersonalization.sql
- InstallProfile.sql
- InstallRoles.sql
- InstallSqlState.sql
- InstallSqlStateTemplate.sql
- InstallWebEventSqlProvider.sql

Should I run all??

On the other hand, I've tried the first one and I've found a lot of problems. The first was the name of the database, which I've changed, but the next was this error:

"Specified owner name 'dbo' either does not exist or you do not have permission to use it."

What should I do? Remove all references to dbo?

Thanks in advance.

|||

Yes, remove all references to dbo.
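
For example, assuming the host lets you connect with SQL Server 2005's sqlcmd tool, the scripts can be run like this (server, database and login names are placeholders); InstallCommon.sql should go first, since the other scripts depend on it:

sqlcmd -S myserver -d mydatabase -U myuser -P mypassword -i InstallCommon.sql
sqlcmd -S myserver -d mydatabase -U myuser -P mypassword -i InstallMembership.sql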

How to setup Parent>>Child>>Child relationship...?

Hello,

SQL newbie looking for some advice. I have created the three tables below. XXParent is the master table; XXParentChild is the child table of XXParent and it should have a one-to-many relation to its parent. XXParentChildChild is the child table of XXParentChild, and it will likewise have a one-to-many relation to XXParentChild. In effect, one XXParent row can have many XXParentChild rows assigned to it and one XXParentChild row can have many XXParentChildChild rows assigned to it.

What I'm missing is how to create the table so that once I've entered a row in XXParent, I can insert multiple rows in XXParentChild and subsequently insert multiple rows in XXParentChildChild for each of its parent rows, while maintaining referential integrity.

First, I'm not sure what record ID style to use, whether IDENTITY, UNIQUEIDENTIFIER, etc.
Second, I'm not sure how to set up the FKs and relationships between the tables.

Any advice appreciated greatly!!

Thanks in advance!

CREATE TABLE [XXParent] (
  [XXSuiteID] [int] IDENTITY (1, 1) NOT NULL ,
  [XXDateRun] [datetime] NULL ,
  [XXStartTime] [datetime] NULL ,
  [XXEndTime] [datetime] NULL ,
  [XXsSucceeded] [int] NULL ,
  [XXsWarned] [int] NULL ,
  [XXsFailed] [int] NULL ,
  [XXMachine] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [XXClientMachine] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [XXLogin] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [XXLabel] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  CONSTRAINT [PK_XXSuite] PRIMARY KEY CLUSTERED
  (
    [XXSuiteID]
  ) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [XXParentChild] (
  [XXSuiteID] [int] NOT NULL ,
  [XXID] [int] IDENTITY (1, 1) NOT NULL ,
  [XXIDInternal] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [XXName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [XXDescription] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [XXTier] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [XXNo] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [XXStart] [datetime] NULL ,
  [XXEnd] [datetime] NULL ,
  [XXWFBTime] [datetime] NULL ,
  [XXWFBCalled] [int] NULL ,
  [XXSearches] [int] NULL ,
  [XXSearchesTime] [datetime] NULL ,
  [XXResult] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

CREATE TABLE [XXParentChildChild] (
  [XXID] [int] NOT NULL ,
  [XXMssgType] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [XXMessage] [varchar] (8000) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO

Answered my own question:

CREATE TABLE XXParent (
  XXSuiteID int IDENTITY (1, 1) NOT NULL,
  XXDateRun datetime NULL ,
  XXStartTime datetime NULL ,
  XXEndTime datetime NULL ,
  XXsSucceeded int NULL ,
  XXsWarned int NULL ,
  XXsFailed int NULL ,
  XXMachine varchar (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  XXClientMachine varchar (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  XXLogin varchar (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  XXLabel varchar (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  CONSTRAINT PK_XXSuite PRIMARY KEY CLUSTERED
  (
    XXSuiteID
  ) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE XXParentChild (
  XXID int IDENTITY (1, 1) NOT NULL PRIMARY KEY ,
  XXSuiteID int NOT NULL ,
  XXIDInternal varchar (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  XXName varchar (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  XXDescription varchar (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  XXTier text COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  XXNo varchar (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  XXStart datetime NULL ,
  XXEnd datetime NULL ,
  XXWFBTime datetime NULL ,
  XXWFBCalled int NULL ,
  XXSearches int NULL ,
  XXSearchesTime datetime NULL ,
  XXResult varchar (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
  FOREIGN KEY (XXSuiteID) REFERENCES XXParent(XXSuiteID)
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

CREATE TABLE XXParentChildChild (
  XXCHILDID int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
  XXID int NOT NULL ,
  XXMssgType varchar (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  XXMessage varchar (8000) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
  FOREIGN KEY (XXID) REFERENCES XXParentChild(XXID)
) ON [PRIMARY]
GO
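
With those foreign keys in place, rows have to be inserted top-down, capturing each generated key along the way; a minimal sketch (the column values are made up for illustration):

DECLARE @SuiteID int, @ChildID int

-- 1) Insert the parent row and capture its identity value
INSERT INTO XXParent (XXDateRun) VALUES (GETDATE())
SET @SuiteID = SCOPE_IDENTITY()

-- 2) Insert a child row pointing at the parent
INSERT INTO XXParentChild (XXSuiteID, XXName) VALUES (@SuiteID, 'Child A')
SET @ChildID = SCOPE_IDENTITY()

-- 3) Insert a grandchild row pointing at the child
INSERT INTO XXParentChildChild (XXID, XXMssgType, XXMessage)
VALUES (@ChildID, 'Info', 'Message for Child A')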

How to set user who can't modify any StoredProc and data in Tables but can read data in Tables

Please give me advice????

Go into Management Studio, connect to your database and go to Security --> Users in the tree view. Right-click and select New User. Create the user based on the relevant login and then go to the Securables option, where you can add tables and stored procedures and select which rights the user will have.
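
For illustration, the same setup can be scripted in T-SQL; a minimal sketch assuming a login named ReadOnlyUser already exists and all objects live in the dbo schema:

-- Create the database user for the existing login
CREATE USER ReadOnlyUser FOR LOGIN ReadOnlyUser

-- Allow reading every table in the dbo schema
GRANT SELECT ON SCHEMA::dbo TO ReadOnlyUser

-- Block data changes and object modifications
DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO ReadOnlyUser
DENY ALTER ON SCHEMA::dbo TO ReadOnlyUser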

Wednesday, March 21, 2012

how to setup db replication with objects owned by user other than dbo?

Hi, I was hoping someone could tell me how I am to replicate a database
between servers. Many of the tables, stored procedures and views are
owned by a user (e.g. USER1). The version running is SQL 2000. Let's take for
instance snapshot replication from Server1 to Server2.
Now I know that many people are having problems with replicating a database
and ending up realising that the tables, stored procs and views are now
owned by dbo. I want the replicated database objects owned by USER1. What do
I have to do? Do I:
1) create USER1 on Server2,
2) and then start snapshot replication from Server1 assigning "Publication
properties > Articles > Article Defaults > Destination Table Owner" = USER1,
3) then move across USER1's permissions to the replicated objects (not sure
how to proceed with this, I think I have to use ...)?
Or do I:
1) create USER1 on Server2,
2) use DTS to create empty table structures on Server2,
3) and then assign USER1 the permissions?
I am somewhat confused because USER1's permissions are dependent on the
tables being defined, and the tables are dependent on USER1 having ownership
of them. I don't know what to set up first.
Simply stated, how do I "set up replication from scratch for a
database with objects owned by a user other than dbo"?
Could anyone direct me?
Thanks!
Peter
PeterGenerate SQL Script of login,
Execute it on new server (create login + permissions +
add_role logins... )
then DTS the objects of the server (what you want) to the
other server...
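
For illustration, if the objects still end up owned by dbo on the subscriber, ownership can be reassigned afterwards; a minimal SQL 2000 sketch (the table name is a placeholder, and this assumes a USER1 login already exists on Server2):

-- Give the login access to the subscriber database, then transfer
-- ownership of each replicated object to USER1
EXEC sp_grantdbaccess 'USER1'
EXEC sp_changeobjectowner 'dbo.MyTable', 'USER1'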
>--Original Message--
>Hi, I was hoping someone could tell me how I am to replicate a database
>between servers. Many of the tables, stored procedures and views are
>owned by a user (e.g. USER1). The version running is SQL 2000. Let's take for
>instance snapshot replication from Server1 to Server2.
>Now I know that many people are having problems with replicating a database
>and ending up realising that the tables, stored procs and views are now
>owned by dbo. I want the replicated database objects owned by USER1. What do
>I have to do? Do I:
>1) create USER1 on Server2,
>2) and then start snapshot replication from Server1 assigning "Publication
>properties > Articles > Article Defaults > Destination Table Owner" = USER1,
>3) then move across USER1's permissions to the replicated objects (not sure
>how to proceed with this, I think I have to use ...)?
>Or do I:
>1) create USER1 on Server2,
>2) use DTS to create empty table structures on Server2,
>3) and then assign USER1 the permissions?
>I am somewhat confused because USER1's permissions are dependent on the
>tables being defined, and the tables are dependent on USER1 having ownership
>of them. I don't know what to set up first.
>Simply stated, how do I "set up replication from scratch for a
>database with objects owned by a user other than dbo"?
>Could anyone direct me?
>Thanks!
>Peter
>

How to setup a repeated update to a table in SQL?

Hi:

I am fairly new to SQL Server 2005 and until now I have only had to restore databases and connect to tables via ODBC in a reference (read-only) setup. Today, I have a very small project to set up using the server.

I have a userlogon.csv file that the network stores on a file server in a hidden share \logon$. It has 4 columns, UserID, Computer, Date, Time.

I was able to create a database called UserLogon and import the file as it was today. I want to create a scheduled update so the server goes to this file perhaps 4 times a day (or more) and grabs any new logins that have been appended to this CSV file.

So, as a newbie with a 1,900 page SQL Server 2005 unleashed manual at my side, could someone outline what the steps are in general I should follow to set this up?

I have the process laid out in my mind, but I don't know how to translate it into a scheduled task on the SQL Server:

1. Create DB and import the table (done)

2. create a stored procedure that connects to the CSV file, evaluates the date and time stamps, then appends any new records into the SQL db table. (Appending records would be achieved by using INSERT and WHERE statements?)

3. Schedule a job to perform this task on a routine basis.

It appears that the file connection portion of this setup is defined outside the evaluate-and-append procedure (not in the same stored procedure)? Perhaps I tie the whole process together using the Job Manager, selecting the file settings and then the stored procedure to be performed on the file?

I hope I have been descriptive enough to ask if someone could outline the modules/features/processes involved so I can read up on them and figure them out using the book.

Thank you in advance.

David

David:

It sounds to me like you need a good book on the subject of "Database Design". I would suggest Pro SQL Server 2005 Database Design and Optimization by MVP Louis Davidson.

Maybe since you are just getting started, a better choice for a first book would be Data Modeling Essentials by Graeme Simsion.

Kent
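
For what it's worth, a minimal sketch of steps 2 and 3 as laid out in the question (the share path comes from the post; the staging table, column types and duplicate check are assumptions):

-- Step 2: load the whole CSV into a staging table, then append only
-- rows not already in the UserLogon table
CREATE PROCEDURE dbo.LoadUserLogons
AS
BEGIN
    CREATE TABLE #Staging (UserID varchar(50), Computer varchar(50),
                           LogonDate varchar(20), LogonTime varchar(20))

    BULK INSERT #Staging
    FROM '\\fileserver\logon$\userlogon.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

    INSERT INTO dbo.UserLogon (UserID, Computer, LogonDate, LogonTime)
    SELECT s.UserID, s.Computer, s.LogonDate, s.LogonTime
    FROM #Staging s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.UserLogon u
                      WHERE u.UserID = s.UserID AND u.Computer = s.Computer
                        AND u.LogonDate = s.LogonDate AND u.LogonTime = s.LogonTime)

    DROP TABLE #Staging
END

Step 3 is then just a SQL Server Agent job with one T-SQL step, EXEC dbo.LoadUserLogons, scheduled four times a day.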

Monday, March 19, 2012

How to set up related tables

I have a bunch of data related to a property. On that property, we have
several measurement requirements to perform several calculations. For
instance, I may have data that currently am planning to set up as follows:
PropertyInformation table:
PropertyAddressID, ResidentialPropertyTypeID, PropertyOwner,
CorporatePropertyTypeID
PropertyMeasurement table
PropertyInformationID
Measurement1 (related to Corporate property only)
Measurement 2 (related to Corporate property only)
Measurement 3 (related to Residential property only)
Measurement 4 (related to Residential property only)
Measurement 5 (related to both Corporate and Residential)
Measurement 6 (related to both Corporate and Residential)
Calculation table:
PropertyMeasurementID (FK)
Total (result of Measurements 2, 4, 5, 6)
Date (date calculation done)
In order to perform the final calculations, we must have one residential
property type and one corporate type (which come from other lookup tables).
I originally was going to base the 2 latter tables on the
PropertyInformation; however, we may not have any property information
whatsoever when the measurements are recorded, all we know is we have one
ResidentialPropertyType and one CorporatePropertyType, but we may not even
know what that type is. My thought is to force an "Unknown" for property
type for the ResidentialPropertyTypeID (the CorporatePropertyTypeID is the
only information we collect on the property, all other fields in that table
relate to the ResidentialProperty, but the combination of the Corporate and
Residential measurements are required for the calculations). And we may
have Property data but no calculations are being done at this time.
My question is not really what to do with the PropertyInformation table
(although I'm open to suggestions), it's what to do with the Measurement
information -- should I leave all measurement information in one table,
requiring 2 fields for Measurements 5 and 6 because there will be
information on both, or should I split out the CorporateMeasurements and the
ResidentialMeasurements? I'm concerned that down the road the measurement
information will change, and it will become more important which property type
it came from. If I split it out, then my calculations become more
complicated -- do I relate each calculation to the ID of both tables, or do
I fall back on the PropertyInformation to begin with? Or should I split out
only the duplicated fields, then the calculation table would still need to
pull information from both measurement tables...?
I hope this all makes sense, I suspect I may simply have to choose one way
and then stick with it. Any thoughts?

I'd set up the PropertyMeasurement table like this:
CREATE TABLE PropertyMeasurements (
  PropertyInformationID INTEGER NOT NULL
    REFERENCES PropertyInformation ,
  MeasureTypeID TINYINT NOT NULL
    REFERENCES MeasureTypes ,
  Measure SMALLINT NOT NULL ,
  CONSTRAINT PK_PM PRIMARY KEY (PropertyInformationID, MeasureTypeID)
)
I'm not sure where the PropertyInformationID came from, but guessed the
PropertyInformation table. The PK allows only one MeasureType per
PropertyInformationID. If you want to allow more than one MeasureType
per PropertyInformationID you will have to add another attribute to the
table, like a date column.
You didn't say what type of metric unit would be used for the Measure so
I just put SMALLINT. Change to whatever data type you require.
To allow the further expansion of measurement types you should have a
table of MeasureTypes:
CREATE TABLE MeasureTypes (
  MeasureTypeID TINYINT IDENTITY(1,1) UNIQUE ,
  MeasureTypeName VARCHAR(20) NOT NULL PRIMARY KEY
)
I wouldn't have a calculation table; rather, I'd have a view or stored
procedure to return those calculations.
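
A minimal sketch of such a view, assuming (per the original Calculation table) that measure types 2, 4, 5 and 6 are the ones that feed the total:

CREATE VIEW PropertyCalculations
AS
SELECT PropertyInformationID ,
       Sum(Measure) AS Total
FROM PropertyMeasurements
WHERE MeasureTypeID IN (2, 4, 5, 6)
GROUP BY PropertyInformationID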
MGFoster:::mgf00 <at> earthlink <decimal-point> net
Oakland, CA (USA)
Iams wrote:
> I have a bunch of data related to a property. On that property, we have
> several measurement requirements to perform several calculations. For
> instance, I may have data that currently am planning to set up as follows:
> PropertyInformation table:
> PropertyAddressID, ResidentialPropertyTypeID, PropertyOwner,
> CorporatePropertyTypeID
> PropertyMeasurement table
> PropertyInformationID
> Measurement 1 (related to Corporate property only)
> Measurement 2 (related to Corporate property only)
> Measurement 3 (related to Residential property only)
> Measurement 4 (related to Residential property only)
> Measurement 5 (related to both Corporate and Residential)
> Measurement 6 (related to both Corporate and Residential)
> Calculation table:
> PropertyMeasurementID (FK)
> Total (result of Measurements 2, 4, 5, 6)
> Date (date calculation done)
> In order to perform the final calculations, we must have one residential
> property type and one corporate type (which come from other lookup tables).
> I originally was going to base the 2 latter tables on the
> PropertyInformation; however, we may not have any property information
> whatsoever when the measurements are recorded; all we know is we have one
> ResidentialPropertyType and one CorporatePropertyType, but we may not even
> know what that type is. My thought is to force an "Unknown" for property
> type for the ResidentialPropertyTypeID (the CorporatePropertyTypeID is the
> only information we collect on the property; all other fields in that table
> relate to the ResidentialProperty, but the combination of the Corporate and
> Residential measurements are required for the calculations). And we may
> have Property data but no calculations are being done at this time.
> My question is not really what to do with the PropertyInformation table
> (although I'm open to suggestions), it's what to do with the Measurement
> information -- should I leave all measurement information in one table,
> requiring 2 fields for Measurements 5 and 6 because there will be
> information on both, or should I split out the CorporateMeasurements and the
> ResidentialMeasurements? I'm concerned that down the road the measurement
> information will change, and it will become more important which property type
> it came from. If I split it out, then my calculations become more
> complicated -- do I relate each calculation to the ID of both tables, or do
> I fall back on the PropertyInformation to begin with? Or should I split out
> only the duplicated fields; then the calculation table would still need to
> pull information from both measurement tables...?
> I hope this all makes sense, I suspect I may simply have to choose one way
> and then stick with it. Any thoughts?

How to set up my tables

Hi everyone,

I am very new to SQL Server and Visual Basic. I am trying to put together a football game. I need to set up a database full of teams for the game to choose from and match up against one another. I need multiple teams, each with individual players, positions and ratings. No matter what I do, I just can't seem to get this set up the way I want. I want to be able to pick the team from a combobox and have the players and ratings for each player appear in the textboxes I have provided. I know this shouldn't be very hard, but I have been stuck on this for months now. Please help!

P.S. I do know how to basically set up a database, assign textboxes to database values and all that. I just basically can't figure out the relationships of tables needed in the database to make this work. Thanks again!

You want 2 tables:

Table 1: tblTeams

-

TeamName Primary Key

TeamID Primary Key identity

Table 2: tblPlayers

-

PlayerName Primary Key

TeamID

Ratings

Once you have the tables designed, the dropdown can ask the team and display the players or display the player to identify the team.

This will display all players on whichever team is selected

SELECT p.PlayerName FROM tblTeams t

JOIN tblPlayers p on t.TeamID = p.TeamID

WHERE t.TeamName = txtTeam.text

This will display the team that the player is on:

SELECT t.TeamName FROM tblTeams t

JOIN tblPlayers p on t.TeamID = p.TeamID

WHERE p.PlayerName = txtPlayer.text

Hope this helps,

Adamus

|||

I'll try that out, thanks.

|||

Adding to Adam's presentation: if this is a multi-user 'fantasy football' type simulation, you will have multiple users creating 'their' individual teams. As well as each team containing multiple players, a particular player can exist on multiple teams.

In that situation, it may be helpful to allow players to be selected onto multiple teams, so in addition to Teams and Players, a Team-Player table would likely be useful; a sketch follows.
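
A minimal sketch of that junction table, reusing the names above (the data types are assumptions, and TeamID would then come out of tblPlayers):

CREATE TABLE tblTeamPlayers (
  -- assumes TeamID is the key of tblTeams and PlayerName the key of tblPlayers
  TeamID int NOT NULL REFERENCES tblTeams (TeamID) ,
  PlayerName varchar(50) NOT NULL REFERENCES tblPlayers (PlayerName) ,
  CONSTRAINT PK_TeamPlayers PRIMARY KEY (TeamID, PlayerName)
)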

Friday, March 9, 2012

How to set relationship level when creating diagram?

Let's say I have 3 levels of relational tables:

Level 1 is the master table

Level 2 has 10 tables, all of which are child tables of level 1

Level 3 has 100 tables, all of which are children of tables in level 2

And I am trying to use MS SQL 2005 to create a diagram.

In SQL 2000 I used to be able to set the relationship level. For example, if I set the level = 0 it will only add the master table. If I set the level = 1 it will add the master table and the 10 tables linked to the master table... and so on and so forth.

But now in SQL 2005, when I choose "Add related tables", all of those 112 tables end up squashed onto a tiny 17" screen and it becomes very unmanageable.

Is there any way I can set the relationship level again in SQL2005?

It appears that we did not implement this behavior. Could you file this as an issue on Microsoft Connect? http://connect.microsoft.com/SQLServer/. We will triage it and let you know if/when we can put it into a future release of the product.

Thanks,

Paul A. Mestemaker II
Program Manager
Microsoft SQL Server Manageability
http://blogs.msdn.com/sqlrem/

Wednesday, March 7, 2012

How to set FILLFACTOR

I need some help on how to improve performance when loading tables.
We have a datawarehouse and sometimes it is needed that the entire
datawarehouse is refreshed (like when test is refreshed from production).
The datawarehouse tables are refreshed by a set of SQL statements. Take for
example a COMPANY table: the data is denormalized, with the address
and user information stored along with company information. In order to do
that, initially company information is populated and then address & user
information is updated into it. In this entire process, all indexes except
the clustered index are dropped at the beginning of the process and recreated
at the end of the data load. In order to improve the process, if I set the
FILLFACTOR to 10 or 20 on the clustered index, will it really help? I
understand from the documentation that FILLFACTOR is used only when the
index is created, so if I am loading data into an existing table, FILLFACTOR will
be of no use, correct?
Is there anything else that I can do to improve the performance of such
table loads? All loads are done by first inserting core data and then updating
the related data.
The database is already set to SIMPLE mode.
-Nags
best way:
delete all indexes, fill the table, recreate the indexes.
second option:
disable constraint validation, load the table, defrag your indexes after
loading the table.
Disabling constraints like foreign key checks etc... improves
performance, but the index creation time degrades performance. (but make
sure you load valid data in your table)
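
For illustration, a minimal sketch of the first option (index, table and column names are placeholders; note that FILLFACTOR only takes effect when an index is built, which is exactly when these CREATE INDEX statements run):

-- before the load
DROP INDEX COMPANY.IX_Company_Name

-- ... load / update the data ...

-- after the load; FILLFACTOR = 100 packs pages full, which suits a
-- read-mostly warehouse table
CREATE INDEX IX_Company_Name ON COMPANY (CompanyName) WITH FILLFACTOR = 100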
"Nags" <nags@.RemoveThishotmail.com> wrote in message
news:ufTYvko7EHA.2124@.TK2MSFTNGP14.phx.gbl...
>I need some help on how to improve performance when loading tables.
> We have a datawarehouse and sometimes it is needed that the entire
> datawarehouse is refreshed. (like when test is refreshed from production)
> The datawarehouse tables are refreshed by a set of sql statements. Take
> for
> example there is a COMPANY table, the data is denormalized with the
> address,
> and user information stored along with company information. In order to do
> that, initiallly company information is populated and then address & user
> information is updated to it. In this entire process, all indexes except
> the clustered index is dropped at the beginning of the process and
> recreated
> at the end of the data load. In order to improve the process, if I set
> the
> FILLFACTOR to 10 or 20 on the clustered index, will it really help ? I
> understand from the documentation that FILLFACTOR is used only when the
> index is created, so if I loading data to an existing table, FILLFACTOR
> will
> be of no use, correct ?
> Is there anything else that I can do to imporve the performance of such
> table loads. All loads are with first inserting core data and then
> updating
> the related data.
> The database is already set to SIMPLE mode.
> -Nags
>
|||

None of the datawarehouse tables have any constraints, only indexes.
This is what we do:
Drop all indexes (except clustered indexes)
Load data
Recreate all indexes.
The total process itself takes about 20 hours. Looking for options to
improve performance.
-Nags
"Jj" <willgart@.BBBhotmailAAA.com> wrote in message
news:OnjM5Up7EHA.2540@.TK2MSFTNGP09.phx.gbl...
> best way:
> delete all indexes, fill the table, recreate the indexes.
> second option:
> disable constraint validation, load the table, defrag your indexes after
> loading the table
> disabling constraints like foreign key checks etc... improves
> performance, but the index creation time degrades performance. (but make
> sure you load valid data in your table)
> "Nags" <nags@.RemoveThishotmail.com> wrote in message
> news:ufTYvko7EHA.2124@.TK2MSFTNGP14.phx.gbl...
>
|||

Hi Nags
What utility do you use to load the data into the tables?
In order of load performance, the following are the best utilities for SQL
2000:
- BULK INSERT command
- DTS data pump with the correct settings
- BCP
Though you would expect BCP to perform very well, you can gain better
performance by using the BULK INSERT command - it seems that this command
uses the netlib to bulk insert whereas BCP uses OLEDB.
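
For illustration, a minimal BULK INSERT sketch (the file path, table name and terminators are placeholders):

BULK INSERT dbo.COMPANY
FROM 'D:\loads\company.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK)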
Yours sincerely
Thomas Kejser
|||

I use procedures to load the data and I use SQL commands, like INSERT &
UPDATE statements.
-Nags
"Thomas Kejser" <thomas@.kejser.org> wrote in message
news:OMA0vDN8EHA.824@.TK2MSFTNGP11.phx.gbl...
> Hi Nags
> What utility do you use to load the data into the tables?
> In order of load performance the follwing are the best utilities for SQL
> 2000
> - BULK INSERT command
> - DTS data pump with the correct settings
> - BCP
> Though you would expect BCP to perform very good you can gain better
> performance by using the BULK INSERT command - it seems that this command
> uses the netlib to bulk insert whereas BCP uses OLEDB
> Yours sincerely
> Thomas Kejser
>
>
|||

I forgot to mention: the source data is being brought in through replicated
tables.
-Nags
"Nags" <nags@.RemoveThishotmail.com> wrote in message
news:Ol09ZHa8EHA.2572@.tk2msftngp13.phx.gbl...[vbcol=seagreen]
> I use procedures to load the data and I use SQL Commands, like INSERT &
> UPDATE statements.
> -Nags
> "Thomas Kejser" <thomas@.kejser.org> wrote in message
> news:OMA0vDN8EHA.824@.TK2MSFTNGP11.phx.gbl...
command
>
|||

how many rows are inserted?
Standard INSERTs ALWAYS use the log and this slows down your system. It's not
recommended when you have to load a large amount of data.
Bulk inserts allow you to load your data in batches, for example 50,000
rows at a time.
Another thing to watch: your HD config! Are you on RAID? (which one; how many
drives?)
And how is your HD buffer configured?
Do you have a dedicated HD RAID for your tempdb database?
Your bottleneck can come from another hardware limitation (not enough memory,
slow proc...)
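
For illustration, a minimal SQL 2000 sketch of batching an INSERT ... SELECT by 50,000 rows (table and column names are placeholders, and the key comparison is an assumption):

SET ROWCOUNT 50000
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.Company_DW (CompanyID, CompanyName)
    SELECT s.CompanyID, s.CompanyName
    FROM dbo.Company_Src s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Company_DW d
                      WHERE d.CompanyID = s.CompanyID)
    -- stop once a batch inserts nothing
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0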
"Nags" <nags@.RemoveThishotmail.com> wrote in message
news:Ol09ZHa8EHA.2572@.tk2msftngp13.phx.gbl...
>I use procedures to load the data and I use SQL Commands, like INSERT &
> UPDATE statements.
> -Nags
> "Thomas Kejser" <thomas@.kejser.org> wrote in message
> news:OMA0vDN8EHA.824@.TK2MSFTNGP11.phx.gbl...
>
|||

All the data comes from another set of tables, so we do not use BULK
INSERT at all.
-Nags
"Jj" <willgart@.BBBhotmailAAA.com> wrote in message
news:#B9J2Cb8EHA.2876@.TK2MSFTNGP12.phx.gbl...
> how many rows are inserted?
> Standard INSERTs ALWAYS use the log and this slows down your system. It's not
> recommended when you have to load a large amount of data.
> Bulk inserts allow you to load your data in batches, for example 50,000
> rows at a time.
> Another thing to watch: your HD config! Are you on RAID? (which one; how many
> drives?)
> And how is your HD buffer configured?
> Do you have a dedicated HD RAID for your tempdb database?
> Your bottleneck can come from another hardware limitation (not enough memory,
> slow proc...)
> "Nags" <nags@.RemoveThishotmail.com> wrote in message
> news:Ol09ZHa8EHA.2572@.tk2msftngp13.phx.gbl...
>
|||

Hi Nags
If you copy the data from another table, using a SELECT INTO statement is a
VERY fast way to move it. Try this:

SELECT col1, col2, ..., coln
INTO TargetTable
FROM SourceTable

Be aware that SELECT INTO creates the TargetTable, so you must delete it
first if it already exists.
Hope that helps
Yours sincerely
Thomas Kejser
M.Sc, MCDBA
"Nags" <nags@.RemoveThishotmail.com> wrote in message
news:eLve5Nb8EHA.2180@.TK2MSFTNGP12.phx.gbl...
> All the data comes from another set of tables. So, we do not use BULK
> INSERT at all.
> -Nags
> "Jj" <willgart@.BBBhotmailAAA.com> wrote in message
> news:#B9J2Cb8EHA.2876@.TK2MSFTNGP12.phx.gbl...
> not
> many
> SQL
>

How to set FILLFACTOR

I need some help on how to improve performance when loading tables.
We have a datawarehouse and sometimes it is needed that the entire
datawarehouse is refreshed. (like when test is refreshed from production)
The datawarehouse tables are refreshed by a set of sql statements. Take for
example there is a COMPANY table, the data is denormalized with the address,
and user information stored along with company information. In order to do
that, initiallly company information is populated and then address & user
information is updated to it. In this entire process, all indexes except
the clustered index is dropped at the beginning of the process and recreated
at the end of the data load. In order to improve the process, if I set the
FILLFACTOR to 10 or 20 on the clustered index, will it really help ? I
understand from the documentation that FILLFACTOR is used only when the
index is created, so if I loading data to an existing table, FILLFACTOR will
be of no use, correct ?
Is there anything else that I can do to imporve the performance of such
table loads. All loads are with first inserting core data and then updating
the related data.
The database is already set to SIMPLE mode.
-Nagsbest way:
delete all indexes, fill the table, recreate the indexes.
second option:
disable constraint validation, load the table, defrag your indexes after
loading the table
disabling the constraint like foreign key check etc... improove the
performance, but the index creation time degrade the performance. (but make
sure you load valid data in your table)
"Nags" <nags@.RemoveThishotmail.com> wrote in message
news:ufTYvko7EHA.2124@.TK2MSFTNGP14.phx.gbl...
>I need some help on how to improve performance when loading tables.
> We have a datawarehouse and sometimes it is needed that the entire
> datawarehouse is refreshed. (like when test is refreshed from production)
> The datawarehouse tables are refreshed by a set of sql statements. Take
> for
> example there is a COMPANY table, the data is denormalized with the
> address,
> and user information stored along with company information. In order to do
> that, initiallly company information is populated and then address & user
> information is updated to it. In this entire process, all indexes except
> the clustered index is dropped at the beginning of the process and
> recreated
> at the end of the data load. In order to improve the process, if I set
> the
> FILLFACTOR to 10 or 20 on the clustered index, will it really help ? I
> understand from the documentation that FILLFACTOR is used only when the
> index is created, so if I loading data to an existing table, FILLFACTOR
> will
> be of no use, correct ?
> Is there anything else that I can do to imporve the performance of such
> table loads. All loads are with first inserting core data and then
> updating
> the related data.
> The database is already set to SIMPLE mode.
> -Nags
>|||All the datawarehouse tables do not have any constraints. Only Indexes.
This is what we do :
Drop all indexes (except clustered indexes)
Load data
Recreate all indexes.
The total process itself take about 20 hours. Looking for options to
improve performance.
-Nags
"Jj" <willgart@.BBBhotmailAAA.com> wrote in message
news:OnjM5Up7EHA.2540@.TK2MSFTNGP09.phx.gbl...
> best way:
> delete all indexes, fill the table, recreate the indexes.
> second option:
> disable constraint validation, load the table, defrag your indexes after
> loading the table
> disabling the constraint like foreign key check etc... improove the
> performance, but the index creation time degrade the performance. (but
make
> sure you load valid data in your table)
> "Nags" <nags@.RemoveThishotmail.com> wrote in message
> news:ufTYvko7EHA.2124@.TK2MSFTNGP14.phx.gbl...
production)[vbcol=seagreen]
do[vbcol=seagreen]
user[vbcol=seagreen]
except[vbcol=seagreen]
|||Hi Nags,
What utility do you use to load the data into the tables?
In order of load performance, the following are the best utilities for SQL
2000:
- BULK INSERT command
- DTS data pump with the correct settings
- BCP
Though you would expect BCP to perform very well, you can gain better
performance with the BULK INSERT command - it seems that BULK INSERT uses
the netlib to bulk insert, whereas BCP uses OLE DB. A minimal example
follows.
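For example, a minimal BULK INSERT call might look like this (the file
path, table name, and terminators are placeholders, not from the original
post):

BULK INSERT dbo.COMPANY
FROM 'C:\load\company.dat'
WITH (
    FIELDTERMINATOR = '|',   -- column delimiter in the data file
    ROWTERMINATOR = '\n',    -- row delimiter
    TABLOCK,                 -- table lock for a faster, minimally logged load
    BATCHSIZE = 50000        -- commit in batches of 50,000 rows
)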
Yours sincerely
Thomas Kejser|||I use procedures to load the data, with SQL commands like
INSERT and UPDATE statements.
-Nags
"Thomas Kejser" <thomas@.kejser.org> wrote in message
news:OMA0vDN8EHA.824@.TK2MSFTNGP11.phx.gbl...
> Hi Nags
> What utility do you use to load the data into the tables?
> In order of load performance the follwing are the best utilities for SQL
> 2000
> - BULK INSERT command
> - DTS data pump with the correct settings
> - BCP
> Though you would expect BCP to perform very good you can gain better
> performance by using the BULK INSERT command - it seems that this command
> uses the netlib to bulk insert whereas BCP uses OLEDB
> Yours sincerely
> Thomas Kejser
>
|||I forgot to mention: the source data is brought in via replicated
tables.
-Nags
"Nags" <nags@.RemoveThishotmail.com> wrote in message
news:Ol09ZHa8EHA.2572@.tk2msftngp13.phx.gbl...
> I use procedures to load the data and I use SQL Commands, like INSERT &
> UPDATE statements.
> -Nags
> "Thomas Kejser" <thomas@.kejser.org> wrote in message
> news:OMA0vDN8EHA.824@.TK2MSFTNGP11.phx.gbl...
command[vbcol=seagreen]
|||How many rows are inserted?
Standard inserts ALWAYS use the log, and this slows down your system; it is
not recommended when you have to load a large amount of data. Bulk inserts
let you load your data in batches, for example 50,000 rows at a time (see
the sketch at the end of this post).
Another thing to check is your disk configuration. Are you on RAID (which
level, and how many drives)? How is the disk cache configured? Do you have
a dedicated RAID array for your tempdb database?
Your bottleneck can also come from another hardware limitation (not enough
memory, a slow processor...).
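As an illustration, one way to batch a large insert on SQL 2000 is with SET
ROWCOUNT (the table and column names here are made up for the sketch):

-- Load TARGET from SOURCE in batches of 50,000 rows,
-- so each transaction (and its log usage) stays small
SET ROWCOUNT 50000
WHILE 1 = 1
BEGIN
    INSERT INTO TARGET (Id, Col1)
    SELECT s.Id, s.Col1
    FROM SOURCE s
    WHERE NOT EXISTS (SELECT 1 FROM TARGET t WHERE t.Id = s.Id)

    IF @@ROWCOUNT = 0 BREAK   -- nothing left to copy
END
SET ROWCOUNT 0                -- reset to unlimited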
"Nags" <nags@.RemoveThishotmail.com> wrote in message
news:Ol09ZHa8EHA.2572@.tk2msftngp13.phx.gbl...
>I use procedures to load the data and I use SQL Commands, like INSERT &
> UPDATE statements.
> -Nags
> "Thomas Kejser" <thomas@.kejser.org> wrote in message
> news:OMA0vDN8EHA.824@.TK2MSFTNGP11.phx.gbl...
|||All the data comes from another set of tables, so we do not use BULK
INSERT at all.
-Nags
"Jj" <willgart@.BBBhotmailAAA.com> wrote in message
news:#B9J2Cb8EHA.2876@.TK2MSFTNGP12.phx.gbl...
> how many rows are inserted?
> standard inserts ALWAYS use the log and this slow down your system. its
not
> recommended when you have to load a large amount of data.
> the bulk inserts allow you to load your data by batch, for example by 50
> 000rows
> another way to watch: your HD config! are you in Raid? (which one; how
many
> drives)
> and how your HD buffer is configured?
> have you a dedicated HD raid for your tempdb database?
> your bottleneck can come another hardware limitation (not enough memory,
> slow proc...)
> "Nags" <nags@.RemoveThishotmail.com> wrote in message
> news:Ol09ZHa8EHA.2572@.tk2msftngp13.phx.gbl...
SQL[vbcol=seagreen]
|||Hi Nags,
Copying the data from another table with a SELECT INTO statement is a VERY
fast way to move data. Try this:
SELECT col1, col2, ..., colN
INTO TargetTable
FROM SourceTable
Be aware that SELECT INTO creates TargetTable, so you must drop it first if
it already exists (see the snippet below).
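For example, a guard like this (using the placeholder table name from
above) drops the table only when it exists:

IF OBJECT_ID('TargetTable') IS NOT NULL
    DROP TABLE TargetTable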
Hope that helps
Yours sincerely
Thomas Kejser
M.Sc, MCDBA
"Nags" <nags@.RemoveThishotmail.com> wrote in message
news:eLve5Nb8EHA.2180@.TK2MSFTNGP12.phx.gbl...
> All the data comes from another set of tables. So, we do not use BULK
> INSERT at all.
> -Nags
> "Jj" <willgart@.BBBhotmailAAA.com> wrote in message
> news:#B9J2Cb8EHA.2876@.TK2MSFTNGP12.phx.gbl...
> not
> many
> SQL
>

How to set duplicate id in sql server 2005?

Hi There,

Someone please help me to achieve this task.

I have a task to join 2 tables and insert values. The table I am inserting values into has a TypeId column, which is the primary key column. I am supposed to insert values for every column except TypeId. When I try to insert the values, it throws an error saying:

Error: Cannot insert the value NULL into column 'TypeId'; column does not allow nulls. INSERT fails.

Please let me know whether there is a way to set the id for these rows.

Thanks in advance.

Don't select the TypeId column in your OLE DB destination.|||I don't understand; how is it that a column is a PK but you don't want to insert any value into it? A PK has to have a value, right?|||

Rafael Salas wrote:

I don't understand; how is it that a column is a PK but you don't want to insert any value into it? A PK has to have a value, right?

It could either be an identity, or he needs to assign a value to it. If the column were an IDENTITY column, for example, SQL Server would assign the value automatically - a sketch follows (the table definition is an assumption; it is not shown in this thread):
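CREATE TABLE tblType (
    TypeId INT IDENTITY(1,1) PRIMARY KEY,  -- value assigned automatically
    TypeName VARCHAR(50)                   -- hypothetical second column
)

-- TypeId is omitted from the column list and filled in by SQL Server
INSERT INTO tblType (TypeName) VALUES ('Example')
|||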

Hi ,

Yes, I want a value to be assigned automatically to TypeId for the inserted rows.

|||See if this helps you. Take a look at my post:
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1211015&SiteID=1|||

Hi Phil,

Thanks for the response. I have gone through your post. Is there any tutorial or sample available for this?

|||

Tamizhan wrote:

Hi Phil,

Thanks for the response. I have gone through your post. Is there any tutorial or sample available for this?


I wrote one just for you... http://www.ssistalk.com/2007/02/20/generating-surrogate-keys/
Also, a simpler version can be found at SQLIS.com; it does not start where the table left off (it always assumes starting at 1). http://www.sqlis.com/37.aspx
|||

Hi Phil,

I have gone through your articles and got the idea of what to do, but I am facing a problem while using these scripts.

I have a number of SQL statements, and I have dropped an Execute SQL Task for each statement. Please find the scripts below.

select MaxKey = case
    when TypeId is null then 0
    else TypeId
end
from tblType


Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

Public Class ScriptMain
    Inherits UserComponent

    Private NextKey As Int32 = 0

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
        ' Grab the value of MaxKey, which was passed in as a variable
        Dim MaximumKey As Int32 = Me.Variables.MaxKey

        ' NextKey is always zero when the package starts,
        ' so seed the counter accordingly on the first row
        If (NextKey = 0) Then
            ' Use MaximumKey + 1 here because the table already has data
            ' and we need to start with the next available key
            NextKey = MaximumKey + 1
        Else
            ' From here on we rely on the counter within this script component
            NextKey = NextKey + 1
        End If

        ' Assign NextKey to the TypeId field on the data row
        Row.TypeId = NextKey
    End Sub

End Class

I followed all your steps as described. Please advise me.

|||Does your query return more than one row? Notice that you aren't selecting the max(typeid) from the table, tblType.

select MaxKey = case
    when TypeId is null then 0
    else TypeId
end
from tblType|||

Hi Phil,

Yes, you are right; it returns more than one row. Please advise how to achieve this.

Please correct the script if there is a problem.

Many thanks in advance.

|||

Tamizhan wrote:

Hi Phil,

Yes, you are right; it returns more than one row. Please advise how to achieve this.

Please correct the script if there is a problem.

Many thanks in advance.

Change your query to select the max(typeid) from the table; your query isn't the same as mine. A corrected version is sketched below.
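For reference, a minimal corrected query, using the table and column names from this thread, might look like:

-- MAX() collapses the table to a single row;
-- ISNULL covers the empty-table case, as the original CASE intended
SELECT MaxKey = ISNULL(MAX(TypeId), 0)
FROM tblType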

Sunday, February 19, 2012

how to separate date & time during export

I would like to export SQL Server 2005 tables to a text file using SSIS. It is easy to do this as a direct export. However, how can I separate the datetime field into two fields: one for the date only and one for the time only?

Sample:

select getdate() ==> 2007-08-30 14:42:11.870

V

select convert(char, getdate(), 111) as date, convert(char, getdate(), 108) as [time]

==> Date = 2007/08/30

==> Time = 14:44:50

Other datetime formats:

century (yy)   century (yyyy)   standard             output format
------------   --------------   ------------------   ---------------------------------
-              0 or 100 (*)     default              mon dd yyyy hh:miAM (or PM)
1              101              USA                  mm/dd/yy
2              102              ANSI                 yy.mm.dd
3              103              British/French       dd/mm/yy
4              104              German               dd.mm.yy
5              105              Italian              dd-mm-yy
6              106              -                    dd mon yy
7              107              -                    mon dd, yy
8              108              -                    hh:mm:ss
-              9 or 109 (*)     default + millisec   mon dd yyyy hh:mi:ss:mmmAM (or PM)
10             110              USA                  mm-dd-yy
11             111              Japan                yy/mm/dd
12             112              ISO                  yymmdd
-              13 or 113 (*)    Europe default + ms  dd mon yyyy hh:mm:ss:mmm(24h)
14             114              -                    hh:mi:ss:mmm(24h)

(*) The default values (style 0 or 100, 9 or 109, and 13 or 113) always
return the century (yyyy).

|||

You have to use a Derived Column transformation in the Data Flow.

So:

1. Add a Data Flow Task to the Control Flow window.

2. Add an OLE DB Source, then a Derived Column, and finally a Flat File Destination, linked with arrows.

3. Double-click the Derived Column and enter the Derived Column Name you want, choose <add as new column> for the Derived Column, then in the expression put (DT_DBTIME)YourDataColumnName, with Data Type database time [DT_DBTIME].

On a second line, enter the Derived Column Name you want, choose Replace 'YourDataColumnName', and in the expression put (DT_DBDATE)YourDataColumnName, with Data Type database date [DT_DBDATE].

|||

I would use these expressions in a derived column:

For Date:

RIGHT("00" + (DT_WSTR,2)MONTH(MyDate),2) + "-" + RIGHT("00" + (DT_WSTR,2)DAY(MyDate),2) + "-" + (DT_WSTR,4)YEAR(MyDate)

For time:

(DT_DBTIME)MyDate
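For the sample value in the question (2007-08-30 14:42:11.870), the date expression should yield 08-30-2007, and (DT_DBTIME)MyDate should yield 14:42:11, since DT_DBTIME drops the fractional seconds.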