
Tuesday, March 27, 2012

Difference between indexing txt or Word doc files, help!

Hi,
I'm putting together a system, and one of the requirements is to have a searchable CV function.
I've got all the code to load the files into the image fields; I've indexed them and got it kind of working.
Before I go too far down the road, what is your opinion on holding txt files instead of doc files in the searched table? SQL seems to be more flexible searching text than binary files, and the index files themselves are smaller, i.e. when I tried a LIKE clause it told me this would only work against a varchar field (I'm thinking this may be a schoolboy error, so forgive me).
My main concern is that I'd have to do the text conversion automatically. Any pointers on this?
Does anyone have any views on the best way to go about this, or on holding and full-text searching Word files?
Many thanks for any help you can give.
Jim Florence
Text means faster indexing times, but not by much. With text you can query the columns and read the contents; you can't do this with binary.
Search SQL is equally flexible with text and binary. You can only do a LIKE against text or char columns.
To do the conversion use filtdump -b (you can get this from the Platform SDK), or you can use OLE automation against the Word documents to extract the text paragraph by paragraph.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
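To make the advice above concrete: with Word files stored in an image column, full-text indexing needs a companion column holding the file extension so the right filter is applied, and querying then goes through CONTAINS rather than LIKE. This is only a sketch; the table and column names (dbo.CVs, doc_ext, cv_doc) are assumptions, not from the thread.

```sql
-- Hypothetical sketch: the "document type" column (doc_ext) tells
-- full-text indexing which filter to apply to each binary document.
CREATE TABLE dbo.CVs (
    cv_id INT NOT NULL PRIMARY KEY,
    doc_ext CHAR(4) NOT NULL,   -- e.g. '.doc' or '.txt'
    cv_doc IMAGE NULL           -- the document itself
)

-- After a full-text index is created on cv_doc (with doc_ext as its
-- document-type column), both Word and plain-text CVs can be searched:
SELECT cv_id
FROM dbo.CVs
WHERE CONTAINS(cv_doc, '"project manager"')
```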
|||Hilary,
Many thanks for that, very, very useful. I've started playing with the
indexing service as well to try and find a best fit.
I'll give this a go
many thanks for such a quick and informative response
Regards
Jim

Wednesday, March 21, 2012

Different way to write a query

Is there a way to return the same result as this query without making the inner joins?

Code Snippet

create view missEeCuts
as
select distinct e.*
from events e
inner join (
select eventid, sum(px) as sum_px
from isolatedLeptons
group by eventid
) l
on e.idevent=l.eventid
inner join (
select eventid, sum(py) as sum_py
from isolatedLeptons
group by eventid
) l2
on e.idevent=l2.eventid
where
dbo.module(e.PxMiss,e.PyMiss)>=40 AND
dbo.effectiveMass(e.PxMiss,e.PyMiss,l.sum_px,l2.sum_py)<= 150.0

GO


Maybe this, Luis:

Code Snippet

select distinct e.*
from events e
where
dbo.module(e.PxMiss,e.PyMiss)>=40 AND
dbo.effectiveMass(e.PxMiss,e.PyMiss,l.sum_px,l2.sum_py)<= 150.0
AND EXISTS (
select eventid, sum(px) as sum_px
from isolatedLeptons l
where e.idevent=l.eventid
group by eventid
)
AND EXISTS (
select eventid, sum(py) as sum_py
from isolatedLeptons l2
where e.idevent=l2.eventid
group by eventid
)

Why do you not want the joins?

|||

Luis:

Why do you take "sum_px" and "sum_py" from different joins in:

Code Snippet

inner join (
select eventid, sum(px) as sum_px
from isolatedLeptons
group by eventid
) l
on e.idevent=l.eventid
inner join (
select eventid, sum(py) as sum_py
from isolatedLeptons
group by eventid
) l2
on e.idevent=l2.eventid

Can these two sums potentially be computed in a single join such as:

Code Snippet

inner join (
select eventid, sum(px) as sum_px, sum(py) as sum_py
from isolatedLeptons
group by eventid
) l
on e.idevent=l.eventid

If so, you might be able to eliminate the last INNER JOIN.
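Folding both aggregates into one derived table would make the full view look something like this (only a sketch that reuses the objects from the original question):

```sql
-- Sketch only: sum_px and sum_py come from a single derived table, so the
-- second join (and its duplicate scan of isolatedLeptons) goes away.
create view missEeCuts
as
select distinct e.*
from events e
inner join (
    select eventid, sum(px) as sum_px, sum(py) as sum_py
    from isolatedLeptons
    group by eventid
) l
on e.idevent = l.eventid
where
    dbo.module(e.PxMiss, e.PyMiss) >= 40 AND
    dbo.effectiveMass(e.PxMiss, e.PyMiss, l.sum_px, l.sum_py) <= 150.0
```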

|||

Dale:

Are "l.sum_px" and "l2.sum_py" in scope in this line?

dbo.effectiveMass(e.PxMiss,e.PyMiss,l.sum_px,l2.sum_py)<= 150.0

I thought these would be out of scope

|||

Ah man. I completely missed that.

Still early...where'd that coffee pot go?

Thanks Kent

Monday, March 19, 2012

Did I get current source code?

Hi,
I have been working with a programmer overseas for the past six months.
Every so often I request and get the source code for a Delphi program
running on MSSQL. My question please is, how can I tell if I am getting
valid or current source code?
regards,
s
Use Delphi to compile and test it?
This isn't a SQL Server question.
David Portas
SQL Server MVP
--

Friday, March 9, 2012

Divide by zero error

Help please.
I have a SQL statement that casts two date ranges into decimal and then divides one by the other.
When I run this code I get an error: Source: Microsoft OLE DB Provider for SQL Server
Error description: Divide by zero error encountered.
How would I resolve this issue?

Please disregard. I found a thread that already had the answer.
Search for: Divide by Zero
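For the record, the usual fix for this class of error is to turn a zero divisor into NULL with NULLIF, so the division yields NULL instead of raising an error, and then supply a fallback with ISNULL. A minimal sketch:

```sql
-- Minimal sketch: NULLIF makes the divisor NULL when it is zero; the
-- division then yields NULL rather than Msg 8134, and ISNULL maps it to 0.
DECLARE @a decimal(10, 4), @b decimal(10, 4)
SELECT @a = 10.0, @b = 0.0
SELECT ISNULL(@a / NULLIF(@b, 0), 0) AS ratio   -- 0 rather than an error
```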


Wednesday, March 7, 2012

Device is ambiguous in the namespace Microsoft.DirectX.Direct3D

Can anybody tell me what this error message means? I get it when trying to use Direct3D. The code is included below.

I have checked and double-checked that I have all the necessary references and Imports. Obviously I am missing something.
Thanks, Dick

Imports System
Imports System.Drawing
Imports System.Windows.Forms
Imports Microsoft.DirectX
Imports Microsoft.DirectX.Direct3D

Public Class Form1
    Inherits System.Windows.Forms.Form

    Dim device As Device ' Our rendering device

#Region "Windows Form Designer generated code "
    Public Sub New()
        MyBase.New()
        'This call is required by the Windows Form Designer.
        InitializeComponent()
    End Sub

    'Form overrides dispose to clean up the component list.
    Protected Overloads Overrides Sub Dispose(ByVal Disposing As Boolean)
        If Disposing Then
            If Not components Is Nothing Then
                components.Dispose()
            End If
        End If
        MyBase.Dispose(Disposing)
    End Sub

    'Required by the Windows Form Designer
#End Region

    Public Function InitializeGraphics() As Boolean
        Try
            ' Now let's set up our D3D stuff
            Dim presentParams As PresentParameters = New PresentParameters()
            presentParams.Windowed = True
            presentParams.SwapEffect = SwapEffect.Discard
            ' Note: the original had C#'s "this" here; in VB the current form is "Me"
            device = New Device(0, DeviceType.Hardware, Me, CreateFlags.SoftwareVertexProcessing, presentParams)
            Return True
        Catch
            Return False
        End Try
    End Function

    Private Sub Render()
        If device Is Nothing Then
            Exit Sub
        End If
        'Clear the backbuffer to a blue color
        device.Clear(ClearFlags.Target, System.Drawing.Color.Blue, 1.0F, 0)
        'Begin the scene
        device.BeginScene()
        ' Rendering of scene objects can happen here
        'End the scene
        device.EndScene()
        device.Present()
    End Sub

    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        If Not InitializeGraphics() Then
            MessageBox.Show("Could not initialize Direct3D. This tutorial will exit.")
            Exit Sub
        End If
        Me.Show()
        ' While the form is still valid, render and process messages
        Do While Me.Created
            Render()
            Application.DoEvents()
        Loop
    End Sub

End Class

This is a SQL Server forum. I think you've accidentally posted to the wrong place...


Device error or device off-line ?

Hi to all,

When I try to execute the following code:

backup database web
to disk = 'c:\inetpub\wwwroot\backup\mybakup.bak'
with format

I get an error like:

Server: Msg 3201, Level 16, State 1, Line 1
Cannot open backup device 'c:\inetpub\wwwroot\backup\mybakup.bak'. Device error or device off-line. See the SQL Server error log for more details.
Server: Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

This error is generated only when I try to write to folders within "wwwroot"; the command runs successfully for the "wwwroot" folder itself, but not for any of its subfolders.

Can anyone help me?

What does show

xp_cmdshell "dir c:\inetpub\wwwroot\backup"

?

|||

Do the subfolders inherit their permissions from the wwwroot folder? Make sure that this isn't a permission problem. What does the error log (as the message says) tell you about the problem?

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

|||

I'm using SQL Server 2000 (SP3); the command

xp_cmdshell "dir c:\inetpub\wwwroot\backup"

gives the following error:

Server: Msg 2812, Level 16, State 62, Line 1
Could not find stored procedure 'xp_cmdshell'.

|||

All permissions have been set properly, but the error remains.

|||

xp_cmdshell is located in the master database, so you have to prefix it with the database name or have your current context in the master database:

master.dbo.xp_cmdshell "dir c:\inetpub\wwwroot\backup"

or

Use master
GO
xp_cmdshell "dir c:\inetpub\wwwroot\backup"

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

|||

Hi,

Sometimes people think that a path in a T-SQL command refers to the local client machine. If you are working in a client/server scenario and you issue BACKUP DATABASE with a path and filename (with absolute drive letters), the drive letters apply to the server, not the client. If you want to reach a destination on the client machine, you have to copy the file via a UNC share.

Don't know if that applies to you, just to let you know. If it is not the case, please see whether the directory listing comes back with a list of files (then it really HAS to be a permission problem) or comes back with an error that the directory doesn't exist.

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de
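The UNC point above can be sketched like this. The share name is purely an assumption, and note that it is the SQL Server service account, not the interactive user, that needs write access to the share:

```sql
-- Hypothetical sketch: the path is resolved on the server, so to land
-- the backup on another machine, name that machine explicitly via UNC.
backup database web
to disk = '\\someclient\backupshare\mybakup.bak'
with format
```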

Development/Production Environment with Visual Source Safe

Am new to RS. Installed it on test server and was quickly able to
generate a report with multiple datasources and even write some code to
render it directly to a PDF file. Quite a useful and efficient tool.
Now, we want to bring RS into our development environment. We have
multiple developers working on an internal web app in VS.NET that we
want to incorporate reports into. Developers use Visual Source Safe to
bring project files from the development server to their own machines
to do development and then check changes back into the development
server. Periodically, our app is released from development to production. Pretty straightforward.
What I'm curious about is:
(1) Does SourceSafe version control the report definitions? They look
like files in VS .NET but they reside in the RS database, so I'm
uncertain.
(2) Can developers install RS on their machines for developing reports
under VS.NET while using the RS server/database on the development
server for previewing?
Thanks.
Jeff

Answers to your questions:
1. You can (and should) check reports, report projects, report solutions, however you want to organize it, into VSS. But it is a separate process that you have to enforce with policy and procedure, as VSS will not reach into the report catalog (database) and handle versioning there.
2. Yes. If your developers have VS.NET 2003, then just install any version of Reporting Services on their workstations, and unselect any server components if they are offered in the setup.
--
Douglas McDowell douglas@.nospam.solidqualitylearning.com

Saturday, February 25, 2012

Developing new Task in SSIS

Hi All:

When making a custom task in SSIS, is it possible to reuse the existing code base? For example, if I need to append some functionality to the Lookup transform, can I inherit the Lookup transform class?

Thanks,

Vipul


No. Unfortunately not.

-Jamie

Friday, February 17, 2012

dev to UAT to Live using linked server catalog to default the database

Hi all,

I'm trying to have one set of T-SQL code that uses global settings in 'linked servers' so code can be migrated from development to test, to UAT, and finally to live without change, e.g.

select *
from [oursystem].ourDB.dbo.table
join [linked_server_othersystem]..dbo.tablename ...

By using no database name (and expecting the catalog set on the linked server to be used) the same code is equivalent to:

select *
from [oursystem].ourDB.dbo.table
join [linked_server_othersystem].developmentDB.dbo.tablename ...

select *
from [oursystem].ourDB.dbo.table
join [linked_server_othersystem].testDB.dbo.tablename ...

select *
from [oursystem].ourDB.dbo.table
join [linked_server_othersystem].UATDB.dbo.tablename ...

select *
from [oursystem].ourDB.dbo.table
join [linked_server_othersystem].LiveDB.dbo.tablename ...

Nice. One code set over multiple environments, thereby preserving numerous audit and 'best practices' guidelines for source control. (And stopping our live system from accidentally linking to a UAT or test 'othersystem' database.)

Problem is, try as I might, the catalog seems to be REQUIRED when the provider is SQLNCLI, i.e. it must be specified in the T-SQL code. Thus any attempt to use a default catalog (database) fails with the following error:

Msg 7313, Level 16, State 1, Line 1
An invalid schema or catalog was specified for the provider "SQLNCLI" for linked server "test".

Linked server set up as follows:

EXEC master.dbo.sp_addlinkedserver @.server = N'LINKED_SERVER_OTHERSYSTEM', @.srvproduct=N'SQLNCLI', @.provider=N'SQLNCLI', @.datasrc=N'servername', @.catalog=N'testdatabase'

Is this behaviour 'by design'?

Is there a workaround? (e.g. another provider to SQL Server 2005 that respects the catalog value)

Hotfix?

Any help from someone who has actually done this (or not) would be much appreciated. And yes, I've read the doco; it suggests this should work and nowhere says it won't.

Cheers,
Belgarion

Were you able to solve this problem? I have a similar case with the same results.

Thanks,

Chris

|||

Hi Chris,

Alas, no. I've concluded it's 'by design' and, I might add, poorly documented, as there appears to be nothing in the documentation that says this won't work.

Anybody tried this and succeeded? A chocolate fish awaits ...

Cheers,
Belgarion.
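One workaround worth trying, not from the thread and only a sketch: keep the environment-specific catalog name out of the query text entirely by hiding the four-part name behind a synonym (SQL Server 2005) that is defined differently per environment.

```sql
-- Sketch: the synonym is created once per environment (dev/test/UAT/live),
-- pointing at that environment's catalog; the querying code never changes.
-- The synonym name and join column below are assumptions.
CREATE SYNONYM dbo.remote_tablename
FOR [linked_server_othersystem].testDB.dbo.tablename
GO
SELECT *
FROM ourDB.dbo.tablename t
JOIN dbo.remote_tablename r ON r.id = t.id
```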

Tuesday, February 14, 2012

Determining what to input as server

I'm trying to connect to a SQL database, but I don't know what "myserver" should be in the following code.
Dim strConn As String = "server=myserver;database=Northwind"

I can't get the code to link up with my Northwind database.

I'm running everything locally if that helps.

Thanks!

Jon

You mean what to use instead of "myserver"? It is the machine name on the network where the SQL Server is located. If it is on the same machine as where you are running the ASP.NET app, you can use either the machine name or "(local)".

Don

|||Oh, and you can also use an IP address (such as 10.12.15.16) if you have it, or an internet address (theserver.mydomain.com).

Don

Determining the most appropriate conversion for a string

Hi,

Is there some kind of code snippet to determine (a) the precise data type (and precision, if appropriate) of a given table.columnname in the database, and (b) the most suitable conversion for a varchar that is being made to hold various types of data?

Presumably based on the above one could have some sort of CASE WHEN to cast the data to different types accordingly.

Thanks for any help on this.

You need to use the system tables. Check out sysobjects:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sys-t_2983.asp

syscolumns:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sys-c_5mur.asp

and systypes:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sys-t_2983.asp

This should get you part of the way:

declare @tablename varchar(256)
declare @colname varchar(256)

set @tablename = .....
set @colname = .....

SELECT obj.name as 'Table', col.name as 'Column', typ.name as 'DataType', col.length, col.xprec as 'Precision', col.xscale as 'Scale'
FROM sysobjects obj
join syscolumns col
on col.id = obj.id
join systypes typ
on typ.xtype = col.xtype
where obj.type = 'U'
and obj.name = @tablename
and col.name = @colname

Also, check out my blog for some information on system views:

http://blogs.claritycon.com/blogs/the_englishman/archive/2006/02/09/197.aspx

HTH

|||Thanks, that's really helpful... do you have any insight on determining the most appropriate conversion for a given piece of string data? E.g. "23/10/04", "1.01F", "0.68", "0.00021", "1", etc.
|||

There are three system functions which would be useful here:

ISNUMERIC

ISDATE

ISNULL

See:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_fa-fz_79f7.asp

This should get you started:

CASE

WHEN ISNUMERIC(column) = 1 THEN ...... Process Number

WHEN ISDATE(COLUMN) = 1 THEN ... Process Date

ELSE ... Process string

END

Also, you can use the LEN() and DATALENGTH() functions to determine the size of the value passed. If it is a number, you can also use CHARINDEX to look for a decimal point in the string, and return its value or not. That should help you determine whether the number is an integer, decimal or a money field, and help you work out the precision and size of the data.
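Putting those pieces together, a rough classifier might look like this. It is only a sketch: ISDATE and ISNUMERIC are both fairly permissive (ISNUMERIC accepts money and float notation too), so treat the buckets as a first pass.

```sql
-- Sketch: classify a varchar value as date, decimal-like, integer-like,
-- or plain string, using the functions discussed above.
DECLARE @val varchar(100)
SET @val = '0.68'
SELECT CASE
    WHEN ISDATE(@val) = 1 THEN 'date'
    WHEN ISNUMERIC(@val) = 1 AND CHARINDEX('.', @val) > 0 THEN 'decimal'
    WHEN ISNUMERIC(@val) = 1 THEN 'integer'
    ELSE 'string'
END AS inferred_type   -- '0.68' falls into the 'decimal' bucket
```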

HTH

|||ISNUMERIC will check for conversion to any of the numeric data types, including integer, float, numeric and money. So if you need stricter checks, you need to implement them yourself. Another option is to use sql_variant, but this will complicate your logic due to the conversions; on the other hand, you retain the base type of the value you are specifying.

|||Many thanks for your replies. The requirement has actually now been made simpler. In short, I need to add an additional case to logic like the following (I have asterisked the key line):

if @stkData like '%+%'
begin
exec dbo.spNeoStackEvaluate @stkLeft output
exec dbo.spNeoStackEvaluate @stkRight output

***set @stkDataOut = cast(@stkRight as int) + cast(@stkLeft as int)
return @stkDataOut
end

This should handle cases where the strings originated as money datatypes. The problem is that a simple rewrite of the above, casting @stkRight as money, is not acceptable in T-SQL; it falls over.

So while the replies so far have been very helpful in setting up some logic to know what kind of data I am dealing with, I would also like to know whether a simple rewrite of the above to handle money-like strings is in some way possible.

-Thanks.
|||I should point out that all the variables here (@stkDataOut, @stkRight, @stkLeft) are originally declared as varchar(100).
|||

I am not quite sure what exactly your problem is here. As I suggested earlier, you can test the varchar value to see if it is a decimal (or money) or an int by using CHARINDEX, looking for the presence of a '.' character. From the position of the decimal point you can then determine the size of the decimal or money value to cast it as. This should then be able to handle any case.

HTH

|||The problem is that even assuming a simple case for test purposes, e.g. '0.68' as the varchar data, casting it to money throws an error. So, for example, assuming a value of '20', casting it as an int is fine, but cast(@somedata as money), where @somedata is '0.68', fails. Now, this piece of data originated in a field of money datatype. Are you saying that my cast statement must be more specific?
|||

I thought the problem was not specifying the size of the data type, but that does not appear to be the issue. Test code works fine:

declare @char varchar(20)
set @char = '0.68'
select cast(@char as money)
-- returns 0.68

So I am not sure what your issue is. Are you sure that it is the value '0.68' that is throwing the error? I suggest you select @somedata before the cast, to double-check you are not getting any stray characters in there. Also, what error are you getting? Can you post the code snippet along with the error message? What version of SQL Server are you running? You could try casting it as a decimal instead, but make sure you specify the size and precision; otherwise it will default to an integer value.

|||

Though this will work fine (i.e. accessing system tables to get column data types, attributes, etc.), I'd probably recommend you make use of the SQL-92 compliant INFORMATION_SCHEMA views or system-provided functions when possible, given that you may get different results and unpredictable changes across SQL Server versions, SKUs, and hotfixes/service packs.

In this case, the INFORMATION_SCHEMA.COLUMNS view provides you with all the things you are looking for and probably more: name, data type, precision, scale, schema, default, position, nullability, etc., etc.

Additionally, these system functions could help out as well:

COLUMNPROPERTY()
COL_NAME()
COL_LENGTH()

As for the money conversion, the posting above by Shughes should work fine, as he mentions; it does for me also :-)

|||Please indicate if Chad and I solved your problem.

|||Will do. I am evaluating it at the moment. It's one of a number of problems that I'm working through. I will post in greater detail later on.
|||Just briefly, this is definitely still erroring. The problem seems to lie in going from varchar to money (in order to carry out a valid calculation) and then back to varchar.

I will post on this in more detail once I get various other problems sorted out.

Determining single-column candidate keys

This script will tell you all of the unique columns (except BLOBs) in
all of the tables in a database.
I made a post last night with some VBA code for exporting data from
Access. I've been given a 2 GB Access database with scores of tables,
some of them having hundreds of columns, millions of rows, and no
constraints. I also know very little about the underlying data. After
getting the data into SQL Server, I wanted to learn more about the
nature of the data. The first thing I wanted to find was which columns
could be single-column candidate keys.
One of the issues I ran into is that my generated SQL often exceeded 4000 characters, so I needed to come up with a way of executing more than 4000 characters. Also, y'all'll notice that I'm using cursors quite a bit here. These cursors only iterate through a few rows; the heavy stuff is pretty efficient SQL.
-Alan
SET NOCOUNT ON
CREATE TABLE #UniqueColumns
(
TableName sysname
, ColumnName sysname
, IsUnique CHAR(1)
, PRIMARY KEY (TableName, ColumnName)
)
DECLARE @TableName sysname
, @ColumnName sysname
DECLARE cTables CURSOR
LOCAL
FAST_FORWARD
FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
CREATE TABLE #SQLFragments
(
RowNumber INT IDENTITY
, SQL NVARCHAR(4000)
, Type VARCHAR(6) NOT NULL
CHECK (Type IN ('SELECT', 'UPDATE'))
)
OPEN cTables
WHILE 1 = 1 BEGIN
FETCH NEXT FROM cTables INTO @TableName
IF @@FETCH_STATUS <> 0 BREAK
DECLARE @SQL NVARCHAR(4000)
, @SQLUpdateUnique NVARCHAR(4000)
SET @SQL = N'SELECT '
SET @SQLUpdateUnique = N''
DECLARE c CURSOR
LOCAL
FAST_FORWARD
FOR
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
AND DATA_TYPE NOT IN ('text', 'ntext', 'image')
ORDER BY ORDINAL_POSITION
OPEN c
WHILE 1 = 1 BEGIN
FETCH NEXT FROM c INTO @ColumnName
IF @@FETCH_STATUS <> 0 BREAK
INSERT #UniqueColumns (TableName, ColumnName)
SELECT @TableName, @ColumnName
SET @SQL = @SQL + '
CASE WHEN COUNT (DISTINCT ' + @ColumnName + ') = COUNT(*) THEN ''Y''
ELSE ''N'' END ' + @ColumnName + ','
SET @SQLUpdateUnique = @SQLUpdateUnique + 'UPDATE #UniqueColumns SET
IsUnique = ' + @ColumnName + ' FROM ##Temp WHERE TableName = ''' +
@TableName + ''' AND ColumnName = ''' + @ColumnName + '''
'
IF LEN(@SQL) > 3000 BEGIN
INSERT #SQLFragments (SQL, Type)
SELECT @SQL, 'SELECT'
SET @SQL = ''
END
IF LEN(@SQLUpdateUnique) > 3000 BEGIN
INSERT #SQLFragments (SQL, Type)
SELECT @SQLUpdateUnique, 'UPDATE'
SET @SQLUpdateUnique = ''
END
END
CLOSE c
DEALLOCATE c
SET @SQL = LEFT(@SQL, LEN(@SQL) - 1) + ' INTO ##Temp FROM ' + @TableName
INSERT #SQLFragments (SQL, Type)
SELECT @SQL, 'SELECT'
INSERT #SQLFragments (SQL, Type)
SELECT @SQLUpdateUnique, 'UPDATE'
DECLARE @FragmentCount INT
SELECT @FragmentCount = COUNT(*) FROM #SQLFragments WHERE Type = 'SELECT'
DECLARE @SQLEXEC NVARCHAR(4000)
, @Fragment NVARCHAR(4000)
, @RowNumber VARCHAR(10)
SET @SQL = N''
SET @SQLEXEC = N'EXEC ('
DECLARE cFragments CURSOR
LOCAL
FAST_FORWARD
FOR
SELECT SQL, RowNumber
FROM #SQLFragments
WHERE Type = 'SELECT'
ORDER BY RowNumber
OPEN cFragments
WHILE 1 = 1 BEGIN
FETCH NEXT FROM cFragments INTO @Fragment, @RowNumber
IF @@FETCH_STATUS <> 0 BREAK
SET @SQL = @SQL + N'DECLARE @SQL' + @RowNumber + N' NVARCHAR(4000)
SELECT @SQL' + @RowNumber + N' = SQL FROM #SQLFragments WHERE RowNumber
= ' + @RowNumber + N'
'
SET @SQLEXEC = @SQLEXEC + N' @SQL' + @RowNumber + N' +'
END
CLOSE cFragments
DEALLOCATE cFragments
SET @SQLEXEC = LEFT(@SQLEXEC, LEN(@SQLEXEC) - 1) + ')'
SET @SQL = @SQL + N' ' + @SQLEXEC
EXEC sp_executesql @SQL
SET @SQL = N''
SET @SQLEXEC = N'EXEC ('
DECLARE cFragments CURSOR
LOCAL
FAST_FORWARD
FOR
SELECT SQL, RowNumber
FROM #SQLFragments
WHERE Type = 'UPDATE'
ORDER BY RowNumber
OPEN cFragments
WHILE 1 = 1 BEGIN
FETCH NEXT FROM cFragments INTO @Fragment, @RowNumber
IF @@FETCH_STATUS <> 0 BREAK
SET @SQL = @SQL + N'DECLARE @SQL' + @RowNumber + N' NVARCHAR(4000)
SELECT @SQL' + @RowNumber + N' = SQL FROM #SQLFragments WHERE RowNumber
= ' + @RowNumber + N'
'
SET @SQLEXEC = @SQLEXEC + N' @SQL' + @RowNumber + N' +'
END
CLOSE cFragments
DEALLOCATE cFragments
SET @SQL = @SQL + N' ' + LEFT(@SQLEXEC, LEN(@SQLEXEC) - 1) + N')'
EXEC sp_executesql @SQL
DELETE #SQLFragments
DROP TABLE ##Temp
END
CLOSE cTables
DEALLOCATE cTables
DELETE #UniqueColumns WHERE IsUnique = 'N'
SELECT * FROM #UniqueColumns
DROP TABLE #UniqueColumns
DROP TABLE #SQLFragmentsYou can analyze the data and make assumptions about keys and relationships,
but can the source of the data also provide a data dictionary?
"Alan Samet" <alansamet@.gmail.com> wrote in message
news:1141922084.124478.54740@.j52g2000cwj.googlegroups.com...
> This script will tell you all of the unique columns (except BLOBs) in
> all of the tables in a database.
> I made a post last night with some VBA code for exporting data from
> Access. I've been given a 2 GB Access database with scores of tables,
> some of them having hundreds of columns, millions of rows, and no
> constraints. I also know very little about the underlying data. After
> getting the data into SQL Server, I wanted to learn more about the
> nature of the data. The first thing I wanted to find was which columns
> could be single-column candidate keys.
> One of the issues I ran into is that my generated SQL often exceeded
> 4000 characters, so I needed to come up with a way of executing more
> than 4000 characters. Also, y'all'll notice that I'm using cursors
> quite a bit here. These cursors only iterate through a few rows. The
> heavy stuff is pretty efficient SQL.
> -Alan
> ...
|||I can't get a data dictionary. This data is only in intermediate form
anyways. I'm writing my own database for the front-end application, but
I need some of the data that's in this database. A lot of it's not
normalized, et cetera. Once I know exactly what I need for my
application, the database I write will be properly constrained. The
purpose of the script was to reduce the amount of time it takes me to
understand the underlying data.
-Alan
|||Alan,
I believe it's possible to execute longer strings by combining them and
using EXEC instead of sp_executesql, e.g.:
EXEC( @select + ' ' + @from )
However, that should probably be telling you there might be another way to do
it. How about capturing your information first?
i.e. have two tables, one to represent the tables in your Access database and
one to represent the columns ( or attributes Joe )? Something like:
Table: access_tables
at_id
table_name
rows
Table: access_fields
af_id
at_id
field_name
unique_values
rows
Run one script which captures the data required, then you can run nice fast
SELECTs on real data:
SELECT *
FROM access_tables t
INNER JOIN access_fields f ON t.at_id = f.at_id
WHERE t.rows = f.unique_values
It's sometimes frowned upon to gather metadata like this in your database
but I see what you're trying to do.
Have fun!
Damien
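Damien's capture-first idea is easy to prototype outside SQL Server. Here is a minimal sketch in Python with sqlite3; the table and column names follow his outline above, and the sample rows are invented for illustration:

```python
import sqlite3

# Hypothetical sketch of the capture-then-query approach: materialize the
# per-column statistics as ordinary rows, then find candidate keys with a join.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE access_tables (at_id INTEGER PRIMARY KEY, table_name TEXT,
                                rows INTEGER);
    CREATE TABLE access_fields (af_id INTEGER PRIMARY KEY, at_id INTEGER,
                                field_name TEXT, unique_values INTEGER);
    -- Captured stats (made up): Orders has 100 rows; OrderID has 100 distinct
    -- values (a candidate key), Status only 3 (not a key).
    INSERT INTO access_tables VALUES (1, 'Orders', 100);
    INSERT INTO access_fields VALUES (1, 1, 'OrderID', 100), (2, 1, 'Status', 3);
""")
# Once the stats are real data, candidate keys fall out of a plain join.
keys = con.execute("""
    SELECT t.table_name, f.field_name
    FROM access_tables t
    JOIN access_fields f ON t.at_id = f.at_id
    WHERE t.rows = f.unique_values
""").fetchall()
print(keys)  # [('Orders', 'OrderID')]
```

The capture script runs the expensive scans once; every later question about the data is a cheap query against the two stats tables.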
"Alan Samet" wrote:

> ...
|||>> .. and one to represent the columns ( or attributes Joe )? <<
LOL! Either, but not "fields", please!
We can avoid the metadata with a little work. Pull out the table and
column names, then use a text editor to generate queries with this
template:
/*
DROP TABLE Foobar;
CREATE TABLE Foobar
(known_key CHAR(2) NOT NULL PRIMARY KEY,
actual_key CHAR(2) NOT NULL,
non_key CHAR(2) NOT NULL,
non_key_null CHAR(2));
INSERT INTO Foobar VALUES ('K1', 'a', 'x', 'a');
INSERT INTO Foobar VALUES ('K2', 'b', 'x', NULL);
INSERT INTO Foobar VALUES ('K3', 'c', 'x', 'b');
INSERT INTO Foobar VALUES ('K4', 'd', 'y', 'c');
INSERT INTO Foobar VALUES ('K5', 'e', 'y', 'd');
INSERT INTO Foobar VALUES ('K6', 'f', 'y', 'e');
*/
SELECT 'Foobar' AS table_name,
CASE WHEN COUNT(DISTINCT known_key) = COUNT(*)
THEN 'Y' ELSE 'N' END AS known_key,
CASE WHEN COUNT(DISTINCT actual_key) = COUNT(*)
THEN 'Y' ELSE 'N' END AS actual_key,
CASE WHEN COUNT(DISTINCT non_key) = COUNT(*)
THEN 'Y' ELSE 'N' END AS non_key,
CASE WHEN COUNT(DISTINCT non_key_null) = COUNT(*)
THEN 'Y' ELSE 'N' END AS non_key_null
FROM Foobar
These should run faster than a cursor. NULLs and dups will cause
COUNT(*) and COUNT(DISTINCT col) to be unequal. COUNT(*) should be
computed only once, and the optimizer should catch NOT NULL UNIQUE
constraints to short-cut the equality test.
No need for loops.
|||Joe, that's almost exactly what I did. The cursors were used for two
things:
building the dynamic SELECT CASE WHEN COUNT(...) statements, and building
the UPDATE statements that record the results. I prioritized the execution
efficiency of the uniqueness queries over the efficiency of generating
them.
In more detail, the technique I used is as follows.
Here is your table w/ unknown constraints:
CREATE TABLE Foobar
(
Column0 DATATYPE
, Column1 DATATYPE
...
, Column499 DATATYPE
)
First, I created a table of all table names and column names with a
column to indicate whether the value was unique.
CREATE TABLE #UniqueColumns
(
TableName sysname
, ColumnName sysname
, IsUnique CHAR(1)
, PRIMARY KEY (TableName, ColumnName)
)
Next, for each table I determined which columns were unique. Since I'm
dealing with lots of records, I wanted this to be as efficient as possible.
I'd say this query did the trick.
SELECT CASE WHEN COUNT(DISTINCT Column0) = COUNT(*) THEN 'Y' ELSE 'N'
END Column0
, CASE WHEN COUNT(DISTINCT Column1) = COUNT(*) THEN 'Y' ELSE 'N' END
Column1
, CASE WHEN COUNT(DISTINCT Column2) = COUNT(*) THEN 'Y' ELSE 'N' END
Column2
... et cetera
INTO ##Temp
FROM TableName
I then updated my table with table and column names:
UPDATE #UniqueColumns SET IsUnique = Column0 FROM ##Temp WHERE
TableName = 'TableName' AND ColumnName = 'Column0'
UPDATE #UniqueColumns SET IsUnique = Column1 FROM ##Temp WHERE
TableName = 'TableName' AND ColumnName = 'Column1'
The only issue here was when my SQL exceeded 4000 characters. So, I
went ahead and created a temporary table to store the fragments. This
is one of the few cases where the IDENTITY keyword is actually useful.
The latter part of the query would take and create a script that looked
like the following and execute it:
DECLARE @SQL1 NVARCHAR(4000)
SELECT @SQL1 = SQL FROM #SQLFragments WHERE RowNumber = 1
DECLARE @SQL2 NVARCHAR(4000)
SELECT @SQL2 = SQL FROM #SQLFragments WHERE RowNumber = 2
EXEC (@SQL1 + @SQL2)
et cetera. This was my dynamic workaround for the 4000 character
limitation. In reality, I doubt anything would have gone over 12,000, so I
could've probably gotten away with just a couple of variables, but this
was way more fun.
-Alan
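The whole pipeline described above (walk the catalog, probe every column with COUNT(DISTINCT c) = COUNT(*), record Y/N per column) can be sketched in a few lines outside SQL Server. This is an illustrative Python/sqlite3 version, not the thread's T-SQL: the sample table and data are invented, and note the caveat that COUNT(DISTINCT) ignores NULLs, so a nullable column can be flagged 'Y' even when several rows hold NULL:

```python
import sqlite3

# Walk the catalog, build one probe per table with a CASE per column,
# and record which columns are unique -- no cursor over the data rows.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Foobar (known_key TEXT, non_key TEXT);
    INSERT INTO Foobar VALUES ('K1','x'), ('K2','x'), ('K3','y');
""")
unique_columns = {}  # (table, column) -> 'Y' or 'N'
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    cols = [r[1] for r in con.execute(f"PRAGMA table_info({table})")]
    # One scan of the table answers the question for every column at once.
    select_list = ", ".join(
        f"CASE WHEN COUNT(DISTINCT {c}) = COUNT(*) THEN 'Y' ELSE 'N' END"
        for c in cols)
    flags = con.execute(f"SELECT {select_list} FROM {table}").fetchone()
    for col, flag in zip(cols, flags):
        unique_columns[(table, col)] = flag
print(unique_columns)  # known_key -> 'Y', non_key -> 'N'
```

The generated probe is exactly the SELECT CASE WHEN COUNT(...) shape from the script; the host language just replaces the cursor-and-fragment bookkeeping needed to dodge T-SQL's 4000-character variable limit.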
--CELKO-- wrote:
> ...
|||Alan,
Neat stuff!
My simpleminded approach to the same sort of problem follows. It
doesn't use the owner or schema, as it should, and doesn't filter on
datatypes, as it should, but the general idea is clear enough, I think.
The first column of the result tells you the table name and the number
of rows, and I scan visually for the candidate keys. On the other
hand, it gives me a complete picture of how many distinct values each
column holds, and that can be invaluable for sorting out mysterious data.
Roy Harvey
Beacon Falls, CT
SELECT CASE WHEN ORDINAL_POSITION = 1
THEN 'SELECT COUNT(*) as ' + TABLE_NAME + ',' +
char(13) + char(10)
ELSE ''
END +
' COUNT(distinct ' + COLUMN_NAME + ') as ' + COLUMN_NAME +
CASE WHEN ORDINAL_POSITION <
(select max(ORDINAL_POSITION)
from INFORMATION_SCHEMA.COLUMNS as L
where C.TABLE_NAME = L.TABLE_NAME)
THEN ','
ELSE char(13) + char(10) +
' FROM ' + TABLE_NAME
END
from INFORMATION_SCHEMA.COLUMNS as C
order by TABLE_NAME, ORDINAL_POSITION
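Roy's query is SQL that writes SQL: one COUNT(DISTINCT) report statement per table, pasted back into a query window by hand. A rough translation of the same generator pattern to Python/sqlite3 (sqlite has no INFORMATION_SCHEMA, so this leans on sqlite_master and PRAGMA table_info; the sample table is invented):

```python
import sqlite3

# Emit one "COUNT(*) plus COUNT(DISTINCT col) per column" statement per table.
# The emitted text is meant to be run (or eyeballed) separately, like Roy's.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Foobar (known_key TEXT, non_key TEXT)")
report = ""
for (table,) in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    cols = [r[1] for r in con.execute(f"PRAGMA table_info({table})")]
    body = ",\n".join(f"       COUNT(DISTINCT {c}) AS {c}" for c in cols)
    report = f"SELECT COUNT(*) AS {table},\n{body}\n  FROM {table}"
    print(report)  # paste the emitted statement into a query window
```

Like Roy's version, the first column of each emitted report carries the table name and row count, so candidate keys show up as columns whose distinct count matches it.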