SQL Server 2005
.TXT file exported from Access 2007 using TransferText
Format file created using BCP
Sample row from data file:
2009,14012.00,"012-004-10044",4,"01200410044","008100009.000",14,12,"DOE
JOHN","DOE NORMA","81 SPRINGHILL RD",,"ANYTOWN","ST","02922","LAND &
BUILDINGS","SPRINGHILL
RD",1,"R1","T",5.30,284700.00,284700.00,0.00,84800.00,199900.00,0.00,,0.00,"Y",284700.00,264900.00,0.00,,,,,0.00,0.00,0.00,0.00,2847.00,0.00
Format file:
9.0
43
1   SQLCHAR  0  6    ","     1   YEARGL       SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR  0  25   ","     2   TCODEGL      SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR  0  26   ","     3   SPANGL       SQL_Latin1_General_CP1_CI_AS
4   SQLCHAR  0  7    ","     4   SCHIDGL      SQL_Latin1_General_CP1_CI_AS
5   SQLCHAR  0  22   ","     5   SPANALT      SQL_Latin1_General_CP1_CI_AS
6   SQLCHAR  0  60   ","     6   PARCID       SQL_Latin1_General_CP1_CI_AS
7   SQLCHAR  0  5    ","     7   CNTYGL       SQL_Latin1_General_CP1_CI_AS
8   SQLCHAR  0  7    ","     8   ASMTCODE     SQL_Latin1_General_CP1_CI_AS
9   SQLCHAR  0  80   ","     9   OWNER1       SQL_Latin1_General_CP1_CI_AS
10  SQLCHAR  0  80   ","     10  OWNER2       SQL_Latin1_General_CP1_CI_AS
11  SQLCHAR  0  80   ","     11  ADDRGL1      SQL_Latin1_General_CP1_CI_AS
12  SQLCHAR  0  80   ","     12  ADDRGL2      SQL_Latin1_General_CP1_CI_AS
13  SQLCHAR  0  50   ","     13  CITYGL       SQL_Latin1_General_CP1_CI_AS
14  SQLCHAR  0  20   ","     14  STGL         SQL_Latin1_General_CP1_CI_AS
15  SQLCHAR  0  20   ","     15  ZIPGL        SQL_Latin1_General_CP1_CI_AS
16  SQLCHAR  0  80   ","     16  DESCPROP     SQL_Latin1_General_CP1_CI_AS
17  SQLCHAR  0  80   ","     17  LOCAPROP     SQL_Latin1_General_CP1_CI_AS
18  SQLCHAR  0  7    ","     18  CATCOGL      SQL_Latin1_General_CP1_CI_AS
19  SQLCHAR  0  8    ","     19  CATABRV      SQL_Latin1_General_CP1_CI_AS
20  SQLCHAR  0  8    ","     20  RESCODE      SQL_Latin1_General_CP1_CI_AS
21  SQLCHAR  0  25   ","     21  ACRESGL      SQL_Latin1_General_CP1_CI_AS
22  SQLCHAR  0  25   ","     22  REAL_FLV     SQL_Latin1_General_CP1_CI_AS
23  SQLCHAR  0  25   ","     23  HSTED_FLV    SQL_Latin1_General_CP1_CI_AS
24  SQLCHAR  0  25   ","     24  NRES_FLV     SQL_Latin1_General_CP1_CI_AS
25  SQLCHAR  0  25   ","     25  LAND_LV      SQL_Latin1_General_CP1_CI_AS
26  SQLCHAR  0  25   ","     26  IMPRV_LV     SQL_Latin1_General_CP1_CI_AS
27  SQLCHAR  0  25   ","     27  EQUIPVAL     SQL_Latin1_General_CP1_CI_AS
28  SQLCHAR  0  2    ","     28  EQUIPCODE    SQL_Latin1_General_CP1_CI_AS
29  SQLCHAR  0  25   ","     29  INVENVAL     SQL_Latin1_General_CP1_CI_AS
30  SQLCHAR  0  2    ","     30  HSDECL       SQL_Latin1_General_CP1_CI_AS
31  SQLCHAR  0  25   ","     31  HSTEDVAL     SQL_Latin1_General_CP1_CI_AS
32  SQLCHAR  0  25   ","     32  HSITEVAL     SQL_Latin1_General_CP1_CI_AS
33  SQLCHAR  0  25   ","     33  VETEXAMT     SQL_Latin1_General_CP1_CI_AS
34  SQLCHAR  0  7    ","     34  EXCLASS      SQL_Latin1_General_CP1_CI_AS
35  SQLCHAR  0  7    ","     35  EXTYPE       SQL_Latin1_General_CP1_CI_AS
36  SQLCHAR  0  24   ","     36  ENDDATE      SQL_Latin1_General_CP1_CI_AS
37  SQLCHAR  0  40   ","     37  STATUTE      SQL_Latin1_General_CP1_CI_AS
38  SQLCHAR  0  25   ","     38  EXAMT_HS     SQL_Latin1_General_CP1_CI_AS
39  SQLCHAR  0  25   ","     39  EXAMT_NR     SQL_Latin1_General_CP1_CI_AS
40  SQLCHAR  0  25   ","     40  UVREDUC_HS   SQL_Latin1_General_CP1_CI_AS
41  SQLCHAR  0  25   ","     41  UVREDUC_NR   SQL_Latin1_General_CP1_CI_AS
42  SQLCHAR  0  25   ","     42  GLVAL_HS     SQL_Latin1_General_CP1_CI_AS
43  SQLCHAR  0  25   "\n"    43  GLVAL_NR     SQL_Latin1_General_CP1_CI_AS
The database table in SQL Server has the same structure, but its data types vary
from the format file. Early attempts did not include a format file at all. I
found a discussion group post indicating that a format file as defined above
would be useful. The author of that post says: "no matter the datatypes in
the destination table, the source is just a plain ascii file, so specify all
columns in the format file as SQLCHAR, all prefixes as zero, and the length
as the actual character length, not the byte length for the datatype (i.e. a
datetime becomes 26 instead of 8, etc.)"
An excerpt from my stored proc/bulk insert statement:
ALTER PROCEDURE [dbo].[usp_LoadGL]
@PARCOUNT int,
@COMPNAME varchar(10),
@YEARGL smallint,
@TCODEGL int
AS
BEGIN TRY
BEGIN TRAN
-- DELETE EXISTING RECORDS
DELETE FROM [GLDATA]
WHERE YEARGL = @YEARGL AND TCODEGL = @TCODEGL
BULK INSERT DataGL.dbo.[GLDATA]
FROM '\\WEY192\C$\WeyApps\Pvr\NewGL.txt'
WITH
( BATCHSIZE = 12000 ,
CHECK_CONSTRAINTS ,
DATAFILETYPE = 'char' ,
FIELDTERMINATOR = ',' ,
FORMATFILE='\\WEY192\C$\WeyApps\Pvr\gldata.fmt' ,
KEEPNULLS ,
ORDER ( OWNER1 ASC ) ,
ROWTERMINATOR = '\n'
)
RETURN 0
COMMIT TRAN
END TRY
I've checked the data file for anomalies. No problems there -- except that I
don't know how to determine for sure exactly what ROWTERMINATOR the
TransferText export out of Access has used. So in my bulk insert statement I've
gone with the default '\n', and have also swapped in '\0', '\r', and '\r\n'
for testing -- but no luck. I get the same error.
From the server, I've confirmed that the text file path is valid, and can
see that the bulk insert statement is executing (rollback is functioning
correctly in try/catch).
In my error log I see that the error is raised on line 18 of the SP. But I'm
not sure how to determine which line is #18 - because, looking at the ALTER
PROC statement, I'm not sure which line is being counted first.
Several days into this research and troubleshooting. Eeek! Thanks for any
help!
The format file does not look entirely correct, although I'm not sure
that is why you get the error.
First of all, I would change the numbers in the fourth column to 0.
Not that I know whether it matters. My impression is that if you specify
both a delimiter and a length, that only the delimiter matters. But I could
be wrong.
In any case, your delimiters are probably not correct. I notice that
some fields in the data file are quote-delimited. BULK INSERT will
load those quotes as if they were data, unless you explicitly specify
them as delimiters in the format file. So if a field is quote-delimited, the
field before it should have ",\"" as its delimiter, and the field itself
should have "\"," - or "\",\"" if the next field is also quote-delimited.
There are also some things in your BULK INSERT statement that you can
take out:
> BULK INSERT DataGL.dbo.[GLDATA]
> FROM '\\WEY192\C$\WeyApps\Pvr\NewGL.txt'
> WITH
> ( BATCHSIZE = 12000 ,
> CHECK_CONSTRAINTS ,
> DATAFILETYPE = 'char' ,
> FIELDTERMINATOR = ',' ,
> FORMATFILE='\\WEY192\C$\WeyApps\Pvr\gldata.fmt' ,
> KEEPNULLS ,
> ORDER ( OWNER1 ASC ) ,
> ROWTERMINATOR = '\n'
> )
Since you use a format file, FIELDTERMINATOR and ROWTERMINATOR are
superfluous. ORDER looks innocent, but take it out.
> I've checked the data file for anomalies. No problems there -- except
> that I don't know how to determine for sure exactly what ROWTERMINATOR
> has been used by the TransferText export out of Access.
Try opening the file in a hex editor. The most likely bet is \r\n.
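If you don't have a hex editor at hand, a blunt check can be run from SQL
Server itself. This is only a sketch: it reads the whole file as one string
and probes for the terminator characters. The CRLF test must come first,
since a CRLF file also contains CHAR(13) and CHAR(10) individually:

SELECT CASE
          WHEN CHARINDEX(CHAR(13) + CHAR(10), BulkColumn) > 0 THEN 'CRLF (\r\n)'
          WHEN CHARINDEX(CHAR(10), BulkColumn) > 0 THEN 'LF (\n)'
          WHEN CHARINDEX(CHAR(13), BulkColumn) > 0 THEN 'CR (\r)'
          ELSE 'no line terminator found'
       END AS row_terminator
FROM   OPENROWSET(BULK '\\WEY192\C$\WeyApps\Pvr\NewGL.txt', SINGLE_CLOB) AS f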
--
Erland Sommarskog, SQL Server MVP, esq...@sommarskog.se
Regarding quotes in the data file, those are functioning as text qualifiers.
All fields are comma-delimited. The text qualifier allows for the use of
commas within a field. Double quotes indicate that a character string
follows, within which commas are ignored. But perhaps BULK INSERT can't
handle the text qualifier?
The error message you get means that data in some field is longer than what
fits in the table. It is likely that this is because of the quotes, so
fix that part first.
--
Erland Sommarskog, SQL Server MVP, esq...@sommarskog.se
Next, I'll double check the size of all data types in the destination table.
Joel
Error 0xc020901c: Data Flow Task: There was an error with input column
"Column 33" (304) on input "Destination Input" (152). The column status
returned was: "The value could not be converted because of a potential loss
of data.".
(SQL Server Import and Export Wizard)
In the data file, I don't see a problem with column 33 -- but column 34 is
defined as smallint (allow null). And in most records the column is Null. In
the delimited file, Col 34 is the first numeric column in the row that is null
(an empty delimiter) -- so I wonder if that could be a problem. (In the data
file sample below, it's the first delimiter in that string of 4 empty
delimiters positioned about 10 columns from the row end.)
A sample row from the data file:
2009,14012.00,"012-004-10037",4,"01200410037","002000022.000",14,12,"CONHILL
PROPERTIES INC",,"32 SO VIEW TERR",,"ANY TOWN","ST","90049","LAND &
DWELLING","OLD BOYTON HILL
RD",2,"R2","NS",10.20,564200.00,0.00,564200.00,190500.00,373700.00,0.00,,0.00,"N",564200.00,503700.00,0.00,,,,,0.00,0.00,0.00,0.00,0.00,5642.00
In the destination table - before I forget to mention it - there is a column
at the end of each row of data type TIMESTAMP. The data file does not have a
corresponding field. My understanding is that SQL Server/bulk insert would
ignore this column. If not true, please let me know.
And here's the current status of my BULK INSERT, in case you see any problem
here:
BULK INSERT DataGL.dbo.[GLDATA]
FROM '\\WEY192\C$\WeyApps\Pvr\NewGL.txt'
WITH
( BATCHSIZE = 12000 ,
CHECK_CONSTRAINTS ,
DATAFILETYPE = 'char' ,
FIRSTROW = 1,
FORMATFILE='\\TAX192\C$\WeyApps\Pvr\gldata.fmt',
KEEPNULLS ,
TABLOCK
);
The revised format file looks like this:
9.0
43
1   SQLCHAR  0  0  ","       1   YEARGL       SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR  0  0  ",\""     2   TCODEGL      SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR  0  0  "\","     3   SPANGL       SQL_Latin1_General_CP1_CI_AS
4   SQLCHAR  0  0  ",\""     4   SCHIDGL      SQL_Latin1_General_CP1_CI_AS
5   SQLCHAR  0  0  "\",\""   5   SPANALT      SQL_Latin1_General_CP1_CI_AS
6   SQLCHAR  0  0  "\","     6   PARCID       SQL_Latin1_General_CP1_CI_AS
7   SQLCHAR  0  0  ","       7   CNTYGL       SQL_Latin1_General_CP1_CI_AS
8   SQLCHAR  0  0  ",\""     8   ASMTCODE     SQL_Latin1_General_CP1_CI_AS
9   SQLCHAR  0  0  "\",\""   9   OWNER1       SQL_Latin1_General_CP1_CI_AS
10  SQLCHAR  0  0  "\",\""   10  OWNER2       SQL_Latin1_General_CP1_CI_AS
11  SQLCHAR  0  0  "\","     11  ADDRGL1      SQL_Latin1_General_CP1_CI_AS
12  SQLCHAR  0  0  ",\""     12  ADDRGL2      SQL_Latin1_General_CP1_CI_AS
13  SQLCHAR  0  0  "\",\""   13  CITYGL       SQL_Latin1_General_CP1_CI_AS
14  SQLCHAR  0  0  "\",\""   14  STGL         SQL_Latin1_General_CP1_CI_AS
15  SQLCHAR  0  0  "\",\""   15  ZIPGL        SQL_Latin1_General_CP1_CI_AS
16  SQLCHAR  0  0  "\",\""   16  DESCPROP     SQL_Latin1_General_CP1_CI_AS
17  SQLCHAR  0  0  "\","     17  LOCAPROP     SQL_Latin1_General_CP1_CI_AS
18  SQLCHAR  0  0  ",\""     18  CATCOGL      SQL_Latin1_General_CP1_CI_AS
19  SQLCHAR  0  0  "\",\""   19  CATABRV      SQL_Latin1_General_CP1_CI_AS
20  SQLCHAR  0  0  "\","     20  RESCODE      SQL_Latin1_General_CP1_CI_AS
21  SQLCHAR  0  0  ","       21  ACRESGL      SQL_Latin1_General_CP1_CI_AS
22  SQLCHAR  0  0  ","       22  REAL_FLV     SQL_Latin1_General_CP1_CI_AS
23  SQLCHAR  0  0  ","       23  HSTED_FLV    SQL_Latin1_General_CP1_CI_AS
24  SQLCHAR  0  0  ","       24  NRES_FLV     SQL_Latin1_General_CP1_CI_AS
25  SQLCHAR  0  0  ","       25  LAND_LV      SQL_Latin1_General_CP1_CI_AS
26  SQLCHAR  0  0  ","       26  IMPRV_LV     SQL_Latin1_General_CP1_CI_AS
27  SQLCHAR  0  0  ","       27  EQUIPVAL     SQL_Latin1_General_CP1_CI_AS
28  SQLCHAR  0  0  ","       28  EQUIPCODE    SQL_Latin1_General_CP1_CI_AS
29  SQLCHAR  0  0  ",\""     29  INVENVAL     SQL_Latin1_General_CP1_CI_AS
30  SQLCHAR  0  0  "\","     30  HSDECL       SQL_Latin1_General_CP1_CI_AS
31  SQLCHAR  0  0  ","       31  HSTEDVAL     SQL_Latin1_General_CP1_CI_AS
32  SQLCHAR  0  0  ","       32  HSITEVAL     SQL_Latin1_General_CP1_CI_AS
33  SQLCHAR  0  0  ","       33  VETEXAMT     SQL_Latin1_General_CP1_CI_AS
34  SQLCHAR  0  0  ","       34  EXCLASS      SQL_Latin1_General_CP1_CI_AS
35  SQLCHAR  0  0  ","       35  EXTYPE       SQL_Latin1_General_CP1_CI_AS
36  SQLCHAR  0  0  ","       36  ENDDATE      SQL_Latin1_General_CP1_CI_AS
37  SQLCHAR  0  0  ","       37  STATUTE      SQL_Latin1_General_CP1_CI_AS
38  SQLCHAR  0  0  ","       38  EXAMT_HS     SQL_Latin1_General_CP1_CI_AS
39  SQLCHAR  0  0  ","       39  EXAMT_NR     SQL_Latin1_General_CP1_CI_AS
40  SQLCHAR  0  0  ","       40  UVREDUC_HS   SQL_Latin1_General_CP1_CI_AS
41  SQLCHAR  0  0  ","       41  UVREDUC_NR   SQL_Latin1_General_CP1_CI_AS
42  SQLCHAR  0  0  ","       42  GLVAL_HS     SQL_Latin1_General_CP1_CI_AS
43  SQLCHAR  0  0  "\r\n"    43  GLVAL_NR     SQL_Latin1_General_CP1_CI_AS
If I can't get this to work, would you have other suggestions - other ways
of doing a bulk insert? Thanks.
Joel
Does the problem occur if you have a file with this sole line of data?
I ask for two reasons. One, it is something for you to test. Two, if I
am to take a stab at this, I need data for which the problem occurs.
And while I'm at it, I would also need the CREATE TABLE statement
for the table.
> In the destination table - before I forget to mention it - there is a
> column at the end of each row of data type TIMESTAMP. The data file does
> not have a corresponding field. My understanding is that SQL Server/bulk
> insert would ignore this column. If not true, please let me know.
If that is column 44 in the table, that should be alright.
> And here's the current status of my BULK INSERT, in case you see any
> problem here:
The only suggestion I can give at this point is to add the ERRORFILE
option to get errors written to a file. That may reveal something.
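For example, a sketch on top of your current statement (the .err path is just
an assumption):

BULK INSERT DataGL.dbo.[GLDATA]
FROM '\\WEY192\C$\WeyApps\Pvr\NewGL.txt'
WITH ( FORMATFILE = '\\WEY192\C$\WeyApps\Pvr\gldata.fmt' ,
       ERRORFILE  = '\\WEY192\C$\WeyApps\Pvr\NewGL.err' ,
       KEEPNULLS )

Rows that cannot be loaded are written to the error file, and a companion
file with an .Error.Txt extension describes them.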
Also, you could try using BCP instead, in case there is a quirk with
OLE DB. If nothing else, you may get a different error message. (Though
not necessarily a more comprehensible one; error messages around BULK
INSERT/BCP are often cryptic.)
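For instance, something along these lines from a command prompt (the server
name and error-file path are placeholders):

bcp DataGL.dbo.GLDATA in \\WEY192\C$\WeyApps\Pvr\NewGL.txt -f \\WEY192\C$\WeyApps\Pvr\gldata.fmt -e C:\Temp\bcp.err -S YOURSERVER -T

Here -f points to the same format file, -e captures rows that fail, and -T
uses Windows authentication.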
I could have a look at it, but as I mentioned I would need the CREATE
TABLE statement, and a sample data file for which the problem appears.
Put the data file in an attachment, or make it available for download
at a website, so it does not get distorted.
--
Erland Sommarskog, SQL Server MVP, esq...@sommarskog.se
Yes, the problem occurs when the data file contains that single line of data.
Timestamp is in column 44 - so that should be ok.
You can find the related files, including the table create statement, at:
http://www.state.vt.us/tax/pdf.word.excel/pvr/txrpdfs/Files.zip
Thanks for taking a look.
Thanks for the files! Unfortunately I have some bad news for you: it's
bedtime for me, so I cannot look at your files now. But I will try to
give them a look tomorrow.
The first thing to note is that, yes, BEGIN TRY is great, but when you get
problems with BULK INSERT you should run the statement on its own. When I
ran the statement alone I got this output:
Msg 4832, Level 16, State 1, Line 1
Bulk load: An unexpected end of file was encountered in the data file.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error.
The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
No, I don't expect that you would have understood this. There is still a
high mumbo-jumbo factor here. But the first message tells me, who have
fought with BULK INSERT and BCP in the past, that there is a mismatch
between the format file and the data. BCP and BULK INSERT are funny
fellows. They read the format description and then follow it slavishly:
Field 1, Field 2 and so on. If they get out of sync, they have no chance
to recover. For them, no such thing as a line exists; the row terminator
is only the terminator for the last field. This may seem corny, but keep
in mind that they target binary data just as much as they target
character data.
Anyway, there was an error in the format file, at least in regards to
the data file. Here are the two corrected lines:
9   SQLCHAR  0  0  "\","     9   OWNER1       SQL_Latin1_General_CP1_CI_AS
10  SQLCHAR  0  0  ",\""     10  OWNER2       SQL_Latin1_General_CP1_CI_AS
In the sample file, OWNER2 is blank. Now, I would not be surprised if
there are other rows in your data where OWNER2 has a value and is quoted.
I think I mentioned this, but I might not have stressed it clearly enough:
if the file has inconsistent quoting - that is, a field is sometimes quoted
and sometimes not - BULK INSERT and BCP lose. They are completely
unintelligent when it comes to quotes. Your only option in this case
is to read the data with comma as the only delimiter, and get rid of the
quotes inside SQL Server. (Which also means that you need to have
space for the quotes, or use OPENROWSET(BULK).)
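A sketch of that fallback, assuming a hypothetical staging table GLDATA_STAGE
whose columns are all varchar and a couple of characters wider than the real
ones:

-- Load with comma as the only delimiter; the quotes come along as data.
BULK INSERT dbo.GLDATA_STAGE
FROM '\\WEY192\C$\WeyApps\Pvr\NewGL.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\r\n')

-- Then strip the quotes on the way into the real table.
INSERT INTO dbo.GLDATA (YEARGL, TCODEGL, SPANGL /* , ...remaining columns */)
SELECT YEARGL, TCODEGL, REPLACE(SPANGL, '"', '') /* , ... */
FROM   dbo.GLDATA_STAGE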
But if the data can in fact have embedded commas, your defeat is complete.
BCP/BULK INSERT will get out of sync, and everything will be a mess.
This is probably when you need to look at SQL Server Integration Services,
something I have not worked with at all myself.
Finally, don't feel bad if you had the delimiters wrong for OWNER2.
Assembling a format file for a 43-column file with a mix of quoted
and unquoted fields is something that could put anyone to sleep.
Tomorrow I'll have a closer look at the data to see how many instances of
embedded commas I find. If I can remove all embedded commas in the source
data, then BULK INSERT and the format file (after removing quote-delimiters)
should work. Correct? I'll also remove BEGIN-TRY for testing.
Original data source is Access - so I have opportunities on that side to
remove embedded commas.
Also, I suppose that instead of exporting a text file out of Access (2007),
I could create an XML file. But I haven't worked with XML yet - and that
would probably be another long process of trial and error. If I can force
BULK INSERT to work with my existing data file, I'll be able to claim some
progress. (This is my first project with SQL Server.)
I'll post back tomorrow with final results of the BULK INSERT saga. Thanks
for your excellent help!
Keep in mind that with the quotes, some data will not fit your target
table. There are two ways to go. One is to get the data into a temp table
with wider columns, and then use substring to strip the quotes. The other
is to use OPENROWSET(BULK), which permits you to select from the data file
and strip the quotes on the fly. But if memory serves, OPENROWSET(BULK)
requires an XML format file. Then again, if you are down to comma as the
delimiter for all columns, you don't need a format file.
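A sketch of the OPENROWSET(BULK) variant - assuming a hypothetical XML format
file gldata.xml, from which the column names would come:

INSERT INTO dbo.GLDATA (YEARGL, TCODEGL, SPANGL /* , ... */)
SELECT YEARGL, TCODEGL, REPLACE(SPANGL, '"', '') /* , ... */
FROM   OPENROWSET(BULK '\\WEY192\C$\WeyApps\Pvr\NewGL.txt',
                  FORMATFILE = '\\WEY192\C$\WeyApps\Pvr\gldata.xml') AS src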
> Also, I suppose that instead of exporting a text file out of Access
> (2007), I could create an XML file. But I haven't worked with XML yet -
> and that would probably be another long process of trial and error. If I
> can force BULK INSERT to work with my existing data file, I'll be able
> to claim some progress. (This is my first project with SQL Server.)
XML would be a better choice, in that XML gives you a more robust handling
of delimiters and embedded delimiters. Learning to use XML in SQL Server
will also pay off in the long run, but I can agree that for a first project,
that's quite a mouthful.
Yet another option is to create a linked server, which would permit you to access
the Access database from SQL Server. This would require that the Access
database is accessible from within SQL Server. The advantage is that in
this case you can use regular SQL statements, and you don't have to export
the data. The disadvantage is that getting the linked server to work may
prove to be yet another battle.
--
Erland Sommarskog, SQL Server MVP, esq...@sommarskog.se
Made several attempts -- but continued to get the "unexpected end of file
was encountered" error.
So I began to look at other options -- XML in particular, as I have recently
taken a course that included a cursory look at shredding XML data for insert
into tables. But a couple of points were not clear.
1) I'll need to pass an XML string into my stored procedure as a parameter.
How do I accomplish this? When I call the stored procedure from within Access
(a pass-through query) I can pass parameters - but how do I instantiate a
variable with the content of my XML file so that it can be passed to SQL
Server? Or would this somehow be accomplished at the top of my stored
procedure in T-SQL by pointing to the XML file?
2) I assume that BULK INSERT or Format files will not be needed. Correct?
With answers to these questions, I think I'll be on my way! And as you say,
figuring out how to work with XML at the beginning of my encounter with SQL
Server will be of significant benefit down the road.
From SQL Server you can read the file with OPENROWSET(BULK)
using the SINGLE_BLOB option.
Passing the XML document from within Access... You cannot simply do
a parameterised procedure call and use the xml data type, because ADO
does not support that. But you could pass a text parameter: have the
procedure in SQL Server take an nvarchar(MAX) parameter and convert
to XML from there. You could also construct a query batch which goes:
EXEC yourprocedure N'<?xml ....>
and invoke that one. Only watch out for embedded single quotes - but
I believe they are entitised in XML, so it should not be an issue.
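A minimal sketch of the receiving end, with made-up procedure and parameter
names:

CREATE PROCEDURE dbo.usp_LoadGLFromXml
    @xml_text nvarchar(MAX)   -- the XML document arrives as plain text from ADO
AS
BEGIN
    DECLARE @xml_doc xml
    SET @xml_doc = CONVERT(xml, @xml_text)  -- convert once, then shred with .nodes()/.value()
    -- ... shred @xml_doc into the target table here ...
END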
> 2) I assume that BULK INSERT or Format files will not be needed. Correct?
Correct. Even though OPENROWSET(BULK) is the same mechanism as BULK INSERT
under the covers, the SINGLE_BLOB option makes it a simple affair.
--
Erland Sommarskog, SQL Server MVP, esq...@sommarskog.se
After I create the XML file from Access and call the stored procedure, this
is the error:
[Microsoft][SQL Native Client][SQL Server]Transaction count after EXECUTE
indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing.
Previous count = 0, current count = 1. (#266)
Current files that I'm running are available at:
http://www.state.vt.us/tax/pdf.word.excel/pvr/txrpdfs/xmlFiles.zip
But here's a snippet of the stored procedure . . .
DECLARE @xml_doc xml
DECLARE @hdoc INT -- handle to XML doc
SELECT @xml_doc =CONVERT(XML, bulkcolumn, 2)
FROM OPENROWSET(BULK '\\TAX192\C$\TaxApps\Pvr\NewGL.xml', SINGLE_BLOB)
AS x
SELECT @xml_doc
--Create an internal representation of the XML document.
EXEC sp_xml_preparedocument @hdoc OUTPUT, @xml_doc
etc . . .
Most likely, something is not right in the "SELECT @xml_doc = " statement. I
got that syntax from discussion group posts during a frantic several hours of
searching.
Can you identify the problem swiftly and send me on my way? Thanks -- once
again!
Which indicates that you start a transaction that you never commit or
roll back.
> Current files that I'm running are available at:
> http://www.state.vt.us/tax/pdf.word.excel/pvr/txrpdfs/xmlFiles.zip
And indeed:
RETURN 0
COMMIT TRAN
:-)
> SELECT @xml_doc
> --Create an internal representation of the XML document.
> EXEC sp_xml_preparedocument @hdoc OUTPUT, @xml_doc
Uh-uh, that sp_xml_preparedocument is old stuff from SQL 2000. Rather
than that and OPENXML, use the type methods .nodes() and .value(). You
don't have to jazz with any system procedure, and it is more efficient.
I have a very quick example at
http://www.sommarskog.se/arrays-in-sql-2005.html#XML.
I see in your procedure that you use element-centred XML. If you have
control over what Access generates, go for attribute-centred instead.
It's more compact, and when using .nodes() it's more efficient than
element-centred.
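To illustrate the difference with a couple of your columns:

Attribute-centred:  <Temp_GLData YEARGL="2009" TCODEGL="14012" />
Element-centred:    <Temp_GLData><YEARGL>2009</YEARGL><TCODEGL>14012</TCODEGL></Temp_GLData>

With attribute-centred XML you would read T.Item.value('@YEARGL', 'smallint');
with element-centred XML it becomes T.Item.value('YEARGL[1]', 'smallint').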
Admittedly, some creative guesswork has been employed here on my part -
because I was unsure how the structure of my XML data relates to the example
on your "arrays-in-sql-2005" site. Your article was helpful for sure. But my
lack of experience won the day. Three questions, if I may:
1) I have doubts about that "Select Into" statement I put in there - Does it
belong?
2) Regarding the particular error currently preventing successful execution:
Incorrect syntax near '='. It is raised on the first line of the SELECT
statement . . . .
SELECT
YEARGL = T.Item.value('../../@YEARGL', 'smallint'),
TCODEGL = T.Item.value('@TCODEGL', 'int'),
SPANGL = T.Item.value('@SPANGL', 'nvarchar(13)',
............................
3) And in the FROM statement . . . I have no confidence in the path I have
used:
@xml_doc.nodes('/Temp_GLData') as T(Item)
Should there be more than one node? A "root" node perhaps?
The full stored procedure from today is located at:
http://www.state.vt.us/tax/pdf.word.excel/pvr/txrpdfs/XmlFiles.zip
Also today I've determined that I can't control element-centered v.
attribute-centered coming out of the Access export. Looks like I have to use
element-based XML.
Not sure what you mean, I couldn't see any SELECT INTO in the procedure...
> 2) Regarding the particular error currently preventing successful
> execution: Incorrect syntax near '='. It is raised on the first line
> of the SELECT statement . . . .
>
> SELECT
> YEARGL = T.Item.value('../../@YEARGL', 'smallint'),
> TCODEGL = T.Item.value('@TCODEGL', 'int'),
> SPANGL = T.Item.value('@SPANGL', 'nvarchar(13)',
Now, that's a simple one: look at the end of the last line - the closing
parenthesis for .value() is missing. All lines in the SELECT list but the
first two are like that.
> 3) And in the FROM statement . . . I have no confidence in the path I have
> used:
>
> @xml_doc.nodes('/Temp_GLData') as T(Item)
> Should there be more than one node? A "root" node perhaps?
Yes,
@xml_doc.nodes('/dataroot/Temp_GLData') as T(Item)
as "dataroot" is the top-level tag in the document. I think you would
get away with "Temp_GLData" without any leading slash, but I've learnt
that can be a performance sinker.
> The full stored procedure from today is located at:
> http://www.state.vt.us/tax/pdf.word.excel/pvr/txrpdfs/XmlFiles.zip
Thanks, you're too kind!
> Also today I've determined that I can't control element-centered v.
> attribute-centered coming out of the Access export. Looks like I have to
> use element-based XML.
Too bad. This means that you need to remove the leading @, which specifies
that you are looking for an attribute. Instead you need to add [1] to
specify that it is the first element.
Also, you had:
'../../@YEARGL'
This should be YEARGL[1] just like the rest.
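Putting the pieces together, the first lines of your SELECT would look
something like this (just a sketch of the first three columns; the remaining
40 follow the same pattern):

SELECT
    YEARGL  = T.Item.value('YEARGL[1]',  'smallint'),
    TCODEGL = T.Item.value('TCODEGL[1]', 'int'),
    SPANGL  = T.Item.value('SPANGL[1]',  'nvarchar(13)')
    -- ... and so on for the rest ...
FROM @xml_doc.nodes('/dataroot/Temp_GLData') AS T(Item)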
I understand this is more than a mouthful right now, but SQL Server MVP
Michael Coles has a book on Apress entitled "Pro SQL Server 2008 XML".
I removed the leading @ and added [1] - plus had already worked the other
items earlier today -- and . . . . we have SUCCESS!!!
Can you believe it?
Thanks for persevering!
Never give up!