Each insert is independent of the others, so transactions aren't needed for
database integrity. However, we do want to use them for the performance
gain. When I use transactions, I get the following error on roughly 1
out of every 100 commits:
This SqlTransaction has completed; it is no longer usable.
at System.Data.SqlClient.SqlTransaction.ZombieCheck()
at System.Data.SqlClient.SqlTransaction.Commit()
To try to track down the cause, I put trace statements at every transaction
operation so I could ensure the transaction wasn't being closed before
calling Commit. I've confirmed that my app wasn't closing the transaction.
I then ran the app again using the exact same input data and it succeeded.
If I turn the logging off it fails again; turn it back on and it succeeds.
This on/off toggle is done via the app.config without recompiling.
Obviously the act of logging changes the timing and causes it to work, which
would normally indicate a threading issue. However, my app isn't multi-threaded.
I've seen one MS KB entry indicating that a bug in the .NET 2.0 Framework could
cause similar issues (http://support.microsoft.com/kb/912732). However, the
fix it provides doesn't solve this issue.
Thanks for using Microsoft Newsgroup Support Service, this is Colbert Zhou
[MSFT] and I will be working on this issue with you.
The issue we are facing here is not the same one as that KB article
describes. From the KB article, we can see that exception is thrown
at System.Data.SqlClient.SqlTransaction.ZombieCheck()
at System.Data.SqlClient.SqlTransaction.get_IsolationLevel()
But, the exception here is thrown
at System.Data.SqlClient.SqlTransaction.ZombieCheck()
at System.Data.SqlClient.SqlTransaction.Commit()
This is why that hotfix does not resolve our issue. Actually, many
different situations can lead to this exception message.
I wrote some code on my side for a quick test, but could not reproduce
the issue when calling tran.Commit(). So I think this may be
related to how the code is written. To better support this case, would
you mind posting your code in the newsgroup so that I can test it locally?
After the issue is reproduced in our environment, we can do further
investigation to find the root cause and a possible solution or workaround.
Thanks for your understanding.
By the way, there is a SQL variable, @@TRANCOUNT, which can be used
to trace the number of active transactions for the current connection. The
increment/decrement rules for @@TRANCOUNT are:
A BEGIN TRANSACTION statement increments @@TRANCOUNT by 1.
ROLLBACK TRANSACTION decrements @@TRANCOUNT to 0.
ROLLBACK TRANSACTION savepoint_name does not affect @@TRANCOUNT.
COMMIT TRANSACTION or COMMIT WORK decrements @@TRANCOUNT by 1.
So I suggest you also log @@TRANCOUNT to analyze the problem. An
exception thrown from the Commit() method usually means that @@TRANCOUNT
was already decremented to 0 before Commit() was called. Does this make
sense to you?
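As a quick sketch of this logging suggestion (the LogTranCount helper name is my own invention, and it assumes an open SqlConnection enlisted in the transaction):

```csharp
// Hypothetical helper: queries the server-side @@TRANCOUNT for this
// connection so it can be logged around each transaction operation.
private static void LogTranCount(SqlConnection connection, SqlTransaction transaction)
{
    using (SqlCommand command = new SqlCommand("SELECT @@TRANCOUNT", connection, transaction))
    {
        int tranCount = (int)command.ExecuteScalar();
        Console.WriteLine("@@TRANCOUNT = " + tranCount);
    }
}
```

Calling this before and after each insert, and again just before Commit(), would show whether the count has already dropped to 0.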
Wish this helps! Have a nice day, Andrew!
Best regards,
Colbert Zhou (colb...@online.microsoft.com, remove 'online.')
Microsoft Online Community Support
Delighting our customers is our #1 priority. We welcome your comments and
suggestions about how we can improve the support we provide to you. Please
feel free to let my manager know what you think of the level of service
provided. You can send feedback directly to my manager at:
msd...@microsoft.com.
==================================================
Get notification to my posts through email? Please refer to
http://msdn.microsoft.com/en-us/subscriptions/aa948868.aspx#notifications.
Note: The MSDN Managed Newsgroup support offering is for non-urgent issues
where an initial response from the community or a Microsoft Support
Engineer within 1 business day is acceptable. Please note that each follow
up response may take approximately 2 business days as the support
professional working with you may need further investigation to reach the
most efficient resolution. The offering is not appropriate for situations
that require urgent, real-time or phone-based interactions or complex
project analysis and dump analysis issues. Issues of this nature are best
handled working with a dedicated Microsoft Support Engineer by contacting
Microsoft Customer Support Services (CSS) at
http://support.microsoft.com/select/default.aspx?target=assistance&ln=en-us.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
I was interested in reading your message; I never knew there could be a
performance gain from using a transaction, and I was wondering how that could be.
I also skimmed the reply from Colbert.
All his disclaimer text reads much like a Microsoft product
description, as if they are very unsure whether their
products really work, whereas my impression is that their products are in
general of excellent quality, as is the help the MSFTs give.
However, as I was thinking about the performance gain of the transaction, it
also occurred to me that you may be inserting a huge amount of data in the
transaction.
Reading your problem, I got the idea that your transaction could be
timing out. For me, that is the main source of such problems.
Just an idea,
Cor
"Andrew Faust" <aef...@newsgroup.nospam> wrote in message
news:%23tXtpjP...@TK2MSFTNGP02.phx.gbl...
There are definite performance gains. When inserting large numbers of
records I've found that even doing a commit on every 10 records can chop
processing time in half.
This isn't limited to SQL Server either. I've found Oracle to give similar
speed boosts.
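As a rough sketch of the batching Andrew describes (the batch size, id list, and table are illustrative assumptions, not his actual code):

```csharp
// Sketch: wrap every 10 inserts in one transaction instead of letting
// each insert auto-commit, then start a fresh transaction for the next batch.
const int batchSize = 10;
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlTransaction transaction = connection.BeginTransaction();
    for (int i = 0; i < ids.Count; i++)
    {
        using (SqlCommand command = new SqlCommand(
            "insert into [TransactionTest] ([id], [created_date], [somedata], [moredata]) " +
            "values (@id, '1/1/1900', 'Batch', 'More Data')", connection, transaction))
        {
            command.Parameters.AddWithValue("@id", ids[i]);
            command.ExecuteNonQuery();
        }
        if ((i + 1) % batchSize == 0)
        {
            transaction.Commit();
            transaction = connection.BeginTransaction();
        }
    }
    transaction.Commit(); // commit any remaining inserts in the final partial batch
}
```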
> Reading your problem I got then the idea that your transaction could be
> timed out.
I don't think it's timing out. If I slow the app down by adding logging, it
starts working properly. If it were a timeout issue, I would expect adding
the logging to aggravate the problem. I'll be posting some code as a
response to Colbert that you can take a look at.
Thanks for your feedback
Andrew Faust
"Cor Ligthert[MVP]" <Notmyfi...@planet.nl> wrote in message
news:OKQ15VXn...@TK2MSFTNGP04.phx.gbl...
What's happening is that in some cases my transaction is getting destroyed
if there was an error inserting a record into the database. The reason the
record fails to load actually affects whether or not my transaction is
destroyed. For example, if the record fails to load due to a field being too
large, the transaction stays alive. If it fails to load due to a date
conversion error, it is destroyed.
I built a test (included below) that illustrates this. The test program will
try to do two transactions with each including 3 records. Each transaction
has 1 insert attempt that is invalid. I would expect the results of the two
transactions to be the same. The 2 valid records should load and the 1
invalid record shouldn't. However, as you'll see the transaction that tries
to load an invalid date only loads 1 record.
I'm not sure if this was by design or if it's a bug. I'd suspect a bug as it
seems that the results should be more consistent than this.
I've tried this on SQL Server 2005 SP3 and SQL Server 2008 Express.
First Create a database with this table.
/****** Object: Table [dbo].[TransactionTest] Script Date: 03/05/2009 13:25:23 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[TransactionTest](
[id] [int] NOT NULL,
[created_date] [datetime] NOT NULL,
[somedata] [varchar](100) NOT NULL,
[moredata] [varchar](20) NOT NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
Then run the following app to see this issue in action.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.Data.SqlClient;
using System.Data.Common;

namespace TransactionTest
{
    class Program
    {
        static void Main(string[] args)
        {
            string connectionString = "Data Source=localhost;Initial Catalog=Test;Integrated Security=True;";

            Console.WriteLine("Testing inserts with a truncation error");
            SingleTransactionTruncation(connectionString);
            Console.WriteLine("");
            Console.WriteLine("Testing inserts with a date conversion issue");
            SingleTransactionDate(connectionString);
        }

        private static void SingleTransactionTruncation(string connectionString)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();
                SqlTransaction transaction = connection.BeginTransaction();

                InsertRecord(connection, transaction, "insert into [TransactionTest] ([id], [created_date], [somedata], [moredata]) values ('1', '1/1/1900', 'Truncation', 'More Data')");
                // This one will fail due to a string truncation issue
                InsertRecord(connection, transaction, "insert into [TransactionTest] ([id], [created_date], [somedata], [moredata]) values ('2', '1/1/1900', 'Truncation', 'Data that is too long to fit in the field so will error on insert')");
                InsertRecord(connection, transaction, "insert into [TransactionTest] ([id], [created_date], [somedata], [moredata]) values ('3', '1/1/1900', 'Truncation', 'More Data')");

                try
                {
                    transaction.Commit();
                    Console.WriteLine("Commit Succeeded");
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.Message);
                }
                transaction.Dispose();
            }
        }

        private static void InsertRecord(SqlConnection connection, SqlTransaction transaction, string insertQuery)
        {
            using (SqlCommand command = new SqlCommand())
            {
                command.Connection = connection;
                command.Transaction = transaction;
                command.CommandText = insertQuery;
                try
                {
                    command.ExecuteNonQuery();
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.Message);
                }
            }
        }

        private static void SingleTransactionDate(string connectionString)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();
                SqlTransaction transaction = connection.BeginTransaction();

                InsertRecord(connection, transaction, "insert into [TransactionTest] ([id], [created_date], [somedata], [moredata]) values ('1', '1/1/1900', 'Date', 'More Data')");
                // This one will fail due to a date conversion issue
                InsertRecord(connection, transaction, "insert into [TransactionTest] ([id], [created_date], [somedata], [moredata]) values ('2', 'An Invalid Date', 'Date', 'More Data')");
                InsertRecord(connection, transaction, "insert into [TransactionTest] ([id], [created_date], [somedata], [moredata]) values ('3', '1/1/1900', 'Date', 'More Data')");

                try
                {
                    transaction.Commit();
                    Console.WriteLine("Commit Succeeded");
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.ToString());
                }
                transaction.Dispose();
            }
        }
    }
}
Andrew Faust
"Colbert Zhou [MSFT]" <colb...@online.microsoft.com> wrote in message
news:Ym45M#WnJHA...@TK2MSFTNGHUB02.phx.gbl...
Thanks very much for your detailed information! I can now reproduce the issue
on my side. I debugged the code and found the following.
After the second InsertRecord's ExecuteNonQuery executes in the
SingleTransactionDate function, command.Transaction is null while the
local variable transaction is not null. This looks a little confusing,
because we just set command.Transaction from the parameter transaction.
Actually, this is entirely out of our code's control: the command's
transaction has already been rolled back in SQL Server before control returns
to our application. The ADO.NET implementation sets command.Transaction to null
and transaction.IsZombied to true behind the scenes, because it knows that SQL
Server has already rolled the transaction back on the server side.
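Based on this observation, one defensive check (a sketch, reusing the commit pattern from the test program above) is to test the transaction's Connection property before committing; ADO.NET reports it as null once the transaction is no longer valid:

```csharp
// Sketch: a zombied SqlTransaction reports Connection == null,
// so check it before calling Commit() to avoid the ZombieCheck exception.
if (transaction.Connection != null)
{
    transaction.Commit();
}
else
{
    Console.WriteLine("Transaction was already rolled back server-side; skipping Commit().");
}
```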
Our exception is thrown because the Commit method calls the ZombieCheck method
to check whether the transaction is still available. The following ZombieCheck
code, from Reflector, shows how the exception is thrown:
private void ZombieCheck()
{
    if (this.IsZombied)
    {
        if (this.IsYukonPartialZombie)
        {
            this._internalTransaction = null;
        }
        throw ADP.TransactionZombied(this);
    }
}
Unfortunately, based on my understanding, this is by-design behavior.
SQL Server uses XACT_ABORT to control transaction rollback
behavior. Setting it to ON makes every transaction roll back when
an error is encountered. Setting it to OFF (the default) makes rollback
depend on the error's severity. See the documentation:
http://msdn.microsoft.com/en-us/library/ms188792.aspx
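As a sketch of that setting (assuming the connection/transaction setup from the test program), XACT_ABORT can be turned on for the session from ADO.NET, which makes every runtime error roll back the whole transaction consistently:

```csharp
// Sketch: set XACT_ABORT ON for this session so that any run-time error
// aborts and rolls back the entire transaction, regardless of severity.
using (SqlCommand command = new SqlCommand("SET XACT_ABORT ON", connection, transaction))
{
    command.ExecuteNonQuery();
}
```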
I wanted to help with this, so I researched it extensively and also talked to
somebody on our SQL support team. He confirmed that there is no
way to disable the automatic rollback in SQL Server if the error severity is
high. So our objective conflicts with the transaction's design, and
we may have to work around using transactions here.
What about using stored procedures to improve the performance? Whenever a
client application needs to send Transact-SQL to SQL Server, send it in the
form of a stored procedure instead of a script or embedded Transact-SQL.
Stored procedures offer many benefits, including (quoted from
http://www.nimishtandon.co.cc/):
a. Reduced network traffic and latency, boosting application performance.
b. Stored procedure execution plans can be reused, staying cached in SQL
Server's memory, reducing server overhead. This is also mostly true for
Transact-SQL code sent to SQL Server outside of a stored procedure.
c. Client execution requests are more efficient. For example, if an
application needs to INSERT a large binary value into an image data column
not using a stored procedure, it must convert the binary value to a
character string (which doubles its size), and send it to SQL Server. When
SQL Server receives it, it then must convert the character value back to the
binary format. This is a lot of wasted overhead. A stored procedure
eliminates this issue as parameter values stay in the binary format all the
way from the application to SQL Server, reducing overhead and boosting
performance.
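As a sketch of this suggestion (the procedure name InsertTransactionTest and its parameters are hypothetical, not part of the schema above):

```csharp
// Sketch: call a hypothetical stored procedure instead of sending embedded SQL.
// Parameter values travel in their native types, avoiding conversion overhead.
using (SqlCommand command = new SqlCommand("InsertTransactionTest", connection, transaction))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@id", 1);
    command.Parameters.AddWithValue("@created_date", new DateTime(1900, 1, 1));
    command.Parameters.AddWithValue("@somedata", "Proc");
    command.Parameters.AddWithValue("@moredata", "More Data");
    command.ExecuteNonQuery();
}
```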
Thanks again for contacting the newsgroup service, Andrew! If you have any
further questions or concerns on this, please do not hesitate to let me know.
Best regards,
Colbert Zhou (colb...@online.microsoft.com, remove 'online.')
Microsoft Online Community Support
"Andrew Faust" <aef...@newsgroup.nospam> wrote in message
news:ObH8REdn...@TK2MSFTNGP02.phx.gbl...
SQL Server should never rollback your transaction automatically. The
rollback command is available to developers; it should be our decision to
roll back the transaction. Even if there are technical reasons why SQL
Server needs to do automatic rollbacks in some cases, it shouldn't do it
because of something like a datetime conversion error on an insert. An
insert failure due to a datetime conversion, foreign key violation, field
length error or any other reason doesn't make much difference. In all cases
the record fails to load so they're pretty much equal severity to an
application developer. SQL Server should treat them equally as well.
Thanks again,
Andrew Faust
"Colbert Zhou [MSFT]" <colb...@microsoft.com> wrote in message
news:eeQ4rEln...@TK2MSFTNGP03.phx.gbl...
Sure. I have sent a discussion email on this to the appropriate internal
channel. If I receive any feedback from the product team members, I will
update it here!
Have a nice weekend!
Best regards,
Colbert Zhou (colbertz @online.microsoft.com, remove 'online.')
I discussed this issue with some senior SQL engineers internally. Their
feedback leans toward the current behavior. The reason is that a transaction
is designed as an atomic action, and when a high-severity error occurs
and is not recoverable, they think it makes more sense for the rollback to
happen automatically. If it were not designed this way, what would happen
when the programmer's transaction.Rollback() fails for an unexpected reason?
So we actually recommend using transactions in ADO.NET in the
following way, because trans.Commit() can throw an exception when the
transaction has already been rolled back on the server side:
bool transaction_ok = true;
try
{
    // insert, update, delete, etc.
}
catch
{
    // handle exception
    transaction_ok = false;
}
finally
{
    try
    {
        if (transaction_ok)
            trans.Commit();
        else
            trans.Rollback();
    }
    catch
    {
        // handle commit/rollback specific errors
    }
}
If you have any further questions on this, please feel free to let me know!
Thanks again for choosing a Microsoft product, and if you have any other
feedback on the SQL Server product, I encourage you to submit it at
http://connect.microsoft.com/SQLServer/Feedback.
The main issue I have is the inconsistency from the developer's perspective.
SQL Server automatically rolls back the transaction for insert failures caused
by datetime conversion errors. However, it does not roll back automatically for
insert failures caused by a data element that is too long.
For some reason the SQL Server team decided that the datetime conversion
issue is of higher severity. I disagree with that assessment. In both cases
the end result to the application developer is the same: the record fails to
load. From the application developer's standpoint the severity is the same, so
it makes sense to treat them the same.
The other issue I have with the decision to automatically roll back is the
lack of choice. I understand the need for atomicity. However, only the app
developer knows which operations actually need to be atomic. In the case of
my application, each insert is its own atomic operation. I use transactions
purely for the performance gain. Simply wrapping batches of inserts together
in a transaction has nearly halved my app's processing time.
I should point out before someone else does that I can't simply re-architect
my app to use features like Bulk Copy or SSIS. My application needs to be
able to work with as many different databases as possible.
I've worked around the issue. However, I'd still appreciate it if you
relayed my comments to the SQL Server team about the severity levels and the
fact that atomicity isn't the only reason to use transactions.
Andrew Faust
Yes, I can understand your concerns here. Actually, the transaction
rollback behavior was reported via the Connect site before:
https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=409640
From Umachandar's comment, we can see that the SQL team is already on
track to improve error handling and to clear up some of the
inconsistencies in the next major version of SQL Server.
Thanks again for your feedback. Have a nice weekend!
Best regards,
Colbert Zhou (colbertz @online.microsoft.com, remove 'online.')
Microsoft Online Community Support
Andrew Faust
"Colbert Zhou [MSFT]" <colb...@online.microsoft.com> wrote in message
news:NEzo1m8o...@TK2MSFTNGHUB02.phx.gbl...