Tuesday, March 20, 2012

Bad Update Performance with very large recordsets

When updating a table with 80 million records we are experiencing very bad
performance. We found that using MAXDOP 1 to keep the process in one thread
helps, but something is still causing problems.
Updating a single nullable bigint field to a bigint value taken from a table
inner joined with the current table is crawling.
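Roughly, the statement has this shape (the table and column names below are
just placeholders):

    UPDATE t
    SET    t.SomeBigintCol = s.NewValue
    FROM   dbo.BigTable AS t
           INNER JOIN dbo.SourceTable AS s
               ON s.KeyCol = t.KeyCol;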
Should we remove the nullability of the field and set it to a default of -1?
Would this ensure space is allocated for the field? What else can we look
for?
What about NOLOCK or SERIALIZABLE? Could those help?
What would be the *fastest* way to update records in a table, assuming only one
connection to the database? How do fields allowing NULLs affect this?
Thanks.

Never try to update 80 million rows at once if you can avoid it. Wrap the
updates in smaller batches of, say, 10K, 50K, or 100K rows. If no one else
needs to use the table while this is happening, you can use the TABLOCK hint
to help.
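A minimal sketch of that batching pattern, assuming SQL Server 2005 or later
(for UPDATE TOP), that the target column starts out NULL and the joined values
are non-NULL, and using placeholder table and column names:

    DECLARE @rows int;
    SET @rows = 1;

    WHILE @rows > 0
    BEGIN
        -- Update one batch of rows that have not been populated yet.
        UPDATE TOP (50000) t
        SET    t.SomeBigintCol = s.NewValue
        FROM   dbo.BigTable AS t WITH (TABLOCK)
               INNER JOIN dbo.SourceTable AS s
                   ON s.KeyCol = t.KeyCol
        WHERE  t.SomeBigintCol IS NULL;

        SET @rows = @@ROWCOUNT;
    END

On SQL Server 2000, which has no UPDATE TOP, the same loop can be driven with
SET ROWCOUNT 50000 before the UPDATE (and SET ROWCOUNT 0 afterwards).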
--
Andrew J. Kelly
SQL Server MVP
"Kory Yingling" <kory@.removeme-mlsc.com> wrote in message
news:u$j1RcBbDHA.1744@.TK2MSFTNGP12.phx.gbl...
> When updating a table with 80 million records we are experiencing very bad
> times. We found using maxdop1 to keep the process in one thread helpful,
> but something is still causing problems.
> Updating one bigint field which is nullable to a bigint value from a table
> inner joined with the current table is crawling.
> Should we remove the nullability of the field and set it to a default
of -1?
> Would this ensure space is allocated for the field? What else can we look
> for?
> What about NO LOCK or Serializable? Could
> What would be the *fastest* way to update records in a table assuming only
1
> connection to the database? How do fields allowing NULLs affect this?
> Thanks.
>
