Very frequently I have seen DBAs hold misconceptions about user-managed hot backups. The most common questions are:
1. When a tablespace is in backup mode, do DML activities still update the records?
2. Why is there no change in the SCN?
3. Why is there excess redo generation during a hot backup?
A long time ago, the same questions were asked on the OTN forum, and the explanation given by Howard (aka HJR on the forums) was the best I have seen. I am recording it here for my future reference, in the hope that it may help others too.
SQL> select file#, checkpoint_change# from v$datafile;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1         7667692204
         2         7667692204
         3         7667692204
         4         7667692204
         5         7667692204
         6         7667692204
         7         7667692204
         8         7667692204
         9         7667692204
        10         7667692204
        11         7667692204
        12         7667692204

12 rows selected.
Note: we start with all files with the same checkpoint change number. The entire database is synchronised.

SQL> alter tablespace data_medium begin backup;

Tablespace altered.

SQL> select file#, checkpoint_change# from v$datafile;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1         7667692204
         2         7667692204
         3         7667692204
         4         7667692204
         5         7667692204
         6         7667727909
         7         7667692204
         8         7667692204
         9         7667692204
        10         7667692204
        11         7667692204
        12         7667692204

12 rows selected.
Note: file 6 is now out-of-synch with the rest of the database, because it was put into hot backup mode. Its checkpoint change number is ahead of the rest of the database.

SQL> alter tablespace data_medium end backup;

Tablespace altered.

SQL> select file#, checkpoint_change# from v$datafile;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1         7667692204
         2         7667692204
         3         7667692204
         4         7667692204
         5         7667692204
         6         7667727909
         7         7667692204
         8         7667692204
         9         7667692204
        10         7667692204
        11         7667692204
        12         7667692204

12 rows selected.
And note now that although the END BACKUP command has been issued, the datafiles remain in exactly the same state as they were in beforehand. File 6 is still out of synch with all the other files. No additional checkpoints have been issued, and nothing additional has happened to file 6 or anything else to make things re-synchronise.
Putting a tablespace INTO hot backup mode does indeed cause a checkpoint to take place for that tablespace. But taking it OUT of hot backup mode does NOT. The SCNs in the headers of that tablespace will re-synchronize next time there is a system-wide checkpoint, but not until then.
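You can watch the re-synchronisation happen yourself by forcing a system-wide checkpoint (a sketch; it assumes you have the ALTER SYSTEM privilege, and the tablespace name is the same data_medium used above):

```sql
-- Force a system-wide checkpoint: CKPT updates every datafile header
-- with the current checkpoint change number.
ALTER SYSTEM CHECKPOINT;

-- All files, including the one that was in backup mode, should now
-- show the same (new) checkpoint_change#.
SELECT file#, checkpoint_change# FROM v$datafile;
```

A log switch (ALTER SYSTEM SWITCH LOGFILE) would eventually have the same re-synchronising effect, as the text notes.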
So no, the file header is NOT "updated when we take tablespace out of the backup mode using online redo log files", though the syntax of that particular sentence is in any case so bizarre as to defy rational explanation and require pure guesswork.
As to the original poster’s questions, therefore:
"I Just want to know when DML activities happen when tablespace in backup mode how the records will get update? why there is no change in SCN? why there is excess redo generation compare to normal DML activitis?"
1. DML activities take place entirely normally. Putting a tablespace into hot backup mode stops CKPT from updating its header, but does absolutely nothing to prevent your server process from updating the other contents of the file. If you update every row in a million-row table during a hot backup of the tablespace that houses that table, every single row gets updated exactly as it would have been if no hot backup were taking place. It wouldn't be much of a hot backup, after all, if the data file contents were stone-cold frozen for the duration, would it?
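A quick way to convince yourself of this (a sketch; the table t and column col are hypothetical, assumed to live in the data_medium tablespace from the earlier demonstration):

```sql
ALTER TABLESPACE data_medium BEGIN BACKUP;

-- This DML succeeds and changes rows in the datafile exactly as normal;
-- only the file header's checkpoint_change# is frozen, not the data.
UPDATE t SET col = col + 1;
COMMIT;

ALTER TABLESPACE data_medium END BACKUP;
```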
2. There IS a change in the SCN when you put a tablespace into hot backup mode. See above. When you take it out of hot backup mode, there is no special update to the SCN… but the next time all data file headers are updated thanks to a system-wide checkpoint (caused by such things as a log switch or by the DBA asking for one), then the header will be updated with the latest SCN as all other datafile headers are.
Getting slightly pickier: it’s the checkpoint change number which is stored in the headers of datafiles, and it is that which doesn’t change during a hot backup. The system change number, of which the checkpoint change number is simply a snapshot at a given point in time, is continually increasing across the entire system, all the time.
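The distinction is easy to see for yourself. A sketch, assuming you have SELECT privilege on the relevant views and on DBMS_FLASHBACK: the function returns the live, continually-advancing SCN, while v$datafile_header shows the frozen checkpoint snapshot.

```sql
-- The system change number: always increasing, backup mode or not.
SELECT dbms_flashback.get_system_change_number AS current_scn FROM dual;

-- The checkpoint change number recorded in each datafile header;
-- for a file whose tablespace is in hot backup mode, this stops advancing
-- (and the FUZZY column for that file shows YES).
SELECT file#, checkpoint_change#, fuzzy FROM v$datafile_header;
```

Running the first query twice in quick succession shows the SCN moving even on an idle system, while checkpoint_change# stays put between checkpoints.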
3. In a user-managed backup, it is the operating system which copies things, and it copies things which the operating system is aware of, such as the 'operating system block', which is commonly 512 bytes in size. An Oracle block is made up of multiple operating system blocks (16 in the case of an 8K block and 512-byte OS blocks). Oracle knows that 16 OS blocks make up one Oracle block; the operating system itself hasn't a clue of their inter-relatedness.
Therefore, the dumb operating system will copy any one of those 16 blocks at any time as it sees fit, because it doesn’t realise that if it copies one, it really ought to copy the other 15 as well. If this was a cold user-managed backup, that wouldn’t matter, because nothing would be happening to any of the 16 OS blocks, and however long it took the OS to copy all of them, the result would be the same as if all 16 had been copied at the same instant.
But this is a hot backup, so things ARE happening to those blocks. Therefore, it’s possible for the OS to copy, say, three of them, just at the point where Oracle is about to change one of its blocks with an update or insert. Three OS blocks thus get copied in one state, and the remaining thirteen OS blocks get copied in a different state. You now have one 8K block in a backup that is internally all over the place and utterly unusable. It’s called a fractured block, and it is what happens when one entity (the OS) copies things oblivious to the significance invested in what it’s copying by another (the Oracle database).
The cure for a fractured block is to have a clean copy of the block as it was before any updates happened to it. To that clean copy of the block, you can apply all redo that was later generated that affected it. And that is what Oracle therefore ensures exists. If a tablespace is in hot backup mode, the first update that is about to change an Oracle block causes that entire block to be written into the redo logs. There's your clean copy. The copy that the actual backup operation is creating in the copied datafile can now be fractured or not, as chance allows: there's always going to be a good version of it stored inside the redo logs.
However, an update that would usually generate 200 bytes of redo has just caused an entire 8K block image to be written into the redo logs. That increase in redo generation is a hallmark of user-managed hot backups (and one reason you should use RMAN instead).
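You can measure the overhead for yourself by sampling your session's 'redo size' statistic around the same DML, once normally and once with the tablespace in backup mode. A rough sketch (the exact numbers depend on your block size and on how many distinct blocks the DML touches for the first time):

```sql
-- Cumulative redo generated by the current session, in bytes.
SELECT s.value AS redo_bytes
FROM   v$mystat s
       JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';

-- Run your DML, re-query, and note the difference. Then repeat inside:
ALTER TABLESPACE data_medium BEGIN BACKUP;
-- ... same DML: the first change to each Oracle block now logs the
-- whole block image, not just the change vector ...
ALTER TABLESPACE data_medium END BACKUP;
```

The second run's redo delta should be markedly larger, and the gap grows with the number of distinct blocks touched while in backup mode.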