190 CHAPTER 5 • UNDERSTANDING ORACLE8i ARCHITECTURE

controllers, not only will you improve the performance of LGWR, but you will also protect your online redo logs from failure or human error. If one member of a redo log group becomes corrupted, that will not cause the rest of the database to fail. Rather, Oracle will simply close the log file, note an error in the LGWR trace file, and continue to write to the remaining log files. If all of the online redo logs become corrupted, LGWR will fail. Then you will need to shut down the database, investigate the cause of the problem, and probably recover the database. If Oracle cannot open an existing redo log file group, you will need to shut down the database, determine what the problem with the redo log file is, and then restart the database. Database recovery in the form of media recovery will likely not be required in this case.

System Monitor (SMON)

The SMON process is responsible for a variety of database operations, including the following:

• Cleaning up temporary segments
• Recovering from a database crash
• Coalescing database free space
• Running recovery transactions against unavailable datafiles
• Recovering an instance of a failed Oracle Parallel Server (OPS) node
• Registering instances with the listener
• Cleaning up the OBJ$ table
• Shrinking rollback segments back to their OPTIMAL size setting
• Managing offline rollback segments in PENDING OFFLINE mode
• Updating statistics if an object is set to monitoring

Process Monitor (PMON)

The PMON process is responsible for recovering failed user processes and cleaning up any resources that might have been allocated to those processes. If you are running MTS, PMON is responsible for recovery of dispatcher and server processes. PMON also monitors background processes, restarting them if they fail or shutting down instances if required (though PMON shutdowns are not very gracious!).
Also, PMON is responsible for registering the database with the Oracle listener.

Checkpoint (CKPT)

The CKPT process is responsible for updating certain structures in the database datafile headers, as well as in the control files of the database, so that these files are synchronized.

Copyright ©2002 SYBEX, Inc., Alameda, CA
Every three seconds, CKPT will wake up and determine if there are checkpoint events to process. Also, CKPT is responsible for notifying DBWn to start checkpoint processing when a checkpoint event is raised. (Prior to Oracle8, if the CKPT process were not running, this job would fall to the DBWn process, which would further burden it.)

Optional Processes

The following processes are optional, and their use depends on the setup of your database:

• ARCH, the archiver process, is normally enabled when the database is in ARCHIVELOG mode. You can have it start automatically via the init.ora parameter LOG_ARCHIVE_START, or you can start it from SQL via the ALTER SYSTEM command.
• SNPn, the job queue (or snapshot) processes, are associated with the Oracle Job Scheduler.
• RECO, the Distributed Database Recovery process, is associated with distributed transaction processing in Oracle. It is responsible for recovering transactions that are left in a prepared state after the loss of a connection to a remote database. This process can be disabled by setting the parameter DISTRIBUTED_TRANSACTIONS to 0.
• LCKn (Lock processes), LMON (Lock Manager), LMD (Lock Manager Daemon), and BSP (Block Server Process) are part of the Oracle Distributed Lock Manager (DLM) processes associated with OPS.
• Dnnn, the MTS Dispatcher processes, act as the liaison between the client sessions and the shared server processes in a Net8 configuration.
• Snnn, the MTS Shared Server processes, read requests from the request queue, process them, and then forward the result set to the response queue.
• Qnnn, the Queue Monitor processes, are associated with Oracle Advanced Queuing. These processes are controlled with the init.ora parameter AQ_TM_PROCESSES.
• LISTENER, the Oracle TNS listener process, facilitates Net8 connections between the database server and the client.
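The two ways of starting the archiver mentioned in the first bullet can be sketched as follows; treat the init.ora placement as an assumption about your configuration:

```sql
-- init.ora fragment: start ARCH automatically at instance startup
-- LOG_ARCHIVE_START = TRUE

-- Or start (and later stop) the archiver manually from SQL:
ALTER SYSTEM ARCHIVE LOG START;
ALTER SYSTEM ARCHIVE LOG STOP;
```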
These processes are discussed in more detail in the chapters that describe the database functions with which they are associated.
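As a quick sanity check, you can see which of these background processes are actually running in your instance. The query below is a sketch; the PADDR filter is a common idiom for excluding background processes that have not been started:

```sql
-- List the background processes currently running in this instance
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00';
```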
Oracle Database Control Structures

In this section, we will review the concepts and internals of the Oracle database that contribute to maintaining the integrity of the database. First, we will discuss the nature of transactions in Oracle and the structures associated with transactions and data durability: commits, rollbacks, and checkpoints. Next, we will look at Oracle’s internal numbering mechanism: the system change number (SCN). Then we will examine rollback segment generation in Oracle (also known as undo), database datafiles, control files, and Oracle redo logs.

Transaction Commits and Rollbacks

A transaction is at the heart of the Oracle database. Without transactions, the database data never changes—rows are never added, altered, or deleted—making the database pretty useless for the most part. A transaction is the life of a series of different Oracle DML statements. A transaction begins with an initial connection to the database by a user process or after a previous transaction has been completed.

Transactions end successfully (the changes are saved to the database) at the implementation of a commit. Commits can be forced through the use of the COMMIT command. Commits can also be implied through the use of certain DDL commands. Transactions end unsuccessfully (database data changes are not saved) with the issuance of a ROLLBACK command or if the user process encounters some sort of fatal failure. Once a transaction has been committed, it cannot be rolled back.

In many cases, such as with the default behavior of SQL*Plus, if you exit the process normally, an implied commit is issued, causing the transactional changes to be committed. Even terminations of certain user processes that might seem abnormal (such as hitting Ctrl+C in SQL*Plus and having SQL*Plus terminate, as it does in some cases) will cause an implied commit of the data.
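A minimal illustration of these transaction boundaries, using a hypothetical EMP table:

```sql
-- A transaction begins implicitly with the first DML statement
UPDATE emp SET sal = sal * 1.10 WHERE deptno = 10;

-- End it unsuccessfully: the update is discarded
ROLLBACK;

UPDATE emp SET sal = sal * 1.10 WHERE deptno = 10;

-- End it successfully: the change is made permanent
COMMIT;

-- DDL such as this implicitly commits any pending transaction
CREATE TABLE emp_backup AS SELECT * FROM emp;
```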
Checkpoints

Because one of the goals of Oracle is speed, the changes to datafiles are not written out to the datafiles when a commit occurs. Changes are only written to the online redo logs (which we will discuss a bit later in this chapter). Of course, the data is written out at some point. This occurs when a checkpoint is signaled.

A checkpoint is an event that causes the database to become synchronized to a single point in time (this may not be to the current point in time, however). The DBWn process is responsible for processing checkpoint operations. During a checkpoint, the DBWn process fires up and starts writing data out to the database
datafiles. In concert with the DBWn process, the CKPT process is responsible for updating certain structures in the database datafile headers, as well as updating the control files of the database so that they are also synchronized.

Checkpoints can be caused by a number of events, including the following:

• Taking a datafile or tablespace offline
• Issuing the ALTER TABLESPACE BEGIN BACKUP command
• Issuing the ALTER SYSTEM CHECKPOINT command
• Shutting down the database with a SHUTDOWN, SHUTDOWN NORMAL, or SHUTDOWN IMMEDIATE command

TIP: If you need to shut down with SHUTDOWN ABORT, it’s a good idea to first force the system to perform a checkpoint by issuing an ALTER SYSTEM CHECKPOINT command. This will cause your shutdown to take a little longer, but it will allow your database to come back up much faster.

Checkpoint Processing

Oracle8i has changed how it handles checkpoints from previous versions of the database. In Oracle8i, you can have multiple checkpoints called and ready to be processed by DBWn at the same time (though checkpoints themselves occur in serial fashion). Oracle creates a checkpoint queue. All dirty buffers in the database buffer caches are linked to this list. The buffers in this queue are ordered according to when the changes occurred by using the low redo value in that block (an SCN value that indicates when the buffer was first changed).

When a checkpoint is signaled, the range of redo values that the specific checkpoint request is responsible for is sent. Once the last redo value has been processed, that checkpoint will be complete. Even though that particular checkpoint may be completed, DBWn may continue to process subsequently requested checkpoints. Each checkpoint is handled in the order requested, so the oldest dirty blocks are always written to disk first.
Should a block change before it was written during the checkpoint, that dirty block will still be written by the original checkpoint.

There are five different kinds of checkpoints in Oracle:

Database  All dirty blocks in memory that contain changes prior to the checkpoint SCN will be written to disk.

Datafile  All dirty blocks in memory that contain changes that belong to a specific datafile will be written to that datafile.
Incremental  These checkpoints do not update datafile headers and are used to reduce recovery time. Incremental checkpoints are discussed in more detail in the next section.

Mini-checkpoint  These checkpoints are caused by certain types of DDL operations, such as dropping a table. This checkpoint impacts only the blocks of those objects.

Thread checkpoints  These checkpoints are caused when an instance checkpoints in an OPS environment.

Incremental Checkpoints

In Oracle8i, the incremental checkpoint process limits the number of dirty buffers in the database buffer cache, in an effort to reduce the time required to restart the database in the event of a system failure. When an amount of redo equivalent to 90 percent of the size of the smallest log file is generated, an incremental checkpoint will be signaled.

During normal database operations, the DBWn process constantly writes out small batches of changes to the database datafiles. In doing so, an incremental checkpoint counter is maintained in each datafile, as well as in the database control file. This counter represents the progress of the incremental checkpoints. Because dirty blocks are written to disk in a more frequent fashion (to reduce I/O contention), Oracle will need to apply less redo from the online redo logs to recover the database. (This particular requirement has some restrictions associated with the size of the redo log buffer.)

You can set several parameters to control incremental checkpoints (each of these can be set in the init.ora database parameter file):

FAST_START_IO_TARGET  This parameter is available only in the Oracle8i Enterprise Edition of the database. It defines the upper bound on recovery reads, and it is measured by the maximum number of database buffer cache blocks that you want to recover after a database failure. In other words, you define the maximum number of I/Os that you want Oracle to process during recovery.
Oracle will manage the incremental checkpoint process such that this target recovery figure will be maintained as best as possible. The smaller this number, the faster recovery will be (but there may be I/O and database performance impacts).

LOG_CHECKPOINT_INTERVAL  This parameter is measured in redo log blocks (which is generally different from the database block size). It defines the maximum number of dirty redo log buffer blocks that can exist, unwritten, in the database. If this number of blocks is exceeded, an incremental checkpoint is signaled.
LOG_CHECKPOINT_TIMEOUT  This parameter, which is measured in seconds, indicates that the tail of the incremental checkpoint should be where the last redo log record was x seconds ago. Thus, this parameter attempts to control the checkpoint process by causing incremental checkpoints to lag no more than a set number of seconds behind in the redo stream.

LOG_CHECKPOINTS_TO_ALERT  If you want to know how long it takes to complete a checkpoint (which can be a measure of overall database performance and a good trending tool), you can log the start and stop times of checkpoints to the alert log by using this parameter.

NOTE: The logging time of checkpoints in Oracle8i does not appear to be working correctly. If you set the init.ora parameter LOG_CHECKPOINTS_TO_ALERT to report checkpoint completions, the times being reported do not seem to match when the checkpoints actually appear to have been completed. This may be due to changes in incremental checkpointing in Oracle8i.

You can see how the various parameters impact instance recovery by looking at the V$INSTANCE_RECOVERY view. If you want to know how many checkpoints have occurred on your system since it was started, you can query the V$SYSSTAT view and look for the statistic called DBWR Checkpoints. This value represents the number of checkpoint requests that have been completed.

The System Change Number (SCN)

The SCN is simply a counter that represents the state of the database at a given point in time. As an analogy, consider the counter on your VCR. As you record a movie on your videotape, the counter represents individual places on the tape. Thus, if you knew where the counter was when Neo says, “I know Kung-Fu” in The Matrix, you can always go back to that position and watch him strut his stuff. The SCN is somewhat like the VCR counter. It is a continuously flowing timeline that identifies when things happened.
The SCN is a 6-byte number that increases monotonically; that is, it goes from a small number (starting at 0) and moves up as transactions occur. Each database transaction is committed at a given SCN. The SCN consists of two parts: the base value and the wrap value. The base value of the SCN is increased first. If the base value rolls over, then the wrap value is increased. A new SCN is assigned every time a transaction is committed.
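The checkpoint and SCN bookkeeping described above can be observed from SQL. The queries below are a sketch; the column and statistic names are those I believe Oracle8i exposes, so verify them against your release:

```sql
-- Estimate of outstanding recovery work, per the incremental checkpoint target
SELECT recovery_estimated_ios, actual_redo_blks, target_redo_blks
FROM   v$instance_recovery;

-- Number of completed checkpoint requests since instance startup
SELECT name, value
FROM   v$sysstat
WHERE  name = 'DBWR checkpoints';

-- The checkpoint SCN currently recorded in the control file
SELECT checkpoint_change# FROM v$database;
```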
The SCN is central to several critical database operations, including providing read-consistent images, database consistency checks, and database recovery services. The role the SCN plays in these operations is described in the following sections.

Read-Consistent Images

Each committed change is associated with an SCN. That SCN is stored in the header of each block in the database, as well as in each rollback segment entry. As you know, if you start a report at 2 P.M., you will only see the data in the database as it looked at 2 P.M. This consistency is facilitated through the SCN. When you start your report at 2 P.M., the database will be at a given SCN.

NOTE: The database does not care what time you started your report; it only cares what the SCN was when you started your report. This is why it doesn’t matter if you change the system clock while the database is running.

For example, suppose you start your report at SCN 1000. Oracle will process the blocks from the database buffer cache if they are present there, or it will extract the correct blocks from the database datafiles. When it does so, it will read the header of the block, which contains, among other things, the SCN of the last time the block was updated. Now, what happens if that block happens to have been updated at SCN 1010? Can Oracle then use that block? No. Since the report was started at SCN 1000, Oracle can use the data blocks only if the last update SCN is 1000 or less. So, you have a problem, don’t you? The data block that contains the row(s) you need has been updated. What do you do? The solution is to use the rollback segments to generate a read-consistent image of the block as it existed at SCN 1000.
Having determined that the SCN of the actual data block is too high, Oracle will scan through the rollback segments (using some internal pointers that make the process rather fast), looking for the first image of that block that has an SCN less than or equal to 1000. Oracle will then create a read-consistent image of the rows that are needed by taking the original block and, using the rollback segments, constructing an image consistent to the time needed by the report.

Database Consistency Checks

The SCN provides methods of ensuring that the database is in a consistent state when it is started. Oracle will cross-check the SCNs stored in each database datafile header against those stored in the control file. If Oracle finds that the SCNs match, then the
database is in a consistent state. If the SCNs do not match, then the database is not in a consistent state and some form of recovery will be required.

When a database is shut down normally, the shutdown process will cause a checkpoint to occur, and all datafiles will be updated. This will cause the SCNs in the datafile headers to be synchronized. The database will then update the SCNs in the control files, so that those SCNs will also be consistent. When the database crashes, or a SHUTDOWN ABORT is executed, these synchronization processes do not occur. Since the database datafiles will be out of synch with each other and the database, they will be considered inconsistent, and Oracle will apply recovery when the database is restarted.

Database Recovery Services

The SCN is used during database recovery to ensure that all database datafiles are recovered to a consistent point in time. If, during the recovery process, a database datafile is not consistent with the remaining datafiles of the database, Oracle will signal an error indicating that there is an inconsistent state in the database and that the database will not open. This means that all of your database datafiles must be recovered to the same point in time.

For example, if you want to recover one tablespace (and the objects in it) as they looked at 3 P.M., and the rest of the database as it looked at 4 P.M., you will need to consider some different options from a physical recovery of the data, such as performing a logical recovery or taking advantage of Oracle’s transportable tablespace feature. We will discuss each of these options in more detail in Chapters 11 and 12.

Rollback Segments and Undo

We have already discussed the role of rollback segments in providing a read-consistent image of changes made to the database. These changes are in the form of undo.
Undo is simply the representation of the data that was changed as it existed before the change took place. Undo is what is stored in rollback segments and allows a block change to be undone. Different changes generate different levels of undo. Some changes, such as INSERT statements, generate minimal undo (since the undo for an INSERT operation is basically a deletion of the row). Other changes, such as the results of UPDATE or DELETE statements, can generate a lot of undo.
NOTE: Don’t confuse undo with redo. They are two very different things. Redo is associated with redo logs and represents atomic-level changes to the database, as explained in the “Redo Logs” section of this chapter. If you are interviewing for an Oracle DBA job, don’t be surprised if you are asked to explain the difference between undo and redo!

More on Read Consistency

Read consistency is one of the fundamental principles of any truly relational database. Read consistency implies that any query you execute at time x will use the data in the database as it looked at time x.

For example, suppose that you are going to query row number 1000, and the value in column B was 100. What happens if, before your query actually read column B, someone changed it to a value of 110 and committed that change? Because you are using a read-consistent database, when your report gets to row number 1000, Oracle will know that the value 110 is not consistent to the time you started the report. Oracle will then root around to find the rollback segment entries that were generated when the column value was changed from 100 to 110. Oracle will put this puzzle together and construct an image—a snapshot—of the row as it looked at time x, rather than as it currently looks. Thus, your report will get a row as it looked when the report started and not as it looked after the change.

In providing read consistency, Oracle has accepted that there will be some, shall we say, “wrinkles” in the process from time to time. The wrinkle I refer to is the Oracle error ORA-01555, Snapshot Too Old. This error has served to confuse more than a few DBAs, so let’s discuss it in a bit more detail before continuing with the other architectural elements of Oracle.
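Where statement-level consistency is not enough — for example, a report that runs several related queries and needs them all to see the same point in time — you can ask for transaction-level consistency. A sketch, using a hypothetical EMP table:

```sql
-- All queries in this transaction see the database as of the SCN
-- at which the transaction began
SET TRANSACTION READ ONLY;

SELECT COUNT(*) FROM emp;
SELECT SUM(sal) FROM emp;   -- consistent with the COUNT above

COMMIT;                     -- ends the read-only transaction
```

Note that a long read-only transaction depends on the same rollback segment data as any other consistent read, so it is just as exposed to the ORA-01555 error discussed next.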
The ORA-01555 Snapshot Too Old Error

As you just learned, when Oracle constructs a read-consistent image, it creates a consistent result set, or a snapshot; hence, the reference to “Snapshot” in the ORA-01555 error message.

The undo records (and the associated changed rows) for a given transaction are locked for as long as that transaction is active. So, there should be no problem finding the needed undo as long as the transaction is still churning away. However, a problem arises when a transaction commits its work. When this occurs, all the undo in the rollback segments associated with that transaction is unlocked, leaving those undo segment locations available to be reused by other transactions. Guess what happens if the undo segment location is reused and your report needs what was in that segment
to generate the read-consistent image? You guessed it: Oracle returns the ORA-01555 error. So, of course, your report dies, and you want to go smacking some heads together.

There can be some other causes of the ORA-01555 besides missing rollback segment information. It is possible that the rollback segment information is present, but that the transaction slot information is missing from the rollback segment header. A transaction slot is kind of a directory in the rollback segment header that points to the location of the transaction within the rollback segment (it contains additional information on the transaction as well). The information in the transaction slot might be missing due to a feature of Oracle called delayed block cleanout, which is designed to assist with performance. When a transaction is complete, to save time, Oracle does not clean out the rollback segment entries. Rather, Oracle waits for the next user process that needs the entries to clean out those entries later on. Sometimes, the transaction slots in the rollback segment header may be overwritten, but the actual undo will not be. In some cases, Oracle can reconstruct the transaction slot, as long as the undo image in the rollback segment is still present. Otherwise, an ORA-01555 is signaled.

PL/SQL programmers may run into ORA-01555 problems if they tend to perform large fetch-across-commit operations. This is because they end up overwriting their own session’s before-image views or rollback segment transaction slots. This is more likely to happen if the program uses the SET TRANSACTION USE ROLLBACK SEGMENT command.

So, what can be done to eliminate the “Snapshot Too Old” problem? Here are a few pointers:

Make sure that your rollback segments are large enough.  Just giving your rollback segments enough tablespace space to grow into is not sufficient.
You need to allocate enough initial and growing space to make the extents of the segment big enough to handle the load you will be throwing at it. Remember that your rollback segments will not extend to avoid an ORA-01555 problem.

Watch EXTENDS and SHRINKS.  The effects of the OPTIMAL clause in a rollback segment can be devastating to transactions needing read consistency. Guess what happens if Oracle shrinks a rollback segment back to its OPTIMAL size and you need a block that used to be allocated to the rollback segment but is no more? That’s right, ORA-01555. This is one reason why Oracle tells you not to set OPTIMAL on the SYSTEM rollback segment during database migrations. Keep an eye on EXTENDS and SHRINKS. Rebuild rollback segments larger until EXTENDS are as near zero as possible.
Watch the size of the SGA.  The size of the SGA can affect whether or not you run into ORA-01555. This is because Oracle will use an image in the database buffer cache if it is present. If the image of the rollback segment entry is in the SGA, Oracle will not need to go to disk to find it. Thus, this reduces the chance that when Oracle does go to disk, it will find that the entry is no longer there.

Add rollback segments.  If you have many users and a lot of volume, adding rollback segments can help reduce contention for the transaction slots in the rollback segments.

Lock objects.  When running large reports, you might consider locking the objects in SHARE or EXCLUSIVE mode by using the LOCK TABLE command. This will prevent other transactions from writing to these objects (thus eliminating the need for consistent-read reconstruction by your session).

Try not to mix and match long-running and short-running transactions.  If this is unavoidable, make sure that the rollback segments are sufficiently sized. This is probably the biggest cause of the ORA-01555 problem. If the problem is really large, consider creating a separate environment for reporting or ad-hoc users. For example, a standby database in read-only mode might be a good alternative.

Be careful of environments where there are large numbers of readers and writers.  Readers are basically operations that cause reads to occur on the database. This would include things like reports and OLTP lookup operations. Writers are operations that cause a lot of database writing. This would include database load programs, OLTP operations, and certain analytic operations.

If you are writing PL/SQL, be aware of the impacts of fetching across commits.  As you commit data, you risk losing the consistent image of that data that you may require. If you are using fetches across commits, write your code in such a way that you need to revisit blocks as little as possible.
This can be done by forcing a full table scan or using an ORDER BY clause in the cursor.

Tune your queries to reduce total I/O as much as possible.  If your queries need to bring in fewer blocks, this will result in less I/O, fewer changes needing to occur in memory, fewer accesses to disk, and a reduction in ORA-01555 messages to boot. Also, consider breaking up long-running reports into smaller queries and combining the results.

Tune your applications.  If an application issues five different UPDATE statements on the same row instead of just one, for example, you can make a significant difference by modifying the code so the UPDATE occurs only once.
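Several of the pointers above translate directly into SQL. The segment name, tablespace, and storage sizes below are assumptions; size them for your own load:

```sql
-- Create a larger rollback segment with enough extents up front,
-- deliberately without an OPTIMAL setting, and bring it online
CREATE ROLLBACK SEGMENT rbs_large
  TABLESPACE rbs
  STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20);

ALTER ROLLBACK SEGMENT rbs_large ONLINE;

-- Keep an eye on EXTENDS and SHRINKS, per the advice above
SELECT n.name, s.extends, s.shrinks, s.wraps
FROM   v$rollstat s, v$rollname n
WHERE  s.usn = n.usn;
```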
The Database Datafiles

The database datafiles are the main part of the physical database. Each database datafile is associated with one tablespace, and multiple datafiles may be assigned to a given tablespace.

Datafile Headers

Database datafiles contain a header that is of a fixed length. This header includes various information about the datafile, including the following:

• Its current checkpoint SCN
• Its current incremental checkpoint SCN
• The datafile number
• The datafile name
• Its creation date, time, and SCN number

You can see the contents of a specific datafile header by setting the FILE_HDRS event, as follows:

ALTER SESSION SET EVENTS ‘immediate trace name file_hdrs level 10’;

See Chapter 15 for more information about setting events.

Datafile Fuzziness

We often talk about “fuzziness” in regard to database datafiles. Actually, fuzziness is a state that revolves around the setting of any of three different status flags in the datafile header.

One status flag is set when the database is opened. This is the online fuzzy bit. This bit will not be reset until the database is closed in a normal fashion (with a SHUTDOWN or SHUTDOWN IMMEDIATE command). In the case of a SHUTDOWN ABORT or any abnormal database termination, this bit will not be reset. When Oracle is restarted, this bit is checked. If it is set, the datafile is considered to be “fuzzy” and recovery will be required. This recovery may or may not require DBA intervention.

Other fuzzy flags are set when a hot backup is signaled for a given tablespace. It is this fuzzy flag that causes the database to signal you that a tablespace is in hot backup mode when you try to shut down the database. It is also this flag that prohibits you from opening the database if it was terminated abnormally during a hot backup.
If this occurs, you will need to mount the database and use the ALTER DATABASE DATAFILE END BACKUP command for each datafile that is in hot backup mode. You will then be able to open the database back up. (See Chapter 10 for details on hot backups.)
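A sketch of that recovery sequence, run from Server Manager or SQL*Plus. The datafile path is hypothetical; V$BACKUP shows which files are still marked as being in hot backup mode:

```sql
STARTUP MOUNT;

-- Which datafiles are still in hot backup mode?
SELECT d.name
FROM   v$backup b, v$datafile d
WHERE  b.file# = d.file#
AND    b.status = 'ACTIVE';

-- Clear the hot backup fuzzy flag for each such file
ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf' END BACKUP;

ALTER DATABASE OPEN;
```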
The last fuzzy flag is the media recovery flag, which is set when media recovery is executed. This flag will be reset once recovery of the datafile is complete.

Control Files

The control file is one of the central physical files of the Oracle database. Control files are used when starting up the database to ensure that it is in a consistent state. The control file is required to start any Oracle database.

A control file stores a vast amount of database information, including information on the datafiles that belong to the database, such as their size, status, and current SCN (which may not always be in synch with the actual SCN of the datafile). It also stores RMAN backup information, the name and location of the database redo logs, and other database control information.

Control files contain both a static section and a variable (or reuse) portion (also known as circular reuse records). This variable area includes archived redo log history, backup history, and other historical information. This variable portion can grow so that the control file itself must expand, although there is a limit to the growth of the circular reuse area. If the control file is extended, you will see a message in the database alert log that looks something like this:

kccrsz: expanded controlfile section 9 from 56 to 112 records
requested to grow by 56 record(s); added 1 block(s) of records

This message simply means that one operating system block was added to the control file (on my NT 4 system, an operating system block is 512 bytes).

You can limit the number of records stored in the variable portion of the control file by setting the CONTROL_FILE_RECORD_KEEP_TIME parameter in the database init.ora file. This parameter, which is an integer, represents the number of days of records that Oracle should maintain in the variable portion.
Oracle will then delete any records older than that number of days, should it require space in the variable portion of the control file and be unable to find it. Setting this parameter to 0 will effectively disable any growth of this variable portion of the control file.
You can dump the contents of the control file by setting an event, as in the following example:

   ALTER SESSION SET EVENTS 'immediate trace name controlf level 10';

Redo Logs

Redo logs are a critical set of files in the Oracle database. They provide for recovery of the database in the event of any type of failure.
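The online redo log groups and their member files can be listed from the V$LOG and V$LOGFILE views. This sketch uses the pre-9i (non-ANSI) join syntax used elsewhere in this book:

```sql
-- List each online redo log group, its thread, current sequence number
-- and status, and the member file(s) that belong to it.
SELECT l.group#, l.thread#, l.sequence#, l.status, f.member
  FROM v$log l, v$logfile f
 WHERE f.group# = l.group#
 ORDER BY l.group#;
```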
Oracle redo logs record each and every change to the database. Redo logs (and stored redo logs called archived redo logs) are one of the most critical database structures. Here is an idea of how critical they can be: If your database is in ARCHIVELOG mode, you can essentially restore any datafile (with a few exceptions, like the SYSTEM tablespace datafiles), even if you don't have a backup of that datafile. All you need to have is the archived redo logs generated since the datafile (or tablespace) was created. (See Chapter 10 for details on putting the database in ARCHIVELOG mode.)
Redo logs are identified uniquely by both a thread number (which is normally 1, unless you are running OPS) and a log sequence number. Each time the current online redo log changes, the sequence number of the next log will increase by one.
Each redo log has a header associated with it. The redo log header contains the low and high SCNs that are in that redo log. This is used by Oracle during media recovery to determine which redo logs it needs to apply. You can use the REDOHDR event to produce a trace file that contains this header information, as in the following example:

   ALTER SESSION SET EVENTS 'immediate trace name redohdr level 10';

What Is Redo?

To understand redo logs, you need to have a clear concept of what redo is. Every change to the database is recorded as redo. This includes changes to tables, indexes, and rollback segments. Redo is used to replay transactions during database recovery. Redo represents a single atomic change to the database. A single redo entry has three components: the redo byte address (RBA), change vectors, and redo records.

The RBA

We have already discussed the SCN, which is one of the atomic parts of all redo that is generated. Along with the SCN, there is another identifier called the RBA. The RBA is a 10-byte number that identifies, and is stored with, each redo record.
The RBA stores three values: the log sequence number, the block number within the redo log, and the byte number within the block. Like the SCN, the RBA is a kind of counter, much like the counter on a VCR. Oracle uses it to locate specific records within a redo log quickly, thus speeding up operations such as instance recovery. This is because the last RBA associated with each datafile is stored in that datafile's header. The RBA is also stored in the control file of the database. During recovery, Oracle can use the stored RBA to quickly locate the beginning of the redo that it needs to apply for that datafile.

Change Vectors

A change vector describes a single change to a single block in an Oracle database. The change vector begins with a header record that contains (among other things) the
SCN and the RBA of the change. There are different types of change vectors, such as insert, update, delete, and commit change vectors. The type is identified internally by a code associated with each change vector, called an op code.
For example, the commit vector guarantees that a change to the database will be recoverable. When you issue a COMMIT command from your user session (or you perform some action that causes a commit to occur automatically), a commit change vector will be written to the redo log. The COMMIT command issued by the session will not return as successful until the change vector has been written to the online redo log on disk.

NOTE  When you issue a COMMIT command, Oracle does not write the changes to the database datafiles. Oracle relies completely on the commit vector and the other change vectors in the redo logs to provide database recoverability should the database fail. This should underscore the importance of protecting the online redo logs from failure. See Chapter 10 for details on backup and recovery.

Redo Records

Redo records consist of a series of change vectors that describe a single atomic change to the database. A given transaction can consist of multiple redo records. Redo records are written in ordered fashion based on the SCN and the RBA of the redo record.

Redo Log Switches

As Oracle writes to the online redo log, the log eventually becomes full. When this occurs, an event known as a redo log switch takes place. During log switch processing, Oracle will take a latch against the redo log buffer, which will prevent any processes from allocating space in the redo log buffer or writing to it. Next, Oracle will flush the contents of the redo log buffer to the online redo log file. Oracle will then switch to the next available online redo log.
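A log switch can also be triggered on demand, which is handy when testing archiving or simply watching the sequence number advance. A minimal sketch, assuming a session with ALTER SYSTEM privilege:

```sql
-- Note the current log sequence numbers before the switch.
SELECT group#, sequence#, status FROM v$log;

-- Force a log switch manually.
ALTER SYSTEM SWITCH LOGFILE;

-- The former CURRENT group becomes ACTIVE (then INACTIVE once
-- checkpointed), and the next group is now CURRENT with a higher sequence#.
SELECT group#, sequence#, status FROM v$log;
```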
Once this is done, the latches against the redo log buffer will be released, and Oracle can again send redo to the redo log buffer.
As you might guess, the processing of a log switch can be an expensive operation, because it holds up all redo generation on the system, and it also causes additional I/O because of the open and close actions occurring against the online redo logs. Additionally, if the database is in ARCHIVELOG mode, a log switch will cause additional I/O to begin with the copying of the old online redo log to the archive log directory. Because of these costs (and also to reduce the costs of checkpointing in some cases), it is recommended that you size your redo log files so that
your database executes log switches somewhere around every 15 minutes. You might end up with particularly large redo log files (online redo log files of several gigabytes are not unusual), but this is okay. As long as the online redo log files are properly protected and you have properly configured the incremental checkpointing process, larger online redo logs are not a problem.

Oracle Scalar Datatypes

Datatypes are associated with each column defined in a table, and are also used in SQL*Plus and PL/SQL programming. Datatypes are assigned to a column name or variable name and define a storage format, constraint, and valid range of values for that column or variable.
Oracle comes with a number of native, or scalar, datatypes, which we introduced in Chapter 1. A scalar type has no internal components. The following sections provide details on NUMBER, CHAR, and other Oracle8i scalar datatypes.

NUMBER

NUMBER types allow you to store numeric data (integers, real numbers, and floating-point numbers), represent quantities, and do calculations. You use the NUMBER datatype to store fixed-point or floating-point numbers of virtually any size. Its magnitude range is 1E–130 to 10E125. If the value of an expression falls outside this range, you get a numeric overflow or underflow error (and if it overflows with a positive number and the value represents your bank balance, give me a call—I'm sure you need some good full-time Oracle consulting!).
To define a column as a NUMBER type, you might issue the following CREATE TABLE statement:

   CREATE TABLE test (id NUMBER);

This creates a floating-point type of number with no precision or scale.

NUMBER Precision and Scale

With a NUMBER datatype, you can optionally define the precision and scale of the number. The syntax looks something like this:

   NUMBER[(precision,scale)]

The precision is the total number of digits.
If you do not specify precision, it defaults to 38 or the maximum supported by your system, whichever is less.
The scale is the number of digits to the right of the decimal point. The scale can range from –84 to 127. The scale also determines where rounding occurs. For instance, a scale of 2 rounds to the nearest hundredth (3.456 becomes 3.46). A negative scale rounds to the left of the decimal point. For example, a scale of –3 rounds to the nearest thousand (3456 becomes 3000). A scale of 0 rounds to the nearest whole number. If you do not specify scale, it defaults to 0.
An example of declaring a number with precision and scale in a table would look something like this:

   CREATE TABLE test (id NUMBER(5,2));

This example creates a column that can hold a number of up to five digits. The last two digits are after the decimal point. Thus, the largest value you could put in this column would be 999.99. If you were to try to force a value larger than this into the column, Oracle would respond with the following error:

   SQL> INSERT INTO test VALUES (9999.99);
   INSERT INTO test VALUES (9999.99)
   *
   ERROR at line 1:
   ORA-01438: value larger than specified precision allows for this column

You would get the same error if you tried to insert –9999.99. However, you could insert –999.99 without any problem, because the minus sign is not included in the precision of the number.
To declare an integer (which is a number with no fractional component), use this form:

   NUMBER(precision)

This is essentially the same as specifying 0 as the scale.

NUMBER Subtypes

You can use the following NUMBER subtypes for compatibility with ANSI/ISO and IBM types or when you want a more descriptive name: DEC, DECIMAL, DOUBLE PRECISION, FLOAT, INTEGER, INT, NUMERIC, REAL, and SMALLINT. These subtypes are used as follows:
• Use the subtypes DEC, DECIMAL, and NUMERIC to declare fixed-point numbers with a maximum precision of 38 decimal digits.
• Use the subtypes DOUBLE PRECISION and FLOAT to declare floating-point numbers with a maximum precision of 126 binary digits, which is roughly equivalent to 38 decimal digits.
• Use the subtype REAL to declare floating-point numbers with a maximum precision of 63 binary digits, which is roughly equivalent to 18 decimal digits.
• Use the subtypes INTEGER, INT, and SMALLINT to declare integers with a maximum precision of 38 decimal digits.

Character Types

Character types allow you to store alphanumeric data, represent words and text, and manipulate character strings. There are two primary character datatypes in Oracle: CHAR and VARCHAR2. Oracle8i also provides National Language Support (NLS) character types.
All of the character datatypes take an optional parameter that lets you specify a maximum length:

   CHAR[(maximum_length)]
   VARCHAR2(maximum_length)
   NCHAR[(maximum_length)]
   NVARCHAR2(maximum_length)

You cannot use a constant or variable to specify the maximum length; you must use an integer literal.
You cannot insert values that are larger than the maximum size of the column datatype into a column, nor retrieve values that are larger than the maximum size of the character datatype into a character datatype variable. For example, a CHAR column's maximum size is 2000 bytes, so you cannot insert a CHAR value larger than that into a CHAR column. You can insert any CHAR(n) value into a LONG database column, because the maximum width of a LONG column is 2,147,483,647 bytes (or 2GB). However, you cannot retrieve a value longer than 2000 bytes from a LONG column into a CHAR(n) variable.

CHAR

The CHAR datatype stores fixed-length character data, specified in bytes. The maximum width of a CHAR database column is 2000 bytes. If you do not specify a maximum length, it defaults to 1.

NOTE  You specify the maximum length of the CHAR and VARCHAR2 datatypes in bytes, not characters. So, if a CHAR(n) or VARCHAR2(n) variable stores multibyte characters, its maximum length is less than n characters.
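The fixed-length blank-padding of CHAR versus the variable-length storage of VARCHAR2 can be seen with a small sketch (the table and column names here are invented for illustration):

```sql
-- CHAR pads the stored value with blanks to its declared width;
-- VARCHAR2 stores only the characters actually inserted.
CREATE TABLE char_demo (
  c CHAR(10),
  v VARCHAR2(10)
);

INSERT INTO char_demo VALUES ('abc', 'abc');

-- LENGTH(c) returns 10 (blank-padded); LENGTH(v) returns 3.
SELECT LENGTH(c), LENGTH(v) FROM char_demo;
```

This padding also affects comparisons and storage, which is one reason VARCHAR2 is usually preferred for general-purpose text.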
The CHAR subtype CHARACTER has the same range of values as its base type. In other words, CHARACTER is just another name for CHAR. You can use this subtype for compatibility with ANSI/ISO and IBM types or when you want to use an identifier that is more descriptive than CHAR.

VARCHAR2

You use the VARCHAR2 datatype to store variable-length character data, specified in bytes. The maximum width of a VARCHAR2 database column is 4000 bytes. The VARCHAR subtype has the same range of values as VARCHAR2. You can use this subtype for compatibility with ANSI/ISO and IBM types.

NOTE  Currently, VARCHAR is synonymous with VARCHAR2. However, in future releases of PL/SQL, to accommodate emerging SQL standards, VARCHAR might become a separate datatype with different comparison semantics. So, it is a good idea to use VARCHAR2 rather than VARCHAR.

NLS Character Types

Although the widely used 7- or 8-bit ASCII and EBCDIC character sets are adequate to represent the Roman alphabet, some Asian languages, such as Japanese, contain thousands of characters. These languages require 16 bits (2 bytes) to represent each character. Oracle provides NLS types, which let you process single-byte and multibyte character data and convert between character sets. They also let your applications run in different language environments.
Oracle supports two character sets: the database character set, which is used for identifiers and source code, and the national character set, which is used for NLS data. The datatypes NCHAR and NVARCHAR2 store character strings from the national character set. How the data is represented internally depends on the national character set, which might use a fixed-width encoding, such as WE8EBCDIC37C, or a variable-width encoding, such as JA16DBCS. For fixed-width character sets, you specify the maximum length in characters. For variable-width character sets, you specify it in bytes.
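Declaring national-character-set columns looks just like CHAR and VARCHAR2, using the N-prefixed types. The table below is a hypothetical example:

```sql
-- Columns stored in the national character set rather than
-- the database character set.
CREATE TABLE nls_demo (
  name_nc NCHAR(20),      -- fixed-length NLS data, blank-padded
  name_nv NVARCHAR2(100)  -- variable-length NLS data
);
```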
NCHAR

The NCHAR datatype stores fixed-length (blank-padded if necessary) NLS character data. The maximum width of an NCHAR database column is 2000 bytes. If the NCHAR value is shorter than the defined width of the NCHAR column, Oracle blank-pads the value to the defined width. You cannot insert CHAR values into an NCHAR column. Likewise, you cannot insert NCHAR values into a CHAR column.
NVARCHAR2

The NVARCHAR2 datatype stores variable-length NLS character data. The maximum width of an NVARCHAR2 database column is 4000 bytes. You cannot insert VARCHAR2 values into an NVARCHAR2 column. Likewise, you cannot insert NVARCHAR2 values into a VARCHAR2 column.

LONG, LONG RAW, and RAW

The LONG datatype stores variable-length character strings. It is like the VARCHAR2 datatype, except that the maximum length of a LONG value is 2GB. LONG columns can store text, arrays of characters, or even short documents.
As with the other datatypes, you cannot insert values that are larger than the maximum size of the column datatype into a column, nor retrieve values that are larger than the maximum size of the character datatype into a character datatype variable.
You can reference LONG columns in UPDATE, INSERT, and (most) SELECT statements. You cannot reference LONG columns in expressions, SQL function calls, or certain SQL clauses, such as WHERE, GROUP BY, and CONNECT BY.
You use the LONG RAW datatype to store binary data or byte strings. LONG RAW data is like LONG data, except that LONG RAW data is not interpreted by Oracle. The maximum width of a LONG RAW column is 2GB.

TIP  LONG and LONG RAW datatypes are not supported by many of the newer Oracle features and will be replaced by LOB datatypes. It is suggested that you use LOBs instead of LONG or LONG RAW datatypes whenever possible.

The RAW datatype stores binary data or byte strings. For example, a RAW variable might store a sequence of graphics characters or a digitized picture. RAW data is like VARCHAR2 data, except that Oracle does not interpret RAW data. Likewise, Net8 does not do character set conversion when you transmit RAW data from one system to another. The RAW datatype takes a required parameter that lets you specify a maximum length of up to 2000 bytes.
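Because RAW data is uninterpreted bytes, it is usually moved in and out of SQL as hexadecimal, using the built-in HEXTORAW and RAWTOHEX conversion functions. A brief sketch (the table name is invented):

```sql
-- A RAW column holds raw bytes; Oracle performs no character-set
-- conversion on them.
CREATE TABLE raw_demo (data RAW(16));

-- HEXTORAW converts a hexadecimal literal into raw bytes.
INSERT INTO raw_demo VALUES (HEXTORAW('DEADBEEF'));

-- RAWTOHEX converts the bytes back to a hex string for display.
SELECT RAWTOHEX(data) FROM raw_demo;
```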
LOBs

LOBs are an able replacement for the older and less flexible LONG datatype. Oracle has announced that it will eventually remove support for LONG datatypes (although it