Oracle Unleashed- P2

When I first started using Oracle many years ago, it was possible to know the database and the tools available. With the rash of recent releases of different options for the database and the spate of new tools, only people who wear their underpants over their trousers will be able to know everything there is to know about the Oracle products.


Because integrity constraints are checked at the database level, they are enforced regardless of where the insert, update, or delete statement originated, whether from an Oracle or a non-Oracle tool. Defining checks with these constraints is also quicker than performing the same checks in SQL. In addition, the information provided by declared constraints is used by the Oracle optimizer to make better decisions about how to run a statement against the table. The Oracle Forms product can also use constraints to automatically generate code in the front-end programs that gives the user an early warning of any errors. The types of integrity constraints that you can set up on a table are NOT NULL, PRIMARY KEY, UNIQUE, FOREIGN KEY, CHECK, and indexes.

NOT NULL Constraints

You set the NOT NULL constraint against a column to specify that the column must always have a value on every row; it can never be null. By default, all the columns in a table are nullable. For example, using a NOT NULL constraint on an orders table, you can specify that there must always be an order amount.

PRIMARY KEY

The PRIMARY KEY constraint defines a column or set of columns that uniquely identifies a single row. No two rows in the table can have the same values for the primary key columns. In addition, the primary key columns must always have a value; in other words, they are NOT NULL. If you add a PRIMARY KEY constraint to a table after it has been created, any columns that make up the constraint are modified to NOT NULL. Only one PRIMARY KEY constraint can exist for any table. For example, using a PRIMARY KEY constraint on an orders table, you can specify that the table cannot have two records with the same order number.

UNIQUE

The UNIQUE constraint defines a secondary key for the table: a column or set of columns that you can use as another way of uniquely identifying a row.
No two rows can have the same values for the UNIQUE key column or columns. Although a table cannot have more than one primary key, it can have more than one UNIQUE constraint. The columns of a UNIQUE constraint do not have to be declared NOT NULL (although they usually are); if any of the columns that form the constraint are null, the constraint is not checked. For example, using a PRIMARY KEY and a UNIQUE constraint on a customers table, you can specify that the customer number is the primary key and that the customer name is a unique key (which would mean that you could not have two customers with the same name on your table, a rare situation).

FOREIGN KEY

The FOREIGN KEY, or referential integrity, constraint enforces relationship integrity between tables. It dictates that a column or set of columns on the table match a PRIMARY KEY or UNIQUE constraint on a different table. For example, you could set up a FOREIGN KEY constraint on the orders table to specify that whenever an order record is inserted or updated, the customer number must also exist in the customers table. This ensures that you don't get orders for nonexistent customers. You use FOREIGN KEY constraints to enforce parent/child relationships between tables. You can even use them to
enforce self-referential constraints, usually in situations where a hierarchical structure is set up with all the rows held in the same table. If any of the columns of the foreign key are null, the constraint is not checked at all; foreign key columns are therefore usually declared NOT NULL. You can specify that when the parent row is deleted, the delete should automatically cascade and delete the child rows (a dangerous option). The user is informed only about the master rows that were removed; he might not be aware of the additional rows deleted automatically in the background, because he is not told that the cascading deletion has happened. This automatic deletion of child rows is enabled only by adding the ON DELETE CASCADE clause to the end of the foreign key creation statement. If you change the master table's key value, however, the child rows are not updated automatically to reflect the new key; you can implement this update-cascade requirement using database triggers.

CHECK

A CHECK constraint specifies additional logic that must be true for an insert, update, or delete statement to succeed against the table. The logic returns a Boolean result; with a check constraint, you ensure that the values in the row being modified satisfy a set of validation checks that you specify. The syntax of a CHECK constraint is very similar to the syntax found in the WHERE clause of a SELECT statement; however, you cannot use subqueries or values that vary over time (such as SYSDATE). You can use database triggers to perform additional processing that you cannot put into constraints. For example, using a CHECK constraint on the orders table, you can specify that the order amount must be greater than zero and that the salesman's commission cannot be greater than 10 percent of the order total.
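The constraint types discussed above can all be declared in one table definition. The following is a sketch; the table and column names are illustrative, not taken from the book:

```sql
-- Parent table: customer_name is a secondary (UNIQUE) key
CREATE TABLE customers (
    customer_no    NUMBER(6)     PRIMARY KEY,
    customer_name  VARCHAR2(40)  NOT NULL UNIQUE
);

-- Child table: every order must belong to an existing customer
CREATE TABLE orders (
    order_no      NUMBER(8)     PRIMARY KEY,
    customer_no   NUMBER(6)     NOT NULL
                  REFERENCES customers (customer_no)
                  ON DELETE CASCADE,  -- deleting a customer deletes his orders
    order_amount  NUMBER(10,2)  NOT NULL,
    commission    NUMBER(10,2),
    -- CHECK logic: WHERE-clause-like, no subqueries or SYSDATE
    CONSTRAINT ord_amount_ck CHECK (order_amount > 0),
    CONSTRAINT ord_comm_ck   CHECK (commission <= order_amount * 0.10)
);
```

Note that ON DELETE CASCADE is declared on the child table, so the cascading delete travels from parent to child, as described above.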
Indexes

PRIMARY KEY and UNIQUE constraints automatically create an index on the columns they are defined against, provided the constraint is enabled. If an index already exists on those columns, that index is used and Oracle does not create a new one. Oracle creates the index when the constraint is enabled (the default when the constraint is first added to the table) and drops it when the constraint is disabled. Enabling and disabling these constraints can therefore take significant time and system overhead because of the index creation and removal. When you set up a FOREIGN KEY constraint, the columns are not indexed automatically; because foreign key columns are usually involved in joining tables together, you should create indexes on them manually.

Database Triggers

A database trigger is a PL/SQL block that you can define to execute automatically for insert, update, and delete statements against a table. You can define the trigger to execute once for the entire statement or once for every row that is inserted, updated, or deleted. For any one table, there are twelve events for which you can define database triggers, and for each of the twelve events you can define many database triggers.
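For example, a row-level trigger on the orders table could record every change in a separate table. This is a sketch: the audit_orders table and its columns are hypothetical.

```sql
-- Hypothetical audit table populated by the trigger
CREATE TABLE audit_orders (
    order_no    NUMBER(8),
    old_amount  NUMBER(10,2),
    new_amount  NUMBER(10,2),
    changed_on  DATE
);

-- Row-level trigger: one of the twelve events (AFTER UPDATE, per row);
-- FOR EACH ROW makes it fire once for every row updated
CREATE OR REPLACE TRIGGER orders_audit_trg
AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
    INSERT INTO audit_orders
    VALUES (:OLD.order_no, :OLD.order_amount, :NEW.order_amount, SYSDATE);
END;
/
```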
A database trigger can call database procedures that are also written in PL/SQL. Unlike database triggers, procedures on the database are stored in compiled form. For this reason, you should put longer code segments into a procedure and then call the procedure from the database trigger. In addition to implementing complex business rules, checking, and defaulting, you can use database triggers to insert, update, and delete other tables. An example of this use is an auditing facility in which an audit trail is automatically created in an audit table whenever a row on another table is changed. Without database triggers, this function would be implemented in the front-end programs that make the change to the database; however, someone bypassing the code in the front-end programs (using SQL*Plus, for example) would not go through the checks and processing so defined. Database triggers differ from constraints in that they enable you to embed SQL statements within them, whereas constraints do not. If possible, use constraints for checking; they are quicker than database triggers.

System-Level Privileges

Each Oracle user defined on the database can have one or more of over 80 system-level privileges. These privileges control, at a very fine level, the right to execute SQL commands. The database administrator assigns system privileges either directly to Oracle user accounts or to roles, and the roles are then assigned to Oracle user accounts. For example, before I can create a trigger on a table (even one I own as an Oracle user), I must have the CREATE TRIGGER system privilege, either assigned to my user account or assigned to a role that is given to the account. The CREATE SESSION privilege is another frequently used system-level privilege: to make a connection to the database, an Oracle account must have CREATE SESSION assigned to it.
This gives the account the privilege to make connections to the database.

Object-Level Privileges

Object-level privileges provide the capability to perform a particular type of action (select, insert, update, delete, and so on) on a specific object. The owner of the object has full control over it and can perform any action on it; he does not need object-level privileges assigned to him. In fact, the owner of the object is the Oracle user who grants object-level privileges to others. For example, if the user who owns a table wants another user to be able to select and insert rows from his table (but not update or delete), he grants the select and insert object-level privileges on that table to the other user. You can assign object-level privileges either directly to users or to roles that are then assigned to one or more Oracle user accounts.

Users and Roles

A role is a type of object that you can use to simplify the administration of system and object-level privileges. Instead of assigning privileges directly to user accounts, you can assign the privileges to roles that are then assigned to users.
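The privilege assignments described above might look like the following sketch; the role and user names are hypothetical:

```sql
-- A role bundles the privileges for one type of user
CREATE ROLE order_clerk;

-- System-level privilege: the right to connect at all
GRANT CREATE SESSION TO order_clerk;

-- Object-level privileges: the table owner lets clerks read and
-- insert orders, but not update or delete them
GRANT SELECT, INSERT ON orders TO order_clerk;

-- One command sets up a user with the whole bundle
GRANT order_clerk TO scott;
```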
Roles are essentially groupings of system and object-level privileges. They make the administration of privileges much easier because you can configure the privileges for a particular type of user once and assign those privileges to a role. When a user needs that set of privileges, a single role assignment command sets the user up; without roles, you would need to issue many commands, one for each of the different privileges required. In addition, you can set up roles with the correct privileges even before you have any Oracle user accounts that require those assignments. You can assign a role to another role, building hierarchies of roles, and you can protect a role with a password that the user must supply when he wants to enable the role. As already discussed, a physical database can contain many Oracle user accounts protected by passwords, and you must supply the username and password regardless of which tool you use to gain access to the database. Roles are not the same as Oracle users; you cannot connect to the database by supplying a role name and password.

Auditing

Oracle's auditing mechanism provides three types of audit trails. Privilege auditing tracks which system privileges are used. Statement auditing keeps track of which SQL statements are used, without regard to specific objects. Object-level auditing audits access to specific objects. You can set these audit trails to record when the statements succeed, when they fail, or both, so that all accesses are audited. You can use auditing to keep track of anyone attempting to break into the system. In addition, you can control how each type of auditing records its entries: the audit trail can record one entry per operation, regardless of how many attempts are made on the operation during the connection session, or one entry for every attempt (successful or not) on the operation during the session.
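As a sketch, the auditing options above are enabled with the AUDIT command (the exact clauses available vary by Oracle version):

```sql
-- Statement auditing: record connection attempts
AUDIT SESSION;

-- Object-level auditing: record only failed changes to the orders table
AUDIT INSERT, UPDATE, DELETE ON orders WHENEVER NOT SUCCESSFUL;
```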
If it is set up and enabled, the audit trail keeps the audit information in a data dictionary table owned by the user SYS. This table records the operation being audited, the user performing the operation, and the date and time of the operation. Oracle provides a set of data dictionary views to make the information in the dictionary audit table more meaningful. Although the audit trail is implemented in a data dictionary table, the inserted audit rows are kept even if the user rolls back his transaction. The database administrator can clear out or archive the audit trail periodically.

Backup and Recovery

In this part, I discuss some of the options that the architecture gives you when it comes to backing up and recovering your database. See Chapter 14, "Backup and Recovery," for more detail.

Backup and Recovery Options

This section outlines, at a high level, some of the options available for backing up and restoring your database: the types of failure that can occur and the actions to take. The major part of this section describes preventive action to guard against loss of your database files. The backup and recovery options mentioned here, along with other available options, are discussed in greater detail in Chapter 14.

Different Types of Failure

The major types of failure that can occur are statement failure, user process failure, machine failure, distributed transaction failure, instance failure, and disk failure/file loss.
Statement Failure

In Oracle, a DML statement such as UPDATE operates on either all the rows satisfying its WHERE clause or none at all. A statement can fail for a myriad of reasons. For example, when you insert rows into a table, the table might require more storage; if the database software discovers that no more free storage is available, it returns an error message to the user. Oracle does not leave only half the rows updated: even if the failure occurs halfway through the statement, the rows already modified are "unmodified." This is known as statement-level rollback. Note that the other DML statements in the transaction remain in a pending state, ready for a commit or rollback.

User Process Failure

A user process failure occurs when the user process making the connection to the database terminates abnormally during execution; for example, the system administrator could have killed the user process. If this occurs, the Oracle background process PMON automatically rolls back any changes for the current transaction. All changes already committed by the user process are saved, but inserts, updates, and deletes since the last commit or rollback are reversed. PMON also releases any locks, rollback segments, and other system resources acquired by the user process when it was alive. No database administrator involvement is necessary; the database continues to function as usual, and the tables remain accessible to other users. (A slight delay could occur before the locks are released.)

Machine Failure

When the machine on which the database server is running fails and shuts down (the power is turned off, for example), the Oracle instance stops running. As long as no database files are lost, the only action required of the database administrator is restarting the Oracle instance. When you do this, the SMON background process reads the online redo log files and reapplies any changes for committed transactions.
Any changes that had not been committed are rolled back. Remember that a COMMIT statement writes only the changes to the redo log files; it does not write the modified database blocks back to disk at the point the commit is issued. If the database blocks with committed changes were written to the database files before the machine failure, the SMON background process obviously does not need to reapply the changes for those blocks.

Instance Failure

Instance failure occurs when the machine is still up and running but the Oracle instance itself fails (perhaps one of the background processes was killed). This situation is very similar to machine failure: the database administrator needs only to restart the instance, and the SMON process reapplies any changes. When restarting the instance after this kind of failure, you will notice no difference from starting the instance after a normal shutdown.

Distributed Transaction Failure

A distributed transaction is one that involves changes to more than one database. If a failure occurs during a distributed transaction, the RECO background process (if it is running) automatically synchronizes the rollback of the transaction's changes across all the databases involved. Again, no manual intervention is required in all but the most serious cases. The database administrators of the instances involved in the distributed transaction can manually force the commit or rollback of the changes on their own databases, but I recommend that you leave the recovery to the RECO background process if possible. This might not be possible if the links between the databases are not available for the RECO processes on all the instances to communicate.
Disk Failure/File Loss

The only time you really need to concern yourself with recovery is when you lose one or more of the files making up the database: the database files themselves, the control file, or the redo logs. Some type of manual recovery is then necessary. If you lose one or more of these files, you have available the options highlighted in the following sections. In every situation, you must work from a previous backup of the database.

Cold Backup

A cold backup is a copy of the three sets of files (database files, redo logs, and control file) taken while the instance is shut down. This is a straight file copy, usually from disk directly to tape. You must shut down the instance to guarantee a consistent copy. (It is possible to back up the files without bringing the instance down; see the section titled "Hot Backup.") If you perform only cold backups, the only option available in the event of data file loss is restoring all the files from the latest backup; all work performed on the database since the last backup is lost.

Archiving

If you have set up the database to run in ARCHIVELOG mode (easily done by the DBA), the database changes recorded in the redo logs are archived to an archive destination whenever the redo logs fill. Using this option, you have a complete record of the changes made to the database files in the offline and online redo log files. If you lose one or more of the database files, you can restore them from the last backup and reapply the changes since that backup from the online and offline redo log files. You must have some kind of backup of the files and the complete set of online and offline redo logs from which to reapply all the changes made to the database. With the archiving option, you lose no changes in the database if the complete set of redo logs is available; all the changes committed before the file was lost are reapplied.
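A sketch of how the DBA enables archiving, and of the tablespace-level hot backup command covered in the next section (the tablespace name is hypothetical):

```sql
-- Run once, with the database mounted but not open
ALTER DATABASE ARCHIVELOG;

-- Hot backup of one tablespace while users keep working
-- (requires ARCHIVELOG mode)
ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the tablespace's data files with operating system tools ...
ALTER TABLESPACE users END BACKUP;
```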
It is also possible to perform recovery if the control or redo log files are lost.

Hot Backups

Some sites (such as worldwide airline reservation systems) cannot shut down the database while making a backup copy of the files, so the cold backup is not an available option. Instead, you can use a different means of backing up your database: the hot backup. You issue a SQL command to indicate to Oracle, on a tablespace-by-tablespace basis, that you want to back up the files of the tablespace. The users can continue to make full use of the files, including making changes to the data. Once you have indicated that you want to back up the tablespace files, you use your operating system to copy those files to your backup destination. The database must be running in ARCHIVELOG mode for the hot backup option. If a data loss failure does occur, you can restore the lost database files using the hot backup and the online and offline redo logs created since the backup was taken. The database is restored to the most recent consistent state without any loss of committed transactions.

Export and Import

Along with the RDBMS software, Oracle provides two utilities that you can use to back up and restore the database. These utilities are useful to database administrators for system-wide backups and recoveries, and also to application
developers for temporary backups of their own data and object recovery into their own user accounts. The Export utility dumps the definitions and data for the specified part of the database to an operating system binary file. The Import utility reads the file produced by an export, recreates the definitions of the objects, and inserts the data. For a full database import, you must already have an existing template database created. If you use Export and Import as a means of backing up and recovering the database, you cannot recover the changes made to the database since the export was performed; as with a cold backup, the best you can do is recover the database to the time when the export was last performed. On large, data-filled databases, the Export and Import utilities can take a long time to run; many hours is not unusual. However, the utilities do provide options to selectively export and import different user accounts and even individual objects within an Oracle user account.

Multiplexing

In this part, I discuss the options available for Oracle to duplicate data to provide an extra level of protection in the event of data loss.

Control Files

To protect against control file loss, the Oracle RDBMS software can maintain more than one copy of the control file. You do this by making a consistent copy of the existing control file and modifying an INIT.ORA parameter; it does not significantly impact the performance of the database. If all copies of the control file are lost, you can manually recreate them using the CREATE CONTROLFILE command.

Redo Logs

Redo logs record all the changes made to data blocks on the database. If the database is running in ARCHIVELOG mode and only the offline redo logs are lost, you should shut down the database and make another backup of the three sets of files for the database.
If the online redo logs are lost, however, you could lose some work, because some of the information required to reapply changes to the database files exists only in the online redo log files. To guard against this, you can multiplex (mirror) the online redo log files in the same way as the control files: when the RDBMS software writes changes to one redo log, exactly the same information is written to an exact copy of it.

Distributed Databases

A distributed database is one logical database implemented as two or more physical databases, on either the same machine or separate machines thousands of miles apart. The system's designers decide where the tables should physically reside. Each physical database has its own instance and sets of files, and the machines on which the databases reside are connected over a network. The location of tables can be made transparent to the application using database links and synonyms. Oracle enables a transaction, and even a single statement, to access tables on two or more distributed databases, without any extra coding by the application developers.
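The location transparency described above is typically set up once with a database link and a synonym. A sketch, with hypothetical names and connection details:

```sql
-- A link to the remote database where the customers table lives
CREATE DATABASE LINK sales_db
    CONNECT TO scott IDENTIFIED BY tiger USING 'sales';

-- A synonym hides the remote location from application SQL
CREATE SYNONYM customers FOR customers@sales_db;

-- A single statement can now join local and remote tables transparently
SELECT o.order_no, c.customer_name
FROM   orders o, customers c
WHERE  o.customer_no = c.customer_no;
```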
A distributed transaction is a transaction that modifies tables on more than one database and then expects all the changes to be committed; if there is any kind of failure, all the changes on all the databases are rolled back. A distributed transaction can involve many Oracle databases but only one non-Oracle database. The Oracle two-phase commit mechanism controls the synchronization of commits across all the databases and can automatically roll back the changes on all of them if any kind of failure occurs; the RECO background process synchronizes this operation. In addition to this functionality, Oracle also provides the capability to replicate tables from one database to others. This is called creating a snapshot of the table. You create a snapshot with the CREATE SNAPSHOT command on the database where you want to have the copy of the data. The Oracle RDBMS software automatically sends any changes made to the master copy of the table to each of the snapshot copies, at user-defined intervals, without any manual intervention. The snapshot mechanism also enables you to make updates to the snapshot copy of the table, in which case the changes are sent from the copy table back to the master table. Chapter 53, "Networking Oracle," discusses distributed databases in greater detail.

National Language Support

Oracle's national language support (NLS) enables users to use the database in their own languages. It provides the following functions:

- Support for different encoding schemes, so that data created with an encoding scheme on one machine can be processed and displayed on another. You define the character set used for storing data on the database as part of the CREATE DATABASE statement. For example, data created on a database with the 7-bit U.S. ASCII standard character set can be displayed on a machine connected to the same database using the Chinese GB2312-8 character set.
Translation tables within the national language support provide this capability.

- Control over the language used for server error and informational messages, numbers, dates, currency formats, and the starting day of the week.

- Support for linguistic sorting, to ensure that characters appear next to each other in the correct order in a sort.

- Definition of an NLS language either for the database as a whole or at the session level. With changes to the session parameters but without any changes to the Oracle user account, you could run some sessions in English, others in German, others in French, and so on, even if the same Oracle username is connected to the database with many different sessions.

You can add support for new languages using the NLS*WorkBench product, which essentially maintains translation tables for interpreting input from the user and for displaying output. When it comes to delivering application systems in different languages, the most important part of the user interface is the prompts, boilerplate, and messages from the application. Currently, the application developers themselves define how the boilerplate, prompts, and messages change from one language to another; Oracle is working on a translation product to make this task easier.

Following a SQL Statement Through the Architecture

In this section, I bring together the major parts of the Oracle architecture and follow the steps a typical SQL statement might go through to be executed. I use a simple scenario: both the Oracle SQL*Plus tool and the Oracle database server run on a UNIX machine without any networking involved, in a single-task configuration, and the Oracle instance has just started.
The following are some of the steps involved in executing SQL statements.

1. The user executes the SQL*Plus tool and enters the Oracle username and password.

2. Oracle validates the username and password against the data dictionary and sends a response to the user process to indicate connection.

3. The user enters a SELECT statement.

4. Oracle must translate the SELECT statement before it executes it, so the Oracle parser and optimizer are called. If any user has issued exactly the same statement before, the parsed version might be in the shared pool area in memory; Oracle then uses that parsed version, and no extra parsing is done for this statement.

5. To translate the SELECT statement, Oracle must obtain the names of the objects, the privileges, and other information from the data dictionary. The data dictionary cache area in the shared pool in the SGA does not yet have the information on the tables, so parsing of the SELECT statement is suspended while the information is read in.

6. Oracle runs a recursive SQL statement (a system-generated statement) to load information about the objects from the data dictionary tables in the database files into the data dictionary cache in memory.

7. Parsing of the original user SELECT statement resumes, and Oracle constructs an optimization plan to control the way the statement runs.

8. The statement accesses a table. Assume the Oracle blocks for the table are not in the database buffer cache in the SGA; the required Oracle blocks are read in from the database files and held in the cache area of the SGA.

9. Oracle runs the statement and returns the results to the user.

10. The user issues an UPDATE statement to modify some of the fields on the rows he has just selected. Because the data dictionary cache already has the information about the table in memory, no more recursive SQL is generated (assuming the information has not been flushed out by another process requiring space in the cache area).
Also, the Oracle blocks for the table are already in the database buffer cache, so no further disk I/O is needed to read these blocks in.

11. Oracle locks the rows to be updated.

12. Before Oracle makes the update, information about the old state of the blocks is recorded in a rollback segment, and the original version of the values is also recorded in the redo buffer cache.

13. Oracle updates the rows and records the new version of the values in the redo buffer cache in the SGA.

14. The user issues the COMMIT command to make the change permanent to the database.

15. Oracle records an entry indicating a commit in the redo buffer cache, and the old values, new values, and commit entry are all flushed down to the online redo log (whichever one is currently in use).

16. The rollback segment is released (but not overwritten) for other processes to use.

17. Oracle releases the locks on the table.

18. The user receives a "commit successful" message (the exact wording of this message varies from tool to tool).

19. If the user issues a rollback instead of a commit, the old versions of the changed values are restored to the Oracle data blocks from the rollback segment, and Oracle writes to the redo buffer cache an entry indicating that a rollback was performed.

Summary

In this chapter, I covered the major parts of the Oracle architecture: the architecture used by all the tools and commands
against the database. Even though familiarity with the intricate workings of the architecture is not necessary for you to use the database and tools, it provides valuable background for working out why the database is not functioning as it should. Knowledge of the architecture also lends more meaning to some of the more cryptic error and information messages.
Part II

Oracle Tools and Utilities
4  SQL
    Oracle's SQL: An Overview
    SQL—The Standard
    Data Types
    Numeric
    Date
    Character
    Binary
    Others
    The CREATE Statement
    Tables
    Indexes
    Sequences
    Other Objects
    Writing Queries
    Built-In Functions
    Know Your Tables and Views
    Joining Tables
    Avoid Cartesian Joins
    Outer Joins
    Subqueries
    The DECODE Statement
    INSERTs, UPDATEs, and DELETEs
    Master/Detail or Parent/Child SQL
    Additional Tips
    Summary

4
SQL

Oracle's SQL: An Overview

Structured Query Language (SQL) was introduced by IBM as the language to interface with its prototype relational database management system, System R. The first commercially available SQL relational database management system was introduced in 1979 by Oracle Corporation. Today, SQL has become an industry standard, and Oracle Corporation clearly leads the world in relational database management system technology. Because SQL is a non-procedural language, sets of records can be manipulated instead of one record at a time. The syntax is free-flowing, enabling you to concentrate on the data presentation. Oracle has two optimizers (cost-based and rule-based) that will parse the syntax and format it into an efficient statement before the database engine receives it for processing. The database administrator (DBA) determines which optimizer is in effect for each database instance.
SQL—The Standard

The American National Standards Institute (ANSI) has declared SQL the standard language for relational database management systems. Most companies that produce relational database management systems support SQL and tend to comply with the ANSI SQL89 standard.

Data Types

This chapter does not give a lesson on data modeling or on creating a proper database schema. To write proper SQL statements, however, familiarity with database objects (tables, views, constraints) is essential. One general rule to follow when you are writing SQL statements is that data types cannot be mixed. Conversion utilities are available to convert from one type to another; these conversion functions are covered later in this chapter.

Numeric

The NUMBER data type is used to store zero, negative, positive, fixed, and floating-point numbers with up to 38 digits of precision. Numbers range between 1.0x10^-130 and 1.0x10^126. Numbers can be defined in one of three ways:

NUMBER(p,s), where p is the precision (up to 38 digits) and s is the scale (the number of digits to the right of the decimal point). The scale can range from -84 to 127.

NUMBER(p). This is a fixed-point number with a scale of zero and a precision of p.

NUMBER. This is a floating-point number with a precision of 38.

The following table shows how Oracle stores different scales and precisions:

    Actual Data    Defined as      Stored as
    123456.789     NUMBER(6,2)     123456.79
    123456.789     NUMBER(6)       123457
    123456.789     NUMBER(6,-2)    123400
    123456.789     NUMBER          123456.789
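The three NUMBER forms above can be sketched in a single hypothetical table definition (the table and column names are illustrative only):

```sql
CREATE TABLE amounts
    (exact_amt  NUMBER(9,2),  -- fixed-point: nine digits, two after the decimal
     whole_amt  NUMBER(9),    -- fixed-point with scale 0: values round to integers
     any_amt    NUMBER);      -- floating-point with the full 38 digits of precision

-- On insert, each column applies its own scale, as in the table above:
-- 1234.567 is stored as 1234.57, 1235, and 1234.567 respectively
INSERT INTO amounts VALUES (1234.567, 1234.567, 1234.567);
```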
Date

Instead of storing date and time information in a character or numeric format, Oracle provides a separate DATE data type. For each DATE value, the following information is stored:

  Century
  Year
  Month
  Day
  Hour
  Minute
  Second

You can easily retrieve the current date and time by using the function SYSDATE. Date arithmetic is possible using number constants or other dates; only addition and subtraction are supported. For example, SYSDATE + 7 returns one week from today.

Every database instance has a default date format, defined by the initialization parameter NLS_DATE_FORMAT. This parameter is usually set to DD-MON-YY, where DD is the day of the month (the first day of the month is 01), MON is the abbreviated month name, and YY is a two-digit year designation. If you do not specify a time, the default time is 12:00:00 a.m. If only the time component is captured, the default date is the first day of the current month.

Character

There are four character types available:

1. The CHAR data type is used where fixed-length fields are necessary. Any length up to 255 characters can be specified; the default length is 1. When data is entered, any leftover space is padded with blanks. All alphanumeric characters are allowed.

2. The VARCHAR2 data type is used for variable-length fields. A length must be supplied when you use this data type; the maximum length is 2000 characters. All alphanumeric characters are allowed.

3. The LONG data type is used to store large amounts of variable-length text. Any length up to 2 GB can be specified. Be aware that there are some restrictions on using this data type:

   Only one column per table can be defined as LONG.
   A LONG column cannot be indexed.
   A LONG column cannot be passed as an argument to a procedure.
   A function cannot be used to return a LONG column.
   A LONG column cannot be used in WHERE, ORDER BY, GROUP BY, or CONNECT BY clauses.

4. The VARCHAR data type is synonymous with VARCHAR2. Oracle Corporation is reserving this type for future use; do not use it.

Binary

Two data types, RAW and LONG RAW, are available for storing binary data such as digitized sound and images. These data types have characteristics similar to the VARCHAR2 and LONG data types already mentioned. Use the RAW data type to store binary data up to 2000 bytes, and use the LONG RAW data type to store binary data up to 2 GB. Oracle only stores and retrieves binary data; no string manipulations are allowed. Data is retrieved as hexadecimal character values.

Others

Every row in the database has an address. You can retrieve this address by using the ROWID pseudocolumn. The format of a ROWID is

    BLOCK.ROW.FILE

where BLOCK is the data block, ROW is the row within that block, and FILE is the data file containing the row. The data is in hexadecimal format and has the data type ROWID.

MLSLABEL is a data type used to store the binary format of a label used on a secure operating system.

The CREATE Statement

The CREATE statement opens the world to the user. Whether a simple temporary table is to be created or a complex database schema, you will use the CREATE statement repeatedly. Only a few of the more common CREATE statements are covered here.

Tables

Every database designer will have to create a table sometime. The CREATE TABLE system privilege is needed to execute this command; the DBA is responsible for administering these privileges. The syntax to create a table is

    CREATE TABLE schema.table
        (column datatype DEFAULT expression column_constraint)
        table_constraint
        PCTFREE x  PCTUSED x
        INITRANS x  MAXTRANS x
        TABLESPACE name
        STORAGE clause
        CLUSTER cluster_clause
        ENABLE clause  DISABLE clause
        AS subquery

In this syntax, SCHEMA is an optional parameter that identifies which database schema to place this table in. The default is your own.
TABLE is mandatory and is the name of your table.
COLUMN and DATATYPE are required to identify each column in the table; separate column definitions with commas. There is a maximum of 254 columns per table.

DEFAULT expression is optional and assigns a default value to a column when a subsequent INSERT statement fails to supply one.

COLUMN CONSTRAINT is optional. It defines an integrity constraint on a single column, such as NOT NULL.

TABLE CONSTRAINT is optional and defines an integrity constraint as part of the table, such as the primary key.

PCTFREE is optional and defaults to 10, indicating that 10 percent of each data block is reserved for future updates to the table's rows. Integers from 1 to 99 are allowed.

PCTUSED is optional and defaults to 40. This is the minimum percentage of used space that Oracle maintains before a data block becomes a candidate for row insertion. Integers from 1 to 99 are allowed. The sum of PCTFREE and PCTUSED must be less than 100.

INITRANS is optional and defaults to 1. Integers from 1 to 255 are allowed. This is the number of transaction entries allocated within each data block of the table. It is recommended that you leave this alone.

MAXTRANS is optional and has a default that is a function of the data block size. It identifies the maximum number of concurrent transactions that can update a data block of your table. It is recommended that this parameter not be changed.

TABLESPACE is optional; the default is the default tablespace of the schema owner. A different tablespace name can be specified. Tablespace names are usually application-dependent, and the DBA will be able to give proper recommendations.

STORAGE is optional and has default characteristics defined by the DBA.

CLUSTER is optional and specifies that the table is to be part of a cluster. You must identify the columns from the table that are to be clustered.
Typically, the cluster columns are the columns that comprise the primary key.

ENABLE is optional and turns on an integrity constraint.

DISABLE is optional and turns off an integrity constraint.

AS subquery is optional and inserts the rows returned by the subquery into the table upon creation.

Once the table is created, you can use the ALTER TABLE command to alter it. To modify an integrity constraint, DROP the constraint first and then re-create it.

Let's look at two examples of creating tables:

    CREATE TABLE ADDRESSES
        (ADRS_ID      NUMBER(6),
         ACTIVE_DATE  DATE,
         BOX_NUMBER   NUMBER(6),
         ADDRS_1      VARCHAR2(40),
         ADDRS_2      VARCHAR2(40),
         CITY         VARCHAR2(40),
         STATE        VARCHAR2(2),
         ZIP          VARCHAR2(10));

This is the simplest form of a table create, using all of the default capabilities. The second example follows:

    CREATE TABLE ADDRESSES
        (ADRS_ID      NUMBER(6) CONSTRAINT PK_ADRS PRIMARY KEY,
         ACTIVE_DATE  DATE DEFAULT SYSDATE,
         BOX_NUMBER   NUMBER(6) DEFAULT NULL,
         ADDRS_1      VARCHAR2(40) NOT NULL,
         ADDRS_2      VARCHAR2(40) DEFAULT NULL,
         CITY         VARCHAR2(40) DEFAULT NULL,
         STATE        VARCHAR2(2) DEFAULT 'NY',
         ZIP          VARCHAR2(10))
        PCTFREE 5
        PCTUSED 65
        TABLESPACE adrs_data
        STORAGE (INITIAL 5140 NEXT 5140 MINEXTENTS 1
                 MAXEXTENTS 10 PCTINCREASE 10);

In this example, data constraints are utilized and certain storage parameters are in effect. Adjusting PCTFREE and PCTUSED is a good idea if your data is relatively static.

Indexes

Indexes are used to increase the performance of the database. An index is created on one or more columns of a table or cluster, and multiple indexes per table are allowed. The CREATE INDEX system privilege is needed to execute this command; the DBA is responsible for administering these privileges. The syntax to create an index is

    CREATE INDEX schema.index
        ON schema.table (column ASC/DESC)
        CLUSTER schema.cluster
        INITRANS x  MAXTRANS x
        TABLESPACE name
        STORAGE clause
        PCTFREE x
        NOSORT

In this syntax, SCHEMA is an optional parameter that identifies which database schema to place this index in. The default is your own.

INDEX is mandatory and is the name of the index.

ON is a mandatory reserved word.

TABLE is the mandatory name of the table upon which the index will be built.

COLUMN is the column name to be indexed. If there is more than one column, make sure they are listed in order of priority.

ASC/DESC are optional parameters. Indexes are built in ascending order by default; use DESC for descending order.

CLUSTER is needed only if this index is for a cluster.

INITRANS is optional and defaults to 1. Integers from 1 to 255 are allowed. This is the number of transaction entries allocated within each data block of the index. It is recommended that this parameter not be changed.

MAXTRANS is optional and has a default that is a function of the data block size. It identifies the maximum number of concurrent transactions that can update a data block of the index. It is recommended that this parameter not be changed.

TABLESPACE is optional; the default is the default tablespace of the schema owner. A different tablespace than the default might be needed, and the DBA will be able to give some recommendations.

STORAGE is optional and has default characteristics defined by the DBA.

PCTFREE is optional and defaults to 10, indicating that 10 percent of each data block is reserved for future updates to the index. Integers from 1 to 99 are allowed.

NOSORT is an optional parameter that saves time when creating the index if the table data is already stored in ascending order. It cannot be used when a clustered index is being created.

Using the ADDRESSES table defined in the create table example, two indexes are created in the next example.
    CREATE INDEX x_adrs_id ON ADDRESSES (ADRS_ID);

This creates an index on the ADRS_ID column only.

    CREATE INDEX x_city_state ON ADDRESSES (CITY, STATE)
        TABLESPACE application_indexes;

This index has two columns, with CITY as the leading column. For a query to use an index, the indexed column names must appear in the statement's WHERE clause. If a query referenced STATE but not CITY, the index would not be used. However, if the query referenced CITY but not STATE, part of the index would be used because CITY is the first column of the index.
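Assuming the two indexes just created, the leading-column behavior can be sketched as follows (whether an index is actually chosen also depends on the optimizer in effect):

```sql
-- Can use x_city_state: the leading column CITY is referenced
SELECT ADRS_ID FROM ADDRESSES WHERE CITY = 'Albany';

-- Can use both columns of x_city_state
SELECT ADRS_ID FROM ADDRESSES WHERE CITY = 'Albany' AND STATE = 'NY';

-- Cannot use x_city_state: STATE alone skips the leading column
SELECT ADRS_ID FROM ADDRESSES WHERE STATE = 'NY';
```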
Sequences

Sequences are a great way to have the database automatically generate unique integer primary keys. The CREATE SEQUENCE system privilege is needed to execute this command; the DBA is responsible for administering these privileges. The syntax to create a sequence is

    CREATE SEQUENCE schema.name
        INCREMENT BY x
        START WITH x
        MAXVALUE x | NOMAXVALUE
        MINVALUE x | NOMINVALUE
        CYCLE | NOCYCLE
        CACHE x | NOCACHE
        ORDER | NOORDER

In this syntax, SCHEMA is an optional parameter that identifies which database schema to place this sequence in. The default is your own.

NAME is mandatory because it is the name of the sequence.

INCREMENT BY is optional; the default is 1, and zero is not allowed. If a negative integer is specified, the sequence descends; a positive integer makes the sequence ascend (the default).

START WITH is an optional integer that enables the sequence to begin anywhere.

MAXVALUE is an optional integer that places an upper limit on the sequence.

NOMAXVALUE is optional and is the default. It sets the maximum to 10^27 for ascending sequences and -1 for descending sequences.

MINVALUE is an optional integer that determines the minimum value of the sequence.

NOMINVALUE is optional and is the default. It sets the minimum to 1 for ascending sequences and -(10^26) for descending sequences.

CYCLE enables the sequence to continue generating values after the maximum has been reached; once the maximum is reached, the next value generated is the defined minimum.

NOCYCLE prevents the sequence from generating values beyond the defined maximum or minimum. This is the default.

CACHE preallocates sequence numbers and keeps them in memory for faster access. The minimum value is 2.
NOCACHE disables the preallocation of sequence numbers.

ORDER ensures that sequence numbers are generated in the order of the requests.

NOORDER does not ensure that sequence numbers are generated in the order they are requested.

If you want to create a sequence for the ADRS_ID column in the ADDRESSES table, it could look like the following example:

    CREATE SEQUENCE adrs_seq
        INCREMENT BY 5
        START WITH 100;

To generate a new sequence number, use the pseudocolumn NEXTVAL, preceded by your sequence name. For example, adrs_seq.nextval would return 100 for the first access and 105 for the second. If you need to determine the current sequence number, use CURRVAL; adrs_seq.currval returns the current value of the sequence.

Other Objects

The purpose of this chapter is not to elaborate on every SQL statement; the statements given have been covered to provide an overview of the more common CREATE statements. Listed next are the objects that can be created with the CREATE statement: CREATE xxx, where xxx is one of the following:

    CLUSTER
    CONTROLFILE
    DATABASE
    DATABASE LINK
    DATAFILE
    FUNCTION
    INDEX
    PACKAGE BODY
    PACKAGE
    PROCEDURE
    PROFILE
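A typical use of NEXTVAL and CURRVAL as described above is supplying primary keys during inserts; this sketch assumes the ADDRESSES table and adrs_seq sequence from the earlier examples:

```sql
-- NEXTVAL generates the next key: 100 on the first call, then 105, 110, ...
INSERT INTO ADDRESSES (ADRS_ID, ADDRS_1, CITY, STATE, ZIP)
VALUES (adrs_seq.NEXTVAL, '12 Main St', 'Albany', 'NY', '12201');

-- CURRVAL re-reads the value just generated in this session, for example
-- to key related detail rows to the same address
SELECT adrs_seq.CURRVAL FROM DUAL;
```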