Apress - Pro SQL Server 2008 Relational Database Design and Implementation (2008)03


CHAPTER 1: INTRODUCTION TO DATABASE CONCEPTS

A thorough database design process will undergo four distinct phases, as follows:

• Conceptual: This is the "sketch" of the database that you will get from initial requirements gathering and customer information. During this phase, you attempt to identify what the user wants. You try to find out as much as possible about the business process for which you are building this data model, its scope, and, most important, the business rules that will govern the use of the data. You then capture this information in a conceptual data model consisting of a set of "high-level" entities and the interactions between them.

• Logical: The logical phase is a refinement of the work done in the conceptual phase, transforming what is often a loosely structured conceptual design into a full-fledged relational database design that will be the foundation for the implementation design. During this stage, you fully define the required set of entities, the relationships between them, the attributes of each entity, and the domains of those attributes (i.e., the sort of data the attribute holds and the range of valid values).

• Implementation: In this phase, you adapt the logical model for implementation in the host relational database management system (RDBMS; in our case, SQL Server).

• Physical: In this phase, you create the model where the implementation data structures are mapped to physical storage. This phase is also, more or less, the performance tuning/optimization phase of the project, because your implementation should function in the same way no matter what the physical hardware looks like. It might not function very fast, but it will function. It is during this phase of the project that indexes, disk layouts, and so on come into play, and not before.

The first four chapters of this book are concerned with the conceptual and logical design phases, and I make only a few references to SQL Server.
Generally speaking, the logical model of any relational database will be the same, be it for SQL Server, Oracle, Informix, DB2, MySQL, or anything else based, in some measure, on the relational model.

Note: A lot of people use the name physical to indicate that they are working on the SQL Data Definition Language (DDL) objects, rather than the meaning I give, where it is the layer "below" the SQL language. But lumping both the DDL and the tuning layers into one "physical" layer did not sit well with some readers/reviewers, and I completely agree. The implementation layer is purely SQL and doesn't care too much about tuning. The physical layer is pure tuning, and nothing done in that layer should affect the meaning of the data.

Conceptual

The conceptual design phase is essentially a process of analysis and discovery, the goal being to define the organizational and user data requirements of the system. Note that there are other parts to the overall design picture beyond the needs of the database design that will be part of the conceptual design phase (and all follow-on phases), but for this book, the design process will be discussed in a manner that may make it sound as if the database is all that matters. (As a reader of this book who is actually reading this chapter on fundamentals, you probably feel that way already.) Two of the core activities that make up this stage are as follows:

• Discovering and documenting a set of entities and the relationships between them
• Discovering and documenting the business rules that define how the data can and will be used, as well as the scope of the system that you are designing
Your conceptual design should capture, at a high level, the fundamental "sets" of data that are required to support the business processes and users' needs. Entity discovery is at the heart of this process. Entities correspond to nouns (people, places, and things) that are fundamental to the business processes you are trying to improve by creating software. Consider a basic business statement such as the following:

People place orders in order to buy products.

Immediately, you can identify three conceptual entities (people, orders, and products) and begin to understand how they interact. Note, too, that phrases such as "in order" can be confusing; had the writer of this spec been writing well, the phrase would have been "People place orders to buy products."

Note: An entity is not the same thing as a table. A table is an implementation-specific SQL construct. Sometimes an entity will map directly to a table in the implementation, but often it won't. Some conceptual entities will be too abstract to ever be implemented, and sometimes they will map to two or more tables. It is a major (if somewhat unavoidable because of human nature) mistake at this point of the process to begin thinking about how the final database will look. The primary point of this note is simply that you should not rush the design process by worrying about implementation details until you start to flip bits on the SQL Server. The next section of this chapter will establish the terminology in more detail. In the end, one section had to come first, and this one won.

During this conceptual phase, you need to do the requisite planning and analysis so that the requirements of the business and its customers are met. The conceptual design should focus steadfastly on the broader view of the system, and it may not correspond to the final, implemented system.
However, it is a vital step in the process and provides a great communication tool for participants in the design process.

The second essential element of the conceptual phase is the discovery of business rules. These are the rules that govern the operation of your system, certainly as they pertain to the process of creating a database and the data to be stored in the database. Often, no specific tool is used to document these rules, other than Microsoft Excel or Word. It is usually sufficient that business rules are presented as a kind of checklist of things that a system must or must not do, for example:

• Users in group X must be able to change their own information.
• Each company must have a ship-to address and, optionally, a bill-to address if its billing address is different.
• A product code must be 12 characters in length and be in the format XXX-XXX-XXXX.

From these statements, the boundaries of the final implemented system can be determined. These business rules may encompass many different elements of business activity. They can range from very specific data-integrity rules (e.g., the newly created order date has to be the current date) to system processing rules (e.g., report X must run daily at 12 a.m.) to rules that define part of the security strategy (e.g., only this category of users should be able to access these tables). Expanding on that final point, a security plan ought to be built during this phase and used to implement database security in the implementation phase. Too often, security measures are applied (or not) as an afterthought.
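To give a feel for where rules like these eventually end up, here is one way the product-code rule above might later translate into a declarative check in SQL Server. This is only a sketch: the table, schema, and constraint names are invented for the example, and the assumption that the code is alphabetic (rather than alphanumeric) is mine, not the rule's.

```sql
-- Hypothetical table illustrating how the product-code business rule
-- could be enforced declaratively; all names here are illustrative.
CREATE TABLE Inventory.Product
(
    ProductCode varchar(12) NOT NULL
        -- 12 characters in the format XXX-XXX-XXXX (letters assumed)
        CONSTRAINT CHKProduct_ProductCode_Format
            CHECK (ProductCode LIKE
                   '[A-Z][A-Z][A-Z]-[A-Z][A-Z][A-Z]-[A-Z][A-Z][A-Z][A-Z]'),
    ProductName varchar(100) NOT NULL
);
```

An insert such as 'ABC-DEF-GHIJ' would pass this check, while 'ABCDEFGHIJKL' would be rejected. How (and whether) each rule becomes a constraint is an implementation-phase decision, as described later in this chapter.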
Note: It is beyond the scope of this book to include a full discussion of business rule discovery, outside of what is needed to shape and then implement integrity checks in the data structures. However, business rule discovery is a very important process that has a fundamental impact on the database design. For a deeper understanding of business rules, I suggest getting one of the many books on the subject.

During this process, you will encounter certain rules that "have to" be enforced and others that are "conditionally" enforced. For example, consider the following two statements:

• Applicants must be 18 years of age or older.
• Applicants should be between 18 and 32 years of age, but you are allowed to accept people of any age if you have proper permission.

The first rule can easily be implemented in the database. If an applicant enters an age of 17 years or younger, the RDBMS can reject the application and send back a message to that effect. The second rule is not quite so straightforward to implement. In this case, you would probably require some sort of workflow process to route the request to a manager for approval. T-SQL code is not interactive, and this rule would most certainly be enforced outside the database, probably in the user interface (UI).

It pays to be careful with any rule, even the first. No matter what the initial rules state, the leeway to break the rules is still a possibility. Unfortunately, this is just part of the process. The important thing to recognize is that every rule that is implemented in an absolute manner can be trusted, while breakable rules must be verified with every usage.

Note: Ideally, the requirements at this point would be perfect and would contain all business rules, processes, and so forth, needed to implement a system. The conceptual model would contain, in some form, every element needed in the final database system. However, we do not live in a perfect world.
Users generally don't know what they want until they see it. Business analysts miss things, sometimes honestly, but often because they jump to conclusions or don't fully understand the system. Hence, some of the activities described as part of building a conceptual model can spill over into the logical modeling phase.

Logical

The logical phase is a refinement of the work done in the conceptual phase. The output from this phase will be an essentially complete blueprint for the design of the relational database. Note that during this stage you should still think in terms of entities and their attributes, rather than tables and columns. No consideration should be given at this stage to the exact details of "how" the system will be implemented. As previously stated, a good logical design could be built on any RDBMS. Core activities during this stage include the following:

• Drilling down into the conceptual model to identify the full set of entities that define the system.
• Defining the attribute set for each entity. For example, an Order entity may have attributes such as Order Date, Order Amount, Customer Name, and so on.
• Applying normalization rules (covered in Chapter 4).
• Identifying the attributes (or groups of attributes) that make up candidate keys (i.e., sets of attributes that could uniquely identify an instance of an entity). This includes primary keys, foreign keys, surrogate keys, and so on (all described in Chapter 5).
• Defining relationships and associated cardinalities.
• Identifying an appropriate domain (which will become a datatype) for each attribute and whether values are required.

While the conceptual model was meant to give the involved parties a communication tool to discuss the data requirements and to start seeing a pattern to the eventual solution, the logical phase is about applying proper design techniques. The logical modeling phase defines a blueprint for the database system, which can be handed off to someone else with little knowledge of the system to implement using a given technology (which in our case is likely going to be some version of Microsoft SQL Server).

Note: Before we begin to build the logical model, we need to introduce a complete data modeling language. In our case, we will be using the IDEF1X modeling methodology, described in Chapter 2.

Implementation

During the implementation phase, you fit the logical design to the tool that is being used (in our case, an RDBMS, namely, SQL Server). This involves choosing datatypes, building tables, applying constraints, writing triggers, and so on, to implement the logical model in the most efficient manner. This is where platform-specific knowledge of SQL Server, T-SQL, and other technologies becomes essential. Occasionally this phase will entail some reorganization of the designed objects to make them easier to implement or to circumvent some inherent limitation of the RDBMS.
In general, for most designs there is seldom any reason to stray a great distance from the logical model, though the need to balance user load and hardware considerations can prompt changes to initial design decisions. Ultimately, one of the primary goals is that no data specified, and no integrity constraints identified, in the conceptual and logical phases will be lost. Data can (and will) be added, often to support the process of writing programs that use the data. The key is not to alter the designed meaning or, at least, not to take anything away from that original set of requirements.

It is at this point in the project that constructs will be applied to handle the business rules that were identified during the conceptual part of the design. These constructs will vary from the favored declarative constraints, such as defaults and check constraints, to less favorable but still useful triggers and, occasionally, stored procedures. Finally, this phase includes designing the security for the data we will be storing. We will work through the implementation phase of the project in Chapters 5, 6, 7, and 8.

Note: In many modeling tools, the physical phase denotes the point where the logical model is actually generated in the database. I will refer to this as the implementation phase, because the term physical model is also commonly used to describe the process by which the data is physically laid out onto the hardware. I also do this because it should then be clear to the reader what the implementation model is, regardless of the name used for this phase of the process.
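The declarative constructs mentioned above (defaults and check constraints) can be sketched together in one table definition. The table, schema, and rule below are hypothetical examples of my own, not from the book; they assume a Sales schema already exists and echo the "order date must be the current date" style of rule discussed earlier.

```sql
-- Sketch of favored declarative constructs applied during implementation;
-- Sales.SalesOrder and its constraint names are invented for illustration.
CREATE TABLE Sales.SalesOrder
(
    SalesOrderId int NOT NULL
        CONSTRAINT PKSalesOrder PRIMARY KEY,
    OrderDate    date NOT NULL
        -- default: new orders are stamped with the current date
        CONSTRAINT DFLTSalesOrder_OrderDate
            DEFAULT (CAST(GETDATE() AS date))
        -- check constraint: the order date may not be in the future
        CONSTRAINT CHKSalesOrder_OrderDate_NotFuture
            CHECK (OrderDate <= CAST(GETDATE() AS date))
);
```

Triggers and stored procedures would handle the rules that cannot be expressed this declaratively, as later chapters discuss.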
Physical

The goal of the physical phase is to optimize data access, for example, by implementing effective data distribution on the physical disk storage and by judicious use of indexes. While the purpose of the RDBMS is largely to isolate us from the physical aspects of data retrieval and storage, it is still very important to understand how SQL Server physically implements data storage in order to optimize database access code. During this stage, the goal is to optimize performance without changing the logical design in any way to achieve that aim. This is an embodiment of Codd's eleventh rule, which states the following:

An RDBMS has distribution independence. Distribution independence implies that users should not have to be aware of whether a database is distributed.

Note: We will discuss Codd's rules in Appendix A.

It may be necessary to distribute data across different files, or even different servers, but as long as the published logical names do not change, users will still access the data as columns in rows in tables in a database.

Note: Our discussion of the physical model will be reasonably limited. We will start by looking at entities and attributes during conceptual and logical modeling. In implementation modeling, we will switch gears to deal with tables, rows, and columns. The physical modeling of records and fields will be dealt with only briefly (in Chapter 8). If you want a deeper understanding of the physical implementation, check out Inside Microsoft SQL Server 2005: The Storage Engine by Kalen Delaney (Microsoft Press, 2006) or any subsequent books she may have released by the time you read this.
Relational Data Structures

This section introduces the following core relational database structures and concepts:

• Database and schema
• Tables, rows, and columns
• The Information Principle
• Keys
• Missing values (nulls)

As a reader of this book, this is probably not your first time working with a database, and as such, you are no doubt somewhat familiar with some of these concepts. However, you may find there are quite a few points presented here that you haven't thought about, for example, the fact that a table consists of unique rows or that, within a single row, a column must represent only a single value. These points make the difference between having a database that the client relies on without hesitation and having one in which the data is constantly challenged.
Database and Schema

A database is simply a structured collection of facts or data. It need not be in electronic form; it could be a card catalog at a library, your checkbook, a SQL Server database, an Excel spreadsheet, or even just a simple text file. Typically, when a database is in electronic form, it is arranged for ease and speed of search and retrieval. In SQL Server, the database is the highest-level container that you will use to group all the objects and code that serve a common purpose. On an instance of the database server, you can have multiple databases, but best practices suggest using as few as possible for your needs.

At the next level down is the schema. You use schemas to group objects in the database with common themes or even common owners. All objects on the database server can be addressed by knowing the database they reside in and the schema (note that you can set up linked servers and include a server name as well):

    databaseName.schemaName.objectName

Schemas will play a large part in your design, not only to segregate objects of like types but also because segregation into schemas allows you to control access to the data and restrict permissions, if necessary, to only a certain subset of the implemented database.

Once the database is actually implemented, it becomes the primary container used to hold, back up, and subsequently restore data when necessary. It does not limit you to accessing data within only that one database; however, managing data in separate databases becomes a more manual process, rather than a natural, built-in RDBMS function.

Caution: The term schema has another common meaning that you should be aware of: the entire structure for a database is also referred to as its schema.

Tables, Rows, and Columns

The object that will be involved in all your designs and code is the table. In your designs, a table will be used to represent something, either real or imaginary.
A table can be used to represent people, places, things, or ideas (i.e., nouns, generally speaking) about which information needs to be stored. The word table has the connotation of being an implementation-oriented term, for which Dictionary.com (http://dictionary.reference.com) has the following definition:

An orderly arrangement of data, especially one in which the data are arranged in columns and rows in an essentially rectangular form.

A basic example of this form of table that most people are familiar with is a Microsoft Excel spreadsheet, such as that shown in Figure 1-1.
Figure 1-1. Excel table

In Figure 1-1, the rows are numbered 1–6, and the columns are lettered A–F. The spreadsheet is a table of accounts. Every column represents an attribute of an account (i.e., a single piece of information about the account); in this case, you have a Social Security number, an account number, an account balance, and the first and last names of the account holder. Each row of the spreadsheet represents one specific account. So, for example, row 1 might be read as follows: "John Smith, holder of account FR4934339903, with SSN 111-11-1111, has a balance of –$100." (No offense if there is actually a John Smith with SSN 111-11-1111 who is broke; I just made this up!)

This data could certainly have been sourced from a query that returns a SQL table. However, this definition does not actually coincide with the way you should think of a table when working with SQL. In SQL, tables are a representation of data from which all the implementation aspects have been removed. The goal of relational theory is to free you from the limitations of the rigid structures of an implementation like an Excel spreadsheet.

In the world of relational databases, these terms have been somewhere between slightly and greatly refined, and the different meanings can get quite confusing. Let's look at the different terms and how they are presented from the following perspectives:

• Relational theory
• Logical/conceptual
• Implementation
• Physical

Table 1-1 lists all of the names that tables are given from the various viewpoints.
Table 1-1. Table Term Breakdown

Relation (relational theory): This term is seldom used by nonacademics, but some literature uses it exclusively to mean what most programmers think of as a table. It consists of rows and columns, with no duplicate rows. There is absolutely no ordering implied in the structure of the relation, neither for rows nor for columns. Note: relational databases take their name from this term; the name does not come from the fact that tables can be related. (Relationships are covered later in this chapter.)

Entity (logical/conceptual): An entity can be loosely represented by a table with columns and rows. An entity initially is not governed as strictly as a table. For example, if you are modeling a human resources application, an employee photo would be an attribute of the Employees entity. During the logical modeling phase, many entities will be identified, some of which will actually become tables, and some of which will become several tables. The formation of the implementation tables is based on a process known as normalization, which we'll cover extensively in Chapter 4.

Recordset/rowset (implementation): A recordset/rowset is a table that has been made physical for a use, such as sending results to a client. Most commonly, it will be in the form of a tabular data stream that the user interface/middle-tier objects can use. Recordsets do have order, in that usually (depending on the implementation) the columns and the rows can be accessed by position. (However, it's questionable whether they should be accessed in this way.) Seldom will you deal with recordsets in the context of database design. A set in relational theory terms has no ordering, so technically a recordset is not a set per se. I didn't come up with the name, but it's common terminology.

Table (implementation): The term table is almost the same as relation.
It is a particularly horrible name, because the structure this list of terms is presented in is also referred to as a table. These tables, much like Excel tables, have order. It cannot be reiterated enough that tables have no order (the section "The Information Principle" later in this chapter will clarify this concept further). Another concern is that a table may technically have duplicate rows. It is up to you as the developer to apply constraints to make certain that duplicate rows are not allowed. Tables also have another usage: the results of a query (including the intermediate results that occur as a query processes multiple joins and the other clauses of a query) are also called tables, and the columns in these intermediate tables may not even have column names. Note: this one naming issue causes more problems for new SQL programmers than any other.

File (physical): In many database systems (such as Microsoft FoxPro), each operating system file represents a table (sometimes a table is actually referred to as a database, which is just way too confusing). Multiple files make up a database.
During the conceptual and logical modeling phases, the process will be to identify the entities that define the system. Each entity is described by a unique set of attributes. An entity is often implemented as a table (but remember, there is not necessarily a direct relationship between the two), with the attributes defining the columns of that table. You can think of each instance of an entity as analogous to a row in the table.

Drilling into the table structure, we next discuss columns. Generally speaking, a column is used to contain some piece of information about a row in a table. Atomic or scalar is the common term used to describe the type of data that is stored in a column. The key is that the column represents data at the lowest level that you will need to work with in SQL. Another, clearer term, nondecomposable, is possibly the best way to put it, but scalar is the term used by most people. Usually this means a single value, such as a noun or a word, but it can mean something like a whole chapter of a book stored in a binary column, or even a complex type such as a point with longitude and latitude. The key is that the column represents a single value that resists being broken down to a lower level than what is defined. So, having two independent columns, say Column.X and Column.Y, is perfectly acceptable, while defining a single column to hold values like '1,1' would not be, because that value needs to be broken apart to be useful.

Note: The new datatypes, like XML, the spatial types (geometry and geography), hierarchyid, and even custom-defined CLR types, really start to muddy the waters of atomic, scalar, and nondecomposable column values.
Each of these has some value, but in your design, the initial goal is to use a scalar type first and one of what are commonly referred to as "beyond relational" types as a fallback for implementing structures that are overly difficult to build using scalars only.

Table 1-2 lists all the names that columns are given from the various viewpoints, several of which we will use in different contexts as we progress through the design process.

Table 1-2. Column Term Breakdown

Attribute (logical/conceptual): The term attribute is common in the programming world. It basically specifies some information about an object. In early logical modeling, this term can be applied to almost anything, and it may actually represent other entities. Just as with entities, normalization will change the shape of the attribute to a specific basic form.

Column (implementation): A column is a single piece of information describing what the row represents. Values that the column is designed to deal with should be at their lowest form and will not be divided for use in the database system. The position of a column within a table must be unimportant to its usage, even though SQL does define a left-to-right order of columns. All access to a column will be by name, not position.

Field (physical): The term field has a couple of meanings. One meaning is the intersection of a row and a column, as in a spreadsheet (this might also be called a cell). The other meaning is more related to early database technology: a field was the physical location in a record (we'll look at this in more detail in Table 1-3). There are no set requirements that a field store only scalar values, merely that it be accessible by a programming language.
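The scalar-column guidance above can be made concrete with a small sketch. The table and schema below are invented for illustration (the book does not define them); the point is simply the contrast between two independent coordinate columns and a single packed value.

```sql
-- Two independent columns (acceptable) versus one packed value (not);
-- Examples.Point is a hypothetical table used only for this illustration.
CREATE TABLE Examples.Point
(
    PointId int NOT NULL
        CONSTRAINT PKPoint PRIMARY KEY,
    X int NOT NULL,  -- each coordinate stands alone and is directly usable
    Y int NOT NULL
    -- By contrast, a single varchar column holding values such as '1,1'
    -- would force every query to parse the string apart before either
    -- coordinate could be compared, summed, or indexed.
);
```

With X and Y as separate columns, a predicate like WHERE X = 1 works directly; with a packed '1,1' value, it cannot without string surgery.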
Finally, Table 1-3 describes the different ways to refer to a row.

Table 1-3. Row Term Breakdown

Tuple (relational theory; pronounced "tupple," not "toople"): This is a finite set of related named value pairs. By "named," I mean that each of the values is known by a name (e.g., Name: Fred; Occupation: Gravel Worker). Tuple is a term seldom used except in academic circles, but you should know it, just in case you encounter it when you are surfing the Web looking for database information. (In addition, this knowledge will make you more attractive to the opposite sex. Yeah, if only...) Ultimately, tuple is a better term than row, since a row gives the impression of something physical, and it is essential not to think this way when working with data in SQL Server.

Instance (logical/conceptual): Basically, this would be one of whatever is being represented by the entity.

Row (implementation): This is essentially the same as a tuple, though the term row implies it is part of something (in this case, a row in a table). Each column represents one piece of data of the thing that the row has been modeled to represent.

Record (physical): A record is considered to be a location in a physical file. Each record consists of fields, which all have physical locations. This term should not be used interchangeably with the term row. A row has no physical location, just data in columns.

If this is the first time you've seen the terms listed in Tables 1-1 through 1-3, I expect that at this point you're banging your head against something solid, trying to figure out why such a great variety of terms are used to represent pretty much the same things. Many a newsgroup flame war has erupted over the difference between a field and a column, for example.
I personally cringe now whenever a person uses the term field, but I also realize that it isn't the worst thing if a person understands everything about how a table should be dealt with in SQL but misuses a term.

The Information Principle

The first of Codd's rules for an RDBMS states simply:

All information in a relational database is represented explicitly at the logical level in exactly one way—by values in tables.

This rule is known as the Information Principle (or Information Rule). It means that there is only one way to access data in a relational database: by comparing values in columns. What makes this so important is that in your code you will rarely need to care where the data is. You simply address the data by its name, and it will be retrieved. Rearranging columns, adding new columns, and spreading data to different disk subsystems should be transparent to your SQL code. In reality, there will be physical tuning to be done, and occasionally you will be required to use physical hints to tune the performance of a query, but this should be a relatively rare occurrence (if not, you are probably doing something wrong).
For example, the only way of knowing that employee A works for department B is by comparing the values in the relevant columns. There should be no backdoor way of finding this out (e.g., by accessing the data directly on disk). This leads nicely to Codd's second rule, known as the Guaranteed Access Rule:

Each and every datum (atomic value) in a relational database is guaranteed to be logically accessible by resorting to a table name, primary key value, and column name.

The second thing the Information Principle implies is that there is no order to the tables in the database. Just because rows are retrieved from a table and seem to be in a given order, there is no contract between you and SQL Server to return rows in any given order unless an order is specified in the retrieval operation. Hence, it is not possible to access a row by its position in the table.

There are two implications here. First, you should not need to go further than the column name to get a piece of data, so encoding multiple values in a column is considered wrong. Second, the physical order of the data in a table shall be unimportant to the use of the data. This allows you to reorder the physical structures with no worry of causing problems for the implementation, a concept that you will find extremely useful when you need to do performance tuning. Imagine if you had to access your data by the directory in which it was located. I would rather not.

The concept of order can be a big sticking point for many programmers. The confusion is made worse by the fact that data is always viewed in an array-like format. For example, consider a table T with columns X and Y:

    SELECT X, Y
    FROM   T

This returns the following (assuming, of course, that the table T consists of these values, which we will assume it does!):

    X    Y
    ---  ---
    1    A
    2    B
    3    C

It is easy to assume that this data is in this fixed order in the file where it is stored.
A more "accurate" (but admittedly not easier) way to picture data storage is as a group of values floating about somewhere in space, as shown in Figure 1-2.

    Figure 1-2. Logical view of table data
As such, how the rows are output is a cross between the function of the commands you use to retrieve them and how it is easiest for the database engine to get to the data. So, the following view of the data is equivalent to the previous table shown:

    X    Y
    ---  ---
    2    B
    1    A
    3    C

It is, of course, the same data—it's just ordered differently. Any database server has the right to return data in a different order if it is more convenient for it to do so. If you desire a guaranteed order, you must use an ORDER BY clause:

    SELECT X, Y
    FROM   T
    ORDER  BY X DESC

which now returns the data in the order specified:

    X    Y
    ---  ---
    3    C
    2    B
    1    A

And, for completeness, the very same data is returned by this:

    SELECT Y, X
    FROM   T
    ORDER  BY Y

    Y    X
    ---  ---
    A    1
    B    2
    C    3

Keep in mind that although the output of a SELECT statement has order, since the tables being selected from do not have order, a particular order cannot be assumed unless the order is forced by using an ORDER BY clause. Assuming the ordering of the result of a SELECT statement is one of the common mistakes made when dealing with SQL Server. Many programmers think that since they always receive the same results on their sandbox server (often a single processor/single disk channel arrangement), they will be guaranteed to get back data in the same order every time. Or they try to include an ORDER BY clause in a view definition (considered a virtual table in the SQL standards), only to get back the data in a different order once the physical representation changes.

Not to beat a dead horse (or even a live one), but the lack of data ordering in a relational database is an extremely important point to understand for your future implementation.
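To make the ordering contract concrete, here is a small sketch using SQLite through Python's standard sqlite3 module (not SQL Server; T-SQL syntax differs slightly, but the relational principle is the same): without an ORDER BY the engine may return rows in any order it finds convenient, while with one the order is guaranteed.

```python
import sqlite3

# In-memory database standing in for the chapter's table T.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (X INTEGER, Y TEXT)")
conn.executemany("INSERT INTO T (X, Y) VALUES (?, ?)",
                 [(2, "B"), (1, "A"), (3, "C")])

# Without ORDER BY, the row order is whatever is easiest for the engine;
# relying on it is exactly the mistake described above.
unordered = conn.execute("SELECT X, Y FROM T").fetchall()

# With ORDER BY, the order is part of the contract.
ordered = conn.execute("SELECT X, Y FROM T ORDER BY X DESC").fetchall()
print(ordered)  # [(3, 'C'), (2, 'B'), (1, 'A')]
```

The unordered result happens to come back in insertion order on most SQLite builds, but nothing in the SQL language promises that; only the second query's order may be depended on.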
Domains

The domain of a column is the set of valid values that the column is intended to store. For example, consider a column that is intended to store an employee's date of birth. The following list covers the types of data and a few boundaries that you need to consider:

• The value must be a calendar date with no time value.
• The value must be a date prior to the current date. (Otherwise, the person will not have been born yet.)
• The value of the date should evaluate such that the person is at least 16 or 18 years old, since you couldn't legally (and likely wouldn't want to!) hire a 10-year-old, for example.
• The value of the date should be less than 70 years ago, since rarely will an employee (especially a new employee) be that age.
• The value must be less than 120 years ago, since we certainly won't have a new employee that old.

Any value outside these bounds would clearly be in error. Together, these points could be taken to define the domain of the DateOfBirth column. In Chapter 6, we'll cover how you might implement this domain, but in the logical phase of the design, you just need to document the domain.

A great practice (not just a best practice!) is to have named domains to associate common attributes. For example, in this case there could be an employeeBirthDate domain. Every time the employee birth date is needed, it will be associated with this named domain. Admittedly, it is not always possible to do this, particularly if you don't have a tool that will help you manage it.

Domains do not have to be so specific, though.
For example, you might have the following named domains:

• positiveInteger: Integer values 1 and greater
• date: Any valid date value
• emailAddress: A string value that must be formatted as a valid e-mail address
• 30CharacterString: A string of characters that can be no longer than 30 characters

Keep in mind that if you define a domain as simply any positive integer, the maximum is theoretically infinity. Today's hardware boundaries allow some pretty far-out maximum values (e.g., 2,147,483,647 for a regular integer, and a really large number for a bigint type). It is fairly rare that a user will have to enter a value approaching 2 billion, but if you do not constrain the data within your domains, then reports and programs will need to be able to handle such large data. It is a bit less rare that you might use a bigint for a key of some sort, but frankly, in either case, the domain documentation will play a key role in the testing phase of system implementation.

■Note Domains and columns need not contain only single scalar values. As long as the values are accessible only through predefined operations, you can have fixed vector values, such as a point in a plane (e.g., longitude and latitude). The ability to represent these values in SQL Server data was actually new to SQL Server 2005 and will be discussed in Chapter 5. In SQL Server 2008, we now have spatial datatypes that represent a scalar (a point or a shape), but the internals can require a nonfixed number of points. The spatial datatypes are a bit of a challenge to the concept of a "fixed vector" of values, a topic we will discuss in more detail in Appendix C when the spatial datatypes are introduced.
Metadata

Metadata is data stored to describe other data. Knowing how to find information about the data stored in your system is very important. Codd's fourth rule states the following:

    The database description is represented at the logical level in the same way as ordinary data, so that authorized users can apply the same relational language to its interrogation as they apply to regular data.

This means you should be able to interrogate the system metadata using the same language you use to interrogate the user data (i.e., SQL). According to relational theory, a relation consists of two parts:

• Heading: The set of column name/datatype name pairs that define the columns of the table
• Body: The rows that make up the table

In SQL Server—and most databases—it is common to consider the catalog as a collective description of the tables and other structures in the database. SQL Server exposes the heading information in a couple of ways:

• In a set of views known as the information schema. It is best to use this as the primary means of viewing the properties of the objects in your database as far as is possible. It consists of a standard set of views used to view the system metadata and should exist on all database servers of any brand.

• In the SQL Server–specific catalog (or system) views. These views give you information about the implementation of your objects and many more physical properties of your system.

Keys

In relational theory, a relation is not allowed to have duplicate tuples. In all RDBMS products, however, there is no limitation that says that there must not be duplicate rows. However, it is strongly recommended that all tables have at least one candidate key defined to make certain that all rows are unique.
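The Metadata section's point, that the catalog is itself queryable with ordinary SQL, can be sketched with SQLite through Python's sqlite3 module. SQLite exposes its catalog as the sqlite_master table; in SQL Server you would query the INFORMATION_SCHEMA views (for example, INFORMATION_SCHEMA.TABLES) in the same spirit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (EmployeeNumber INTEGER, Name TEXT)")
conn.execute("CREATE TABLE Department (DepartmentId INTEGER, Name TEXT)")

# Codd's fourth rule in action: the description of the database is
# retrieved with the same SELECT syntax used for ordinary data.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print(tables)  # [('Department',), ('Employee',)]
```

No special API is needed to read the catalog: the same relational language interrogates both the data and its description.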
SQL Server does not have to be used in a proper relational manner, and some people don't mind having duplicate rows (if you are one of these people and still think that way after reading this book, please send us an e-mail at louis@drsql.org). However, in general, not defining keys and therefore allowing duplicate rows is very poor practice, as we will discuss in the next section.

■Note Having a table without keys is useful when doing some operations, such as when you need to import data from a text file; if uniqueness were enforced strictly in an RDBMS, you would have to do the cleansing of duplicate data in the text file. It can be much easier to import the data and do the cleansing inside the database using SQL commands.

Purpose of Keys

Every table should have at least one candidate key—an attribute (or combination of attributes) that can uniquely and unambiguously identify each instance of an entity (or, in the implementation model, each row in the table). In order to enforce this, the implementation of keys in the RDBMS will prevent duplicate values.
Consider the following table, T, with columns X and Y:

    X    Y
    ---  ---
    1    1
    2    1

If the design allowed the following INSERT operation:

    INSERT T (X,Y)
    VALUES  (1,1)

then there would be two identical rows in the table. This would be problematic for a couple of reasons:

• Remember that rows in a table are unordered. Hence, without keys, there would be no way to tell which of the rows with value (1,1) in the preceding table was which. It would be impossible to distinguish between these rows, meaning that there would be no logical method of accessing a single row. This makes it tricky to use, change, or delete an individual row without resorting to "tricks" that Microsoft has allowed in SQL Server (such as the TOP operator in statements).

• If more than one row has the same values, it describes the same object, so if you try to change one of the rows, then the other row should also change, and this becomes a messy situation.

If you had defined a key on column X, then the previous INSERT would fail, as would any other insert of a value of 1 for the X column, such as VALUES (1,3). Alternatively, if you create a key based on both columns X and Y (known as a composite key), the (1,3) insert would be allowed, but the (1,1) insert would still be forbidden.

■Note In a practical sense, no two rows can really be the same, because there are hidden attributes in the implementation details that prevent this situation from occurring (such as a row number or the exact location in the physical storage medium). However, this sort of physical thinking has no place in relational database design.

In summary, a key defines the uniqueness of rows over a column or set of columns. A table may have as many keys as is required to maintain the uniqueness of its rows, and a key may have as many columns as is needed to define its uniqueness.
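The composite-key scenario just described can be sketched with SQLite through Python's sqlite3 module (SQL Server would behave the same way with a PRIMARY KEY or UNIQUE constraint): with a key over (X, Y), the (1,3) insert succeeds while the duplicate (1,1) insert fails.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A composite candidate key over both columns, as in the example above.
conn.execute("CREATE TABLE T (X INTEGER, Y INTEGER, PRIMARY KEY (X, Y))")
conn.executemany("INSERT INTO T (X, Y) VALUES (?, ?)", [(1, 1), (2, 1)])

conn.execute("INSERT INTO T (X, Y) VALUES (1, 3)")  # allowed: (1,3) is new

try:
    conn.execute("INSERT INTO T (X, Y) VALUES (1, 1)")  # duplicate: forbidden
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Had the key been defined on column X alone, the (1,3) insert would also have been rejected, which is exactly the difference between a single-column and a composite key.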
The name candidate key might seem odd for this item, but it is so named because the keys defined may be used either as a primary key or as an alternate key.

Primary and Alternate Keys

A primary key (PK) is used as the primary identifier for an entity. It is used to uniquely identify every instance of that entity. It may be that you have more than one key that can perform this role, in which case, after the primary key is chosen, each remaining candidate key would be referred to as an alternate key (AK). For example, in the United States, you wouldn't want two employees with the same Social Security number (unless you are trying to check "IRS agent" off your list of people you haven't had a visit from). Every employee probably also has a unique, company-supplied identification number. One of these could be chosen as a PK (most likely the employee number), and the other would then be an AK.
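The employee example can be sketched as follows, again with SQLite through Python's sqlite3 module rather than SQL Server: the company-supplied number is the chosen primary key, and the Social Security number is an alternate key, which an RDBMS typically implements as a UNIQUE constraint. The column names here are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Employee (
        EmployeeNumber       INTEGER PRIMARY KEY,      -- primary key (PK)
        SocialSecurityNumber TEXT NOT NULL UNIQUE      -- alternate key (AK)
    )
""")
conn.execute("INSERT INTO Employee VALUES (1, '111-11-1111')")

# Either key enforces uniqueness: a second employee may not reuse the SSN...
try:
    conn.execute("INSERT INTO Employee VALUES (2, '111-11-1111')")
except sqlite3.IntegrityError as e:
    print("duplicate SSN rejected:", e)

# ...while a row with both values unique is accepted.
conn.execute("INSERT INTO Employee VALUES (2, '222-22-2222')")
```

Both candidate keys guard uniqueness equally; the PK/AK distinction is only about which one is chosen as the primary identifier for references from other tables.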