SQL Sample

SQL Server Professional: solutions for Microsoft SQL Server professionals. In "The Median Puzzle," Alex Kozak offers both ANSI SQL and "no holds barred" solutions. Unlike the arithmetic mean, the median is insensitive to "outstanding" data values that lie far from most others: it depends on the relative position of the values, not on the actual values.

Uploaded by

Luis Albino
Copyright
© Attribution Non-Commercial (BY-NC)


SQL Server Professional
Solutions for Microsoft SQL Server Professionals

The Median Puzzle


Alex Kozak

Applies to SQL Server 7.0, 2000, and 2005.

In columnist Tom Moreau's February 2001 column, "Breaking a Tie, Finding the Median," Tom offered a fast SELECT TOP solution. Here, Alex Kozak offers both ANSI SQL and "no holds barred" solutions.

Sample Issue contents: The Median Puzzle (Alex Kozak); Dr. Tom's Workshop: The Case Against Single-Instance Clusters (Tom Moreau); A First Look at Express Manager (Rick Dobson); SQL Essentials: Two New Background Technologies in SQL Server 2005 (Ron Talmage); Sample Issue Downloads.

Imagine you have a point-of-sale (POS) system installed in a store. You want to analyze how effectively the financial service provider processes the requests from your POS system; in other words, how long customers are waiting for their transactions' approval. You can start with either a time frame or a fixed number of transactions and calculate the average approval time. From a technical point of view, you don't see any problem. Since the POS system registers all of the transactions in a transactional database, and because ANSI SQL and T-SQL are equipped with aggregate functions, it shouldn't be difficult for you to calculate the averages or count the transactions. Unfortunately, it doesn't take you long to discover how sensitive averages are to "outstanding" data values: outliers that are far from most others in the data set. You start thinking about medians.


Means and medians


The median is the middle value of a data set: half the values are above the median and half are below it. In the data set {10, 10, 15, 20, 22}, the median is 15, because the number of cases below and above 15 is equal. Unlike the average (or arithmetic mean), the median depends on the relative position of the values, but not on the actual values. Our set's arithmetic mean, or average, is 15.4. Let's add the values 5 and 500,000 to our set: {5, 10, 10, 15, 20, 22, 500,000}. The median remains the same, but the arithmetic mean jumps to over 70,000. Although the central tendency is unchanged (the median didn't move), the jump in the mean tells us that some exceptional event has occurred in our system.
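The effect is easy to reproduce with a quick, throwaway script (my own illustration, using a temporary table that isn't part of the article's listings):

```sql
-- Illustration only: the mean moves with outliers; the median position doesn't.
CREATE TABLE #vals (v int NOT NULL)
INSERT INTO #vals VALUES(10)
INSERT INTO #vals VALUES(10)
INSERT INTO #vals VALUES(15)
INSERT INTO #vals VALUES(20)
INSERT INTO #vals VALUES(22)
SELECT mean_before = AVG(CAST(v AS FLOAT)) FROM #vals  -- 15.4
INSERT INTO #vals VALUES(5)
INSERT INTO #vals VALUES(500000)
SELECT mean_after = AVG(CAST(v AS FLOAT)) FROM #vals   -- roughly 71440
DROP TABLE #vals
```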

Icon legend:
6.0/6.5 - Applies to version 6.0/6.5
7 - Applies to version 7.0
2000 - Applies to version 2000
2005 - Applies to version 2005 (Yukon)
VB - Applies to VB/VBScript
.NET - Applies to the .NET Framework
LINK - Accompanying files or other material available online

But there's a problem with medians: What if you have an even number of values in your data set? You can arbitrarily select the lower of the two middle values as the median; this is called the statistical median. Or you can calculate the arithmetic mean (average) of the two middle values; this is called the financial median. [Visual Basic Developer subscribers may want to read Rick Rothstein's April 2005 article, "The Lesser Known Format Function," on the related issue of arithmetic rounding vs. banker's rounding, which is also discussed in Q196652. Ed.]
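A concrete illustration (mine, not the article's): in the even-sized set {110, 130, 150, 170}, the two middle values are 130 and 150, so the statistical median is 130 (the lower middle value) and the financial median is their average, 140:

```sql
-- Lower middle value vs. average of the two middle values for {110, 130, 150, 170}.
SELECT statistical_median = 130,
       financial_median = (130 + 150) / 2.0  -- 140.0
```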

Calculating the median in SQL


As Tom pointed out in his column, calculating the median in SQL isn't a trivial task, especially when you try to find a set-based solution. The median is a position-dependent characteristic of what's, by definition, an order-independent data set. This helps explain why there's no median function in either SQL or T-SQL. (There is, however, a Median() function in SQL Server Analysis Services.) Of course, you can calculate the median using T-SQL or any programming language, but it's often difficult to come up with what I'll call performant solutions. For the remainder of the article, I'll limit my discussion to set-based SQL solutions only. In Listing 1, we create a sample table and populate it with some data.
Listing 1. Creating a table called POS_delays and populating it with some sample data.
CREATE TABLE POS_delays (
  transaction_no int NOT NULL PRIMARY KEY,
  processing_delay int NOT NULL,
  register_no int NOT NULL)
INSERT INTO POS_delays VALUES(1, 115, 3)
INSERT INTO POS_delays VALUES(2, 110, 2)
INSERT INTO POS_delays VALUES(3, 135, 1)
INSERT INTO POS_delays VALUES(4, 130, 3)
INSERT INTO POS_delays VALUES(5, 155, 3)
INSERT INTO POS_delays VALUES(6, 150, 2)
INSERT INTO POS_delays VALUES(7, 175, 3)
INSERT INTO POS_delays VALUES(8, 170, 3)
INSERT INTO POS_delays VALUES(9, 195, 1)
INSERT INTO POS_delays VALUES(10, 190, 2)
INSERT INTO POS_delays VALUES(11, 215, 3)
INSERT INTO POS_delays VALUES(12, 210, 3)
INSERT INTO POS_delays VALUES(13, 250, 3)

The column transaction_no is the unique identifier for the rows, processing_delay is a delay in milliseconds, and register_no allows us to identify the transaction by location. In Listing 2, we calculate the statistical median for processing_delay.

Listing 2. Solution for the statistical median.

SELECT processing_delay
FROM POS_delays t1
WHERE (SELECT (COUNT(*) + 1)/2 FROM POS_delays) =
      (SELECT COUNT(*) FROM POS_delays t2
       WHERE t1.processing_delay >= t2.processing_delay)

As you can see, the main job is done in the WHERE clause. The expression on the left-hand side of the equals operator calculates the middle position's number for odd or even data sets. If, for example, the number of rows is 13, then the left operand will be (13+1)/2 = 7. For an even number of rows, say 12, the result will be (12+1)/2, which integer division truncates to 6. In both cases, the result is calculated properly according to the definition of the statistical median. The most interesting part of the WHERE clause is the correlated subquery that counts the cases where the current outer value is greater than or equal to the inner values. Since the outer and inner data sets are different copies of the POS_delays table, each outer value is just looking for its own position in the POS_delays table. When the position that makes the whole WHERE clause TRUE is found, the value from the outer table corresponding to that position is taken as the median. (Note that we determine the position without doing any explicit ordering.) For our POS_delays table, the left operand in the WHERE clause will be (13+1)/2 = 7. The right operand also yields 7 when the subquery processes the outer value 170, since seven values are less than or equal to 170: {110, 115, 130, 135, 150, 155, 170}. In Listing 3, we calculate the financial median.

Listing 3. Solution for the financial median.

SELECT AvgValue = AVG(CAST(processing_delay AS FLOAT))
FROM POS_delays t1
WHERE (SELECT COUNT(*) FROM POS_delays t2
       WHERE t1.processing_delay >= t2.processing_delay)
  IN ((SELECT CASE (SELECT COUNT(*)%2 FROM POS_delays)
         WHEN 0 THEN (SELECT COUNT(*)/2 FROM POS_delays)
         ELSE 0 END),
      (SELECT COUNT(*)/2 + 1 FROM POS_delays))
The solution for the financial median looks more complicated than the solution for the statistical median but, in fact, isn't that much harder. The main idea is the same: The correlated subquery is looking for the corresponding position of the outer value. The tricky part is in the expressions of the IN operator. For an odd number of rows, the second expression does the whole job (the CASE returns 0, which matches no position), but for an even number of rows, both expressions are involved, matching the two middle positions whose values are averaged. To test the solutions for statistical and financial medians, you can delete the data from the POS_delays table and then insert it again row by row, pausing along the way to run both scripts against data sets with even and odd numbers of rows.
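As a quicker spot check than reloading row by row (my own shortcut, not from the article), you can temporarily delete one row to turn the 13-row set into a 12-row set, re-run Listings 2 and 3, and then restore the row:

```sql
-- With transaction 13 removed, the two middle values are 155 and 170,
-- so Listing 2 (statistical) should return 155 and Listing 3 (financial) 162.5.
DELETE FROM POS_delays WHERE transaction_no = 13
-- ...run Listings 2 and 3 here...
INSERT INTO POS_delays VALUES(13, 250, 3)
```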

Medians in groups
In real life, you generally need to calculate medians in groups. In our POS system scenario, for example, we'd probably want to calculate the medians for each register. Listing 4 provides a solution for the statistical median, and Listing 5 shows a solution for the financial median.
www.pinnaclepublishing.com

Microsoft SQL Server Professional Sample Issue

Listing 4. Solution for the statistical median with GROUP BY.

SELECT processing_delay, t2.register_no,
       [#RowsInGroup] = counter, [Pos.InGroup] = cnt
FROM (SELECT cnt = (counter + 1)/2, register_no, counter
      FROM (SELECT counter = COUNT(*), register_no
            FROM POS_delays
            GROUP BY register_no) t1) t2
INNER JOIN POS_delays t3 ON t3.register_no = t2.register_no
WHERE cnt = (SELECT COUNT(*) FROM POS_delays t4
             WHERE t3.processing_delay >= t4.processing_delay
               AND t3.register_no = t4.register_no)

Listing 5. Using GROUP BY to calculate the financial median for each register.

SELECT AVG(CAST(processing_delay AS FLOAT)) AS median,
       t2.register_no, [#RowsInGroup] = counter
FROM (SELECT cnt1 = counter/2 + 1,
             cnt2 = CASE counter%2 WHEN 0 THEN counter/2 ELSE 0 END,
             register_no, counter
      FROM (SELECT counter = COUNT(*), register_no
            FROM POS_delays
            GROUP BY register_no) t1) t2
INNER JOIN POS_delays t3 ON t3.register_no = t2.register_no
WHERE (SELECT COUNT(*) FROM POS_delays t4
       WHERE t3.processing_delay >= t4.processing_delay
         AND t3.register_no = t4.register_no) IN (cnt1, cnt2)
GROUP BY t2.register_no, counter

The same idea, based on a correlated subquery, is used in both GROUP BY solutions. Derived table t1 prepares the groups' counters, making further calculations easier. Note that both solutions will work perfectly with many groups, including ones that are arranged hierarchically. For instance, you may have many stores with POS systems where you want to calculate the medians by store and by register. In that case, you just need to add one more group in each GROUP BY clause and add one more condition in the correlated subquery, as in Listing 6.

Listing 6. Fragment of code using GROUP BY store_no, register_no.

WHERE t3.processing_delay >= t4.processing_delay
  AND t3.store_no = t4.store_no
  AND t3.register_no = t4.register_no

Medians and duplicates

You've probably noticed that I used only unique values for the column processing_delay when I loaded the sample table POS_delays. In reality, you may have the same delay value for different transactions, which shows up as duplicates in the processing_delay column. Let's create a new POS_delays sample table by dropping the current POS_delays table, re-creating and reloading it with the script in Listing 1, and then running the script in Listing 7.

Listing 7. Create duplicates in table POS_delays.

-- Drop the current POS_delays table, re-run Listing 1,
-- and then run this
UPDATE POS_delays
SET processing_delay = 150
WHERE transaction_no%2 = 0

Unfortunately, now if you run any of the scripts in Listings 2-5, you'll get erroneous results. That's because my implicit assumption was that I had unique data sets. Let's assume that we have a column with a unique identifier: a primary key or unique constraint. Now what? Well, I don't want to change my algorithms, since I know they work as long as I have unique data sets. The challenge is to convert my duplicate data into unique data. If we run this query on our updated POS_delays table:

SELECT * FROM POS_delays ORDER BY processing_delay

we'll get the results shown in Table 1.

Table 1. Result of SELECT * FROM POS_delays ORDER BY processing_delay.

transaction_no  processing_delay  register_no
1               115               3
3               135               1
4               150               3
2               150               2
6               150               2
8               150               3
10              150               2
12              150               3
5               155               3
7               175               3
9               195               1
11              215               3
13              250               3

As you can see, we have six duplicates of the value 150 in the processing_delay column. To solve the problem, here's how I propose to proceed: When the correlated subquery processes the external value 150, I'll compare that value to all non-150 values in the same column, processing_delay. But if the external and compared internal values are the same, I'll switch to another column, here comparing on the corresponding unique values of the column transaction_no. Listing 8 shows the approach implemented for the financial median with groups.

Listing 8. Final solution for the financial median with GROUP BY.

SELECT median = AVG(CAST(processing_delay AS FLOAT)),
       t2.register_no, [#RowsInGroup] = counter
FROM (SELECT cnt1 = counter/2 + 1,
             cnt2 = CASE counter%2 WHEN 0 THEN counter/2 ELSE 0 END,
             register_no, counter
      FROM (SELECT counter = COUNT(*), register_no
            FROM POS_delays
            GROUP BY register_no) t1) t2
INNER JOIN POS_delays t3 ON t3.register_no = t2.register_no
WHERE (SELECT COUNT(*) FROM POS_delays t4
       WHERE ((t3.processing_delay > t4.processing_delay)
              OR (t3.processing_delay = t4.processing_delay
                  AND t3.transaction_no >= t4.transaction_no))
         AND t3.register_no = t4.register_no) IN (cnt1, cnt2)
GROUP BY t2.register_no, counter

The only difference relative to the original script in Listing 5 is the expression in the WHERE clause of the correlated subquery. I just replaced this in Listing 5:
t3.processing_delay >= t4.processing_delay

with this in Listing 8:


(t3.processing_delay > t4.processing_delay) OR (t3.processing_delay = t4.processing_delay AND t3.transaction_no >= t4.transaction_no)

If we make the same changes in all of the scripts, they'll work properly, as you can see in Listings 9-11 in the download file.
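As a sketch of the same tie-breaking change applied to Listing 2 (my adaptation; Listings 9-11 in the download are the author's versions), the statistical median now survives duplicate delay values:

```sql
-- Ties on processing_delay are broken by the unique transaction_no column.
SELECT processing_delay
FROM POS_delays t1
WHERE (SELECT (COUNT(*) + 1)/2 FROM POS_delays) =
      (SELECT COUNT(*) FROM POS_delays t2
       WHERE (t1.processing_delay > t2.processing_delay)
          OR (t1.processing_delay = t2.processing_delay
              AND t1.transaction_no >= t2.transaction_no))
```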

Medians and performance

The article wouldn't be complete without a brief discussion of one more topic, namely performance, but first I'd like to talk about the types of solutions. By my count, there are three main classes of solutions:

ANSI SQL-compliant, set-based solutions. These are set-based solutions that will work with any ANSI-compliant RDBMS, probably as a single statement. They may remind you of puzzles or the kinds of questions that university professors and job interviewers are likely to come up with. These highly portable solutions typically won't be highly performant, especially for large data sets. The solutions in this article belong to this class.

Set-based, RDBMS-specific solutions. An example of this type is the highly performant one Tom proposed in his February 2001 column. I've included his example as Listing 12 in the download.

The best solution at any price. Anything goes. You can create and use any database object(s). You can change the existing database structure. You can write scripts or stored procedures. You can do whatever you want. The only goal of such a solution is to gain the best performance, as illustrated in the following code (identified as Listing 13 in the download):

SELECT i = IDENTITY(int,1,1), processing_delay
INTO #tmp  -- Tom Moreau suggests INSERT SELECT
FROM POS_delays_pool
ORDER BY processing_delay

SELECT processing_delay
FROM #tmp
WHERE i = (SELECT (COUNT(*)+1)/2 FROM POS_delays_pool)
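Along the same lines (my own extension, not part of the download), the financial median falls out of the same #tmp table almost for free: with integer division, (n+1)/2 and (n+2)/2 collapse to the single middle position when n is odd, and to the two middle positions when n is even:

```sql
-- Assumes #tmp was built by the IDENTITY trick in Listing 13.
DECLARE @n int
SELECT @n = COUNT(*) FROM #tmp
SELECT financial_median = AVG(CAST(processing_delay AS FLOAT))
FROM #tmp
WHERE i IN ((@n + 1)/2, (@n + 2)/2)
```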


Dr. Tom's Workshop

The Case Against Single-Instance Clusters


Tom Moreau
You have a 24/7 operation where downtime just isn't an option. So how do you deal with service packs? This month, MVP columnist Tom Moreau suggests a solution that, while not eliminating downtime, at least might minimize it.

Applies to SQL Server 2000 and 2005.
The biggest problem DBAs face in the global economy is the fact that it's always daylight somewhere on the planet. That means that a customer somewhere wants access to your database right now. I guess you'll just have to cope with keeping the system up 100 percent of the time. The reality, of course, is that there's no such thing as 100 percent uptime. However, you can minimize the time you're off the air when the reason for the outage is to apply a service pack. Clearly, you have to do the usual things it takes to support a 24/7 operation, and that means redundancy. This comes in the form of such things as uninterruptible power supplies (UPSs), RAID arrays of some stripe (pun intended), and, of course, clustered servers. Clustered servers are two or more servers acting as backups for one another. In the olden days (SQL Server 6.5 and 7.0), we referred to clusters as being Active/Passive or Active/Active. Let's go with that for a moment. [See the archived February 1997 whitepaper, "Clustering Support for Microsoft SQL Server 6.5," where Microsoft described its Microsoft Cluster Server (MSCS), alias Wolfpack. Ed.]

Clustering in the SQL Server 6.5 and 7.0 worlds

In an Active/Passive cluster, SQL Server ran on one node, while the other node sat there, patiently waiting for the primary node to fail. When that failure happened, the backup node fired up SQL Server, took control of the disks, and behaved as if nothing had happened, except for rolling back any transactions that had been underway at the time of the failure. You had to use identical hardware in each node [remember the HCL (Hardware Compatibility List)? Ed.], simplifying the challenge of ensuring that, in the event of failure, the backup node would perform exactly as the primary had when it was alive. In an Active/Active cluster, both nodes are running an instance of SQL Server and essentially act as each other's mutual backer-upper. However, in a failover, the live node is now hosting both instances of SQL Server using the same CPU power and memory that it had to run just one SQL Server instance. You can expect a performance hit, especially if the load on each node was already high. Since failures are unlikely, you'll probably prefer to accept a bit of a hit in performance rather than take down an entire SQL Server instance. In a worst-case scenario, you might be using Address Windowing Extensions (AWE), whereby you've allocated, say, 7GB of RAM from a server that has 8GB, and doing this on both nodes within the cluster. If you have to fail over, the failed node won't get its AWE memory, since the already-running instance has claimed it. Bad. Very bad.

Clustering in SQL Server 2000 and beyond

Fast forward to the present. We don't call them Active/Passive or Active/Active anymore (now we call them Single Instance or Multi-Instance, since you can have 16 instances on a cluster), but they basically work as previously described. Depending on your operating system platform, you can have more than two physical nodes in the cluster. If you play things right, you can have the same performance after a failover as you did prior to the shutdown of the primary node. The way you do this is by having N-1 instances (or N+1 clusters) of SQL Server for N physical nodes. One node is acting as a passive node, while the others are up and running. SQL Server 2000 currently supports up to four physical nodes. One beneficial side effect is the ability to do a rolling upgrade while ensuring uptime. You can do an OS upgrade on the backup node while the other nodes are carrying the ball. In the unlikely event of a primary node failure, the instance would fail over to one of the other live nodes. Sure, there might be degradation, but only for the time between the failure and completion of your OS upgrade. At that point, you can fail over to the newly upgraded box and then troubleshoot the downed server. (In the case of an N+1 cluster, you can minimize performance degradation by failing each active node to the passive/standby node in turn. That way, you'd avoid ending up with any node having to house more instances than it was configured to house.) If you noticed the title of this article, you're probably wondering why I'm no fan of single-instance clusters. What's wrong with a single-instance cluster? Wouldn't you always have a backup? Well... yes and no. What if SQL Server itself needed a service pack? This really does translate to downtime, at the instance level.

Let's take a step back for a moment. In a real 24/7 environment, mere clustering isn't enough. It's great to have a contingency in case of server failure. Think 9/11. A cluster wouldn't have helped anyone in that disaster. We plan around such possibilities with log shipping. Here, you back up the database and restore it with NORECOVERY on another server. Ideally, this server is in another county, or at least somewhere that hasn't been impacted by the disaster. Then, in the log shipping scenario, every time you back up your transaction log, you just copy it to the other server and also restore the transaction log with NORECOVERY. When a disaster is declared, you just restore with RECOVERY and have the client software point to the remote server. In the log shipping scenario, you're down for as long as it takes to 1) copy and restore the last log and 2) point the SQL clients to the backup server. If the SQL clients are application servers in a multi-tier situation, that may simply involve changing connection string information in your web.config files (assuming you're using .NET) and restarting the application servers. Planned properly (and assuming that you have well-qualified staff on board), you can pull this off in a few minutes.

So where does this fit in with clusters? You can use that same technology within the cluster, instead of just shipping outside of your server room. What if you have to apply a service pack to SQL Server? That means being down, potentially up to an hour. Not good enough. Let's say you have a two-node cluster and are running one SQL Server instance:

1. On the backup node, you install a second instance of SQL Server at the same time you install the original instance. (Make sure that every time you add a login on the primary, you add the same login on the backup node. This will ensure that the login/user mappings are consistent across nodes.)
2. Next, apply the service pack to the second instance. The next part is where it all comes together.
3. Set up log shipping from the primary node to the backup node.
4. Do this for all application databases on the primary node. Now you have mirror images of your application databases, one set running on an instance at the old service pack level and another running at the current level.
5. At this point, you can cut over to the new instance, one database at a time. (Keep in mind that if an application requires access to multiple databases on the same SQL Server instance, then you'll have to cut over all of those databases at the same time.)
6. Once you've migrated all app databases to the second instance, you're done!

Now, as the saying goes, nuthin's for nuthin, and, clearly, since you've now duplicated your databases, you'll need at least double the amount of total disk space. The new SQL Server instance will need dedicated disks to which only it has access, in the same way you set up the original instance. There's also the issue of licensing for the extra SQL Server instance. Uptime comes with a pretty hefty price tag. One possible problem with this approach is what to do with msdb, the database in which your DTS packages are stored. You could have your DTS packages resident on the backup node while attempting to access the SQL Server instance on the primary node. Not good. With a bit of planning in your DTS application architecture, you can get around this one, too. I make it a policy not to hard-code data sources in my DTS packages. Rather, I pick them up from global variables or dynamic properties tasks. In the preceding scenario, if you keep an INI file on each of the two nodes, you can have a section that contains the virtual node name or TCP/IP address of the local
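The manual log-shipping cycle described above boils down to a handful of statements. This sketch uses hypothetical database, share, and path names:

```sql
-- On the primary instance: back up the log for the (hypothetical) AppDb database.
BACKUP LOG AppDb TO DISK = '\\backupnode\share\AppDb.trn'
-- Copy the file to the standby, then on the standby instance:
RESTORE LOG AppDb FROM DISK = 'C:\shipped\AppDb.trn' WITH NORECOVERY
-- At cutover time, bring the standby copy online:
RESTORE DATABASE AppDb WITH RECOVERY
```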

Additional Resources

Ron Talmage's columns on log shipping and clusters:
- Some Log Shipping Best Practices (January 2003)
- Using Differential and Full Backups to Assist in SQL Server Log Shipping (November 2003)
- The Case of the Unsuppressed Log Shipping Alerts (June 2004)
- Making Sense of Clustered SQL Server Virtual Servers (September 2004)

Linchi Shea's feature articles on log shipping:
- Beyond Naive Log Shipping: A Robust T-SQL Standby Utility (November 2002)
- Log Shipping Through a Firewall: A Simple T-SQL Solution (April 2003)

Eric Brown's July 2004 "Overview of SQL Server Beta 2 for the Database Administrator" (see the section on database mirroring): www.microsoft.com/technet/prodtechnol/sql/2005/maintain/sqlydba.mspx

Transcript of an October 27, 2004, chat on database mirroring hosted by Stephen Dybing: www.microsoft.com/technet/community/chats/trans/sql/sql_102704.mspx

"Database Mirroring Support in the .NET Framework Data Provider for SQL Server" (VS 2005 beta documentation with sections providing an overview of database mirroring and on determining the failover partner server): http://msdn2.microsoft.com/library/5h52hef8.aspx


A First Look at Express Manager


Rick Dobson

Microsoft describes its new, free SQL Server 2005 Express Manager (XM) as a lightweight database management tool built on top of the .NET Framework 2.0. Better yet, not only can you use XM to manage SQL Server 2005 Developer and Express Edition databases on local and remote computers, you can also use it with SQL Server 2000 and SQL Server 2000 Desktop Engine (MSDE 2000) databases. In this article, Rick Dobson shows you what XM offers DBAs.

Many believe that the absence of graphical client management tools for MSDE hampered its widespread adoption, but with the December 2004 and subsequent Community Technology Preview (CTP) releases of Express Manager (XM), Microsoft offers a solution that works not only with MSDE and other versions of SQL Server 2000, but also with SQL Server 2005 Express. You can use XM to run T-SQL statements for queries, data manipulation, and server administration, as well as browse database objects on a server instance. This article gives you a first look at the February 2005 CTP release, which updates the initial December 2004 CTP release of XM. You'll learn where to find links for XM and SQL Server 2005 Express as well as how to install them. Links for subsequent releases are likely to be available from the same location with similar installation guidelines. Next, you get a systematic review of all the XM graphical design elements and demonstrations of selected elements. The demonstrations illustrate SQL Server 2005 Express and XM conventions for database object naming, T-SQL syntax, and a new wizard for creating databases.

Applies to MSDE, SQL Server 2000, and SQL Server 2005. Accompanying material available online.
Getting and installing Express Manager


Running Express Manager with SQL Server 2005 Express is a four-step process:

1. Install SQL Server 2005 Express.
2. Install Express Manager.
3. Start SQL Server 2005 Express.
4. Start Express Manager.

The Express suite, including SQL Server 2005 Express and related programs such as XM, will remain in flux through final release, which is likely to be in the second half of 2005. At the time of this writing, XM runs with the February 2005 CTP release of SQL Server 2005 Express, but this will undoubtedly have changed by the time you read this article. You can get the latest information about

which releases of SQL Server 2005 Express to install for use with XM from http://lab.msdn.microsoft.com/express/sql. After downloading the appropriate SQL Server 2005 Express release, read and follow the installation instructions. You may also find useful installation and usage commentary at Microsoft's SQL Server 2005 newsgroups (http://communities.microsoft.com/newsgroups/default.asp?icp=sqlserver2005). A link at the SQL Server 2005 Express site will direct you to the download location for XM, where you'll install expressmanager.msi. The default installation location for XM is C:\Program Files\Microsoft SQL Server 2005 Express Manager\. After installation, this folder will contain the xm.exe application file, a ReadMeSqlExpMgr.htm file, and a collection of DLL files. The ReadMeSqlExpMgr.htm file focuses on pre-installation details and known problems. After initially installing both SQL Server 2005 Express and XM on a computer, you'll need to start SQL Server 2005 Express. You can do this with either the Computer Manager utility that installs with SQL Server 2005 Express, or the Services MMC that you can access from the Control Panel through Administrative Tools. For example, you can select the SQL Server 2005 Express service in the Services MMC and either start the service or designate the service to start automatically. SQL Server 2005 Express installs securely by default, which turns off the service and disables network protocols. When you're connecting from an XM session to a local SQL Server 2005 Express instance, the network protocols aren't necessary because XM connects via Shared Memory. SQL Server 2005 Express installs with Shared Memory enabled by default. You can open XM by running xm.exe. During a normal installation, you'll find a link to XM in the Microsoft SQL Server 2005 program group. The link points to an XM shortcut residing in the All Users folder of the Documents and Settings directory. Selecting the link launches XM, but only after configuring the local instance for the current Windows user. (You can speed up the launch process by placing another shortcut for xm.exe in the folder for the current Windows user and selecting this shortcut instead of the one installed by default in the All Users folder. Place your new shortcut in the Start Menu\Programs\Microsoft SQL Server 2005 path for the current Windows user folder within the Documents and Settings directory.) No matter which shortcut you use to start XM, you'll

be prompted to name a server instance and authenticate yourself with the Connect to Server dialog box. If you installed SQL Server 2005 Express as a named instance using the default instance name, you can designate the name as .\sqlexpress. A Windows administrator for the local computer can also select Windows authentication. Figure 1 shows the Connect to Server dialog box with these settings. Clicking Connect launches your request to connect to the server instance via XM.

Express Manager GUI elements


There are four main elements to the Express Manager GUI: menus and toolbars, a Query Editor, an Object Explorer, and a wizard for creating new databases. XM has two windows. The Object Explorer window appears to the left of the Query Editor window. Each window includes a toolbar that lets you fine-tune the operation of the window.

The Object Explorer window


Figure 2 shows the Object Explorer window displaying the columns within the HumanResources.Employee table of the AdventureWorks sample database (available only as a separate download as of late February 2005). Notice that the top line in the window denotes a connection to the .\sqlexpress server instance based on the login credentials for a computer's administrator. The Object Explorer window lets you probe the metadata for the contents of a server instance in a hierarchical manner. In this example, you see an expansion of the AdventureWorks database detailing the HumanResources.Employee table. Tables in SQL Server 2005, including its SQL Server 2005 Express edition, can belong to schemas as well as to users and roles, and the Employee table belongs to the HumanResources schema. By making it easy for tables to be owned by schemas, SQL Server 2005 minimizes the need to change object ownership when employees leave an organization or move on to new responsibilities within an organization. Figure 2 confirms that the Object

Explorer window reflects table ownership by schema. In addition to having a Tables folder, Object Explorer also has folders named Views, Programmability, and Security. As youd expect, the Views folder shows the names of all the views within a database. The Programmability folder contains two foldersStored Procedures and Functionsand the Functions folder itself contains two foldersnamely, Table-valued Functions and Scalar-valued Functions. There are actually two Security foldersone nested within each Database folder and another at the same level as the Databases folder. The Security folder nested within each Database folder contains a Users folder that contains the user names for a database. The Security folder is at the same level as the Databases folder and has a Logins folder with the login names that a server instance can authenticate. You can right-click the folder for Databases or any of the folders within the Databases folder and choose Refresh to update Object Explorer with recent metadata changes on a server instance pertaining to that folder. The Refresh control, which is the second one on the Object Explorer toolbar, lets you refresh the Object Explorer display for all the database objects in the server instance at which the current connection points. A click to the Connect control on the Object Explorer toolbar opens the Connect to Server dialog box (see Figure 1). From this dialog box, you can connect to a new server instance or change the login for a connection to the same server. After completing the dialog box and clicking Connect, the Object Explorer window will reflect the objects for the new connection.
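The hierarchy Object Explorer presents comes from the server's catalog metadata. As a hedged aside (this query is not from the article), the same schema-qualified table list can be retrieved with the SQL Server 2005 catalog views:

```
-- Assumed illustration: list user tables grouped by schema,
-- much as Object Explorer displays them
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;
```

Running this in a query tab against AdventureWorks should list HumanResources.Employee alongside the other schema-owned tables.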

The Query Editor window


Figure 3 shows the Query Editor window with one tab named Query 1. As you can see, the tab offers two panes: one for composing a query and another for
Figure 2. Express Managers Object Explorer window lets you probe the structure of the databases on a server instance.

Figure 1. The Connect to Server dialog enables you to connect to a SQL Server 2005 Express instance from Express Manager.
www.pinnaclepublishing.com Microsoft SQL Server Professional Sample Issue 9

showing results or messages returned to the client by the server. The Results pane defaults to displaying query return values in a grid, but, like Query Analyzer, you can also show the return values from a query in text mode and save the return values to a text file.

The query statement in Figure 3 returns an e-mail address directory for the salespersons from the AdventureWorks company. The statement joins the Sales.SalesPerson and HumanResources.Employee tables by EmployeeID value. The Sales.SalesPerson table represents EmployeeID values with values from its SalesPersonID column. The join of the Sales.SalesPerson and HumanResources.Employee tables makes available the ContactID column values from the HumanResources.Employee table for just the sales employees. A second join matches these ContactID column values with ContactID column values in the Person.Contact table to return FirstName, LastName, and EmailAddress column values for the sales employees, which reside in the Person.Contact table. The query statement in Figure 3 is available in the download. Notice that the lines within the Query pane in Figure 3 are numbered for easy reference. You can run all of the query statements within a Query pane, or just a highlighted subset, by choosing Query | Execute (F5). The asterisk next to Query 1 indicates that the tab's statements haven't been saved in a file. You can save a Query pane's contents with the File | Save and File | Save As commands. The latter command is for the contents of a Query pane that wasn't previously saved. After saving the contents of a pane, the asterisk disappears.

The toolbar settings at the top of a query tab apply to just that tab. The first control on the tab for a Query pane's toolbar is a drop-down box. This control sets the database context for the statements in the Query pane. The control operates like a USE statement in the Query pane. In fact, the first control reflects the execution of the last USE statement. If a Query Editor window has multiple tabs, each tab can have a different database context. You can create a new tab with the File | New Query command (Ctrl+N). The three toolbar controls adjacent to the drop-down box allow you to specify how to display or save the result set from a query. From the first to the last of these three controls, they display the results in text mode, in a grid (as shown in Figure 3), or in a text file. The last toolbar control lets you toggle the visibility of the Results pane within the Query Editor window.
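The salesperson e-mail query itself is in the download rather than printed here, but from the prose description above it presumably resembles the following hedged sketch:

```
-- Hedged reconstruction of the Figure 3 query from its description;
-- the download's version may differ in detail (AdventureWorks)
SELECT c.FirstName, c.LastName, c.EmailAddress
FROM Sales.SalesPerson AS sp
JOIN HumanResources.Employee AS e
    ON e.EmployeeID = sp.SalesPersonID
JOIN Person.Contact AS c
    ON c.ContactID = e.ContactID
ORDER BY c.LastName, c.FirstName;
```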

Menu commands and toolbar controls


Express Manager complements the functionality provided by the Object Explorer and Query Editor toolbars with an application-level menu and toolbar. Many of the capabilities from the XM menu and toolbar are distinct from the capabilities provided by the toolbars for the Object Explorer and Query Editor windows, but there's some overlap in the features exposed.

Choose the Save As item on the File menu to save the query statements in a query tab for the first time. The Save As dialog box lets you designate a folder and specify a file name. The default file extension is .sql. By choosing File | Close, you can remove the active query tab from an XM session. Later in that session or in another session, you can open a saved .sql file as a new query tab by invoking the File | Open Query command. The File menu also provides items for changing the connection for an XM session, saving the contents of a query tab that was previously saved, and closing an XM session.

Especially noteworthy Edit menu commands let you undo and redo editing changes to the Query Editor window. You can also find and replace text in the Query Editor window. The Edit | Go To command lets you move to a specified line number in the active query tab. Other Edit menu commands enable standard cut, copy, and paste Windows Clipboard capabilities. You can additionally select all lines in a query tab or delete any selected lines in a query tab.

The View menu offers three items. View | Object Explorer sets the focus to the Object Explorer window. View | Workspace switches to the Query Editor window so that the last active query tab gains focus. The View | Increase Text Size command changes the text size to one of five progressively larger text sizes. If the current text size is already the largest size, then text reverts to the

Figure 3. Express Manager's Query Editor window lets you run queries and display their results.

smallest size.

The Query menu offers commands to open a new query tab and change the way query results are displayed or saved, along with two additional functions. The Query | Execute command runs all or a selected subset of statements from the active query tab. The Query | Parse command checks the syntax of the T-SQL statements in the active query tab.

The Tools and Help menus offer very limited functionality in the CTP releases. The sole item on the Tools menu is the Computer Manager utility, a standalone application separate from XM. Computer Manager provides a graphical interface for enabling and disabling network protocols as well as starting, stopping, and pausing SQL Server 2005 Express. The Help menu's single item opens an About box. There's no integrated Help in the current release.

The application-level toolbar includes six controls that provide shortcuts for invoking selected menu commands. The tool control names and their menu commands are:

• New Query (File | New Query)
• Open Query (File | Open Query)
• Execute (Query | Execute)
• Parse (Query | Parse)
• Save (File | Save)
• Increase Text Size (View | Increase Text Size)

Creating, renaming, and deleting databases


The Databases folder and the folders for individual databases in Object Explorer offer special features from their right-click (context) menus. The items within these menus go beyond just providing the capability to refresh the Object Explorer window with the metadata from a server instance; you also gain a basic set of database manipulation capabilities, such as a wizard for creating a database. Subsequent XM releases are expected to add more wizards, perhaps for table creation, building a report with Reporting Services, or simplifying data import/export. Right-clicking the Databases folder in Object Explorer opens a New Database tab in the Query Editor window.

The tab contains a form for creating a new database for the current connection. The form has two textboxes: one for the database's name and the other for the location of the database files, including the .mdf and .ldf files. The default database name for the first database created this way is Database_1, and the default location is the Data folder for the server instance in the Program Files directory. Figure 4 shows the New Database tab with the database renamed to Database_SP. Clicking the form's OK button creates the database. After the creation of the database, the new database is added to the list of databases in the Databases folder within Object Explorer. The Database wizard automatically assigns names to the .mdf and .ldf files based on the name for the database. In the case of the Database_SP database, the .mdf file name is Database_SP and the .ldf file name is Database_SP_log. Immediately after creation, the database has no user-defined objects. However, you can write T-SQL data definition statements in the Query Editor window to add objects to the database.

Right-clicking the folder for a database exposes several context menu items, including Rename and Delete. With the Rename menu item, you can change the name for a database, such as from Database_SP to Database_Spa. In fact, you assign the new name in Object Explorer with the same conventions that you use for assigning a new file name in Windows Explorer. The database's .mdf and .ldf file names remain unchanged despite the fact that the database has a new name. Selecting the Delete item from the context menu for a database drops the database from a database server.
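Everything the wizard and context menus do here maps onto plain T-SQL that you could run in a query tab instead. A minimal sketch, using the example names from the text:

```
-- Sketch: T-SQL equivalents of the wizard's create action,
-- the Rename menu item, and the Delete menu item
CREATE DATABASE Database_SP;
ALTER DATABASE Database_SP MODIFY NAME = Database_Spa;
DROP DATABASE Database_Spa;
```

As the text notes for the Rename menu item, MODIFY NAME likewise leaves the .mdf and .ldf file names unchanged.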

Summary
This article gives you first-look coverage of Express Manager, a lightweight client management tool to use with SQL Server 2005 Express and other types of SQL Server instances. XM is a simple client management tool for beginners and others who want to perform basic database tasks. Those DBAs who want to manage SQL Server 2005 Express with something other than sqlcmd now have an easy-to-use alternative for queries, data manipulation, and server administration implemented via T-SQL. In addition, XM can also help you manage your existing instances of MSDE. v 504RICK.SQL at www.pinnaclepublishing.com

LINK

www.microsoft.com/sql/express

Figure 4. Express Manager offers a form to simplify the creation of new databases.

Rick Dobson is an author/trainer/Webmaster. His practice, CAB, Inc., sponsors national and Web-based seminar tours. His most recent two books are Programming Microsoft Visual Basic .NET for Microsoft Access Databases and Programming Microsoft SQL Server 2000 with Microsoft Visual Basic .NET, and his most recent DVD title is Beginner's SQL Server 2000 T-SQL Programming on DVD. www.programmingmsaccess.com, rickd@cabinc.net.

SQL Essentials

Two New Background Technologies in SQL Server 2005


Ron Talmage
There's so much that's new in SQL Server 2005, it's hard to know where to start. This month, MVP columnist Ron Talmage helps by focusing on row versioning and endpoints. As you explore these new background technologies with Ron, you'll also learn something about some of SQL Server 2005's new catalog views, the new OBJECT_DEFINITION function, and database mirroring.


SQL Server 2005 has had a long private beta, and the public Beta 3 will be coming soon, probably this summer. Much has been said and written about the new features coming with the new version, and with Beta 3, we'll all be able to test them for ourselves. It's a big release, more than five years in the making, with a daunting array of new features. After having tested and explored many of them, and having tried my best to keep up with the growing body of information about them, I've noticed that some of the most prominent new features of the SQL Server 2005 relational engine share two background technologies: row versioning and endpoints. So this month, I propose to examine them with you.

The hidden resource database

One of the big changes is that the system tables are kept in a separate resource database called mssqlsystemresource, a database that we can't directly access. Instead, for backward compatibility, many of the legacy system tables have been implemented as read-only views, called catalog views. The set of catalog views, in turn, has been greatly expanded, exposing much more information about the internals of SQL Server 2005 than we ever had in SQL Server 2000. Here's a list:

• CLR Assembly Catalog Views
• Data Spaces and Fulltext Catalog Views
• Database Mirroring Catalog Views
• Databases and Files Catalog Views
• Extended Properties Catalog Views
• HTTP Endpoints Catalog Views
• Linked Servers Catalog Views
• Messages (For Errors) Catalog Views
• Objects Catalog Views
• Partition Function Catalog Views
• Scalar Types Catalog Views
• Schemas Catalog Views
• Security Catalog Views
• Server-wide Configuration Catalog Views
• Service Broker Catalog Views
• XML Schemas (XML Type System) Catalog Views

So, what does this mean for the DBA? Well, in SQL Server 2000, you can query sysobjects to find out information about all of the objects in a database as follows:

SELECT * FROM sysobjects

You can execute the same query in SQL Server 2005, and (happily) it will return the same results. However, gone is a table called sysobjects in the master database; instead, there's a system view called sys.sysobjects that provides the same output. It's prefixed by "sys" to indicate the schema that the view is associated with, the system schema.

The OBJECT_DEFINITION() function

How is that sys.sysobjects defined? Management Studio won't show the definition, but you can use a new T-SQL tool, a system function called OBJECT_DEFINITION(), as follows:

SELECT OBJECT_DEFINITION(OBJECT_ID('sys.sysobjects'))

Here's the underlying view definition, which I've abbreviated a bit:

CREATE VIEW sys.sysobjects
AS
SELECT name, id, ...
FROM sys.sysschobjs
WHERE ...

So the view gets its information from sys.sysschobjs, but if you now try this:
SELECT OBJECT_DEFINITION(OBJECT_ID('sys.sysschobjs'))

the result is NULL (with or without the schema qualification).



Now, querying sysobjects in the master database:


SELECT * FROM sysobjects WHERE name = 'sysschobjs'

does show that an object by that name is a system table, with an xtype of 'S'.

Endpoints

One of the new background technologies in SQL Server 2005 is endpoints. Essentially, a SQL Server endpoint functions as a method of communicating with SQL Server through either TCP or HTTP. Some endpoints are built in, and an endpoint will be created for each type of connection that SQL Server 2005 will support. [There are also new CREATE ENDPOINT and DROP ENDPOINT statements. -Ed.] Endpoints support a number of technologies, as you can infer from the list of endpoint catalog views:

• sys.database_mirroring_endpoints
• sys.service_broker_endpoints
• sys.endpoints
• sys.soap_endpoints
• sys.endpoint_webmethods
• sys.tcp_endpoints
• sys.http_endpoints
• sys.via_endpoints
• sys.ip_exceptions

Let's drill down into the one for database mirroring. Database mirroring is a new high-availability feature in SQL Server 2005 that provides real-time delivery of one database's log records to another server's database to keep the second database up-to-date with the first in near real-time. It has the great advantage that, with a third witness server, you can have an automatic failover from a principal database server to a mirror server occur within seconds of a failure on the principal server. [Database mirroring involves immediately reproducing every update to a database (the principal database) onto a separate, full copy of the database (the mirror database) and only works with databases that use the full recovery model. The principal and mirror databases must reside on two instances of the SQL Server 2005 Database Engine, and the mirror server, which ideally resides on a different computer from the principal, provides a hot standby server for the database. Automatic failover requires a third instance of the SQL Server 2005 Database Engine, the witness server (or witness), which, ideally, will reside on a third computer. Unlike the two partners, the witness does not serve the database, but instead monitors the status of the principal and mirror server instances and can initiate automatic failover if the principal server instance fails. -Ed.]

When you set up database mirroring, you must establish special endpoints that determine the communication mechanism between the servers. You can inspect the database mirroring endpoints on either the principal or mirror with the catalog view sys.database_mirroring_endpoints. Let's explore this view for a moment. The OBJECT_DEFINITION of sys.database_mirroring_endpoints is:

CREATE VIEW sys.database_mirroring_endpoints
AS
SELECT e.name ...
FROM master.sys.sysendpts e
LEFT JOIN sys.syssingleobjrefs o
    ON o.depid = e.id AND o.class = 60 AND o.depsubid = 0
LEFT JOIN sys.syspalvalues p
    ON p.class = 'EPPR' AND p.value = e.protocol
LEFT JOIN sys.syspalvalues t
    ON t.class = 'EPTY' AND t.value = e.type
LEFT JOIN sys.syspalvalues s
    ON s.class = 'EPST' AND s.value = e.bstat & 3
LEFT JOIN sys.syspalvalues ca
    ON ca.class = 'EPMR' AND ca.value = e.tstat & 7
WHERE e.type = 4 AND has_access('HE', e.id) = 1

I've left the complete list of columns out, because we're really interested in the underlying catalog view, which is sys.sysendpts. It, in turn, is defined as a view based on the system table, master.sys.sysendpts. So all of the aforementioned new features, which may be somewhat disparate in what they do, share a common background technology to enable their communication mechanisms: the endpoint.
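A mirroring endpoint itself is created with the new CREATE ENDPOINT statement. The following is a hedged sketch of the SQL Server 2005 syntax; the endpoint name and port number are illustrative only:

```
-- Sketch only: create a database mirroring endpoint on a partner
-- instance (name and port are made up for illustration)
CREATE ENDPOINT MirroringEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);
```

Once created, the endpoint should show up when you query sys.database_mirroring_endpoints.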

Dynamic management views and functions


Indeed, in addition to the extensive set of new catalog views, SQL Server 2005 also provides a considerable set of dynamic management views and functions, prefixed by dm_. Just as with the new system views that support the legacy system tables, you can find the dynamic management objects using the Object Explorer in Management Studio and exploring the master database. Then you can use the OBJECT_DEFINITION() function to find the system tables and functions they're based on. The dynamic management views and functions are important because they provide much more system information about SQL Server 2005 objects and behavior than we were able to get in SQL Server 2000. Plus, several dynamic management views and functions return information about the set of features that we want to look at next.
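As a quick, hedged illustration (the view and columns below are as documented in the Beta 2 Books Online and may change before release), a dm_-prefixed view is queried like any other view:

```
-- Illustrative: list currently executing requests via a
-- dynamic management view
SELECT session_id, status, command
FROM sys.dm_exec_requests;
```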

Row versioning
Another background technology that supports a number of new features is row versioning. According to the Beta 2 Books Online, SQL Server 2005 uses row versioning to support the following features:

• Snapshot Isolation Level
• READ COMMITTED with SNAPSHOT option
• Inserted and Deleted trigger tables


• Multiple Active Result Sets (MARS)
• Online Indexing
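For example, Snapshot Isolation, the first feature in the list, can be exercised with T-SQL along these lines (a hedged sketch; AdventureWorks stands in for any user database):

```
-- Sketch: enable row versioning for a database, then read
-- under snapshot isolation (SQL Server 2005)
ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- reads are served from row versions rather than blocking on locks
    SELECT TOP 5 EmployeeID, Title FROM HumanResources.Employee;
COMMIT TRANSACTION;
```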

Table 1. Dependencies for the Snapshot Isolation dynamic management functions.

Dynamic management function                              Dependency
sys.dm_tran_version_store()                              VersionStore
sys.dm_tran_top_version_generators()                     sys.dm_tran_version_store()
sys.dm_tran_active_snapshot_database_transactions()      TransactionTable
sys.dm_tran_transactions_snapshot()                      TransactionTableDetail

Just how all of these work with row versioning is yet to come out, but in the case of Snapshot Isolation [see Kim Tripp's article at http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnsql90/html/sql2k5snapshotisol.asp -Ed.], there are four dynamic management functions that assist in monitoring the row versions:

• sys.dm_tran_version_store()
• sys.dm_tran_top_version_generators()
• sys.dm_tran_active_snapshot_database_transactions()
• sys.dm_tran_transactions_snapshot()

Using the OBJECT_DEFINITION() function to find out what view or table they're based on takes an interesting twist. For example, if you do the following:

SELECT OBJECT_DEFINITION(OBJECT_ID
  ('sys.dm_tran_version_store'))

you'll see this:

create function sys.dm_tran_version_store()
returns table
as return
  select * from OpenRowset(VersionStore)

In this case, the function does not SELECT from a known system view or table, but, instead, from the target of OPENROWSET(). I've searched high and low but haven't found any SQL Server 2005 table or view listed as VersionStore. There's a similar story with the rest of the functions. You can see the complete set of dependencies for all the Snapshot Isolation dynamic management functions in Table 1. The second function, sys.dm_tran_top_version_generators(), is just a grouping and summary of the first function, so it doesn't directly use OPENROWSET. The three targets of the OPENROWSET calls don't appear to be tables or views. One hypothesis is that they're memory structures.

You may have already seen that, in SQL Server 2000, if you get the definition of the system function fn_get_sql():

SELECT text -- SQL Server 2000 only
FROM syscomments
WHERE id =
  (SELECT id FROM sysobjects
   WHERE name = 'fn_get_sql')

the result will be (with the column list abbreviated):

create function system_function_schema.fn_get_sql
  (@handle binary(20))
returns @tab table(...)
as
begin
  insert @tab
    select * from OpenRowset(FnGetSql, @handle)
  return
end -- fn_get_sql

It's known that fn_get_sql() directly queries memory, and so the FnGetSql object that's the target of the OPENROWSET call is probably some kind of memory structure. Perhaps the same is then true of the OPENROWSET calls in the Snapshot Isolation dynamic management functions. As we get more documentation about SQL Server 2005, and as the product refines into its final release form, we'll know much more. In the meantime, when you're working with the SQL Server 2005 relational engine in its beta form, keep in mind these background enabling technologies. They can help organize your understanding of the new features we're looking forward to using. v
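On a server with row versioning in use, the first of the four functions can be smoke-tested directly. A hedged example (the database_id column is as documented in the Beta 2 Books Online):

```
-- Count current row versions per database in the version store
SELECT database_id, COUNT(*) AS version_rows
FROM sys.dm_tran_version_store()
GROUP BY database_id;
```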

LINK

http://msdn.microsoft.com/SQL/2005/default.aspx

Ron Talmage is a principal mentor with Solid Quality Learning and lives in Seattle. He's a SQL Server MVP, the current president of the Pacific Northwest SQL Server Users Group, and a contributing editor for PASS InfoLink. ron@solidqualitylearning.com.

Single-Instance Clusters...
Continued from page 6

instance. Modify your DTS package to start with a dynamic properties task that reads the INI file and resets the instance name for your SQL Server data sources. This way, when the package kicks in, it picks up its settings

from the INI file and you're good to go. By the way, this makes your DTS package very portable. Once all of the app databases have been migrated, feel free to apply the service pack to the old box, and start log shipping going in the opposite direction. This way, you're ready for the next service pack. Oh, and if you're wondering what's coming down the pike with


SQL Server 2005, database mirroring will be a new feature with similar functionality to log shipping, and even shorter downtime. [Database mirroring with automatic failover, which will probably only be available in the Enterprise Edition, requires three servers (a principal, a mirror, and a witness), but they don't have to be either on the HCL or identical. Database mirroring will work between failover clusters but isn't expected to be integrated with SQL Agent. -Ed.]

Drop me a line if you've come up with cool ways to enhance your uptime. v

Tom Moreau, B.Sc., Ph.D., MCSE, and MCDBA, is an independent consultant specializing in Microsoft SQL Server database administration, design, and implementation and is based in the Toronto area. Tom's a SQL Server MVP and co-author, with fellow MVP Itzik Ben-Gan, of Advanced Transact-SQL for SQL Server 2000 (ISBN 1-893115-82-8). tom@cips.ca.

The Median Puzzle...


Continued from page 4

Out of curiosity, I compared the results from this code to Tom's on a data set with more than 51,000,000 rows, and, given the "no holds barred" ground rules, sure enough, it won, running about three times faster (including temporary table creation). So, as in many situations, there's no simple answer to the question, "What's the best solution?" There are a lot of factors that can affect performance, and your goal as a developer or DBA is to find the best solution in your specific situation.

Conclusion

The median subject is pretty well developed. Many well-known SQL gurus, including C.J. Date and Joe Celko, have offered solutions to the median puzzle. As Tom mentioned in his column [included in the download for your convenience -Ed.], Joe Celko's SQL For Smarties: Advanced SQL Programming is worth owning. v

504ALEX.ZIP at www.pinnaclepublishing.com

Alex Kozak, who has an M.Sc. in computer science, is currently a senior DBA for Toronto/Philadelphia/Hong Kong-based Triversity Inc., a leading software and service provider for the retail industry. akozak@triversity.com.

Know a clever shortcut? Have an idea for an article for SQL Server Professional? Visit www.pinnaclepublishing.com and click on Write For Us to submit your ideas.



Sample Issue Downloads


504ALEX.ZIP - Source code from Alex Kozak's feature on the median. Includes the PDF of Tom Moreau's February 2001 column, "Breaking a Tie, Finding the Median."

504RICK.SQL - Short T-SQL script to accompany Rick Dobson's article on SQL Server 2005 Express Manager.

Other Goodies and Links

• CTP for Microsoft SQL Server 2005 Express Manager. www.microsoft.com/downloads/details.aspx?FamilyId=67079BB3-4FD4-4638-B923-A13741179B98&displaylang=en
• Where to find recent MSDN and recent MSDN SQL Server articles. http://msdn.microsoft.com/recent/default.aspx and http://msdn.microsoft.com/sql/archive/
• Excellent 61-page article, "SQL Server 2005 Integration Services: Lessons from Project REAL." Candid advice on challenges associated with large volumes of data and on upgrading from DTS. http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnsql90/html/SQL05InSrREAL.asp. Related: www.sqlis.com and www.scalabilityexperts.com/blog
• Handy PDF comparing features in Crystal Reports Versions 6, 7, 8, 9, 10, and XI, and all the Editions (for example, Standard, Pro, Dev, AdvDev, Server). www.businessobjects.com/global/pdf/products/crystalreports/crxi_feat_ver_ed.pdf. Related: www.pinpub.com/vbblog (3/2/05 entry)
• Free eBook on the Best of SQLServerCentral.com, Vol. 1, from the folks at Red-Gate. https://www.red-gate.com/dynamic/downloadsqlsvrcentral.aspx
• Jackie Goldstein's article on new DataSet features in Visual Studio 2005. http://msdn.microsoft.com/data/default.aspx?pull=/library/en-us/dnvs05/html/NewDtaStVS05.asp
• Tony Loton's two-part article (with code) on Visual Studio 2005's Application Designer. http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnvs05/html/introappdesigner1.asp (and ...2.asp)
• Frank Rice's excellent article, "How Excel 2003 Infers XSDs When Importing XML Data." http://msdn.microsoft.com/office/default.aspx?pull=/library/en-us/odc_xl2003_ta/html/OfficeExcelInferDetails.asp
• Microsoft Solutions Framework (MSF) for Agile Development Beta. www.microsoft.com/downloads/details.aspx?FamilyID=9F3EA426-C2B2-4264-BA0F-35A021D85234&displaylang=en
• Microsoft's Enterprise Library. www.microsoft.com/downloads/details.aspx?FamilyId=0325B97A-9534-4349-8038-D56B38EC394C&displaylang=en. Related: http://msdn.microsoft.com/architecture/default.aspx?pull=/library/en-us/dnpag2/html/entlib.asp

For access to current and archive content and source code, log in at www.pinnaclepublishing.com.

Editor: Karen Watterson (karen_watterson@msn.com) CEO & Publisher: Mark Ragan Group Publisher: Michael King Executive Editor: Farion Grove

Microsoft SQL Server Professional (ISSN 1081-9355) is published monthly (12 times per year) by: Pinnacle Publishing A Division of Lawrence Ragan Communications, Inc. 316 N. Michigan Ave., Suite 300 Chicago, IL 60601
POSTMASTER: Send address changes to Lawrence Ragan Communications, Inc., 316 N. Michigan Ave., Suite 300, Chicago, IL 60601. Copyright 2005 by Lawrence Ragan Communications, Inc. All rights reserved. No part of this periodical may be used or reproduced in any fashion whatsoever (except in the case of brief quotations embodied in critical articles and reviews) without the prior written consent of Lawrence Ragan Communications, Inc. Printed in the United States of America. Brand and product names are trademarks or registered trademarks of their respective holders. Microsoft is a registered trademark of Microsoft Corporation and Microsoft SQL Server Professional is used by Lawrence Ragan Communications, Inc., under license from owner. Microsoft SQL Server Professional is an independent publication not affiliated with Microsoft Corporation. Microsoft Corporation is not responsible in any way for the editorial policy or other contents of the publication. This publication is intended as a general guide. It covers a highly technical and complex subject and should not be used for making decisions concerning specific products or applications. This publication is sold as is, without warranty of any kind, either express or implied, respecting the contents of this publication, including but not limited to implied warranties for the publication, performance, quality, merchantability, or fitness for any particular purpose. Lawrence Ragan Communications, Inc., shall not be liable to the purchaser or any other person or entity with respect to any liability, loss, or damage caused or alleged to be caused directly or indirectly by this publication. Articles published in Microsoft SQL Server Professional reflect the views of their authors; they may or may not reflect the view of Lawrence Ragan Communications, Inc. Inclusion of advertising inserts does not constitute an endorsement by Lawrence Ragan Communications, Inc., or Microsoft SQL Server Professional.

Questions?
Customer Service: Phone: 800-493-4867 x.4209 or 312-960-4100 Fax: 312-960-4106 Email: PinPub@Ragan.com Advertising: RogerS@Ragan.com Editorial: FarionG@Ragan.com Pinnacle Web Site: www.pinnaclepublishing.com

Subscription rates
United States: One year (12 issues): $199; two years (24 issues): $338 Other:* One year: $229; two years: $398 Single issue rate: $27.50 ($32.50 outside United States)*
* Funds must be in U.S. currency.

