Tuesday, February 12, 2008

Get Rid of These Bad Work Habits


By Anthony Balderrama, CareerBuilder.com writer
Who doesn’t have at least one habit that drives everyone else insane? You might bite your nails, slurp your coffee or tap your pen incessantly. Annoying? Yes. Serious problems? Not really.

In the office, however, some habits can go beyond annoying your co-workers; they can damage your career.

For the sake of your career and your co-workers’ sanity, here’s a list of bad work habits that can harm your career and how to break them. If you’re guilty of one (or more), it’s time to get them under control.

Bad Habit: Missing deadlines.
What you think: “If it’s only a little late, it doesn’t mean anything.”
What it really says: Your colleagues and boss can’t count on you.
What to do: Don’t view deadlines as negotiable. Remind yourself that people are counting on you to do your job well, which includes completing tasks on time. Even if you just barely missed the deadline and everything turned out OK, you probably caused your teammates a lot of anxiety and extra work, which they won’t forget.

Bad Habit: Dressing unprofessionally.
What you think: “I’m the office free spirit with a quirky sense of style!”
What it really says: You don’t take the job seriously.
What to do: You don’t have to be a boring dresser to be professional, but you shouldn’t look like you’re about to go clubbing or strutting down a runway. Take a cue from your co-workers to see what’s considered acceptable in the office.

Bad Habit: Not being punctual.
What you think: “As long as I get all my work in, nobody cares.”
What it really says: You think your time is more important than everybody else’s.
What to do: Stick to the schedule. Everyone in your office would like to sleep in a little or leave early, but they don’t because people rely on them to be on time.

Bad Habit: Checking your e-mail, playing games, shopping.
What you think: “I’m discreet.”
What it really says: You’re not doing your job.
What to do: Keep the fun stuff to a minimum. Most employers don’t mind if you check your e-mail every once in a while or read your favorite blog for a few minutes in the morning. They begin to care when you minimize that game of Scrabulous every time they walk by your desk. You’re being paid to work, not play.

Bad Habit: Gossiping.
What you think: “I’m just saying what I heard.”
What it really says: You can’t be trusted.
What to do: Sure, everybody gossips a little here and there, but it shouldn’t be your livelihood. Eventually you’ll gain a reputation for not keeping anything confidential – whether it’s a personal matter or work-related. Plus, your chattering could end up hurting somebody’s feelings or reputation.

Bad Habit: Being negative.
What you think: “Everybody complains.”
What it really says: You’re the person to avoid.
What to do: It’s natural to grumble about work once in a while. If you gripe and moan when you’re asked to do anything, however, people will not only get annoyed, they’ll wonder why you don’t just quit. Keep in mind that work isn’t always fun; keep the complaints to a minimum.

Bad Habit: Trying to be everybody’s best friend.
What you think: “I’m just sociable.”
What it really says: You don’t know how to set boundaries.
What to do: It’s not uncommon for friendships to develop at work, but don’t expect it to happen with everybody. Unless you have reason to do otherwise, treat your superiors, colleagues and subordinates like professionals, not like drinking buddies.

Bad Habit: Burning bridges.
What you think: “I’ll never see them again.”
What it really says: You’re not a professional who thinks about the future.
What to do: As much as you dream of telling off your boss or co-workers after you’ve handed in your resignation, restrain yourself. People change jobs, companies merge – someone you dissed in the past may end up being your boss down the road.

Bad Habit: Always being the funny one.
What you think: “People love me.”
What it really says: You’re really annoying.
What to do: There’s nothing wrong with being funny – most people do like a good sense of humor. Just remember that not everybody wants to hear your sarcastic quips and “Godfather” impersonations every five minutes.

Bad Habit: Forgetting you have neighbors.
What you think: “I’m not as annoying as they are.”
What it really says: You’re inconsiderate.
What to do: Do unto your co-workers as you’d want them to do unto you. Your hour-long conference call on speakerphone is just as irksome to your cube mates as theirs are to you.

Anthony Balderrama is a writer and blogger for CareerBuilder.com. He researches and writes about job search strategy, career management, hiring trends and workplace issues.

Monday, February 11, 2008

Speeding Up Web Page Loading

Speeding Up Web Page Loading - Part I


As more and more businesses go online, just having a web presence is no longer enough to succeed. It takes a reliable, high-performance Web site that loads quickly too. After all, nothing makes an Internet user leave a site quicker than having to wait ages for a web page to load.

A previous post briefly identified the factors that determine how fast (or slow) your web pages load, namely:

* Size (of your web page)

* Connectivity (quality of your host's network connections and bandwidth)

* Number (of sites sharing your server).

This article will now discuss ways that webmasters can ensure their sites' pages load quickly and efficiently, by focusing on the first factor.

File size - the total of the file sizes of all the parts of your web page (graphics, music files, HTML, etc.) should be small enough to download quickly. A reasonably fast-loading page comes in at around 50-70 KB, with up to 120 KB for more graphics-intensive pages. You can optimize your file size by:

1. Reducing page weight:
* Eliminate unnecessary whitespace (use tools like HTML Tidy to automatically strip leading whitespace and extra blank lines from valid HTML source) and comments
* Cut down on extras (buttons, graphics) and don't put a lot of graphics and big midi files on the same page
* Move webrings from your homepage to their own page
* Reduce the file size of some of your graphics (use GifBot, an on-line gif reducer at Net Mechanic)
* Redesign long pages so the content is spread over two pages instead of just one
2. Reducing the number of inline scripts or moving them into external files - inline scripts slow down page loading since the parser must assume that an inline script can modify the page structure. You can:
* Reduce the use of document.write to output content
* Use modern W3C DOM methods to manipulate page content for modern browsers rather than older approaches based on document.write
* Use modern CSS and valid markup - CSS reduces the amount of markup as well as the need for images in terms of layout. It can also replace images which are really just images of text. Valid markup stops browsers from having to perform "error correction" when parsing the HTML and allows free use of other tools which can pre-process your web pages.
* Minimize CSS/script files for performance while keeping unrelated CSS/scripts in separate files for maintenance
* Use External HTML Loading - involves using an IFrame for Internet Explorer and Netscape 6, and then shifting that content via innerHTML over to a standard HTML element. Benefits: keeps initial load times down to a minimum and provides a way to easily manage your content. Downside: we have to load content along with all the interface elements, which can severely impair the user experience of the page. A tutorial on externally loading HTML can be found here.
3. Minimizing the number of files referenced in a web page to lower the number of HTTP connections required to download a page
4. Reducing domain lookups (since each separate domain costs time in a DNS lookup) - be careful to use only the minimum number of different domains in your pages as is possible
5. Chunking your content - the size of the full page is less important if the user can quickly start acting on some information. How?
* Replace table-based layout with divs
* Break tables into smaller ones that can be displayed without having to download the entire page's content
o Avoid nesting tables
o Keep tables short
o Avoid using tables to lay out elements on the page
o Exploit several coding techniques:
+ split the page layout into multiple independent tables to preserve the browsers' ability to render each of them step-by-step (use either vertically stacked or horizontally stacked tables)
+ use floating tables or regular HTML codes that flow around the floating objects
+ use the fixed table-layout CSS attribute
* Order page components optimally - successive transmission of the DHTML code enables the browser to render the page during loading
o download page content first (so users get the quickest apparent response for page loading) along with any CSS or script required for its display;
o initially disable any DHTML features that require the page to complete loading before they can be used, and only enable them after the page loads;
o allow the DHTML scripts to be loaded after the page contents to improve the page load's overall appearance
6. Specifying image and table sizes - browsers are able to display web pages without having to reflow content if they can immediately determine the height and/or width of your images and tables
7. Using software and image compression technology
* Use tools that can "compress" JavaScript by reformatting the source or obfuscating it and reducing long identifiers to shorter versions
* Use mod_gzip, a compression module using the standard zlib compression library, to compress output - compressing the data sent out from the Web server, and having the browser decompress this data on the fly, reduces the amount of data sent and increases the page display speed; HTTP compression can yield a 150-160% performance gain (sizes of web pages can be reduced by as much as 90%, and images by up to 50%) - a quick way to check whether your server is actually compressing pages is sketched after this list
8. Caching previously received data/reused content - make sure that any content that can be cached is cached with appropriate expiration times; caching engines reduce page loading time and script execution by performing optimizations and various types of caching, cutting latency by as much as 20-fold by preventing dynamic pages from doing repetitive work and reducing the turnaround time for each request
9. Choosing your user agent requirements wisely - specify reasonable user agent requirements for projects; basic minimum requirements should be based upon modern browsers which support the relevant standards
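
As a quick sanity check for item 7, here is a rough sketch (Python 3, standard library only) of how you might verify that a server is serving a page gzip-compressed and see how much the compression saves. The URL is a placeholder - point it at one of your own pages.

# Rough sketch: check whether a server serves a page gzip-compressed
# and compare the transferred size against the uncompressed size.
import gzip
import urllib.request

URL = "http://www.example.com/"  # placeholder

request = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(request) as response:
    body = response.read()
    encoding = response.headers.get("Content-Encoding", "")

if "gzip" in encoding:
    uncompressed = gzip.decompress(body)
    print(f"Transferred (gzip): {len(body) / 1024:.1f} KB")
    print(f"Uncompressed HTML:  {len(uncompressed) / 1024:.1f} KB")
else:
    print(f"No compression; page HTML is {len(body) / 1024:.1f} KB")

Note that this only measures the HTML file itself; the images, CSS and scripts referenced by the page add to the total weight.
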
The next post will focus on the other two factors, as well as other ways that webmasters can speed up their web page loading.

Speeding Up Web Page Loading - Part II


In Part I, we detailed how webmasters can speed up the loading of their web pages by optimizing their file sizes. Here, some additional tips to make pages load faster will be discussed.

Another factor to consider is the speed at which the pages are served. What happens is that servers get bogged down if too many web surfers ask for the same page at the same time, resulting in a slowdown in loading speed.

Although there is no way to predict exactly how many people will visit a site at once, it is always a good idea to choose web hosting companies that tune their servers to make sure that enough computing power is given to the sites that get the most hits.

You can opt for hosts, like LyphaT Networks, that use caching and/or compression software to maximize the performance of their servers and minimize page loading times.

Another consideration is your host's connectivity or speed of Internet connection and bandwidth. Bandwidth refers to the amount of data that can be transmitted in a fixed amount of time and this actually fluctuates while you are surfing. Different users also have different access to the Internet (some might use dial-up or a dedicated T-1) so it is up to you to keep your file sizes down so that no matter who is viewing your site, they get as quick a download as possible.

Some of the ways you can do this:

* Testing your page loading time with low bandwidth emulation - you can use the mod_bandwidth module for this if you're running an Apache Web server. This module enables you to set connection bandwidth limits to better emulate typical modem speeds.
* Pinging your site (reply time should be under 100 ms) and then running tracert - each hop/transit point should take less than 100 ms, and if it takes longer or times out, then the connection could be slow at that point (see the timing sketch after this list).

You can check your results against the table shown at the Living Internet site on the number of seconds it takes to download data of various sizes at varying speeds of Internet connections.
* Using the HTML Toolbox program at Net Mechanic, or the Web Page Analyzer - 0.82, a free web-based analyzer that calculates page size, composition and page download time.
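
To go with the testing tips above, here is a small timing sketch (Python 3, standard library only; the URL is a placeholder). It reports the actual size and download time of a page and then estimates the theoretical best-case time over a few common connection speeds - a rough stand-in for the Living Internet table mentioned above.

# Fetch a page, report its size and download time, then estimate how long
# the same amount of data would take over slower connections.
import time
import urllib.request

URL = "http://www.example.com/"  # placeholder

start = time.time()
with urllib.request.urlopen(URL) as response:
    size_bytes = len(response.read())
elapsed = time.time() - start

print(f"{size_bytes / 1024:.1f} KB downloaded in {elapsed:.2f} s")

# Theoretical best-case times at common connection speeds (bits per second).
for label, bps in [("56k modem", 56_000), ("ISDN 128k", 128_000), ("T-1", 1_544_000)]:
    print(f"{label:>10}: ~{size_bytes * 8 / bps:.1f} s")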

Tracking Web Site Traffic


When you establish an online presence, you're basically after one thing: getting your message across to Internet users. You don't set up a website just so people can ignore it, do you?

Whether you are running mission-critical e-commerce sites or online marketing campaigns, as a webmaster you're naturally curious about your site's visitors.

But first, it is important to distinguish what kind of visitors go to your site. According to Yari McGauley, in his article Web Tracking & Visitor Stats Articles, websites get two kinds: normal visitors (people) and robots (or any kind of automatic 'web crawling' or 'spidering' programs), ranging from Search Engines, to Link and Website Availability Checkers to Spam/Email Harvesters.

So how can you find out more information about your visitors? There are a number of ways.

1. install a counter at your site - a counter simply provides an indication of the number of visitors to a particular page; usually counts hits (a hit is a single request from a browser to a server), which is not a reliable indicator of website traffic since many hits are generated by a single page visit (both for the request itself, and for each component of the page)
2. use logfiles - if your server is enabled to do it (check with your web host), then every action on the server is logged in logfiles (which are basically text files describing actions on the site); in their raw form, logfiles can be unmanageable and even misleading because of their huge size and the fact that they record every 'hit' or individual download; you need to analyze the data (a minimal log-parsing sketch appears after this list)

There are 2 ways this can be done:
* Download the logfiles via FTP and then use a logfile analyzer to crunch the logfiles and produce nice easy to read charts and tables
* Use software that runs on the server that lets you look at the logfile data in real-time

Some log file analyzers are available free on the Web (ex. Analog), though commercial analyzers tend to offer more features and are more user-friendly in terms of presentation (ex. Wusage, WebTrends, Sane Solutions' NetTracker, WebTracker)
3. use a tracker - generally, each tracker will require you to insert a small block of HTML or JavaScript into each page to be tracked; gives some indication of how visitors navigate through your site: how many visitors you had (per page); when they visited; where they came from; what search engine queries they used to find your site; what factors led them to your site (links, ads etc).

Tracking tools also:
* provide activity statistics - which pages are the most popular and which the most neglected
* aggregate visitor traffic data into meaningful reports to help make website management decisions on a daily basis (ex. content updates)
4. third party analysis - services exist which offer to analyze your traffic in real time for a monthly fee; this is done by:
* placing a small section of code on any page you want to track
* information generated whenever the page is viewed is stored by the third party server
* server makes the information available in real time for viewing in charts and tables form

*OpenTracker is a live tracking system that falls somewhere between 3 and 4. You might notice, however, that tracking services will report lower traffic numbers than log files. Why? Because good tracking services use browser cookies as a basis, and so do not recognize the following as unique visits or human events:

* repeat unique visitors (after 24 hours)
* hits
* robot and spider traffic
* rotating IP numbers (e.g. AOL)

It also distinguishes how many unique visitors come from the same ISP, a corporate firewall, or a large organization; otherwise all these users would be counted as the same visitor. Log analyzers, on the other hand, record all measurable activity and do not distinguish between human and server activities.
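
To make option 2 above concrete, here is a minimal log-parsing sketch in Python (standard library only). It assumes an Apache/NCSA "combined" format access log named access.log, and the robot keywords are just examples - real analyzers like Analog or WebTrends do far more.

# Crunch a "combined" format access log: count page requests, unique hosts
# and referrers, and crudely filter out robot/spider traffic by user agent.
import re
from collections import Counter

LOGFILE = "access.log"                      # assumed path
ROBOT_HINTS = ("bot", "crawler", "spider", "slurp")

# host ident user [time] "request" status bytes "referer" "user-agent"
LINE = re.compile(r'(\S+) \S+ \S+ \[.*?\] "(?:GET|POST|HEAD) (\S+) [^"]*" (\d{3}) \S+ "([^"]*)" "([^"]*)"')

pages, visitors, referrers, robot_hits = Counter(), set(), Counter(), 0

with open(LOGFILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE.match(line)
        if not match:
            continue
        host, path, status, referer, agent = match.groups()
        if any(hint in agent.lower() for hint in ROBOT_HINTS):
            robot_hits += 1          # crude robot/spider filter
            continue
        visitors.add(host)           # unique hosts, not unique people
        pages[path] += 1
        if referer not in ("", "-"):
            referrers[referer] += 1

print("Unique hosts:", len(visitors), "| robot hits skipped:", robot_hits)
print("Top pages:", pages.most_common(5))
print("Top referrers:", referrers.most_common(5))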

So why are web traffic statistics important? Because they help you fine-tune your web marketing strategy by telling you:

* Which pages are most popular, which are least used
* Who is visiting your site
* Which browsers to optimize your pages for
* Which banner ads are bringing the most visitors
* Where errors or bad links may be occurring in your pages
* Which search engines are sending you traffic
* Which keywords are used to find your site
* Which factors affect your search engine rankings and results
* Where your traffic is coming from: search engines or other web sites
* Whether your efforts to generate new customers and sales leads (such as newsletter signups and free product trials) are working or not
* Which are your most common entry pages and exit pages

Broken Link Checkers


One of the basic things that webmasters need to master is the use of links. It's what makes the Internet go round, so to speak. Links are simple enough to learn and code. But sometimes, we make mistakes and end up with broken links (particularly if we're coding manually) or even dead ones (if we don't update content that often).

To an Internet user, there's nothing more frustrating than clicking on links that give nothing but error messages (alongside those pop-up ads, of course), and as a result, they may leave your site. That's not so bad if it's just a hobby site, but what if you're running e-commerce sites? Or if you're trying to get your website registered with search engines?

I know manually checking for broken/dead links can be time consuming, not to mention migraine-inducing. So what's your recourse? Automated link checkers of course! There are a number of them available online.

Here are some (latest versions), available either for free or under GPL, for your consideration:

* LinkChecker v1.12.2 - a Python script for checking your HTML documents for broken links
* Checkbot v1.75 - written in Perl; a tool to verify links on a set of HTML pages; creates a report summarizing all links that caused some kind of warning or error
* Checklinks 1.0.1 - written in Perl; checks the validity of all HTML links on a Web site; starts at one or more "seed" HTML files and recursively tests all URLs found at that site; doesn't follow URLs at other sites, but checks their existence; supports SSI (SHTML files), the latest Web standards, directory aliases, and other server options
* Dead Link Check v0.4.0 - simple HTTP link checker written in Perl; can process a link cache file to hasten multiple requests (links life is time stamp enforced); initially created as an extension to Public Bookmark Generator, but can be used by itself as is
* gURLChecker v0.6.7 - written in C; a graphical web links checker for GNU/Linux and other POSIX OS; under GPL license
* JCheckLinks v0.4b - a Java application which validates hyperlinks in web sites; should run on any Java 1.1.7 virtual machine; licensing terms are LGPL with the main app class being GPL
* Linklint v2.3.5 - an Open Source Perl program that checks links on web sites; licensed under the Gnu General Public License
* LinkStatus v0.1.1 - written in C++; an Open Source tool for checking links in a web page; discontinued and forked to KLinkStatus, a more powerful application for KDE (which makes it hard for Windows and Mac users to build); KLinkStatus v0.1-b1 is at KDE-Apps.org
* Xenu's Link Sleuth v1.2e - a free tool that checks Web sites for broken links; displays a continuously updated list of URLs sortable by categories; Platform(s): Win 95/98/ME/NT/2000/XP
* Echelon Link Checker - a free CGI & Perl script from Echelon Design; you simply edit a few variables at the top of the script, set a URL to the page you want, and it'll go to that page, get all the links, and check each link to see if it's "dead" or not; allows you to set what word or words define a dead page, such as 404 or 500; Platforms: All
* Link Checker (CMD or Web v1.4) - CMD version can check approximately 170 links in about 40 seconds; CGI version takes about a minute and 10 seconds; very accurate; scans for dead links (not just 404 errors but any error that prevents the page from loading); Platform(s): All
* phplinkchecker - a modified freeware version of the old PHP Kung Foo Link Checker; reports the status (200, 404, 401, etc.) of a link and breaks the report down showing useful stats; used for finding broken links, or working links, on any page; can be easily modified for any specific use; Platform(s): Unix, Windows

You can also have your URL's links checked (for free) at the following sites:

* 2bone's LinkChecker 1.2 - allows site owners to quickly and easily check the links on their pages; allows users to add their link to 2bone's links section; added (as of Jan 2004) an option to see all results returned on a single page or use the quicker 10 links per results page
* Search Engine Optimising - via its Website Broken Links Checker Platform(s): All
* Dead-Links.com - via its Free Online Broken Link Checker from Dead-Links.com; spider-based technology and super fast online analysis

With all these resources available at no cost to you, there's really no reason why you should still have those broken and dead links around.
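
If you are curious what these tools do under the hood, here is a bare-bones sketch of a link checker in Python (standard library only). The start URL is a placeholder, and a real checker would also recurse through the site, respect robots.txt and throttle its requests.

# Fetch one page, extract its links, and report the HTTP status of each.
import urllib.error
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

START = "http://www.example.com/"  # placeholder

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(START, value))

collector = LinkCollector()
with urllib.request.urlopen(START) as response:
    collector.feed(response.read().decode("utf-8", errors="replace"))

for link in collector.links:
    if not link.startswith("http"):
        continue  # skip mailto:, javascript:, in-page anchors, etc.
    try:
        with urllib.request.urlopen(link) as reply:
            status = reply.status
    except urllib.error.HTTPError as err:
        status = err.code                       # e.g. 404, 500
    except urllib.error.URLError as err:
        status = f"unreachable ({err.reason})"  # DNS failure, timeout, ...
    print(status, link)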

Caching Web Site for Speed


MarketingTerms.com defines caching as the 'storage of Web files for later re-use at a point more quickly accessed by the end user,' the main objective of which is to make efficient use of resources and speed the delivery of content to the end user.

How does it work?

Well, Guy Provost offers a more detailed explanation of How Caching Works, but simply put, a web cache, situated between the origin Web servers and the client(s), works by saving for itself a copy of each HTML page, image and file (collectively known as objects), as they are requested, and uses this copy to fulfill subsequent requests for the same object(s), instead of asking the origin server for it again.

Advantages:

* if planned well, caches can help your Web site load faster and be more responsive by reducing latency - since responses for cached requests are available immediately, and closer to the client being served, there is less time for the client to get the object and display it, which results in users visiting more often (since they appreciate a fast-loading site)
* can save load on your server - since there are fewer requests for a server to handle, it is taxed less and so reduces the cost and complexity of that datacenter (which is why web-hosting companies with large networks and multiple datacenters offer caching servers at various datacenters in their network; caching servers automatically update themselves when files are updated, which takes the load off the central server or cluster of servers)
* reduces traffic/bandwidth consumption - since each object is only fetched from the server once, there are fewer requests and responses that need to go over the network
* you don't have to pay for them

There are some concerns with its use, however:

* webmasters in particular fear losing control of their site, because a cache can 'hide' their users from them, making it difficult to see who's using the site
* could result in undercounts of page views and ad impressions (though this can be avoided by implementing various cache-busting techniques to better ensure that all performance statistics are accurately measured)
* danger of serving content that is out of date, or stale

There are two kinds:

* Browser Caches
o client applications built in to most web browsers
o let you set aside a section of your computer's hard disk to store objects that you've seen, just for you, and will check to make sure that the objects are fresh, usually once a session
o settings can be found in the preferences dialog of any modern browser (like Internet Explorer or Netscape)
o useful when a client hits the 'back' button to go to a page they've already seen & if you use the same navigation images throughout your site, they'll be served from the browser cache almost instantaneously
* Proxy Caches
o serve many users (clients) with cached objects from many servers
o good at reducing latency and traffic (because popular objects are requested only once, and served to a large number of clients)
o usually deployed by large companies or ISPs (often on their firewalls) that want to reduce the amount of Internet bandwidth that they use
o can happen at many places, including proxies (i.e. the user's ISP) and the user's local machine but often located near network gateways to reduce the bandwidth required over expensive dedicated internet connections
o many proxy caches are part of cache hierarchies, in which a cache can inquire from neighboring caches for a requested document to reduce the need to fetch the object directly
o although some proxy caches can be placed directly in front of a particular server (to reduce the number of requests that the server must handle), these go by different names (reverse cache, inverse cache, or httpd accelerator) to reflect the fact that they cache objects for many clients but from (usually) only one server
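
A simple way to see how this plays out in practice is to look at the freshness headers a server sends and then issue a conditional request the way a cache would. Here is a small sketch in Python (standard library only; the URL is a placeholder).

# Print the caching-related headers a server sends, then revalidate the
# object with a conditional GET, as a browser or proxy cache would.
import urllib.error
import urllib.request

URL = "http://www.example.com/"  # placeholder

with urllib.request.urlopen(URL) as response:
    for header in ("Cache-Control", "Expires", "Last-Modified", "ETag"):
        print(f"{header}: {response.headers.get(header, '(not sent)')}")
    last_modified = response.headers.get("Last-Modified")

# A cache that already holds a copy revalidates it with If-Modified-Since;
# a 304 reply means "your copy is still fresh" and no body is re-sent.
if last_modified:
    request = urllib.request.Request(URL, headers={"If-Modified-Since": last_modified})
    try:
        with urllib.request.urlopen(request) as reply:
            print("Server re-sent the object:", reply.status)
    except urllib.error.HTTPError as err:
        if err.code == 304:
            print("304 Not Modified - the cached copy can be reused")
        else:
            raise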

Hacking Attacks - Prevention


The first three steps are suggested by security consultant Jay Beale in his interview with Grant Gross, when asked how administrators can protect themselves from system attacks.

1. Harden your systems (also called "lock-down" or "security tightening") by

* Configuring necessary software for better security
* Deactivating unnecessary software - disable any daemons that aren't needed or seldom used, as they're the most vulnerable to attacks
* Configuring the base operating system for increased security

2. Patch all your systems - Intruders can gain root access through the vulnerabilities (or "holes") in your programs, so keep track of "patches" and/or new versions of all the programs that you use (once a security hole is found, manufacturers usually offer patches and fixes quickly, before anyone can take advantage of the hole to any large extent), and avoid using new applications or those with previously documented vulnerabilities.

3. Install a firewall on the system, or at least on the network - Firewalls refer to either software (ex. ZoneAlarm) and/or hardware (ex. Symantec-Axent's Firewall/VPN 100 Appliance) that block network traffic coming to and leaving a system, and give permission to transmit and receive only to user-authorized software. They work at the packet level and can not only detect scan attempts but also block them.

You don't even need to spend a lot of money on this. Steve Schlesinger expounds on the merits of using open source software for a firewall in his article, Open Source Security: Better Protection at a Lower Cost.

At the very least, you should have a packet-filtering firewall as it is the quickest way to enforce security at the border to the Internet.

EPLS offers the following suggestions/services for Stopping Unauthorized Access, using firewalls:

* Tighten the Routers at your border to the Internet in terms of packets that can be admitted or let out.
* Deploy Strong Packet Filtering Firewalls in your network (either by bridge- or routing mode)
* Setup Proxy Servers for services you allow through your packet-filtering firewalls (can be client- or server-side/reverse proxy servers)
* Develop special custom-made server software or Internet-service client and server software

4. Assess your network security and degree of exposure to the Internet. You can do this by following the suggestions made by EPLS.

* portscan your own network from outside to see the exposed services (TCP/IP services that shouldn't be exposed, such as FTP)
* run a vulnerability scanner against your servers (commercial and free scanners are available)
* monitor your network traffic (external and internal to your border firewalls)
* refer to your system log - it will reveal (unauthorized) services run on the system, and hacking attempts based on format string overflows usually leave traces here
* check your firewall logs - border firewalls log all packets dropped or rejected and persistent attempts should be visible

Portmapper, NetBIOS (ports 137-139) and other dangerous services exposed to the Internet should trigger some action if they turn up in any of the checks above.

Also, more complex security checks will show whether your system is exposed through uncontrolled Internet Control Message Protocol (ICMP) packets or if it can be controlled as part of DDoS slaves through ICMP.

5. When choosing passwords, don't use

* real words or combinations thereof
* numbers of significance (e.g. birthdates)
* similar/same password for all your accounts

6. Use encrypted connections - encryption between client and server requires that both ends support the encryption method

* don't use Telnet, POP, or FTP programs unless strongly encrypted passwords are passed over the Internet; encrypt remote shell sessions (like Telnet) if switching to other userIDs/root ID
* use SSH (instead of Telnet or FTP)
* never send sensitive information over email

7. Do not install software from little-known sites - such programs can hide "trojans"; if you have to download a program, use a checksum, typically PGP or MD5 encoded, to verify its authenticity prior to installation (a small verification sketch follows this list)

8. Limit access to your server(s) - limit other users to certain areas of the filesystem or what applications they can run

9. Stop using systems that have already been compromised by hackers - reformat the hard disk(s) and re-install the operating system

10. Use Anti-Virus Software (ex. Norton Anti-Virus or McAfee) and keep your virus definitions up-to-date. Also, scan your system regularly for viruses.
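
As a companion to point 7 above, here is a minimal checksum-verification sketch in Python (standard library only). The file name and expected digest are placeholders; the expected value should come from the vendor's site, ideally over a separate trusted channel. MD5 is shown because the post mentions it, but prefer a stronger hash such as SHA-256 when the vendor publishes one.

# Verify a downloaded file against the checksum published by its vendor.
import hashlib

DOWNLOADED_FILE = "some-package.tar.gz"              # placeholder
EXPECTED_MD5 = "9e107d9d372bb6826bd81d3542a419d6"    # placeholder digest

digest = hashlib.md5()
with open(DOWNLOADED_FILE, "rb") as handle:
    for chunk in iter(lambda: handle.read(8192), b""):  # read in 8 KB chunks
        digest.update(chunk)

if digest.hexdigest() == EXPECTED_MD5.lower():
    print("Checksum matches - file is what the vendor published")
else:
    print("Checksum MISMATCH - do not install this file")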

Some of the ways by which Web hosting providers' Security Officers Face Challenges are discussed by Esther M. Bauer. These include:

* looking at new products/hacks
* regularly reviewing policies/procedures
* constant monitoring of well known ports, like port 80, that are opened in firewalls
* timely installation of patches
* customized setup of servers that isolate customers from each other - "In a hosting environment the biggest threat comes from inside - the customers themselves try to break into the system or into other customers' files"
* investment in firewall, VPN devices, and other security measures, including encrypted Secure Sockets Layer (SSL) communication in the server management and account management systems
* installation of secure certificates on web sites
* purchase and deployment of products according to identified needs
* monitoring suspicious traffic patterns and, based on the customer's service plan, either shunting such traffic away as bad or handling it through a content-distribution system that spreads across the network

SQL: The Standard and the Language


by Jim Melton

The SQL Query Language

SQL is a language designed especially for access to relational database systems; it is arguably the most significant relational database language in existence...although it does not strictly implement the relational model at all! SQL, which was initiated by IBM as part of its System R research prototype in the late 1970s, has been implemented by more than 20 vendors and is available in some form on virtually every important computer platform in the world. SQL's model of data is one of "tables", corresponding to relations in the relational model; this tabular view of data provides columns (having names and data types) and rows (containing the actual data) and a uniquely intuitive way of looking at data. Its most important improvement over earlier data models is that operations on data are specified in non-procedural ways and entire collections ("sets") of data are manipulated with single statements. This notion removed from application writers the burden of "navigating" around a complex database structure and significantly reduced the amount of code that had to be written for any specific purpose.
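
To illustrate the set-at-a-time point (this example is mine, not part of Melton's article), here is a small sketch using Python's built-in sqlite3 module; the employees table and the 10% raise are invented for the illustration.

# One declarative SQL statement replaces an explicit record-by-record
# navigation loop: the database decides how to find and update the rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "ENG", 50000), ("Grace", "ENG", 60000), ("Edgar", "SALES", 45000)],
)

# A single statement operates on the whole set of qualifying rows - no cursor
# loop, no knowledge of how the rows are physically stored or ordered.
conn.execute("UPDATE employees SET salary = salary * 1.10 WHERE dept = 'ENG'")

for row in conn.execute("SELECT name, salary FROM employees ORDER BY name"):
    print(row)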

In 1986, the American National Standards Institute (ANSI) published the first de jure SQL standard, X3.135-1986; the International Organization for Standardization (ISO) published a technically-identical standard, ISO 9075-1987, a few months later in early 1987. That standard was widely viewed as a "least common denominator" of existing SQL implementations and consequently failed to standardize many popular and necessary features of the language. This standard provided the ability to invoke SQL capabilities from four programming languages: COBOL, FORTRAN, Pascal, and PL/I.

In 1989, both ANSI and ISO published replacement standards (ANSI X3.135-1989 and ISO/IEC 9075:1989) that slightly enhanced the language and also added an optional capability called referential integrity, which allowed database designers to require certain relationships between data in different parts of the database. In the same year, ANSI published a companion standard that added support for two additional programming languages, Ada and C.

In this general timeframe, X/Open published its first specification of SQL, not directly tied to the ANSI or ISO standards, but specifically designed to reflect the existing products provided by X/Open shareholders to their customers. For several years, X/Open's Portability Guides kept this "what's available" approach, but it became apparent that vendors were using the absence of X/Open requirements as a reason for not implementing features required by the de jure standards: the de facto standard began to inhibit technological advances. As a result of this observation---coupled with X/Open's adoption of a strategy of alignment with de jure standards wherever applicable---the SQL specification began to evolve to a "programmer's guide" based very closely on SQL-89. The value added by X/Open's document included a much more thorough specification of "limits", such as the minimum number of tables that an implementation must support, a precise selection of precisions for numeric values and lengths for character strings, and so forth. X/Open SQL continued to contain certain important (and widely implemented) features, like indexes, that the standard does not address.

Levels of SQL

In 1992, when ANSI and ISO published the third revision of the SQL standard (X3.135-1992 and ISO/IEC 9075:1992), X/Open quickly moved to align with the lowest level, Entry SQL, of that standard, while taking important features from Intermediate SQL as well (notably dynamic SQL, the remote connection capability, and certain diagnostics capabilities). In the United States, NIST (the National Institute of Standards and Technology) had developed a conformance testing suite for SQL-89 whose requirements were reflected in a Federal Information Processing Standard (FIPS 127-1); the publication of SQL-92 was accompanied by a revised FIPS 127-2 and an updated test suite. Both FIPS 127-2 and the test suite reflect NIST's belief that the step from Entry SQL to the next level in the standard, Intermediate SQL, was too large for vendors to accomplish in a year or two; NIST therefore specified a level called Transitional SQL for which conformance could be claimed for a limited time (hence the name "Transitional").

In 1993, after adoption of FIPS 127-2, X/Open began development of the next version of its SQL Common Application Environment specification, designed to align with Transitional SQL. In addition to maintaining the traditional X/Open additions like indexes, this CAE-in-progress is expected to reach into higher levels of SQL-92 and even the unfinished next generation of the language, "SQL3". One especially important feature currently being adapted by X/Open is the "stored routine" capability in development for the SQL standard; using stored routines, an application can gain tremendous performance advantages and even move entire portions of application code into the database server.

Testing and branding

X/Open has a long tradition of branding products that are based on their specifications; SQL products are no exception to this practice. However, in the past, X/Open has not developed a branding suite for SQL, so vendors were forced into doing their own best-effort testing and claiming conformance to get the X/Open brand. That brought with it a contractual requirement to correct any conformance deficiencies within a specified period following their reporting by any interested party. The inconveniences---to users and to vendors---of this approach made the existence of a testing capability very important. Even though NIST has a conformance test suite for SQL implementations, it is closely tied to U.S. Federal Government requirements and is not always comfortably viewed by users in other nations. More important, the NIST suite doesn't test the X/Open features like indexes, so neither vendors nor users were protected in such areas by NIST's testing. To resolve this problem, X/Open has entered into an agreement with both NIST and with the National Computing Centre (NCC) in the UK to develop a more comprehensive test suite, with NIST providing a Transitional SQL test capability, X/Open providing additional tests for X/Open-only features, and NCC adding tests to cover Intermediate SQL. Appropriate licensing agreements will make it possible for independent testing laboratories to test SQL products, issue conformance certificates and reports that will allow X/Open to provide product brands. All this will be backed up by legal obligations, including a Conformance Statement Questionnaire that every vendor must complete for a product to be branded, identifying all options, limits, and so on that the product provides.

Benefits of X/Open SQL

All this sounds like a lot of trouble, doesn't it? Why would a vendor want to go to all this pain just to have the privilege of putting the X/Open logo onto his products and advertising? Indeed, why would a user out shopping for an SQL product bother to look for the X/Open logo?

It all boils down to the voluntary nature of standards. Standards provide a (usually!) well-defined way to do things, but they can't force anybody to do it that way. However, the computer industry's focus on "openness" has led both vendors and purchasers to look for ways to reduce costs (development costs and acquisition/use costs, respectively) while increasing their options (marketplaces and sources, respectively). Standards---both de jure and de facto---provide a path towards this goal. But that path has proved to be remarkably rocky and ill-defined.

It is through the existence of branding and certification facilities like X/Open's that users can cut through the metaphorical underbrush and locate products---without going to enormous trouble---that implement the capabilities they need...and that they have specified. If users know that, by looking for the X/Open brand, they are specifying and then purchasing a product with known capabilities that will run their applications, then they can make purchases with less effort and more quickly. And they run a far lower risk of being disappointed.

Similarly, vendors can know that their display of X/Open's brand guarantees them a reasonable opportunity to sell their products to a well-known market...customers who are aware of the brand. These vendors don't have to worry about "what set of features should we implement for market x" and can focus on "how can we implement that required set of features most efficiently" or "how can we get the product to market most quickly". In short, everybody benefits (except, perhaps, those most interested in the proprietary world!). Particularly with X/Open's increased thrust towards state-of-the-art technology instead of least-common-denominator specs, vendors can find themselves challenged to keep up instead of worrying about inventing new features that might---or might not---be interesting to broad categories of users.

What will the future hold?

Nothing stands still, least of all technology. (Even mountains move: the early-1994 earthquake in Southern California resulted in at least one 1000+ meter peak being raised by over 40 cm and moving to the northwest by about 6 cm!)

The SQL standard continues to evolve, with another replacement expected to be published in 1996 or 1997 (my money's on the latter). X/Open is actively participating in the development of SQL3, partly by early adoption of the stored routine capability mentioned above and giving feedback to ANSI and ISO during this work.

Undoubtedly, the most dramatic development in SQL since its inception is SQL's addition of object-oriented capabilities to the language. While this is extremely controversial and, like much of the object-oriented world, there is not yet uniform agreement on exactly what must be done (never mind how), there is unanimous agreement among the SQL vendors that this will be the shape of the future. The SQL standard is also adopting that great missing data type so widely implemented by vendors, the BLOB (Basic Large OBject)...whose name has nothing to do with object orientation (surely this doesn't suggest that we've reached the limits of English expressivity?). BLOBs are used to capture very large chunks of data that are not (usually) otherwise acted on by the database system---they provide only a storage mechanism. X/Open is poised to adopt this specification early on. Beyond that, X/Open will, as always, study the marketplace and work directly with users, gathering requirements, analyzing them, and helping the formal standards efforts with that knowledge.

Naming Guidelines


1). Private Variables (Fields in C#) Naming Guidelines

Naming guidelines

Prefix private variables with a "_" and Hungarian-style notation.

Case guidelines

Use camel case as a general rule, or uppercase for very small words

Example:

_strFirstName, _dsetEmployees

// Field
private OleDbConnection _connection;

// Property
public OleDbConnection Connection
{
get { return _connection; }
set { _connection = value; }
}

2). Local Variables Naming Guidelines

Naming guidelines

Prefix private or local variables with Hungarian-style notation.

Case guidelines

Use camel case as a general rule, or uppercase for very small words

Example:

strFirstName, dsetEmployees

3). Namespace Naming Guidelines

Naming guidelines

The general rule for naming namespaces is to use the company name followed by the technology name and optionally the feature and design as follows:

CompanyName.TechnologyName[.Feature][.Design]

Prefixing namespace names with a company name or other well-established brand avoids the possibility of two published namespaces having the same name. Use a stable, recognized technology name at the second level of a hierarchical name.

Example:

Akadia.Traffic, System.Web.UI, System.Windows.Forms

Case guidelines

Use Pascal case as a general rule, or uppercase for very small words.

Example:

System.Windows.Forms, System.Web.UI

4). Class Naming Guidelines

Naming guidelines

Use a noun or noun phrase to name a class.
Do not use a type prefix, such as C for class, on a class name.
Do not use the underscore character (_).

Case guidelines

Use Pascal case. Example:

FileStream, Button

5). Interface Naming Guidelines

Naming guidelines

Prefix interface names with the letter "I", to indicate that the type is an interface.
Do not use the underscore character (_).

Case guidelines

Use Pascal case. Example:

IServiceProvider, IFormattable

6). Parameter Naming Guidelines

Naming guidelines

Use descriptive parameter names. Parameter names should be descriptive enough that the name of the parameter and its type can be used to determine its meaning in most scenarios. To distinguish parameters from other variables the prefix "p" should be used.

Do not prefix parameter names with Hungarian type notation.

Do not use a prefix for parameter names of an event handler and exceptions.

Case guidelines

Use camel case. Example:

pTypeName, pNumberOfItems

7). Method Naming Guidelines

Naming guidelines

Use verbs or verb phrases to name methods.

Case guidelines

Use Pascal case. Example:

RemoveAll(), GetCharAt()

8). Property / Enumerations Naming Guidelines

Naming guidelines

Use a noun or noun phrase to name properties.
Do not use Hungarian notation.

Case guidelines

Use Pascal case. Example:

BackColor, NumberOfItems

9). Event Naming Guidelines

Naming guidelines

Use an EventHandler suffix on event handler names.

Specify two parameters named sender and e. The sender parameter represents the object that raised the event. The sender parameter is always of type object, even if it is possible to use a more specific type. The state associated with the event is encapsulated in an instance of an event class named "e". Use an appropriate and specific event class for the e parameter type.

Name an event argument class with the EventArgs suffix.

Case guidelines

Use Pascal case. Example:

public delegate void MouseEventHandler(object sender, MouseEventArgs e);

Exception Naming Guidelines

Naming guidelines

Event handlers in Visual Studio .NET tend to use an "e" parameter for the event parameter to the call. To ensure we avoid a conflict, we will use "ex" as a standard variable name for an Exception object.

Example

catch (Exception ex)
{
// Handle Exception
}

Constant Naming Guidelines

The names of variables declared as class constants should be all uppercase with words separated by underscores. It is recommended to use a grouping naming schema.

Example (for group AP_WIN):

AP_WIN_MIN_WIDTH, AP_WIN_MAX_WIDTH, AP_WIN_MIN_HEIGHT, AP_WIN_MAX_HEIGHT

Visual Control Type Notation

Assembly asm
Boolean bln
Button btn
Char ch
CheckBox cbx
ComboBox cmb
Container ctr
DataColumn dcol
DataGrid dgrid
DataGridDateTimePickerColumn dgdtpc
DataGridTableStyle dgts
DataGridTextBoxColumn dgtbc
DataReader dreader
DataRow drow
DataSet dset
DataTable dtable
DateTime date
Dialog dialog
DialogResult dr
Double dbl
Exception ex
GroupBox gbx
Hashtable htbl
ImageList iml
Integer int
Label lbl
ListBox lbx
ListView lv
MarshalByRefObject rmt
MainMenu mm
MenuItem mi
MDI-Frame frame
MDI-Sheet sheet
NumericUpDown nud
Panel pnl
PictureBox pbx
RadioButton rbtn
SDI-Form form
SqlCommand sqlcom
SqlCommandBuilder sqlcomb
SqlConnection sqlcon
SqlDataAdapter sqlda
StatusBar stb
String str
StringBuilder strb
TabControl tabctrl
TabPage tabpage
TextBox tbx
ToolBar tbr
ToolBarButton tbb
Timer tmr
UserControl usr
WindowsPrincipal wpl

Subconscious Mind!

What if I told you that there was a part of your mind that is always working, even when you are asleep? This part of your mind is known as...