Saturday, January 31, 2009

Desktop, Laptop and PDA

Desktops versus Laptops versus PDAs
The choice of whether to purchase a desktop or laptop system should be based upon your sense of how you will be using your computer system. If you wish to get the most "bang for the buck" you’ll probably want to consider a desktop system. Dollar for dollar, you can get more computing power and expandability with a desktop. This is particularly important if you ever intend upon adding peripheral devices such as a scanner. If you plan on carrying your computer system with you, the obvious choice will be to purchase a laptop system. If you decide to buy a laptop, we recommend that you purchase it with a built-in ethernet card and a wireless card.
Handheld PDAs are increasing in popularity in medicine. We recommend that you do not purchase a PDA unless 1) you already have either a desktop or laptop system, and 2) you have some very specific personal or professional uses for one. Given that this technology is changing rapidly, any need for a PDA during your clinical years would be better served by waiting until then to purchase a system.

Most computer systems are sold as complete packages including computer, monitor, keyboard, modem, cables, and software. Many vendors offer a series of systems to choose from, which can be customized. For example, one common way to customize a pre-packaged system is to add extra random access memory (e.g., from 512 MB to 1 GB); another is to add a printer. Many systems are "multimedia" systems with speakers and other hardware, ready to play CD-ROMs to their best effect.

Network Types

Introduction to Network Types
LAN, WAN and Other Area Networks
One way to categorize the different types of computer network designs is by their scope or scale. For historical reasons, the networking industry refers to nearly every type of design as some kind of area network. Common examples of area network types are:
• LAN - Local Area Network
• WLAN - Wireless Local Area Network
• WAN - Wide Area Network
• MAN - Metropolitan Area Network
• SAN - Storage Area Network, System Area Network, Server Area Network, or sometimes Small Area Network
• CAN - Campus Area Network, Controller Area Network, or sometimes Cluster Area Network
• PAN - Personal Area Network
• DAN - Desk Area Network
LAN and WAN were the original categories of area networks, while the others have gradually emerged over many years of technology evolution.
Note that these network types are a separate concept from network topologies such as bus, ring and star.
LAN - Local Area Network
A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is often but not always implemented as a single IP subnet.
In addition to operating in a limited space, LANs are also typically owned, controlled, and managed by a single person or organization. They also tend to use certain connectivity technologies, primarily Ethernet and Token Ring.
WAN - Wide Area Network
As the term implies, a WAN spans a large physical distance. The Internet is the largest WAN, spanning the Earth.
A WAN is a geographically-dispersed collection of LANs. A network device called a router connects LANs to a WAN. In IP networking, the router maintains both a LAN address and a WAN address.
A WAN differs from a LAN in several important ways. Most WANs (like the Internet) are not owned by any one organization but rather exist under collective or distributed ownership and management. WANs tend to use technology like ATM, Frame Relay and X.25 for connectivity over the longer distances.
LAN, WAN and Home Networking
Residences typically employ one LAN and connect to the Internet WAN via an Internet Service Provider (ISP) using a broadband modem. The ISP provides a WAN IP address to the modem, and all of the computers on the home network use LAN (so-called private) IP addresses. All computers on the home LAN can communicate directly with each other but must go through a central gateway, typically a broadband router, to reach the ISP.
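To make the LAN/WAN address split concrete, here is a minimal Python sketch (standard library only; the sample addresses are purely illustrative) that labels each address as LAN-private or WAN-public:

import ipaddress

# Private ranges (10.x.x.x, 172.16-31.x.x, 192.168.x.x) are the "so-called
# private" LAN addresses mentioned above; anything else is routable on the WAN.
for addr in ["192.168.1.10", "10.0.0.5", "172.16.4.2", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    side = "LAN (private)" if ip.is_private else "WAN (public)"
    print(f"{addr:>15}  ->  {side}")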
Other Types of Area Networks
While LAN and WAN are by far the most popular network types mentioned, you may also commonly see references to these others:
• Wireless Local Area Network - a LAN based on WiFi wireless network technology
• Metropolitan Area Network - a network spanning a physical area larger than a LAN but smaller than a WAN, such as a city. A MAN is typically owned and operated by a single entity such as a government body or large corporation.
• Campus Area Network - a network spanning multiple LANs but smaller than a MAN, such as on a university or local business campus.
• Storage Area Network - connects servers to data storage devices through a technology like Fibre Channel.
• System Area Network - links high-performance computers with high-speed connections in a cluster configuration. Also known as Cluster Area Network.

Friday, January 30, 2009

Network Topologies

In computer networking, topology refers to the layout of connected devices. This article introduces the standard topologies of networking.
Topology in Network Design
Think of a topology as a network's virtual shape or structure. This shape does not necessarily correspond to the actual physical layout of the devices on the network. For example, the computers on a home LAN may be arranged in a circle in a family room, but it would be highly unlikely to find a ring topology there.
Network topologies are categorized into the following basic types:
· bus
· ring
· star
· tree
· mesh
More complex networks can be built as hybrids of two or more of the above basic topologies.
Bus Topology
Bus networks (not to be confused with the system bus of a computer) use a common backbone to connect all devices. A single cable, the backbone, functions as a shared communication medium that devices attach or tap into with an interface connector. A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message.
Ethernet bus topologies are relatively easy to install and don't require much cabling compared to the alternatives. 10Base-2 ("ThinNet") and 10Base-5 ("ThickNet") both were popular Ethernet cabling options many years ago for bus topologies. However, bus networks work best with a limited number of devices. If more than a few dozen computers are added to a network bus, performance problems will likely result. In addition, if the backbone cable fails, the entire network effectively becomes unusable.
Ring Topology
In a ring network, every device has exactly two neighbors for communication purposes. All messages travel through a ring in the same direction (either "clockwise" or "counterclockwise"). A failure in any cable or device breaks the loop and can take down the entire network.
To implement a ring network, one typically uses FDDI, SONET, or Token Ring technology. Ring topologies are found in some office buildings or school campuses.
Star Topology
Many home networks use the star topology. A star network features a central connection point called a "hub", which in practice may be a network hub, a switch or a router. Devices typically connect to the hub with Unshielded Twisted Pair (UTP) Ethernet.
Compared to the bus topology, a star network generally requires more cable, but a failure in any star network cable will only take down one computer's network access and not the entire LAN. (If the hub fails, however, the entire network also fails.)
Tree Topology
Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub devices connect directly to the tree bus, and each hub functions as the "root" of a tree of devices. This bus/star hybrid approach supports future expandability of the network much better than a bus (limited in the number of devices due to the broadcast traffic it generates) or a star (limited by the number of hub connection points) alone.
Mesh Topology
Mesh topologies involve the concept of routes. Unlike each of the previous topologies, messages sent on a mesh network can take any of several possible paths from source to destination. (Recall that even in a ring, although two cable paths exist, messages can only travel in one direction.) Some WANs, most notably the Internet, employ mesh routing.
A mesh network in which every device connects to every other is called a full mesh. Partial mesh networks also exist, in which some devices connect only indirectly to others.
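As a rough illustration of how these topologies differ in wiring cost, the short Python sketch below (a back-of-the-envelope example of my own, not part of any standard) counts the links needed to connect n devices in each basic layout:

def links_needed(n: int) -> dict:
    """Approximate cable counts for n devices in each basic topology."""
    return {
        "bus": 1,                       # one shared backbone that devices tap into
        "ring": n,                      # each device joined to its two neighbours
        "star": n,                      # one cable per device to the central hub
        "full mesh": n * (n - 1) // 2,  # every device wired to every other device
    }

for n in (4, 8, 16):
    print(n, "devices:", links_needed(n))
# The full-mesh count grows quadratically, which is why full meshes are rare.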

How Modem Software Problems Occur

How do modem software problems occur?
Problems with modem communications often are caused by software. If both parties are not using the same settings, the software can be used to change the settings so that they match. Both parties do not have to use the same software to communicate. Some common problems are:

Not being able to connect to the other computer.
Connecting but receiving garbage.
Connecting but getting random characters in addition to the "real" data.
Connecting but seeing dropped or missing characters.
Getting two characters for every one you are keying.
Not being able to see what you are typing.

We will discuss each of these issues in turn.
Inability to Connect

If you are not able to connect to the other computer, check to see that both modems are communicating at the same speed. The communication software may be forcing one of the modems to connect at a given speed with the command:
&Nx
The value x is between 0 and 9. The &Nx command tells the modem at what speed to connect to another modem. Setting the modem to &N0 allows it to determine the highest possible connect speed and use that speed. Commands other than &N0, such as &N2, force the modem to connect at one speed only. See the table below for the various forced connect speeds.

Command Connect Rate
&N0 Variable
&N1 300 bps
&N2 1200 bps
&N3 2400 bps
&N4 4800 bps
&N5 7200 bps
&N6 9600 bps
&N7 12 Kbps
&N8 14.4 Kbps
&N9 16.8 Kbps
If the modem is set to a forced speed, try setting the modem to the variable connection rate and reconnecting. If that does not work, try connecting to a different service or modem and see if the problem persists. If you can connect to other computers, you know the trouble is at the other end of the line.
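For convenience, the table above can also be kept in code. The Python sketch below simply restates it as a dictionary; the build_init_string helper is hypothetical, since real modem command sets vary by vendor, so treat it as a lookup aid rather than a working dialer:

# &Nx forced connect speeds, restated from the table above.
FORCED_CONNECT_SPEEDS = {
    "&N0": "variable (highest negotiated speed)",
    "&N1": "300 bps",
    "&N2": "1200 bps",
    "&N3": "2400 bps",
    "&N4": "4800 bps",
    "&N5": "7200 bps",
    "&N6": "9600 bps",
    "&N7": "12 Kbps",
    "&N8": "14.4 Kbps",
    "&N9": "16.8 Kbps",
}

def build_init_string(speed_command: str = "&N0") -> str:
    """Return a hypothetical AT init string ending in the chosen speed command."""
    if speed_command not in FORCED_CONNECT_SPEEDS:
        raise ValueError(f"unknown speed command: {speed_command}")
    return "AT" + speed_command

print(build_init_string("&N0"), "->", FORCED_CONNECT_SPEEDS["&N0"])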

Receiving Garbage

If you can connect, but get garbled characters on the screen, check to see that both modems are set for the same number of data bits. If the data bits are set correctly, consider a line noise problem. Use a different telephone line and see if the problem repeats itself.

Receiving Random Characters

If you can connect, but you are getting random characters on the screen in addition to the "real" data, consider the following:

Is there a telephone extension on the line you are using? If so, did someone pick up the extension?
Is the modem in question an external modem? If so, is the modem cable near a fluorescent light or some other source of EMI?
Is the terminal emulation set properly?
Consider a line noise problem.

Dropped or Missing Characters

If you can connect, but are seeing dropped or missing characters, consider the following:
There may be a flow control or handshaking problem between your computer and the modem. Check the cabling for good connections and try another cable of the appropriate type.
There may be a problem with the UART chip on your serial port. Try setting your modem to communicate at a slower speed or replace the current UART with a 16550 model.
There may be an interrupt latency problem. Attempt to establish the communication link with as few open programs as possible.

Double Characters/No Characters

Echo settings can cause you either to see nothing on the screen or to see double characters for everything you type. Local echo causes the modem to repeat back every character it receives; this feature makes the text you type appear on the screen. If the computer you are connected to has "remote echo" enabled, its modem repeats every character it receives. If both local echo and remote echo are enabled, you will see every character that you type twice. This problem is corrected by turning local echo off.

If the software you are using does not support a local echo option, try using "full-duplex" as a substitute for echo off. Conversely, you may use half-duplex as a substitute for echo on. Some software packages have mistakenly used the term duplex in place of local echo options. Duplexing properly refers to whether data transmission is limited to one direction at a time or allows simultaneous two-way transmission.
If you are connected but cannot see what you are typing, the other computer does not have remote echo enabled, and your local echo is not enabled. To correct the problem, turn local echo on.

Thursday, January 29, 2009

Flash memory

Flash memory is non-volatile computer memory that can be electrically erased and reprogrammed. It is primarily used in memory cards and USB flash drives (thumb drives, handy drives, memory sticks), which are used for general storage and for transferring data between computers and other digital products. Unlike EEPROM, it is erased and programmed in blocks consisting of multiple locations (in early flash the entire chip had to be erased at once). Flash memory costs far less than EEPROM and therefore has become the dominant technology wherever a significant amount of non-volatile, solid-state storage is needed. Examples of applications include PDAs and laptop computers, digital audio players, digital cameras and mobile phones. It has also gained some popularity in the game console market, where it is often used instead of EEPROMs or battery-powered static RAM (SRAM) for game save data.
Principles of operation
Flash memory stores information in an array of floating gate transistors, called "cells", each of which traditionally stores one bit of information. Newer flash memory devices, sometimes referred to as multi-level cell devices, can store more than 1 bit per cell, by using more than two levels of electrical charge, placed on the floating gate of a cell.
In NOR gate flash, each cell looks similar to a standard MOSFET, except that it has two gates instead of just one. One gate is the control gate (CG) like in other MOS transistors, but the second is a floating gate (FG) that is insulated all around by an oxide layer. The FG is between the CG and the substrate. Because the FG is isolated by its insulating oxide layer, any electrons placed on it get trapped there and thus store the information. When electrons are on the FG, they modify (partially cancel out) the electric field coming from the CG, which modifies the threshold voltage (Vt) of the cell. Thus, when the cell is "read" by placing a specific voltage on the CG, electrical current will either flow or not flow, depending on the Vt of the cell, which is controlled by the number of electrons on the FG. This presence or absence of current is sensed and translated into 1s and 0s, reproducing the stored data. In a multi-level cell device, which stores more than 1 bit of information per cell, the amount of current flow will be sensed, rather than simply detecting presence or absence of current, in order to determine the number of electrons stored on the FG.
A NOR flash cell is programmed (set to a specified data value) by starting up electrons flowing from the source to the drain, then a large voltage placed on the CG provides a strong enough electric field to suck them up onto the FG, a process called hot-electron injection. To erase (reset to all 1s, in preparation for reprogramming) a NOR flash cell, a large voltage differential is placed between the CG and source, which pulls the electrons off through quantum tunneling. In single-voltage devices (virtually all chips available today), this high voltage is generated by an on-chip charge pump. Most modern NOR flash memory components are divided into erase segments, usually called either blocks or sectors. All of the memory cells in a block must be erased at the same time. NOR programming, however, can generally be performed one byte or word at a time.
NAND gate flash uses tunnel injection for writing and tunnel release for erasing. NAND flash memory forms the core of the removable USB interface storage devices known as USB flash drives.
As manufacturers increase the density of flash devices, individual cells shrink and the number of electrons in any cell becomes very small. Coupling between adjacent floating gates can change the cell write characteristics. New designs, such as charge trap flash, attempt to provide better isolation between adjacent cells.
Limitations
One limitation of flash memory is that although it can be read or programmed a byte or a word at a time in a random access fashion, it must be erased a "block" at a time. This generally sets all bits in the block to 1. Starting with a freshly erased block, any location within that block can be programmed. However, once a bit has been set to 0, only by erasing the entire block can it be changed back to 1. In other words, flash memory (specifically NOR flash) offers random-access read and programming operations, but cannot offer arbitrary random-access rewrite or erase operations. A location can, however, be rewritten as long as the new value's 0 bits are a superset of the over-written value's. For example, a nibble value may be erased to 1111, then written as 1110. Successive writes to that nibble can change it to 1010, then 0010, and finally 0000. Although data structures in flash memory can not be updated in completely general ways, this allows members to be "removed" by marking them as invalid. This technique must be modified somewhat for multi-level devices, where one memory cell holds more than one bit.
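The program/erase rule just described is easy to see in a toy model. The Python sketch below is purely illustrative (not a real flash driver): programming ANDs the new value into a location, so bits only ever move from 1 to 0, and only a whole-block erase restores the 1s.

class FlashBlock:
    """A toy model of one erase block: writes clear bits, erase sets them all."""

    def __init__(self, size: int = 16):
        self.cells = bytearray([0xFF] * size)   # freshly erased: every bit is 1

    def program(self, offset: int, value: int) -> None:
        # A write can only clear bits, so the stored result is the AND of the
        # old and new values - exactly the 1111 -> 1110 -> 1010 example above.
        self.cells[offset] &= value

    def erase(self) -> None:
        # Erase works on the whole block, never on a single location.
        for i in range(len(self.cells)):
            self.cells[i] = 0xFF

blk = FlashBlock()
blk.program(0, 0b1110)
blk.program(0, 0b1010)
blk.program(0, 0b0010)
print(bin(blk.cells[0]))   # 0b10: bits only ever moved from 1 to 0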
Another limitation is that flash memory has a finite number of erase-write cycles (most commercially available flash products are guaranteed to withstand 1 million programming cycles). This effect is partially offset by some chip firmware or file system drivers by counting the writes and dynamically remapping the blocks in order to spread the write operations between the sectors. This technique is called wear levelling. Another mechanism is to perform write verification and remapping to spare sectors in case of write failure, which is named bad block management (BBM).
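Wear levelling can be sketched just as simply. The following toy policy (my own illustration, not an actual controller or file-system algorithm) tracks erase counts per block and always hands the next write to the least-worn block:

erase_counts = [0, 0, 0, 0]          # one counter per physical block

def pick_block_for_write() -> int:
    """Return the index of the least-worn block so erase cycles spread evenly."""
    return min(range(len(erase_counts)), key=lambda i: erase_counts[i])

def erase_block(i: int) -> None:
    erase_counts[i] += 1

for _ in range(10):                  # simulate ten rewrite cycles
    erase_block(pick_block_for_write())

print(erase_counts)                  # counts stay balanced, e.g. [3, 3, 2, 2]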
Low-level access
Low-level access to a physical flash memory by device driver software is different from accessing common memories. Whereas a common RAM will simply respond to read and write operations by returning the contents or altering them immediately, flash memories need special considerations, especially when used as program memory akin to a read-only memory (ROM).
While reading can be performed on individual addresses on NOR memories, unlocking (making a region available for erase or write), erasing and writing are performed block-wise on all flash memories. A typical block size is 64, 128, or 256 KiB. Reading of individual addresses cannot be done with NAND memories; NAND is read page by page.
A group called the Open NAND Flash Interface (ONFI) Working Group aims to develop a standardized low-level NAND flash interface that allows interoperability between NAND devices from various vendors. The goals of this group include developing a standardized chip-level interface (pin-out) for NAND flash, a standard command set, and a self-identification mechanism (à la SDRAM's SPD EEPROM). The specification was released on January 22, 2007.
NOR memories
The read-only mode of NOR memories is similar to reading from a common memory, provided address and data bus is mapped correctly, so NOR flash memory is much like any address-mapped memory. NOR flash memories can be used as execute in place (XIP) memory, meaning it behaves as a ROM memory mapped to a certain address. NOR flash memories have no intrinsic bad block management, so when a flash block is worn out, either the software using it has to handle this, or the device breaks.
When unlocking, erasing or writing NOR memories, special commands are written to the first page of the mapped memory. These commands are defined as the Common Flash memory Interface (CFI) (defined by Intel) and the flash circuit will provide a list of all available commands to the physical driver.
Apart from being used as a ROM, the NOR memories can also be partitioned with a file system and used as any storage device. However, NOR file systems are typically very slow to write when compared with NAND file systems.
NAND memories
NAND flash architecture was introduced by Toshiba in 1989. NAND flash memories cannot provide execute in place due to their different construction principles. These memories are accessed much like block devices such as hard disks or memory cards. The pages are typically 512 or 2,048 bytes in size. Associated with each page are a few bytes (typically 12–16 bytes) that should be used for storage of an error detection and correction checksum.
The pages are typically arranged in blocks. A typical block would be 32 pages of 512 bytes or 64 pages of 2048 bytes.
While programming is performed on a page basis, erasure can only be performed on a block basis.
NAND devices typically have software-based bad block management. This means that when a logical block is accessed it is mapped to a physical block, and the device has a number of blocks set aside for compensating bad blocks and for storing primary and secondary mapping tables.
The error-correcting and detecting checksum will typically correct an error where one bit per 256 bytes is incorrect. When this happens, the block is marked bad in a logical block allocation table, and its undamaged contents are copied to a new block and the logical block allocation table is altered accordingly. If more than one bit in the memory is corrupted, the contents are partly lost, i.e. it is no longer possible to reconstruct the original contents.
Most NAND devices are shipped from the factory with some bad blocks, which are typically identified and marked according to a specified bad-block marking strategy. By allowing some bad blocks, the manufacturers achieve far higher yields than would be possible if all blocks were tested good. This significantly reduces NAND flash costs and increases the size of the parts.
The first error-free physical block (block 0) is always guaranteed to be readable and free from errors. Hence, all vital pointers for partitioning and bad block management for the device must be located inside this block (typically a pointer to the bad block tables etc). If the device is used for booting a system, this block may contain the master boot record.
When executing software from NAND memories, virtual memory strategies are used: memory contents must first be paged or copied into memory-mapped RAM and executed there. A memory management unit (MMU) in the system is helpful, but this can also be accomplished with overlays. For this reason, some systems will use a combination of NOR and NAND memories, where a smaller NOR memory is used as software ROM and a larger NAND memory is partitioned with a file system for use as a random access storage area. NAND is best suited to flash devices requiring high capacity data storage. This type of flash architecture offers storage space up to 512-MB and has faster erase, write, and read capabilities over NOR architecture.
Serial flash
Serial flash is a small, low-power flash memory that uses a serial interface, typically SPI, for sequential data access. Serial flash requires fewer wires on the printed circuit board (PCB) than parallel flash memories to transfer data. A reduction in board space, power consumption and system cost are some of the benefits of the lower pin-count interface.
A saving of pins translates into multiple cost reductions. Many ASIC/controller designs are pad-limited. In many designs the number of bond pads, rather than the amount of gates used for the core and logic, dictates the size of the die. Eliminating bond pads allows for a more compact ASIC/controller design that results in a reduced die size, which lowers the die cost and increases the die per wafer count. Additionally, reducing the number of active pins allows lower pin-count packages and reductions in assembly and package costs. Of course, the package size of the flash device itself also drastically changes when going from large parallel flash to serial flash. With smaller and lower pin-count packages come reduced PCB area and simplified routing, both of which help lower system costs.
As CPU performance increases, the access times of traditional parallel flash (45 ns and up) are not fast enough to execute program code directly. At the same time, embedded SRAM technology allows sub-10 ns access times and DDR2 allows 20 ns access times. The slowness of the flash makes "code shadowing" - storing code in RAM - inevitable in many devices. In many instances it is still more cost-effective to double the SDRAM density and keep the code compressed in flash, rather than doubling the flash density, because of SDRAM's lower cost per bit.
Among typical applications are firmware storage for hard drives, Ethernet controllers, DSL modems, wireless modems, and so on. In these systems the code is shadowed in the RAM. After the system powers up, the ASIC simply selects the serial flash, sends it one command to start reading the memory, and then continues to clock the serial flash until all of the necessary code has been output. The serial flash implements "bulk read" mode and incorporates an internal address counter so that on every clock cycle the flash device outputs the next bit of data.
The industry's typical serial bus speed is 50 MHz. These devices can sustain read throughputs of about 50 Mbit/s, or roughly 6 MB per second. At that rate, an entire 64-Mbit device can be read in less than two seconds.
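A quick check of that arithmetic, assuming one bit is shifted per clock cycle on the serial bus and ignoring command overhead:

clock_hz = 50_000_000                    # 50 MHz SPI clock
mbit_per_s = clock_hz / 1_000_000        # 50 Mbit/s sustained read rate
mb_per_s = clock_hz / 8 / 1_000_000      # about 6.25 MB per second
seconds_for_64_mbit = 64 / mbit_per_s    # roughly 1.3 seconds
print(mb_per_s, "MB/s;", seconds_for_64_mbit, "s to read a 64-Mbit device")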
Flash file systems
Because of the particular characteristics of flash memory, it is best used with specifically designed file systems which spread writes over the media and deal with the long erase times of NOR flash blocks. The basic concept behind flash file systems is: When the flash store is to be updated, the file system will write a new copy of the changed data over to a fresh block, remap the file pointers, then erase the old block later when it has time. One of the earliest flash file systems was Microsoft's FFS2 (presumably preceded by FFS1), for use with MS-DOS in the early 1990s. Around 1994, the PCMCIA industry group approved the FTL (Flash Translation Layer) specification, which allowed a flash device to look like a FAT disk, but still have effective wear levelling. Other commercial systems such as FlashFX by Datalight were created to avoid patent concerns with FTL.
JFFS was the first flash-specific file system for Linux, but it was quickly superseded by JFFS2, originally developed for NOR flash. Then YAFFS was released in 2003, dealing specifically with NAND flash, and JFFS2 was updated to support NAND flash too. In practice, these filesystems are only used for "Memory Technology Devices" ("MTD"), which are embedded flash memories that do not have a controller. Removable flash media, such as SD and CF cards and USB flash drives, have a controller (often built into the card) to perform wear-levelling and error correction, so using JFFS2 or YAFFS on them does not add any benefit. These removable flash memory devices are often used with the old FAT filesystem for compatibility with cameras and other portable devices. Controllerless removable flash memory devices also exist; for example, SmartMedia is even electrically compatible with the Toshiba TC58 series of NAND flash chips.
Capacity
Common flash memory parts (individual internal components or "chips") range widely in capacity from kilobits to several gigabits each. Multiple chips are often arrayed to achieve higher capacities for use in devices such as the iPod nano or SanDisk Sansa e200. The capacity of flash chips generally follows Moore's law because they are produced with the same processes used to manufacture other integrated circuits. However, there have also been jumps beyond Moore's law due to innovations in technology.
In 2005, Toshiba and SanDisk developed a NAND flash chip capable of storing 1 gigabyte of data using MLC (multi-level cell) technology, which stores 2 bits of data per cell. In September 2005, Samsung Electronics announced that it had developed the world's first 2 gigabyte chip.
In March 2006, Samsung announced flash hard drives with a capacity of 4 gigabytes, essentially the same order of magnitude as smaller laptop hard drives, and in September 2006, Samsung announced an 8 gigabyte chip produced using a 40 nm manufacturing process. For some flash memory products such as memory cards and USB drives, 256 megabyte and smaller devices had been largely discontinued as of mid-2006. A 1 GB capacity has become the normal storage space for people who do not use flash memory extensively, while more and more consumers are adopting 2 GB, 4 GB, or 8 GB flash drives.
Hitachi (formerly the hard disk unit of IBM) has a competing hard-drive mechanism, the Microdrive, that can fit inside the shell of a Type II CompactFlash card. It has a capacity of up to 8 GB. BiTMicro offers a 155 GB 3.5" solid-state disk named the "Edisk".
Speed
Flash memory cards are available in different speeds. Some are specified by the approximate transfer rate of the card, such as 2 MB per second or 12 MB per second. The exact speed of these cards depends on which definition of "megabyte" the marketer has chosen to use.
Many cards are simply rated 100x, 130x, 200x, etc. For these cards the base assumption is that 1x is equal to 150 kibibytes per second. This was the speed at which the first CD drives could transfer information, which was adopted as the reference speed for flash memory cards. Thus, when comparing a 100x card to a card capable of 12 MiB per second the following calculations are useful:
150 KiB x 100 = 15000 KiB per second = 14.65 MiB per second.
Therefore, the 100x card is 14.65 MiB per second, which is faster than the card that is measured at 12 MiB per second.
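The same arithmetic in a few lines of Python, assuming the 1x = 150 KiB/s CD-ROM reference speed described above:

def x_rating_to_mib_per_s(rating: int) -> float:
    """Convert a '100x'-style card rating to MiB per second."""
    return (150 * rating) / 1024         # 150 KiB/s per "x", 1024 KiB per MiB

for rating in (100, 130, 200):
    print(f"{rating}x  ->  {x_rating_to_mib_per_s(rating):.2f} MiB/s")
# 100x -> 14.65 MiB/s, so it outpaces a card rated at a flat 12 MiB/s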
Data corruption and recovery
The most common cause of data corruption is removal of the flash memory device while data is being written to it. The situation is aggravated by the use of unsuitable file systems that are not designed for removable devices, or that are mounted with asynchronous writes (so data may still be waiting to be written when the device is removed).
Data recovery from flash memory devices can be achieved in some cases. Heuristic and brute-force methods are examples of recovery approaches that may yield results for general data on a CompactFlash card.
Flash memory as a replacement for hard drives
An obvious extension of flash memory would be as a replacement for hard disks. Flash memory does not have the mechanical limitations and latencies of hard drives, so the idea of a solid state drive, or SSD, is attractive when considering speed, noise, power consumption, and reliability.
There remain some aspects of flash-based SSDs that make the idea unattractive. For example, the cost per gigabyte of flash memory remains significantly higher than that of platter-based hard drives. Although this gap is closing rapidly, it will take some time for flash memory to catch up to the capacities and prices offered by platter-based storage; as research and development shifts toward the newer technology, this issue may dissolve.
There is also some concern that the finite number of erase/write cycles of flash memory would render it unable to support an operating system. This seems to be a decreasing issue, as warranties on flash-based SSDs are trending to equal or exceed those of current hard drives.
As of May 24, 2006, South Korean consumer-electronics manufacturer Samsung Electronics had released the first flash-memory based PCs, the Q1-SSD and Q30-SSD, both of which have 32GB SSDs.
At the Las Vegas CES 2007 Summit, Taiwanese memory company A-DATA showcased SSDs based on flash technology in capacities of 32 GB, 64 GB and 128 GB. SanDisk announced an OEM 32 GB 1.8" SSD at CES 2007.
Rather than entirely replacing the hard drive, hybrid techniques such as the hybrid drive and ReadyBoost attempt to combine the advantages of both technologies, using flash as a high-speed cache for files on the disk that are often referenced but rarely modified, such as application and operating system executable files.

How to Make CROSS & STRAIGHT cable

________________________________________
STEP 1: Choose the right cable…
1. To connect PC to PC: Cross cable.

2. To connect PC to HUB/SWITCH/ROUTER: Straight cable.

3. To connect HUB/SWITCH/ROUTER to HUB/SWITCH/ROUTER: Cross cable (or a straight cable if one device's uplink port is used).

STEP 2: Understanding CAT 5 Cables…

Wires: CAT 5 Cable has 4 pairs of copper wire inside it.

Colors: A standard cable has BROWN, BROWN WHITE, GREEN, GREEN WHITE, BLUE, BLUE WHITE, ORANGE and ORANGE WHITE wires.

STEP 3: Making Straight Cable…

Nomenclature: let us first define a numbering scheme for the wires, which we will follow throughout this tutorial: ORANGE WHITE (1), ORANGE (2), GREEN WHITE (3), BLUE (4), BLUE WHITE (5), GREEN (6), BROWN WHITE (7), BROWN (8).

Requirements: two RJ45 connectors, a crimping tool, and CAT 5 cable of the desired length (less than 100 meters).

STEP 3.1:

There are two standards adopted for cabling: EIA/TIA 568A and EIA/TIA 568B.

When you use a single standard (either EIA/TIA 568A or EIA/TIA 568B) on both ends of the cable, the resulting cable is a STRAIGHT cable.

On the other hand, if you use a different standard on each end of the cable, the resulting cable is a CROSS cable.

I'll use the EIA/TIA 568B standard for creating both the cross and the straight cable.

1. Remove the outer covering of the CAT 5 cable.
2. Straighten the eight wires of the cable.
3. Using the crimping tool's cutter, cut the ends of the wires so that they are all the same length.
4. Arrange the wires in order 1, 2, 3, 4, 5, 6, 7 and 8, as numbered above or as shown in the diagram.
5. Insert the arranged wires into the RJ45 connector with the clip pointing down, exactly as shown in the figure.
6. Insert the head of the RJ45 connector into the crimping tool and crimp (press) it firmly.
7. Follow the same steps, with the same color order, for the other end of the cable.
8. The cable you have made by following these steps is a STRAIGHT cable.

STEP 4: Making CROSS Cable…

Not all of the eight wires in CAT 5 are used for data transfer with a 100 Mbps Ethernet card. Only two pairs are used: two wires for transmitting the signal and two wires for receiving it.

So now you can guess why we have to make a CROSS cable for connecting two devices of the same kind. If we used the same color coding on both ends, the transmitter of one machine would send data to the transmitter of the other and the packets would be lost, so we change the wiring so that the transmitter of one connects to the receiver of the other and vice versa.

Here are the steps:
Steps 1 to 6 are the same as for STRAIGHT-through cables.
7. The only difference is in the color coding on the other end of the cable.
8. The wire on pin 1 on the A side (one end) should be on pin 3 on the B side (other end), and vice versa.
9. The wire on pin 2 on the A side should be on pin 6 on the B side, and vice versa.
10. Now crimp the RJ45 connector.
11. Your CROSS cable is complete. A small sketch of this pin swap follows below.
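Here is the promised sketch of that pin swap, written in Python purely as an illustration. The T568B colour order matches the numbering scheme given earlier in this post:

# T568B wiring: pin number -> wire colour (same numbering as the nomenclature above).
T568B = {
    1: "orange-white", 2: "orange", 3: "green-white", 4: "blue",
    5: "blue-white",   6: "green",  7: "brown-white", 8: "brown",
}

# For a crossover cable, pins 1<->3 and 2<->6 swap on the B end;
# every other pin stays straight through.
CROSSOVER_MAP = {1: 3, 2: 6, 3: 1, 6: 2}

def b_end_colour(pin: int) -> str:
    """Colour that should appear on the given pin of the B-side connector."""
    return T568B[CROSSOVER_MAP.get(pin, pin)]

for pin in range(1, 9):
    print(f"pin {pin}: A side {T568B[pin]:<12}  B side {b_end_colour(pin)}")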

How to Cable Your Network


There are two main ways of connecting PCs together to form a network. There are others, but for now, we will consider only the Ethernet alternatives:

  • Coaxial Ethernet

Coaxial ethernet is really a fading concept. Two types are available, thick-wire and thin-wire. Thick-wire is very unlikely to be found on modern networking equipment, but thin-wire is fairly common. Thin-wire ethernet consists of lengths of 50 ohm coax cable terminated in BNC bayonet connectors. Thin-wire compatible equipment sports a round barrel that the coax is plugged into. Unfortunately, connecting thin-wire is not always so simple. It is important that a thin-wire cable is correctly terminated, and not all thin-net NICs are able to automatically terminate a cable. In this case, it is necessary to use a T-piece complete with a terminator so that a cable impedance of 50 ohms is maintained. Failure to observe this will result in communication problems between the network devices.

Note: Thin-wire ethernet is also known by its technical designation, 10base2.

  • TP Ethernet

TP, or Twisted Pair, Ethernet is the modern successor to 10base2 cable systems. Far more flexible, neater and less prone to network faults, TP appears on a myriad of networking and communications equipment. If your single PC is already connected to your cable modem (CM), you are already using RJ45 TP cabling, and it will almost certainly feature in your network. CAT5 cable consists of 4 pairs of wires, with each pair being two insulated copper wires twisted together. These 'twisted pairs' are then sheathed in a plastic outer sleeve that comes in a variety of colours, although 'computer' beige is probably the most common ;-). The standards for ethernet over CAT5 cabling define a maximum length of 100 metres for operation at 10 Mbps, but in practice it is perfectly possible to extend this maximum by 20 or 30 metres without detriment to network communication.

RJ45 refers to the connector that is crimped onto the end of the CAT 5 cable. The connector is rectangular in shape and has a tab at the top. The cable is inserted so that the tab latches onto a small recess in the socket, rather like the side latch on the ubiquitous BT telephone plug.

Almost all of the network set-ups featured on this site use RJ45 cabling exclusively, with each cable being of the 'straight' type. Where necessary, x-over cables are also employed. The following diagrams show how the two types of ethernet detailed above can be used in a network, with straight RJ45 cables depicted by BLUE lines and cross-overs in RED. Thinwire Co-ax cable is shown in grey.

Connecting a Single PC to a CM connected PC

  • With Thinwire


For this set-up a single piece of thinwire co-ax is used to connect two PCs, with each end of the cable physically connected to a T-piece, with the 'spare' connector capped with a terminator to maintain the cable impedance.

It is important to use the correct cable type for thinwire so that the impedance is correct. The official designation is RG58.

  • With RJ45


Where two PCs are connected using an RJ45 cable, a cross-over cable needs to be used. An RJ45 cross-over cable actually crosses the transmit and receive pairs in the cable so that one NIC's transmit connects to the other NIC's receive, and vice versa.

Connecting Multiple PCs to a CM connected PC

  • With Thinwire


To add additional clients to the network, remove the terminator from the T-piece of the last device, connect another thinwire coax cable to the vacant connector, and fit the terminator to the T-piece of the new last device.

Note that some network cards have an on-board termination setting.

  • With RJ45


In an RJ45 cabled network, adding additional clients requires the use of an intermediary device such as a hub or a switch. PCs connect to the hub/switch using straight cables and these are, in turn, connected internally within the hub or switch.

In this environment, there is no requirement for RJ45 cross-over cables.

Straight v. X-over Cables

The requirement for RJ45 cross-over, or x-over, cables is dictated by the type of devices that are being connected. There are two interface types associated with networking equipment, DTE (Data Terminal Equipment) and DCE (Data Communications Equipment). DTE devices mainly consist of PC NICs and routers. When connecting a DTE device to a DCE device, e.g., a PC to a hub, a straight cable is required. When the two connecting devices have the same interface type, i.e., both DCE or both DTE, then a x-over cable is necessary.

Device        I/F Type    Device         I/F Type    Cable Type
PC            DTE         Hub Port       DCE         Straight
PC            DTE         Cable Modem    DCE         Straight
PC            DTE         PC             DTE         X-over
Hub Port      DCE         Hub Port       DCE         X-over

Unfortunately, these examples do not constitute hard and fast rules. Some cable modems, especially those integrated into set-top boxes, have DTE interfaces, so any PC or router that connects to them will need a x-over cable. Also, when connecting two hubs together, a x-over cable may not be necessary if one of the hubs has an uplink port. An uplink port has a DTE type interface, so a straight cable can be used to connect it to another DCE port, such as a hub port. On many hubs, one of the ports is switchable between DCE and DTE; this can be manual, where a switch has to be set, or the port can auto-detect which type of interface it needs to be.
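The basic rule of thumb (unlike interface types take a straight cable, like types need a x-over) can be written as a small helper. This is my own sketch and it deliberately ignores the uplink-port and set-top-box exceptions just mentioned:

# Interface types from the table above; the device names are illustrative labels.
INTERFACE_TYPES = {
    "pc nic": "DTE", "router": "DTE", "hub uplink port": "DTE",
    "hub port": "DCE", "switch port": "DCE", "cable modem": "DCE",
}

def cable_for(device_a: str, device_b: str) -> str:
    """Pick straight or x-over cable from the DTE/DCE rule of thumb."""
    a = INTERFACE_TYPES[device_a.lower()]
    b = INTERFACE_TYPES[device_b.lower()]
    return "straight" if a != b else "x-over"

print(cable_for("PC NIC", "Hub port"))    # straight
print(cable_for("PC NIC", "PC NIC"))      # x-over
print(cable_for("Hub port", "Hub port"))  # x-over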

The following diagram shows the necessary cabling required for both straight and x-over CAT5 cables. Each of the four pairs in a cable are colour coded for easy identification, although the colours may vary between different cables.


The Tx and Rx refer to Transmit and Receive respectively, with the + and - symbols referring to the polarity of the signals. A DTE device will transmit data on pins 1 and 2, whilst a DCE device will transmit on pins 3 and 6. The transmit pins at one end must be connected to the receive pins at the other end for the connection to work. When constructing cables, it is important that the pair polarities are maintained so that the cable is not affected by interference.

Wednesday, January 28, 2009

Desktop Icons Transparent

How to make your Desktop Icons Transparent
Go to Control Panel > System > Advanced tab > Performance area > Settings button > Visual Effects tab, and tick "Use drop shadows for icon labels on the desktop".

Math Coprocessor

Math Coprocessor
The math coprocessor is a second processor in your computer that does nothing but number crunching for the system. Addition, subtraction, multiplication, and division of simple numbers is not the coprocessor's job. It does all the calculations involving floating point (decimal) numbers, such as scientific calculations and algebraic functions.
These functions and calculations are used in much of the computer's routines and in just about every piece of software available. Spreadsheets contain statistical calculations, word processors deal with line spacing, font size and justification, and of course, any graphics or animation software relies heavily on number crunching. The Central Processing Unit (CPU) is perfectly capable of doing these functions and calculations. As a matter of fact, that used to be part of its job. Most older computers (pre-486) were sold without coprocessors, so the CPU had to manage all the computer's hardware and software functions, handle all interrupt requests (we'll talk about those later), and direct all information and data, as well as performing all floating point calculations. This required a lot of the processor's time.
By having a second processor, or 'coprocessor', take over the number crunching, the system can free up a lot of the CPU's precious time. This allows the Central Processing Unit to focus its resources on the other functions it has to perform, thus increasing the overall speed and performance of the entire system. It's not as if this was a great revelation that came over the scientific community in the midst of home computer development. The absence of a math coprocessor in early home computer systems was a matter of keeping production costs down. The advantage was recognized right from the beginning, and most of these motherboards had an empty socket for the aftermarket addition of a coprocessor. The number (or name) of the math coprocessor followed the CPU's numbering sequence; only the last digit would be a '7', not a '6'. If you had an 8086 CPU you could add an 8087 coprocessor; for an 80286 you would install an 80287.

Virtual Memory

Virtual memory is an addressing scheme implemented in hardware and software that allows non-contiguous memory to be addressed as if it is contiguous. The technique used by all current implementations provides two major capabilities to the system:
Memory can be addressed that does not currently reside in main memory; the hardware and operating system will load the required memory from auxiliary storage automatically, without any knowledge on the part of the program addressing the memory, thus allowing a program to reference more memory than the RAM that actually exists in the computer.
In multi-tasking systems, total memory isolation, otherwise referred to as a discrete address space, can be provided to every task except the lowest-level operating system. This greatly increases reliability by isolating program problems within a specific task and allowing unrelated tasks to continue to process.
Background
Hardware must have two methods of addressing RAM, real and virtual. In real mode, the memory address register contains an integer that addresses a word or byte of RAM. The memory is addressed sequentially: adding to the address register moves the location being addressed forward by the amount added. In virtual mode, memory is divided into pages, usually 4096 bytes long (see page size). These pages may reside in any available RAM location that can be addressed in virtual mode. The high-order bits in the memory address register are indexes into tables in RAM at specific starting locations low in memory, and the tables are indexed using real addresses. The low-order bits in the address register are an offset of 0 up to 4,095 (0 to the page size - 1) into the page ultimately referenced by resolving all the table references of page locations.
The size of the tables is governed by the computer design and the size of RAM purchased by the user. All virtual addressing schemes require the page tables to start at a fixed location low in memory that can be indexed by a single byte and have a maximum length determined by the hardware design. In a typical computer, the first table will be an array of addresses of the start of the next array; the first byte of the memory address register will be the index into the first array. Depending on the design goal of the computer, each array entry can be any size the computer can address. The second byte will be an index into the array resolved by the first index. This set of arrays of arrays can be repeated for as many bytes that can be contained in the memory address register. The number of tables and the size of the tables will vary by manufacturer, but the end goal is to take the high order bytes of the virtual address in the memory address register and resolve them to an entry in the page table that points to either the location of the page in real memory or a flag to say the page is not available.
If a program references a memory location that resolves within a page not available, the computer will generate a page fault. The hardware will pass control to the operating system at a place that can load the required page from auxiliary storage and turn on the flag to say the page is available. The hardware will then take the start location of the page, add in the offset of the low order bits in the address register and access the memory location desired.
All the work required to access the correct memory address is invisible to the application addressing the memory. If the page is in memory, the hardware resolves the address. If a page fault is generated, software in the operating system resolves the problem and passes control back to the application trying to access the memory location. This scheme is called paging.
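To make the paging scheme concrete, here is a minimal single-level page-table walk in Python. It is an illustrative sketch rather than any particular CPU's table format, and it uses the 4096-byte page size mentioned above:

PAGE_SIZE = 4096

# page number -> physical frame number; a missing key means "page not present".
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_address: int) -> int:
    """Split a virtual address into page number and offset, then resolve it."""
    page_number = virtual_address // PAGE_SIZE   # high-order bits
    offset = virtual_address % PAGE_SIZE         # low-order bits (0..4095)
    if page_number not in page_table:
        # In a real system this raises a page fault; the OS loads the page
        # from auxiliary storage and the access is retried.
        raise LookupError(f"page fault on page {page_number}")
    return page_table[page_number] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # page 1, offset 0xABC -> frame 3 -> 0x3abc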
To minimize the performance penalty of address translation, most modern CPUs include an on-chip MMU and maintain a table of recently used virtual-to-physical translations, called a Translation Lookaside Buffer, or TLB. Addresses with entries in the TLB require no additional memory references (and therefore time) to translate. However, the TLB can only maintain a fixed number of mappings between virtual and physical addresses; when the needed translation is not resident in the TLB, action must be taken to load it.
On some processors, this is performed entirely in hardware; the MMU has to do additional memory references to load the required translations from the translation tables, but no other action is needed. In other processors, assistance from the operating system is needed; an exception is raised, and the operating system handles this exception by replacing one of the entries in the TLB with an entry from the primary translation table, and the instruction which made the original memory reference is restarted.
Hardware that supports virtual memory almost always supports memory protection mechanisms as well. The MMU may have the ability to vary its operation according to the type of memory reference (for read, write or execution), as well as the privilege mode of the CPU at the time the memory reference was made. This allows the operating system to protect its own code and data (such as the translation tables used for virtual memory) from corruption by an erroneous application program and to protect application programs from each other and (to some extent) from themselves (e.g. by preventing writes to areas of memory that contain code).

The BIOS Chip and BIOS Recovery

The BIOS Chip and BIOS Recovery
Before 1990 or so BIOSes were held on ROM chips that could not be altered. As its complexity and need for updates grew, BIOS firmware was subsequently stored on EEPROM or flash memory devices. The first flash chips attached to the ISA bus. Starting in 1998, the BIOS flash moved to the LPC bus, a functional replacement for ISA, following a new standard implementation known as "firmware hub" (FWH). In 2006, the first systems supporting a Serial Peripheral Interface (SPI) appeared, and the BIOS flash moved again.
EEPROM chips are advantageous because they can easily be updated by the user; hardware manufacturers frequently issue BIOS updates to upgrade their products, improve compatibility and remove bugs. However, the risk is that an improperly executed or aborted BIOS update can render the computer or device unusable. To recover from BIOS corruption, some newer motherboards have a backup BIOS (these are referred to as "Dual BIOS" boards; Gigabyte even offers a motherboard with a quad BIOS). Also, most BIOSes have a "boot block", a portion of the ROM that runs first and is not updateable. This code verifies that the rest of the BIOS is intact (via a checksum, hash, etc.) before transferring control to it. If the boot block detects that the main BIOS is corrupted, it will typically initiate a recovery process by booting from a removable device (floppy, CD or USB memory) so that the user can try flashing again.
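The boot-block idea can be sketched in a few lines. This is purely illustrative: real boot blocks are firmware, and BIOSes of this era typically used simple checksums; SHA-256 is used here only because it is convenient in Python:

import hashlib

def main_bios_is_intact(image: bytes, expected_sha256: str) -> bool:
    """Compare the stored digest with a fresh one computed over the image."""
    return hashlib.sha256(image).hexdigest() == expected_sha256

def boot(image: bytes, expected_sha256: str) -> str:
    if main_bios_is_intact(image, expected_sha256):
        return "transfer control to main BIOS"
    return "start recovery: boot from floppy/CD/USB and re-flash"

good = b"\x55\xaa" + b"\x00" * 62          # stand-in for a BIOS image
digest = hashlib.sha256(good).hexdigest()
print(boot(good, digest))                  # transfer control to main BIOS
print(boot(good[:-1] + b"\x01", digest))   # corrupted image: start recovery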
Due to the limitation on the number of times that flash memory can be flashed, a flash-based BIOS is vulnerable to "flash-burn" viruses that repeatedly write to the flash, permanently corrupting the chip. Such attacks can be prevented by some form of write-protection, the ultimate protection being the replacement of the flash memory with a true ROM.

Sunday, January 25, 2009

Remove Windows XP's Messenger

How to Remove Windows XP's Messenger
Open the Add or Remove Programs applet in the Control Panel. Click the Add/Remove Windows Components icon. You should see "Windows Messenger" in that list. Remove the checkmark from its box, and you should be set.

ANTIVIRUS PROGRAMS

Which anti-virus program is the best?

A number of computer magazines perform regular anti-virus roundups and note the best performers. But over the long haul, which anti-virus program is the best one for you?


Each year, one of the magazines I write for regularly, Australian PC User, does an in-depth assessment of the main anti-virus programs available, and gives the nod to the anti-virus program which does the best job. The most recent winner of the PC User anti-virus gong was Eset's NOD32. In fact, NOD32 has claimed the top spot in the last four PC User anti-virus shootouts. It's priced competitively and runs at lightning speed.

What more could you want?

Well, some people do want more. For instance:

  • Those on a strict budget want to know whether they can use free anti-virus software and still get decent protection.
  • Those who have used one anti-virus product for years want to know whether they should stick with it or give it the heave ho.
  • Those battling spyware and spam want a product that can defend them on all fronts.
  • Those whose systems came with anti-virus Product X already installed want to know if it's good enough to do the job.
  • Some people want a program that not only offers good protection but which also provides a little handholding.

NOD32 is a great choice for many users, but it's not everyone's cup of tea.

The problem with anti-virus tests

One of the problems with anti-virus tests is that they focus almost exclusively on accuracy and scanning performance. But what about usability? What about the way the program interacts with your operating system and other applications over time? What about a friendly interface, understandable error messages, unobtrusive virus handling, configurability? How about aesthetics?

Those may seem like trivial issues in comparison to effectiveness and accuracy, but in many ways issues such as aesthetics have an influence on the effectiveness of any program. If you hate the way an anti-virus program looks, works and alerts you to problems, chances are you'll either stop using the program or limit how often or how closely it monitors your computer. It will no longer be as effective.

That's not to say NOD32 is defective in these areas, but it's certainly a utilitarian product, designed to do a single job very well without coddling the user. For experienced computer users, it's the perfect choice.

If you don't think of yourself as an experienced user, or if you're recommending anti-virus software to someone who lacks computer nous, there are alternative choices well worth considering. After all, 17 products in PC User's September 2004 wrap-up received the independent Virus Bulletin's top-rank VB100% Award; three products in addition to NOD32 scored perfectly in every single test. In a usability shootout, some of those other programs would trounce NOD32.

So let's take a look at some of the alternatives.

Freebies

Traditional advice about free anti-virus software is that it's okay as an emergency interim measure and better than no protection if you really cannot afford to buy a "real" anti-virus product, but not all that reliable in the long term. That judgment was based on the limitations imposed on free versions and their track record in anti-virus tests over the years.

It's probably time to revise that assessment. Take, for example, AVG from Grisoft, one of the best-known free anti-virus programs. In the past, it only rarely earned the VB 100% Award; in the last two years, it has earned the award six times on multiple platforms. That's a good recommendation. In addition, if you look at the limitations placed on the free version, you'll find very few of them are likely to have a major impact on the program's effectiveness, unless you work in a large networking environment. You can only schedule a single scan per day (but you can run any number of scans manually); you can only schedule one update per day (but once again you can update manually at any time); it can't be installed on server operating systems; there's no technical support; and several advanced testing options are not available. For the average home user, those limitations are immaterial and AVG makes a good choice. Even better, because it's free you can try it out and see whether you like how it works.

AVG is not the only free program out there. A Google search for "free anti-virus" will turn up a number of other products, including AntiVir Personal Edition, which runs on Linux, Solaris and a number of other operating systems in addition to Windows; avast! 4 Home Edition, which missed out on a VB 100% Award running on Windows XP, but has a fairly good track record overall; and BitDefender Free Edition v7, probably the only anti-virus program with a skinnable interface.

Combos

Viruses are only one of the threats you face as a computer user. Spyware, phishing scams, intruders, spam…the list is long and gruesome. In fact, for most people these days spyware is a far greater danger than viruses. So products which combine multiple lines of defense – firewall, anti-spyware, spam blocker, anti-virus, and so on – are particularly attractive. Instead of having to pay for the individual components, you can get a bargain by opting for a security suite.

The leading suites are McAfee Internet Security Suite, Norton Internet Security, PC-cillin Internet Security and ZoneAlarm Security Suite. Each includes an anti-virus module, a firewall and a content filter. Some add a spam filter and a spyware remover. Each of the anti-virus modules, in their standalone form, scored a VB 100% Award in the latest testing.

Despite the financial benefits and ease of installing a single security suite, there are some drawbacks to these combination programs. Firstly, the interface can get a little cluttered and the array of options somewhat dazzling. PC-cillin provides an exceptional interface and Norton is pretty good, too; McAfee falls down in this regard.

Secondly, individual components in a suite are not necessarily of the highest calibre. For example, even if you choose a suite with an anti-spyware tool, you'll probably need to install additional spyware removers to ensure good protection.

Finally, the all-in-one approach of the suites can lead to some pretty hefty code bloat. Norton Internet Security, for example, is just too heavy for its own good. It bogs down your system, causing slow boot-up, slow application load times and a general decline in performance. You'll frequently find Norton scores very well in computer magazine tests, but user ratings (including my own assessment) fall far short of those scores. That's because its performance deficiencies show up over time and are exacerbated on a well-used machine. McAfee suffers similar performance problems.

If you want to use a suite, try the 2005 version of PC-cillin Internet Security. It provides excellent anti-virus protection, a spyware scanner, spam blocker, and a content blocker. Its firewall is so-so and you may want to adjust the frequency of its pop-up alerts, but it won't hurt your computer's performance and it's a pleasure to use. It's also available in an economical 3-pack – great for homes with several computers.

Online scanners

If you're in the unfortunate position of having no anti-virus software installed and you think your system may be infected, you can always try one of the free online scanners. These Web-based programs are stop-gap measures only, but useful in times of emergency. They can also provide a useful test of your existing anti-virus defences: run your computer through a battery of online scanners and you may be surprised to find viruses lurking on your seemingly well-defended machine.

Trend Micro's Housecall provides free online virus and spyware scanning.

There are almost a dozen online scanners available. Note that some of these programs do nothing more than scan your computer for viruses; they won't remove them if they find them. Some do a full job of scanning and disinfecting, while some even add spyware scanning and removal to their tests. You'll need to use Internet Explorer to access most of these online scanners, because they run ActiveX scripts to perform their task, so if you're a Firefox or Mozilla user, be ready to load up IE for a change.

Four of the best online scanners are BitDefender Online, Panda ActiveScan, RAV AntiVirus Online Virus Scan and Trend Micro's Housecall.

Conclusion

There's plenty of choice when it comes to choosing effective anti-virus software. You probably won't go wrong if you choose any of the 17 programs listed below in the Award Winners box: each scored a perfect 100% on Virus Bulletin's recent tests. Most of these programs provide a trial version you can download and take for a spin, but note that some anti-virus programs are notoriously difficult to uninstall (Norton is one such), so make sure you set a System Restore point before installing.

Whichever program you choose, make sure you keep it up to date. New threats emerge on an almost daily basis, so even the best anti-virus program is only as good as its last update.

Award winners

Each of these programs qualified as a VB 100% Award winner in recent tests; an asterisk marks the four that also scored perfectly in every single test:


  • Authentium COMMAND Antivirus
  • Computer Associates eTrust Antivirus 7.1
  • Computer Associates Vet Anti-Virus
  • Cat Quick Heal X-Gen
  • Eset NOD32 Antivirus System *
  • FRISK F-Prot Antivirus
  • F-Secure Anti-Virus
  • G DATA AntiVirusKit *
  • Grisoft AVG Anti-Virus Professional
  • H+BEDV AntiVir
  • Kaspersky Anti-Virus *
  • McAfee VirusScan
  • Norman Virus Control
  • Norton AntiVirus *
  • Sophos Anti-Virus
  • Trend Micro PC-cillin
  • VirusBuster

Web passwords

Web passwords made easy

Learn how to create secure passwords for all your online activities – without making your brain hurt.


A while back I visited my brother. He's always happy to talk tech with me and on this occasion he was delighted to show me his latest handheld computing toy. He presented it to me with obvious relish and watched as I started it up. It opened to a password screen. Within one second – on my first attempt, mind you – I cracked his password and was into the device.

"How'd you do that?" asked my clearly disconcerted brother.

"I chose the most obvious password you would choose...and it worked."

As far as I know, my brother no longer uses his daughter's name as a password. I hope you don't, either.

Open sesame

So, how crackable are your passwords? As easy as my brother's? Or do you cunningly resort to using your daughter's – or mother's, sister's, brother's, father's, partner's, pet's, favourite sports star's – name reversed or with a number on the end? That's not so cunning, I'm afraid. Anyone who knows you is already halfway to cracking your password. Anyone who has a password cracking tool – easily locatable on the Internet – won't have any problems getting into your system.

Brute-force password cracking programs, which are used both by crackers and by system administrators wishing to test the strength of employees' passwords, can crack most passwords within a couple of days.

Take, for example, the experience of one large technology company which used @Stake's LC3 password auditing tool to test its password security. Within 10 minutes, 18 percent of the company's passwords had been cracked. Within 48 hours, that figure rose to 90 percent. And this was at a company where employees were required to choose passwords of nine characters or more containing mixed case and including numbers or symbols.

How do you think your password would fare?

The password dilemma

The trouble with passwords is that they need to be cryptic enough that they're not easily cracked, yet memorable enough that our poor human brains can keep them stored safely.

In companies where users are required to change their passwords on a regular basis, most users resort to one of two tactics. The first is to write the password down and keep it somewhere handy but out of sight. The second is to rotate the same few passwords month after month. Both methods are highly insecure.

Unfortunately, with the growth of the Internet, password protection has become an increasingly big issue. Having your computer online makes it more accessible to intruders. At the same time, you probably find yourself having to come up with more and more passwords: One for your PC; one for your Internet Service Provider; one for logging in to your work computer remotely; one for your favorite instant messaging program; one for each shopping site you visit; one for each online banking service you use; one for your brokerage account; innumerable ones for Web sites which require password access. It's not uncommon for computer users to have several dozen logins or passwords.

That makes finding a solution which recognizes both human limits and security needs no easy task.

Good passwords

So what constitutes a good password? Here are some tests you should apply to all your passwords:

  • It should be memorable. If you have to write it down, it's of no use.
  • It shouldn't be easily guessable.
  • It should be at least six characters long. Shorter passwords are far more easily cracked. Some sites limit passwords to four characters. That's okay if the site's purpose is trivial, but be wary of storing any sensitive information on such a site.
  • It should contain a combination of uppercase and lowercase letters, numbers and punctuation marks.
  • It should be unique. Don't use the same password for multiple purposes. In particular, don't mix work and pleasure passwords.
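
If you want a quick, rough way to test a candidate password against most of those criteria, a few lines of code will do it. The sketch below (in Python, purely as an illustration of the checklist above) applies the length, mixed-case, number and punctuation rules; it obviously can't judge memorability, guessability or uniqueness.

  # A rough, illustrative check against the criteria listed above.
  # It can't measure memorability, guessability or uniqueness.
  import string

  def looks_reasonable(password: str) -> bool:
      return (
          len(password) >= 6                                  # at least six characters
          and any(c.islower() for c in password)              # lowercase letters
          and any(c.isupper() for c in password)              # uppercase letters
          and any(c.isdigit() for c in password)              # numbers
          and any(c in string.punctuation for c in password)  # punctuation marks
      )

  print(looks_reasonable("buster1"))   # False: no capitals, no punctuation
  print(looks_reasonable("Xk7!mqsz"))  # True: passes all four mechanical tests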

A practical solution

If you read through that list of good password requirements and think "My brain hurts", never fear. There's a way to meet all those requirements without taxing your synapses too much.

How? By using a password creation technique recommended by the US government's National Infrastructure Protection Center. It's easy to do:

  1. Choose a phrase you will remember.
  2. Choose a date you will remember.
  3. Interlace the date with the first letters in the phrase.

For instance, if your phrase is I wanna be your lover, baby and your date is 25/1/60, interlacing the date and first part of the phrase will give you:

I2w5a1n6n0

Add another level of security by including punctuation. For instance, we could take the punctuation mark from the selected phrase and place it at the end of the password:

I2w5a1n6n0,

To ratchet up the security another notch, modify the password for each site or service you use by adding a distinguishing letter or number for that site. For instance, you might choose to include the third letter – capitalised – of a site's domain name in the password, and make that letter the third last character in the password.

For example, take the password built above, I2w5a1n6n0, (the trailing comma is part of it). Modified for use at Hotmail and for your Internet Service Provider, Bigpond, it becomes:

Hotmail: I2w5a1n6nT0,

Bigpond: I2w5a1n6nG0,

Even though the password itself isn't easy to remember, it's very easily reconstructed. That's the beauty of this method, and you can apply it to all your passwords: Internet passwords, computer log-ons, encryption passwords.
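
For the curious, here's the whole recipe as a small Python sketch. It simply automates the steps above using the article's own phrase, date and site examples; the way it picks the punctuation mark and positions the site letter is my reading of the instructions, so treat it as an illustration rather than a definitive implementation.

  # A minimal sketch of the interlacing recipe described above.
  def interlace(phrase: str, date: str) -> str:
      """Weave the date's digits between the phrase's leading letters,
      then tack the phrase's punctuation mark (if any) onto the end."""
      letters = [c for c in phrase if c.isalpha()]   # the phrase's letters, in order (spaces ignored)
      digits = [c for c in date if c.isdigit()]      # 25/1/60 -> 2, 5, 1, 6, 0
      core = "".join(l + d for l, d in zip(letters, digits))
      punct = next((c for c in phrase if c in ",.!?;:"), "")
      return core + punct

  def for_site(base: str, domain: str) -> str:
      """Insert the capitalised third letter of the domain name as the
      third-last character of the base password."""
      marker = domain[2].upper()
      return base[:-2] + marker + base[-2:]

  base = interlace("I wanna be your lover, baby", "25/1/60")
  print(base)                       # I2w5a1n6n0,
  print(for_site(base, "hotmail"))  # I2w5a1n6nT0,
  print(for_site(base, "bigpond"))  # I2w5a1n6nG0,

Run as-is, it reproduces the Hotmail and Bigpond examples exactly, which is a handy way to convince yourself the method really is mechanical enough to rebuild from memory.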

Just remember: Never reveal your phrase and date choice to anyone else.

Change is good

To take your password security one final step, change your password regularly.

How often is 'regularly'? Most good passwords of this size can probably be cracked within a couple of months, given enough computing power. So you should change your password before those two months expire. Do so more frequently if you feel particularly vulnerable, and do so immediately if you do anything to compromise your chosen phrase and date.

To change it, simply come up with a new memorable phrase and date combo.

Password no-nos

When choosing a password, never:

  • Use a word found in a dictionary (even a foreign language or technical dictionary).
  • Use a dictionary word followed by two numbers.
  • Use a word which contains any sequence of four or more letters which can be found in a dictionary.
  • Use any dictionary word or sequence reversed.
  • Use the names of people (family members, friends, celebrities, and so on), places, pets.
  • Write it down and store it near your computer.
  • Share it with anyone else.
  • Use the same password for more than one account.
  • Use the same password for an extended period of time.
  • Use the default password provided by a site or computer manufacturer.