Adsense Camp

Friday, July 18, 2008

Friendster

Friendster is an Internet social network service. The Friendster site was founded in Mountain View, California, United States by Jonathan Abrams in March 2002[1] and is privately owned. Friendster is based on the Circle of Friends and Web of Friends techniques for networking individuals in virtual communities, and demonstrates the small world phenomenon. It currently has more than 70 million members worldwide[2] and is mostly used in Asia.[3][4] According to Alexa, Friendster ranks as the 2nd most visited website in the Philippines, with the third-party site friendster-layouts.com ranked 16th.[5] It is estimated that nearly 90 percent of internet users in the Philippines have Friendster accounts.[6] David Jones, vice president for global marketing of Friendster, said that "the biggest percentage of users is from the Philippines, clocking in with 39 percent of the site's traffic."[7]

History

Google offered $30 million to buy Friendster in 2003. Friendster, however, refused the offer.

Friendster was funded by Kleiner Perkins Caufield & Byers and Benchmark Capital in October 2003 with a reported valuation of $53 million.

In April 2004, Abrams was removed as Chief Executive Officer and Tim Koogle took over as interim CEO. Koogle previously served as President and CEO at Yahoo!. Koogle was replaced by Scott Sassa in June 2004. Sassa left in May 2005 and was replaced by Taek Kwon. Taek Kwon was succeeded by Kent Lindstrom.

Patent

Based on a June 16, 2003 application, Friendster was awarded a patent in 2006 for a method and apparatus for calculating, displaying and acting upon relationships in a social network. Dubbed the Web of Friends because the method combines the Circle of Friends with the Web of Contacts, the system collects descriptive data about various individuals and allows those individuals to indicate other individuals with whom they have a personal relationship. The descriptive data and the relationship data are integrated and processed to reveal the series of social relationships connecting any two individuals within a social network. The pathways connecting any two individuals can be displayed. Further, the social network itself can be displayed to any number of degrees of separation. A user of the system can determine the optimal relationship path (i.e., contact pathway) to reach desired individuals. A communications tool allows individuals in the system to be introduced (or introduce themselves) and initiate direct communication.
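The pathway computation the patent describes is, at its core, a shortest-path search over the graph of declared relationships. As a rough illustration only (not Friendster's actual implementation), a breadth-first search finds the shortest chain of introductions between two members; the network data below is invented for the example:

```python
from collections import deque

def contact_pathway(friends, start, target):
    """Breadth-first search for the shortest chain of personal
    relationships connecting two individuals -- the patent's
    'optimal relationship path'. Returns None if no path exists."""
    if start == target:
        return [start]
    visited = {start}
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        for friend in friends.get(path[-1], ()):
            if friend == target:
                return path + [friend]
            if friend not in visited:
                visited.add(friend)
                queue.append(path + [friend])
    return None

# Invented example network: who has declared a relationship with whom.
network = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice", "dave"],
    "dave": ["bob", "carol", "erin"],
    "erin": ["dave"],
}

print(contact_pathway(network, "alice", "erin"))  # a 3-hop path
```

Displaying the network "to any number of degrees of separation", as the patent puts it, amounts to bounding the depth of the same traversal.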

On June 27, 2008, Friendster, the 7th largest website in the world and the top social network in Asia, announced that its users in Singapore, Malaysia, Indonesia and the Philippines will soon be able to subscribe to Friendster Text Alerts. When the service launches, users will register for Friendster Text Alerts by entering their mobile number and selecting which text alerts they wish to receive on their Friendster settings page. Users will then receive SMS text messages on their mobile phones for friend requests, new messages, comments, bulletins and more, or when such activity takes place within their network of friends on Friendster. Users will also be able to respond, share and communicate on Friendster by sending a text message to Friendster to update content on their profile, send messages, and reply to friend requests. The service itself is free of charge, but users remain subject to the text messaging fees of their telephone and wireless service provider. No specific launch date has been given for Friendster Text Alerts, but it is understood it could happen in the coming weeks.

In other languages

Friendster's Traditional Chinese, Simplified Chinese, Japanese, Korean, Spanish, Indonesian, Vietnamese, Malay, and Thai (beta) sites exist as part of its main site.[1] A link in the upper right corner toggles between English and the other languages mentioned above.[8]

Personal Computer

A personal computer (PC) is a computer whose original sales price, size, and capabilities make it useful for individuals, and which is intended to be operated directly by an end user, with no intervening computer operator.

Today a PC may be a desktop computer, a laptop computer or a tablet computer. The most common operating systems are Microsoft Windows, Mac OS X and Linux, while the most common microprocessors are x86-compatible CPUs. However, the term "PC" is often used only to refer to computers running Microsoft Windows. Software applications for personal computers include word processing, spreadsheets, databases, games, and a myriad of personal productivity and special-purpose software. Modern personal computers often have high-speed or dial-up connections to the Internet, allowing access to the World Wide Web and a wide range of other resources.

A PC may be a home computer, or may be found in an office, often connected to a local area network. The distinguishing characteristics are that the computer is primarily used, interactively, by one person at a time. This is in contrast to the batch processing or time-sharing models which allowed large expensive systems to be used by many people, usually at the same time, or large data processing systems which required a full-time staff to operate efficiently.

While early PC owners usually had to write their own programs to do anything useful with the machines, today's users have access to a wide range of commercial and free software which is easily installed.

History

IBM 5150 as of 1981


The capabilities of the PC have changed greatly since the introduction of electronic computers. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person. The introduction of the microprocessor, a single chip with all the circuitry that formerly occupied large cabinets, led to the proliferation of personal computers after about 1975. Early personal computers - generally called microcomputers - were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. By the late 1970s, mass-market pre-assembled computers allowed a wider range of people to use computers, focusing more on software applications and less on development of the processor hardware.

Throughout the 1970s and 1980s, home computers were developed for household use, offering personal productivity, programming and games. Somewhat larger and more expensive systems (although still low-cost compared with minicomputers and mainframes) were aimed for office and small business use. Workstations are characterized by high-performance processors and graphics displays, with large local disk storage, networking capability, and running under a multitasking operating system. Workstations are still used for tasks such as computer-aided design, drafting and modelling, computation-intensive scientific and engineering calculations, image processing, architectural modelling, and computer graphics for animation and motion picture visual effects.[1]

Eventually the market segments lost any technical distinction; business computers acquired color graphics capability and sound, and home computers and game systems users used the same computers and operating systems as office workers. Mass-market computers had graphics capabilities and memory comparable to dedicated workstations of a few years before. Even local area networking, originally a way to allow business computers to share expensive mass storage and peripherals, became a standard feature of the personal computers used at home.

Market

In 2001, 125 million personal computers were shipped, compared with 48 thousand in 1977. More than 500 million PCs were in use in 2002, and one billion personal computers had been sold worldwide from the mid-1970s up to that time. Of the latter figure, 75 percent were professional or work related, while the rest were sold for personal or home use. About 81.5 percent of PCs shipped had been desktop computers, 16.4 percent laptops and 2.1 percent servers. The United States had received 38.8 percent (394 million) of the computers shipped, Europe 25 percent, and 11.7 percent had gone to the Asia-Pacific region, the fastest-growing market as of 2002. The second billion was expected to be sold by 2008.[2] Almost half of all households in Western Europe had a personal computer, and a computer could be found in 40 percent of homes in the United Kingdom, compared with only 13 percent in 1985.[3]

As of June 2008, the number of personal computers worldwide in use hit one billion, while another billion is expected to be reached by 2014. Mature markets like the United States, Western Europe and Japan accounted for 58 percent of the worldwide installed PCs. The emerging markets were expected to double their installed PCs by 2013 and to take 70 percent of the second billion PCs. About 180 million PCs (16 percent of the existing installed base) were expected to be replaced and 35 million to be dumped into landfill in 2008. The whole installed base grew 12 percent annually.[4][5]

Types

Desktop computer

Main article: Desktop computer
Dell OptiPlex desktop computer

A desktop computer is an independent personal computer (PC), as opposed to smaller forms of PCs such as a mobile laptop. Prior to the widespread use of PCs, a computer that could fit on a desk was considered remarkably small. Today the phrase usually indicates a particular style of computer case. Desktop computers come in a variety of styles, ranging from large vertical tower cases to small form factor models that can be tucked behind an LCD monitor. In this sense, the term 'desktop' refers specifically to a horizontally oriented case, usually intended to have the display screen placed on top to save space on the desk top. Most modern desktop computers have separate screens and keyboards.

Laptop

Main article: Laptop
A modern mid-range HP Laptop.

A laptop computer, or simply laptop, also called a notebook computer or sometimes a notebook, is a small personal computer designed for mobility. Usually all of the interface hardware needed to operate the laptop, such as parallel and serial ports, graphics card, sound channel, etc., is built into a single unit. Most laptops contain batteries to facilitate operation without a readily available electrical outlet. In the interest of saving power, weight and space, they usually share RAM with the video channel, slowing their performance compared to an equivalent desktop machine.

One main drawback of the laptop is that, due to the size and configuration of components, relatively little can be done to upgrade the overall computer from its original design. Some devices can be attached externally through ports (including via USB), however internal upgrades are not recommended or in some cases impossible, making the desktop PC more modular.

Tablet PC

Main article: Tablet PC
HP Compaq tablet PC with rotating/removable keyboard.

A tablet PC is a notebook or slate-shaped mobile computer, first introduced by Pen Computing in the early 90s with their PenGo Tablet Computer and popularized by Microsoft. Its touchscreen or graphics tablet/screen hybrid technology allows the user to operate the computer with a stylus or digital pen, or a fingertip, instead of a keyboard or mouse. The form factor offers a more mobile way to interact with a computer. Tablet PCs are often used where normal notebooks are impractical or unwieldy, or do not provide the needed functionality.

Ultra-Mobile PC

Main article: Ultra-Mobile PC
Samsung Q1 Ultra-Mobile PC.

The ultra-mobile PC (UMPC) is a specification for a small form factor tablet PC. It was developed as a joint development exercise by Microsoft, Intel, and Samsung, among others. Current UMPCs typically feature the Windows XP Tablet PC Edition 2005, Windows Vista Home Premium Edition, or Linux operating system and low-voltage Intel Pentium or VIA C7-M processors in the 1 GHz range.

Home theater PC

Main article: Home theater PC
Antec Fusion V2 home theater PC with keyboard on top.

A home theater PC (HTPC) is a convergence device that combines the functions of a personal computer and a digital video recorder. It is connected to a television or a television-sized computer display and is often used as a digital photo, music, video player, TV receiver and digital video recorder. Home theater PCs are also referred to as media center systems or media servers. The general goal in a HTPC is usually to combine many or all components of a home theater setup into one box. They can be purchased pre-configured with the required hardware and software needed to add television programming to the PC, or can be cobbled together out of discrete components as is commonly done with Windows Media Center, GB-PVR, SageTV, Famulent or LinuxMCE.

Pocket PC

Main article: Pocket PC
An O2 pocket PC

A pocket PC is a hardware specification for a handheld-sized computer (personal digital assistant) that runs the Microsoft Windows Mobile operating system. It may have the capability to run an alternative operating system like NetBSD or Linux. It has many of the capabilities of modern desktop PCs.

Currently there are thousands of applications for handhelds adhering to the Microsoft Pocket PC specification, many of which are freeware. Some of these devices also include mobile phone features. Microsoft compliant Pocket PCs can also be used with many other add-ons like GPS receivers, barcode readers, RFID readers, and cameras. In 2007, with the advent of Windows Mobile 6, Microsoft dropped the name Pocket PC in favor of a new naming scheme. Devices without an integrated phone are called Windows Mobile Classic instead of Pocket PC. Devices with an integrated phone and a touch screen are called Windows Mobile Professional.[6]

Hardware

Main article: Computer hardware

A typical hardware setup of a desktop computer consists of:

Computer case
Motherboard
Central processing unit (CPU)
Main memory (RAM)
Hard disk
Video card
Visual display unit (monitor)
Keyboard and mouse

These components can usually be put together with little knowledge to build a computer. The motherboard is a main part of a computer that connects all devices together. The memory card(s), graphics card and processor are mounted directly onto the motherboard (the processor in a socket and the memory and graphics cards in expansion slots). The mass storage is connected to it with cables and can be installed in the computer case or in a separate case. This is the same for the keyboard and mouse, except that they are external and connect to the I/O panel on the back of the computer. The monitor is also connected to the I/O panel, either through an onboard port on the motherboard, or a port on the graphics card.

Several functions (implemented by chipsets) can be integrated into the motherboard, typically USB and network, but also graphics and sound. Even if these are present, a separate card can be added if what is available isn't sufficient. The graphics and sound card can have a break out box to keep the analog parts away from the electromagnetic radiation inside the computer case. For really large amounts of data, a tape drive can be used or (extra) hard disks can be put together in an external case.

The hardware capabilities of personal computers can sometimes be extended by the addition of expansion cards connected via an expansion bus. Some standard peripheral buses often used for adding expansion cards in personal computers as of 2005 are PCI, AGP (a high-speed PCI bus dedicated to graphics adapters), and PCI Express. Most personal computers as of 2005 have multiple physical PCI expansion slots. Many also include an AGP bus and expansion slot or a PCI Express bus and one or more expansion slots, but few PCs contain both buses.

Computer case

Main article: Computer case
A stripped ATX case lying on its side.

A computer case is the enclosure that contains the main components of a computer. Cases are usually constructed from steel, aluminium, or plastic, although other materials such as wood, plexiglas or fans[7] have also been used in case designs. Cases can come in many different sizes, or form factors. The size and shape of a computer case is usually determined by the form factor of the motherboard that it is designed to accommodate, since this is the largest and most central component of most computers. Consequently, personal computer form factors typically specify only the internal dimensions and layout of the case. Form factors for rack-mounted and blade servers may include precise external dimensions as well, since these cases must themselves fit in specific enclosures.

Currently, the most popular form factor for desktop computers is ATX, although microATX and small form factors have become very popular for a variety of uses. Companies like Shuttle Inc. and AOpen have popularized small cases, for which FlexATX is the most common motherboard size. Apple Computer has also produced the Mac Mini computer, which is similar in size to a standard CD-ROM drive.

Motherboard

Main article: Motherboard
Asus motherboard

The motherboard, also referred to as systemboard or mainboard, is the primary circuit board within a personal computer. Many other components connect directly or indirectly to the motherboard. Motherboards usually contain one or more CPUs, supporting circuitry - usually integrated circuits (ICs) - providing the interface between the CPU memory and input/output peripheral circuits, main memory, and facilities for initial setup of the computer immediately after power-on (often called boot firmware or, in IBM PC compatible computers, a BIOS). In many portable and embedded personal computers, the motherboard houses nearly all of the PC's core components. Often a motherboard will also contain one or more peripheral buses and physical connectors for expansion purposes. Sometimes a secondary daughter board is connected to the motherboard to provide further expandability or to satisfy space constraints.

Central processing unit

AMD Athlon 64 CPU.

The central processing unit, or CPU, is that part of a computer which executes software program instructions. In older computers this circuitry was formerly on several printed circuit boards, but in PCs is a single integrated circuit. Nearly all PCs contain a type of CPU known as a microprocessor. The microprocessor often plugs into the motherboard using one of many different types of sockets. IBM PC compatible computers use an x86-compatible processor, usually made by Intel, AMD, VIA Technologies or Transmeta. Apple Macintosh computers were initially built with the Motorola 680x0 family of processors, then switched to the PowerPC series (a RISC architecture jointly developed by Apple Computer, IBM and Motorola), but as of 2006, Apple has switched again, this time to x86 compatible processors. Modern CPUs are equipped with a fan attached via heat sink.

Main memory

Main article: Primary storage
1GB DDR SDRAM 400 module

A PC's main memory is fast storage that is directly accessible by the CPU, and is used to store the currently executing program and immediately needed data. PCs use semiconductor random access memory (RAM) of various kinds such as DRAM or SRAM as their primary storage. Which exact kind depends on cost/performance issues at any particular time. Main memory is much faster than mass storage devices like hard disks or optical discs, but is usually volatile, meaning it does not retain its contents (instructions or data) in the absence of power, and is much more expensive for a given capacity than is most mass storage. Main memory is generally not suitable for long-term or archival data storage.

Hard disk

Main article: Hard disk drive
A Western Digital 250 GB hard disk drive.

Mass storage devices store programs and data even when the power is off; they do require power to perform read and write functions during usage. Although semiconductor flash memory has dropped in cost, the prevailing form of mass storage in personal computers is still the electromechanical hard disk.

The disk drives use a sealed head/disk assembly (HDA), first introduced by IBM's "Winchester" disk system. The sealed assembly allows positive air pressure to drive particles away from the surface of the disk, which improves reliability.

If the mass storage controller provides for expandability, a PC may also be upgraded by the addition of extra hard disk or optical disc drives. For example, DVD-ROMs, CD-ROMs, and various optical disc recorders may all be added by the user to certain PCs. Standard internal storage device interfaces are ATA, Serial ATA, SCSI, and CF+ type II in 2005.

Video card

Main article: Video card

The video card - otherwise called a graphics card, graphics adapter or video adapter - processes and renders the graphics output from the computer to the computer display, also called the visual display unit (VDU), and is an essential part of the modern computer. On older models, and today on budget models, graphics circuitry tended to be integrated with the motherboard but, for modern flexible machines, they are supplied in PCI, AGP, or PCI Express format.

When the IBM PC was introduced, many existing personal computers used text-only display adapters and had no graphics capability.

Visual display unit

Main article: Visual display unit
An LG flat-panel LCD monitor.

A visual display unit (also called monitor) is a piece of electrical equipment which displays viewable images generated by a computer without producing a permanent record. The word "monitor" is used in other contexts; in particular in television broadcasting, where a television picture is displayed to a high standard. A computer display device is usually either a cathode ray tube or some form of flat panel such as a TFT LCD. The monitor comprises the display device, circuitry to generate a picture from electronic signals sent by the computer, and an enclosure or case. Within the computer, either as an integral part or a plugged-in interface, there is circuitry to convert internal data to a format compatible with a monitor.

Other components

Mass storage
The operating system (e.g.: Microsoft Windows, Mac OS, Linux or many others) can be located on either of these, but typically it is on one of the hard disks. A Live CD is also possible, but it is very slow and is usually used for installation of the OS, demonstrations, or problem solving.
Computer communications
Common peripherals and adapter cards

Software

Main article: Computer software
A screenshot of the OpenOffice.org Writer software

Computer software is a general term used to describe a collection of computer programs, procedures and documentation that perform some tasks on a computer system.[8] The term includes application software such as word processors which perform productive tasks for users, system software such as operating systems, which interface with hardware to provide the necessary services for application software, and middleware which controls and co-ordinates distributed systems.

Software applications for word processing, Internet browsing, Internet faxing, e-mail and other digital messaging, multimedia playback, computer game play and computer programming are common. The user of a modern personal computer may have significant knowledge of the operating environment and application programs, but is not necessarily interested in programming, nor even able to write programs for the computer. Therefore, most software written primarily for personal computers tends to be designed with simplicity of use, or "user-friendliness", in mind. However, the software industry continuously provides a wide range of new products for use in personal computers, targeted at both the expert and the non-expert user.

Operating system

Main article: Operating system
KDE 4 running on a Linux distribution.

An operating system (OS) manages computer resources and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system. An operating system performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating computer networking and managing files.

Common contemporary desktop OSes are Linux, Mac OS X, Microsoft Windows and Solaris. Mac, Linux, and Windows all have server and personal variants. With the exception of Microsoft Windows, the designs of each of the aforementioned OSs were inspired by, or directly inherited from, the Unix operating system. Unix was developed at Bell Labs beginning in the late 1960s and spawned the development of numerous free and proprietary operating systems.

Palm OS, Windows Mobile, Familiar Linux, The Ångström Distribution and iPhone OS can be found on mobile devices.

Microsoft Windows

Main article: Microsoft Windows
Windows XP operating system

Microsoft Windows is the name of several families of software operating systems by Microsoft. Microsoft first introduced an operating environment named Windows in November 1985 as an add-on to MS-DOS in response to the growing interest in graphical user interfaces (GUIs).[9][10] The most recent client version of Windows is Windows Vista. The current server version of Windows is Windows Server 2008.

Mac OS X

Main article: Mac OS X
Mac OS X desktop

Mac OS X is a line of graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. Mac OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessors, Mac OS X is a Unix-based operating system[11] built on technology developed at NeXT from the second half of the 1980s until early 1997, when Apple purchased the company.

The server edition, Mac OS X Server, is architecturally very similar to its desktop counterpart but usually runs on Apple's line of Macintosh server hardware. It includes workgroup management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others.

Linux

Main article: Linux
GNOME Linux desktop

Linux is a Unix-like computer operating system. Linux is one of the most prominent examples of free software and open source development: typically all underlying source code can be freely modified, used, and redistributed by anyone.[12] The name "Linux" comes from the Linux kernel, started in 1991 by Linus Torvalds. The system's utilities and libraries usually come from the GNU operating system, announced in 1983 by Richard Stallman. The GNU contribution is the basis for the alternative name GNU/Linux.[13]

Predominantly known for its use in servers, Linux is supported by corporations such as Dell, Hewlett-Packard, IBM, Novell, Oracle Corporation, Red Hat, and Sun Microsystems. It is used as an operating system for a wide variety of computer hardware, including desktop computers, supercomputers,[14] video game systems, such as the PlayStation 3, several arcade games, and embedded devices such as mobile phones, routers, and stage lighting systems.

Applications


GIMP raster graphics editor

Application software employs the capabilities of a computer directly and thoroughly to a task that the user wishes to perform. This should be contrasted with system software which is involved in integrating a computer's various capabilities, but typically does not directly apply them in the performance of tasks that benefit the user. In this context the term application refers to both the application software and its implementation. A simple, if imperfect analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system). The power plant merely generates electricity, not itself of any real use until harnessed to an application like the electric light that performs a service that benefits the user.

Typical examples of software applications are word processors, spreadsheets, and media players. Multiple applications bundled together as a package are sometimes referred to as an application suite. Microsoft Office and OpenOffice.org, which bundle together a word processor, a spreadsheet, and several other discrete applications, are typical examples. The separate applications in a suite usually have a user interface that has some commonality making it easier for the user to learn and use each application. And often they may have some capability to interact with each other in ways beneficial to the user. For example, a spreadsheet might be able to be embedded in a word processor document even though it had been created in the separate spreadsheet application.

User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, and graphics and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is.
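An email filter of the kind mentioned above can be only a few lines long. This sketch, with invented rules and folder names, classifies a message the way a mail client's filtering macro might:

```python
def filter_message(subject, sender):
    """Toy user-written email filter: route a message to a folder.
    The rules and folder names are invented for illustration."""
    if "invoice" in subject.lower():
        return "Finance"
    if sender.endswith("@example-newsletter.com"):
        return "Newsletters"
    return "Inbox"

print(filter_message("Your invoice for July", "billing@shop.test"))    # Finance
print(filter_message("Weekly digest", "news@example-newsletter.com"))  # Newsletters
```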

Lifetime

Most personal computers are standardized to the point that purchased software is expected to run with little or no customization for the particular computer. Many PCs are also user-upgradeable, especially desktop and workstation class computers. Devices such as main memory, mass storage, even the motherboard and central processing unit may be easily replaced by an end user. This upgradeability is, however, not indefinite due to rapid changes in the personal computer industry. A PC that was considered top-of-the-line five or six years prior may be impractical to upgrade due to changes in industry standards. Such a computer usually must be totally replaced once it is no longer suitable for its purpose. This upgrade and replacement cycle is partially related to new releases of the primary mass-market operating system, which tends to drive the acquisition of new hardware and render obsolete previously serviceable hardware (planned obsolescence).

Saturday, July 05, 2008

Ultrasonic Calibrate

Calibration of the wall thickness


Referring to the basic equation, the wall thickness calculation depends on the ultrasonic sound velocity, which in turn depends on material properties and temperature conditions.

In measurements during production, the sound velocity, or the "sound time of flight" in the pipe, must be known. For some materials the temperature influence on sound velocity has been researched. It is clear that calibration of the wall thickness measurement is essential first.
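The basic pulse-echo relation behind this is wall thickness = (sound velocity × time of flight) / 2, since the echo crosses the wall twice. A minimal sketch with an assumed velocity value (real values depend on material and temperature, which is precisely why the calibration discussed below matters):

```python
def wall_thickness(velocity_m_s, time_of_flight_s):
    """Pulse-echo wall thickness: the ultrasonic pulse crosses the
    wall twice (out and back), so the one-way distance is v * t / 2."""
    return velocity_m_s * time_of_flight_s / 2.0

# Assumed sound velocity of 2000 m/s; a 5 microsecond round trip
# then corresponds to a 5 mm wall.
print(wall_thickness(2000.0, 5e-6) * 1000, "mm")
```

A velocity error of a few percent, e.g. from a temperature drift, translates directly into the same percentage error in thickness.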

Calibration methods

Three methods are known.

1. Manual
On the cooled pipe, the wall thickness is determined with mechanical measurements; a correction factor is then calculated and the original ultrasonic measurement result recalculated. The manual correction has to be repeated frequently, since the pipe temperature changes and the sound velocity with it.
2. Temperature equation
During the pipe cooling process there is a temperature profile along the pipe axis. This temperature curve can be determined mathematically up to the measurement location, based on physical models of the cooling procedure. This method is rarely applied, since the melt temperature at the starting point is not known exactly and the cooling model does not always match the real cooling devices.
3. Pipe In-Line weight/meter measurements
The safest and simplest method is the calibration with the help of the mass throughput value. A mass throughput measurement unit is a precondition for this function. With the measured "ultrasound time of flight", a relative pipe cross section is determined. The absolute cross section wall thickness is measured by the mass throughput unit. The real sound velocity can be determined and the real wall thickness is calculated.
A graphical display illustrates the principle.
The preconditions for this calculation are that the outer diameter and the material density are known. The calibration procedure is performed by the system automatically and continuously.
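The arithmetic behind method 3 can be sketched as follows. This is a minimal illustration, not any vendor's algorithm; the function name, units, and the assumption of a uniform annular wall section are mine:

```python
import math

def calibrate_from_throughput(mass_kg_h, density_kg_m3, line_speed_m_min,
                              outer_d_m, time_of_flight_s):
    """Derive the true wall thickness and sound velocity from the
    mass throughput (method 3).  A sketch, not a gauge's real interface."""
    # Volume of material extruded per hour, and line speed in metres per hour
    volume_m3_h = mass_kg_h / density_kg_m3
    line_speed_m_h = line_speed_m_min * 60.0
    # Absolute cross-sectional area of the pipe wall (annulus)
    area_m2 = volume_m3_h / line_speed_m_h
    # For an annulus: area = pi * s * (D - s); solve the quadratic for s
    d = outer_d_m
    wall_m = (d - math.sqrt(d * d - 4.0 * area_m2 / math.pi)) / 2.0
    # The echo crosses the wall twice, so the real sound velocity
    # follows directly from the measured time of flight
    sound_velocity_m_s = 2.0 * wall_m / time_of_flight_s
    return wall_m, sound_velocity_m_s
```

With the outer diameter and density known, the system can repeat this continuously; the recovered sound velocity then converts every subsequent time-of-flight reading into a wall thickness.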

One problem can appear with thick-walled pipes: temperature differences around the pipe circumference. These increase during cooling, but can also arise directly behind the die, depending on extruder die conditions.

The reasons for this are wall thickness and temperature differences in the die, different cooling properties at the upper and lower pipe sections, and the adjustment of the cooling sprayers in the tank. Materials such as PE can also create a temperature difference by "sagging". All these conditions are very complex and cannot be predetermined.

To handle such problems, a few suppliers offer a software function called "Segmenting Calibration Correction", based on corrections that are determined manually and entered into the system. Because those temperature conditions are unstable, the manual procedure has to be repeated frequently. This defeats the purpose of automation and has not convinced users to apply the method. To overcome the problem, others have tried to measure close to the die.

Setting method 2 aside, the choice is between the manual and the mass-throughput methods, and the individual case decides which is better. If a mass-throughput unit is not otherwise necessary, or is inaccurate because the material throughput is too small, the user often chooses manual calibration; the installed hardware is then cheaper and more reliable.

Generally, it is the user's accuracy requirements that should drive the decision. The measurement error is typically 0.3% per degree Celsius, so with constant process and pipe temperature conditions, manual calibration can be recommended. The PC's recipe handling supports the manual method by recalling the same calibration during a later machine start-up. This has been practiced by a few extruder line suppliers for a number of years.
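To put the 0.3%/°C figure in perspective, a quick sketch (the function name and sample numbers are illustrative):

```python
def thickness_error_mm(wall_mm, delta_t_c, coeff_per_c=0.003):
    """Error in a manually calibrated wall reading after the pipe
    temperature drifts by delta_t_c degrees.  The 0.3 %/degC
    coefficient is the typical value quoted in the text."""
    return wall_mm * coeff_per_c * delta_t_c

# A 10 mm wall read after a 10 degC drift is off by about 0.3 mm,
# already larger than the 0.02-0.10 mm practical accuracy range,
# which is why the manual correction must be repeated whenever the
# temperature moves.
```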

Users often request only die-centering support as the key function of the system. In that case, absolute accuracy is not required.

Also new is a system that compensates in software for the influence of the cooling water on the calibration. Because of the integrated diameter measurement, the system refers to the water temperature anyway, so no further costs are incurred.

Accuracy statements for in-line ultrasonic measurement systems are meaningful only when all the influences mentioned above are taken into account.

A wall thickness resolution of 1/100 mm is confirmed by most suppliers. The accuracy achieved in practice depends on the individual conditions and covers a range from 0.02 mm to 0.10 mm.

Ultrasonic Flowmeter

Ultrasonic Flowmeter Basics
Doppler and transit-time flowmeters are gaining ground in liquid, and in some instances gas, flow measurement applications. Understanding how they work will help guarantee optimum performance.
Users and designers of flow metering systems can profit by keeping abreast of new developments. The ultrasonic flowmeter, a recent arrival on the scene, has profited from technological advances, especially those in electronic circuitry. For example, fast Fourier transform (FFT) signal processing is being used in one transit-time flowmeter design. And a supplier of Doppler flowmeters credits proprietary software and superior electronics design with opening up new application areas for this well-known technique.

Both types of ultrasonic flowmeters feature clamp-on designs with transducer assemblies that detect flow rate from the outside. Installation entails neither breaks in the line nor interruption of flow. One recommendation is that where practical, the new user experiment with a clamp-on meter to investigate the feasibility of a permanent installation, perhaps with wetted transducers and the requisite changes in piping.

Why Check Out Ultrasonic Types?


Figure 1. A clamp-on design with rail-mounted transducers makes this typical transit-time flowmeter easy to position. The microprocessor-based converter is also shown.
Figure 1 is a typical example of the ultrasonic flowmeters offered by at least 30 suppliers in the U.S. and Canada. Following are some of the capabilities of this particular model.

  • The meter can measure pure water, wash water, sewage, process liquids, oils, and other light homogeneous liquids. The basic requirement is that the fluid be capable of ultrasonic wave propagation and have a reasonably axis-symmetrical flow.
  • Clamp-on types measure flow through the pipe without any wetted parts, ensuring that corrosion and other effects from the fluid will not deteriorate the sensors.
  • A corollary to the above is that clamp-on types simplify and speed up meter installation and minimize maintenance.
  • This design and others are portable, a feature particularly advantageous for backing up an already installed flowmeter or checking out existing meters in a number of locations.
  • Depending on the model, the flowmeters can operate on pipe diameters from 0.5 in. (13 mm) to 20 ft (6 m); fluid temperatures from -40ºF (-40ºC) to 392ºF (200ºC); and flow rates from 1.0 ft/s (0.3 m/s) to 106 ft/s (32 m/s).
  • Measurement accuracy can be in the range of 1% of flow rate, and speed of response can be as fast as 1 s.
  • The handheld, microprocessor-based converter provides a local graphics display and has a keypad for calling up page menus for flow data, trend displays, setting up site parameters, and other requirements.
  • The converter can log data for as many as 20 sites and 40,000 data points. It can also provide a PC interface via RS-232 serial communication, and an output of 4-20 mA DC for operating a digital controller, DCS, PLC, or recorder.
  • As is true of most such meters, operation is linear and bidirectional.
  • The flowmeter has a built-in, rechargeable battery and can operate continuously for five hours.
  • Advanced digital signal processing improves its performance where the flowing fluid contains air or gas bubbles.

Some suppliers offer ultrasonic measurements of both level and flow velocity to calculate flow quantities in open channels with weirs or flumes. Others carry ultrasonic meters especially adapted to measure the flow rate of gases. This class of meter is attractive compared to conventional flow metering methods because, in addition to the points listed above, the meters inherently provide linear calibration; have wide rangeability; induce no pressure drop or disturbance in the flow stream; and may offer the most economical cost of ownership.

Basic Operating Principles
To detect flow through a pipe, ultrasonic flowmeters use acoustic waves or vibrations of a frequency >20 kHz. Depending on the design, they use either wetted or nonwetted transducers on the pipe perimeter to couple ultrasonic energy with the fluid flowing in the pipe.


Figure 2. Doppler ultrasonic flowmeters operate on the Doppler effect, whereby the transmitted frequency is altered linearly by being reflected from particles and bubbles in the fluid. The net result is a frequency shift between transmitter and receiver frequencies that can be directly related to the flow rate.
Doppler Flowmeters. Doppler flowmeters are named for the Austrian physicist and mathematician Christian Johann Doppler (1803-1853), who in 1842 predicted that the frequencies of received sound waves depended on the motion of the source or observer relative to the propagating medium. To use the Doppler effect to measure flow in a pipe, one transducer transmits an ultrasonic beam of ~0.5 MHz into the flow stream (see Figure 2). Liquid flowing through the pipe must contain sonically reflective materials such as solid particles or entrained air bubbles. The movement of these materials alters the frequency of the beam reflected onto a second, receiving transducer. The frequency shift is linearly proportional to the rate of flow of materials in the pipe and therefore can be used to develop an analog or digital signal proportional to flow rate.

The basic equations defining the Doppler flowmeter are:

ΔF = 2 · FT · sin θ · (VF / VS)    (1)

and by Snell's law:

sin θ / VS = sin θT / VT    (2)

Thus, from Equations (1) and (2), we have:

VF = (ΔF · VT) / (2 · FT · sin θT)    (3)

where:

ΔF = Doppler frequency shift (Hz)
FT = transmitted frequency (Hz)
VF = flow velocity
VS = sound velocity in the fluid
VT = sound velocity in the transducer
θ = beam angle in the fluid, measured from the normal to the pipe wall
θT = beam angle in the transducer, measured from the same normal

Equation (3) clearly shows that flow velocity is a linear function of the Doppler frequency shift; note that the sound velocity of the fluid cancels out of the result. Now, because the inside diameter of the pipe, D, is known, volumetric flow rate (e.g., in gallons per minute) can be measured using the following expression:

Q = 2.45 · VF · D²    (4)

where:

Q = volumetric flow rate (gal/min)
VF = flow velocity (ft/s)
D = inside diameter of the pipe (in.)
2.45 = units conversion constant

One Doppler meter design mounts both the transmitting and the receiving transducers in the same case, attached to one side of the pipe. Reflectors in the flowing liquid return the transmitter signals to the receiver, with a frequency shift proportional to the flow velocity, as is the case when the two transducers are mounted separately on opposite sides of the pipe.
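As a numeric sketch of the Doppler relations, assuming the standard textbook form in which the fluid's sound velocity cancels out (the function names and sample values here are mine, not a specific meter's interface):

```python
import math

def doppler_flow_velocity(delta_f_hz, f_transmit_hz, theta_t_deg, v_transducer):
    """Flow velocity from the Doppler shift:
    V = dF * Vt / (2 * Ft * sin(theta_t)).
    theta_t is the beam angle inside the transducer wedge, so the
    result does not depend on the sound velocity of the fluid."""
    return delta_f_hz * v_transducer / (
        2.0 * f_transmit_hz * math.sin(math.radians(theta_t_deg)))

def volumetric_flow_gpm(v_ft_s, d_inches):
    """Q = 2.45 * V * D^2, with V in ft/s, D in inches, Q in gal/min."""
    return 2.45 * v_ft_s * d_inches ** 2
```

With a 0.5 MHz carrier, a 30º wedge angle, and a wedge sound speed of 2000 m/s (illustrative values), a 250 Hz shift corresponds to 1 m/s of flow.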

A portable, clamp-on Doppler meter capable of operating on AC power or from a rechargeable power pack has recently been developed. A set of 4-20 mA DC output terminals permits the unit to be connected to a strip chart recorder or other remote device for readout and/or control.


Figure 3. Transit-time flowmeters measure the difference in travel time between pulses transmitted in a single path along and against the flow. Two transducers are used, one upstream of the other. Each acts as both a transmitter and receiver for the ultrasonic beam.
Transit-Time Flowmeters. Transit-time meters, as the name implies, measure the difference in travel time between pulses transmitted in the direction of, and against, the flow. This type of meter is also called time of flight and time of travel.

In the example shown in Figure 3, the sonic beam is at a 45º angle, with one transducer located upstream of the other. Each transducer alternately transmits and receives bursts of ultrasonic energy; the difference in the transit times in the upstream vs. the downstream directions (TU - TD) measured over the same path can be used to calculate the flow through the pipe:

VF = (L / (2 · cos θ)) · (TU - TD) / (TU · TD)    (5)

where:

VF = liquid flow velocity
L = length of the acoustic path between the two transducers
θ = angle of the acoustic path with respect to the pipe axis (45º in Figure 3)
TU, TD = transit times against and with the flow, respectively

This equation shows that the liquid flow velocity is directly proportional to the measured difference between upstream and downstream transit times. Because the cross-sectional area of the pipe is known, the product of that area and the flow velocity will provide a measure of volumetric flow. Such calculations are easily performed by the microprocessor-based converter. With this type of meter, particles or air bubbles in the flow stream are undesirable because their reflecting qualities interfere with the transmission and receipt of the applied ultrasonic pulses. The liquid, however, must be a reasonable conductor of sonic energy.
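The transit-time computation can be sketched using the common sound-speed-independent form V = L·(TU - TD) / (2·cos θ·TU·TD); the function name and sample geometry below are mine:

```python
import math

def transit_time_velocity(path_len_m, theta_deg, t_up_s, t_down_s):
    """Flow velocity from upstream/downstream transit times.
    Because the product t_up * t_down appears in the denominator,
    the fluid's sound velocity cancels out of the result."""
    return (path_len_m * (t_up_s - t_down_s)
            / (2.0 * math.cos(math.radians(theta_deg)) * t_up_s * t_down_s))

# Example: a 0.2 m path at 45 degrees in water (c ~ 1480 m/s), 2 m/s flow
c, v, L, th = 1480.0, 2.0, 0.2, 45.0
axial = v * math.cos(math.radians(th))
t_down = L / (c + axial)   # pulse travelling with the flow
t_up = L / (c - axial)     # pulse travelling against it
# transit_time_velocity(L, th, t_up, t_down) recovers the 2 m/s flow
# from a time difference of only about a quarter of a microsecond
```

The tiny time difference in the example illustrates why transit-time meters depend so heavily on precise electronics and clean acoustic paths.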


Figure 4. For single-path measurements with the transit-time flowmeter, there are three methods of mounting the two transducers, Z, V, and W. The choice is dictated by installation factors such as size and condition of the pipe-line.
Figure 4 shows three placements that can be used for the two transducers. All are identified as single measuring path because the sonic beam follows a single path, and in all three the two transducers are connected by cable to a converter that can output a 4-20 mA DC signal. The selection of one configuration over another is dictated by several factors associated with the installation, including pipe size, space available for mounting the transducers, condition of the inside pipe walls, type of lining, and nature of the flowing liquid.

The Z configuration places the transducers on opposite sides of the pipe, one downstream of the other. Generally, the distance downstream is ~D/2, where D = pipe diameter. The converter uses specific data on piping parameters to compute the optimum distance. The Z method is recommended for use only in adverse conditions such as where space is limited, the fluid has high turbidity (e.g., sewage), there is a mortar lining, and when the pipe is old and a thick scale has built up on the inside wall that tends to weaken the received signals. It is not recommended for smaller pipes, where its measuring accuracy tends to degrade.

In most installations, the V method is recommended, with the two transducers on the same side of the pipe about a pipe diameter apart. The rail attachment that can be clamped on the pipe facilitates sliding the transducers horizontally along the pipe and positioning them the calculated distance apart.

The W method should be considered on pipe 1½ in. down to ½ in. dia. Its main limitation is a possible deterioration in accuracy due to buildup of scale or deposits on the pipe wall; note that the sonic signal must bounce off the wall three times. Turbidity of the liquid also could be harmful since the signal has a longer distance to travel.

Open-Channel Flowmetering. Ultrasonic flowmeters have been used successfully for certain open-channel flow measurements, in conjunction with weirs or flumes downstream. The transducer is installed above the channel, beaming pulses down on the surface of liquid in the channel. The pulses are reflected back to the transducer and the travel time can be related to the height of the liquid in the channel. Essentially, this is an application of an ultrasonic level detector. By relating the channel level with the flow velocity at the weir or flume, the metering system can provide a volumetric measure of flow.
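The level-to-flow conversion at the weir can be sketched with, for example, the Francis formula for a rectangular weir; the formula choice and coefficient are illustrative, and a real installation would use the rating of its specific weir or flume:

```python
def rectangular_weir_flow_cfs(crest_width_ft, head_ft):
    """Francis formula: Q = 3.33 * b * H^1.5 (Q in ft^3/s, b and H in ft).
    H is the liquid level above the weir crest, which is exactly what
    the downward-looking ultrasonic level sensor measures."""
    return 3.33 * crest_width_ft * head_ft ** 1.5
```

So a 2 ft wide weir running 6 in. over the crest passes roughly 2.35 ft³/s.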

Application Notes
It is essential to carefully follow the manufacturer's operating instructions. Early problems with ultrasonic flowmeters were perhaps due, at least in part, to the users' not understanding the importance of certain fundamentals such as proper mounting of the transducers on the pipe. The acoustic coupling to the pipe and the relative alignment of the transducers must be retained despite events such as a large change in pipe temperature or unusual vibration.

For both Doppler and transit-time flowmeters to indicate true volumetric flow rate, the pipe must always be full. A Doppler meter on a partially full pipe, however, will continue to indicate flow velocity as long as the transducers are both mounted below the liquid level in the pipe.

Most manufacturers specify the minimum distance that the meter must be from valves, tees, elbows, pumps, and the like, both upstream and downstream. This is usually expressed in pipe diameters and typically should be 10-20 diameters upstream and 5 diameters downstream.

Transit-time meters rely on an ultrasonic signal's completely traversing the pipe, so the path must be relatively free of solids and air or gas bubbles. Bubbles in particular tend to attenuate the acoustic signals, a problem that has been addressed in the Fuji Portaflow X shown in Figure 1. The unit's electronic circuitry uses a proprietary Fourier transform technique to provide what is termed an advanced antibubble measurement.

Doppler meters, on the other hand, rely on reflectors in the flowing liquid. To obtain reliable measurements, therefore, attention must be given to the lower limits for concentrations and sizes of solids or bubbles. The flow must also be rapid enough to keep these materials in suspension. One manufacturer gives as typical the values of 6 ft/s (1.8 m/s) for solids and 2.5 ft/s (0.75 m/s) for small bubbles.

Over the past few years, some suppliers of Doppler meters have introduced models that operate at frequencies >1 MHz. The claim for such units is that they will operate on virtually clean liquids because reflections will occur off the swirls and eddies of the flowing liquid. A cautionary note has been sounded, however, advising prospective users to limit the technique to low concentrations of bubbles and particles.

Because in the operation of ultrasonic flowmeters the energy for measurement passes through only part of the measured liquid, Reynolds number, which can be thought of as the ratio between the inertial forces and the viscous forces in a flowing stream, affects the performance of the meter. For example, to perform within their stated specifications, some Doppler meters and a type of transit-time meter require minimum Reynolds numbers of 4000 and 10,000, respectively. Here again, for such limitations the manufacturer's instruction should guide the user.
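A quick Reynolds number check against those minimums; the 4000 and 10,000 thresholds come from the text above, while the fluid properties are example values:

```python
def reynolds_number(density_kg_m3, velocity_m_s, pipe_d_m, viscosity_pa_s):
    """Re = rho * V * D / mu, the ratio of inertial to viscous forces."""
    return density_kg_m3 * velocity_m_s * pipe_d_m / viscosity_pa_s

# Water (rho ~ 1000 kg/m^3, mu ~ 0.001 Pa.s) at 1 m/s in a 100 mm pipe
# gives Re = 100,000, comfortably above both the 4000 and 10,000 minimums.
```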

Clamp-on meters typically require that the thickness of the pipe wall be relatively small in relation to the distance the ultrasonic energy must pass through the measured liquid. As a general rule, the ratio of pipe diameter to wall thickness should be >10:1; i.e., a 10 in. pipe should not have a wall thickness >1 in.

When it comes to the stated accuracies of ultrasonic flowmeters, there are still not many independent test data to confirm or refute the claims made by various manufacturers. As the use of these meters becomes more widespread, one can hope that the availability of supporting data will one day equal that for orifice meters, which are backed by a wealth of test data and standards.

Both types of ultrasonic meters are finding new applications. One market research organization has determined that transit-time meter applications are increasing at a faster rate than are Dopplers. At present, the installations are split about 60/40 in favor of transit types. Developments in technology, however, can greatly affect this picture and only time will tell.


For Further Reading
Considine, D.M. 1993. "Ultrasonic Flowmeters," Process/Industrial Instruments & Controls Handbook, 4th Ed., McGraw-Hill, New York, NY:4.115-4.119.

The Flow and Liquid Level Handbook, Vol. 29. 1995. "Ultrasonic Doppler Flowmeters," Omega Engineering, Inc., Stamford, CT.

Liptak, Bela G. 1995. Instrument Engineers Handbook, 3rd Ed., Vol 1, Process Measurement and Analysis, Chilton Book Co. (now available from CRC Press LLC, Tampa, FL):26-232.

Spitzer, David W. 1990. Industrial Flow Measurement, "Ultrasonic Flowmeters," Instrument Society of America, Research Triangle Park, NC.

Measurement & Control. Oct 1996. "Ultrasonic Flowmeter."