The dumbest poll I have ever seen, I cracked up laughing when I saw it

10 01 2011

Here it is, if you don't believe me

Look at it carefully, is anything wrong? hahahahaha

Here's the result

hahahahahaha





Gate array

30 11 2010

A gate array or uncommitted logic array (ULA) is an approach to the design and manufacture of application-specific integrated circuits (ASICs). A gate array circuit is a prefabricated silicon chip circuit with no particular function in which transistors, standard NAND or NOR logic gates, and other active devices are placed at regular predefined positions and manufactured on a wafer, usually called a master slice. Creation of a circuit with a specified function is accomplished by adding a final surface layer or layers of metal interconnects to the chips on the master slice late in the manufacturing process, joining these elements to allow the function of the chip to be customised as desired. This layer is analogous to the copper layer(s) of a printed circuit board (PCB).

Gate array master slices are usually prefabricated and stockpiled in large quantities regardless of customer orders. The design and fabrication according to the individual customer specifications may be finished in a shorter time compared with standard cell or full-custom design. The gate array approach reduces the mask costs, since fewer custom masks need to be produced. In addition, manufacturing test tooling lead time and costs are reduced, since the same test fixtures may be used for all gate array products manufactured on the same die size. Gate arrays were the predecessor of the more advanced structured ASIC; unlike gate arrays, structured ASICs tend to include predefined or configurable memories and/or analog blocks. Structured ASICs are still sold by companies such as ChipX, Inc.

An application circuit must be built on a gate array that has enough gates, wiring and I/O pins. Since requirements vary, gate arrays usually come in families, with larger members having more of all resources, but at correspondingly higher cost. While the designer can fairly easily count how many gates and I/O pins are needed, the number of routing tracks needed may vary considerably even among designs with the same amount of logic. (For example, a crossbar switch requires much more routing than a systolic array with the same gate count.) Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, gate array manufacturers try to provide just enough tracks so that most designs that will fit in terms of gates and I/O pins can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs.
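
As a rough illustration of how such an estimate works, Rent's rule relates the number of external terminals to the number of internal gates. The parameter values below are illustrative assumptions, not figures for any particular gate array family:

    T = t \cdot g^{p}

Here T is the number of I/O terminals needed, g the number of gates, t the average number of terminals per gate (say t = 4), and p the Rent exponent (say p = 0.6; typical values fall roughly between 0.5 and 0.75). For g = 10,000 gates this gives

    T = 4 \cdot 10000^{0.6} \approx 4 \cdot 251 \approx 1000

so on the order of a thousand pins, which is the kind of figure a manufacturer would weigh when fixing the pad count of a master slice.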

The main drawbacks of gate arrays are their somewhat lower density and performance compared with other approaches to ASIC design. However this style is often a viable approach for low production volumes.

Sinclair Research ported an enhanced ZX80 design to a ULA chip for the ZX81, and later used a ULA in the ZX Spectrum. A compatible chip was made in the USSR as the T34VG1. Acorn Computers used several ULA chips in the BBC Micro, and later managed to compress almost all of that machine's logic into a single ULA for the Acorn Electron. Many other manufacturers from the home computer boom period used ULAs in their machines. Ferranti in the UK pioneered ULA technology, then later abandoned this lead in semi-custom chips. The IBM PC took over much of the personal computer market, and its sales volumes made full-custom chips more economical.

Designers still wished for a way to create their own complex chips without the expense of full-custom design, and eventually this wish was granted with the arrival of the field-programmable gate array (FPGA), complex programmable logic device (CPLD), and structured ASIC. Whereas a ULA required a semiconductor wafer foundry to deposit and etch the interconnections, the FPGA and CPLD had programmable interconnections.

 





Field Programmable Gate Array (continued)

30 11 2010

An Altera Stratix IV GX FPGA

An example of an FPGA programming/evaluation board

A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by the customer or designer after manufacturing—hence “field-programmable”. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC) (circuit diagrams were previously used to specify the configuration, as they were for ASICs, but this is increasingly rare). FPGAs can be used to implement any logical function that an ASIC could perform. The ability to update the functionality after shipping, partial reconfiguration of a portion of the design, and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost) offer advantages for many applications.

FPGAs contain programmable logic components called “logic blocks”, and a hierarchy of reconfigurable interconnects that allow the blocks to be “wired together”—somewhat like a one-chip programmable breadboard. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, the logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.
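
To make the LUT idea concrete, here is a minimal C++ sketch (assuming nothing about any particular vendor's cell): a 4-input LUT is just a 16-entry truth table addressed by the input bits, and the two masks below are the standard truth tables for 4-input AND and 4-input XOR.

    #include <cstdint>
    #include <iostream>

    // A 4-input LUT is a 16-entry truth table: bit i of `mask` holds the
    // output for input combination i (inputs d c b a packed into a 4-bit index).
    bool lut4(uint16_t mask, bool a, bool b, bool c, bool d) {
        unsigned index = (d << 3) | (c << 2) | (b << 1) | unsigned(a);
        return (mask >> index) & 1u;
    }

    int main() {
        const uint16_t AND4 = 0x8000; // 1 only when index == 15 (all inputs high)
        const uint16_t XOR4 = 0x6996; // classic 4-input parity pattern
        std::cout << lut4(AND4, 1, 1, 1, 1) << '\n'; // prints 1
        std::cout << lut4(XOR4, 1, 0, 1, 0) << '\n'; // prints 0 (even parity)
    }

Reconfiguring the block is then just rewriting the mask bits, which is essentially what loading an FPGA configuration does on a much larger scale.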

In addition to digital functions, some FPGAs have analog features. The most common analog feature is programmable slew rate and drive strength on each output pin, allowing the engineer to set slow rates on lightly loaded pins that would otherwise ring unacceptably, and to set stronger, faster rates on heavily loaded pins on high-speed channels that would otherwise run too slow. Another relatively common analog feature is differential comparators on input pins designed to be connected to differential signaling channels. A few “mixed signal FPGAs” have integrated peripheral Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs) with analog signal conditioning blocks allowing them to operate as a system on a chip. Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.

History

The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable); however, the programmable logic was hard-wired between logic gates.

In the late 1980s the Naval Surface Warfare Department funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992.

Some of the industry’s foundational concepts and technologies for programmable logic arrays, gates, and logic blocks are founded in patents awarded to David W. Page and LuVerne R. Peterson in 1985.

Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field programmable gate array in 1985: the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 boasted a mere 64 configurable logic blocks (CLBs), each with two 3-input lookup tables (LUTs). More than 20 years later, Freeman was inducted into the National Inventors Hall of Fame for his invention.

Xilinx grew quickly and largely unchallenged from 1985 to the mid-1990s, when competitors sprouted up and eroded significant market share. By 1993, Actel was serving about 18 percent of the market.

The 1990s were an explosive period of time for FPGAs, both in sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.

FPGAs got a glimpse of fame in 1997, when Adrian Thompson, a researcher working at the University of Sussex, merged genetic algorithm technology and FPGAs to create a sound recognition device. Thompson's algorithm configured an array of 10 x 10 cells in a Xilinx FPGA chip to discriminate between two tones, utilising analogue features of the digital chip. The application of genetic algorithms to the configuration of devices like FPGAs is now referred to as evolvable hardware.

Modern developments

A recent trend has been to take the coarse-grained architectural approach a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete “system on a programmable chip”. This work mirrors the architecture by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group which combined a reconfigurable CPU architecture on a single chip called the SB24. That work was done in 1982. Examples of such hybrid technologies can be found in the Xilinx Virtex-II PRO and Virtex-4 devices, which include one or more PowerPC processors embedded within the FPGA's logic fabric. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Actel SmartFusion devices incorporate an ARM architecture Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as a multi-channel ADC and DACs to their flash-based FPGA fabric.

An alternate approach to using hard-macro processors is to make use of soft processor cores that are implemented within the FPGA logic.

As previously mentioned, many modern FPGAs can be reprogrammed at “run time,” and this is leading to the idea of reconfigurable computing or reconfigurable systems: CPUs that reconfigure themselves to suit the task at hand. The Mitrion Virtual Processor from Mitrionics is an example of a reconfigurable soft processor implemented on FPGAs. It does not support dynamic reconfiguration at runtime, however, but instead adapts itself to a specific program.

Additionally, new, non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip.

Gates

  • 1987: 9,000 gates, Xilinx
  • 1992: 600,000, Naval Surface Warfare Department
  • Early 2000s: Millions

Market size

  • 1985: First commercial FPGA technology invented by Xilinx
  • 1987: $14 million
  • ~1993: >$385 million
  • 2005: $1.9 billion
  • 2010 estimates: $2.75 billion

FPGA design starts

  • 10,000
  • 2005: 80,000
  • 2008: 90,000

FPGA comparisons

Historically, FPGAs have been slower, less energy efficient and generally achieved less functionality than their fixed ASIC counterparts. A study has shown that designs implemented on FPGAs need on average 18 times as much area, draw 7 times as much dynamic power, and are 3 times slower than the corresponding ASIC implementations.

An Altera Cyclone II FPGA on a Terasic DE1 prototyping board.

Advantages include the ability to re-program in the field to fix bugs, and may include a shorter time to market and lower non-recurring engineering costs. Vendors can also take a middle road by developing their hardware on ordinary FPGAs, but manufacturing their final version so it can no longer be modified after the design has been committed.

Xilinx claims that several market and technology dynamics are changing the ASIC/FPGA paradigm:

  • Integrated circuit costs are rising aggressively
  • ASIC complexity has lengthened development time
  • R&D resources and headcount are decreasing
  • Revenue losses for slow time-to-market are increasing
  • Financial constraints in a poor economy are driving low-cost technologies

These trends make FPGAs a better alternative than ASICs for a larger number of higher-volume applications than they have been historically used for, to which the company attributes the growing number of FPGA design starts (see History).

Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running.

Versus complex programmable logic devices

The primary differences between CPLDs and FPGAs are architectural. A CPLD has a somewhat restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers. The result of this is less flexibility, with the advantage of more predictable timing delays and a higher logic-to-interconnect ratio. The FPGA architectures, on the other hand, are dominated by interconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation within them) but also far more complex to design for.
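
To make the structural contrast concrete, a CPLD macrocell directly realizes a small sum-of-products function feeding an optional register. The function below is purely illustrative, not taken from any device:

    f = a\bar{b} + bc + \bar{a}\bar{c}, \qquad q_{n+1} = f \quad \text{(captured on the clock edge)}

An FPGA would build the same function from LUTs and routed interconnect instead of fixed AND/OR planes, which is where both the extra flexibility and the less predictable timing come from.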

Another notable difference between CPLDs and FPGAs is the presence in most FPGAs of higher-level embedded functions (such as adders and multipliers) and embedded memories, as well as logic blocks that can implement decoders or mathematical functions.

Security considerations

With respect to security, FPGAs have both advantages and disadvantages compared to ASICs or secure microprocessors. FPGAs' flexibility makes malicious modification during fabrication a lower risk. For many FPGAs, the loaded design is exposed while it is being loaded (typically on every power-on). To address this issue, some FPGAs support bitstream encryption.

Applications

Applications of FPGAs include digital signal processing, software-defined radio, aerospace and defense systems, ASIC prototyping, medical imaging, computer vision, speech recognition, cryptography, bioinformatics, computer hardware emulation, radio astronomy, metal detection and a growing range of other areas.

FPGAs originally began as competitors to CPLDs and competed in a similar space, that of glue logic for PCBs. As their size, capabilities, and speed increased, they began to take over larger and larger functions, to the point where some are now marketed as full systems on chips (SoC). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications that had traditionally been the sole reserve of DSPs began to incorporate FPGAs instead.

FPGAs especially find applications in any area or algorithm that can make use of the massive parallelism offered by their architecture. One such area is code breaking, in particular brute-force attack, of cryptographic algorithms.

FPGAs are increasingly used in conventional high performance computing applications where computational kernels such as FFT or convolution are performed on the FPGA instead of a microprocessor.

The inherent parallelism of the logic resources on an FPGA allows for considerable computational throughput even at low clock rates. The flexibility of the FPGA allows for even higher performance by trading off precision and range in the number format for an increased number of parallel arithmetic units. This has driven a new type of processing called reconfigurable computing, where time-intensive tasks are offloaded from software to FPGAs.
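
As a back-of-the-envelope example of that trade-off (all numbers assumed purely for illustration):

    \text{throughput} = N_{\text{units}} \cdot f_{\text{clock}}, \qquad 64 \cdot 100\,\text{MHz} = 6.4\ \text{GMAC/s}

Halving the operand precision so that 128 multiply-accumulate units fit in the same fabric doubles this to 12.8 GMAC/s at the same modest clock rate.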

The adoption of FPGAs in high performance computing is currently limited by the complexity of FPGA design compared to conventional software and the turn-around times of current design tools.

Traditionally, FPGAs have been reserved for specific vertical applications where the volume of production is small. For these low-volume applications, the premium that companies pay in hardware costs per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC for a low-volume application. Today, new cost and performance dynamics have broadened the range of viable applications.

Architecture

The most common FPGA architecture[26] consists of an array of logic blocks (called Configurable Logic Block, CLB, or Logic Array Block, LAB, depending on vendor), I/O pads, and routing channels. Generally, all the routing channels have the same width (number of wires). Multiple I/O pads may fit into the height of one row or the width of one column in the array.

An application circuit must be mapped into an FPGA with adequate resources. While the number of CLBs/LABs and I/Os required is easily determined from the design, the number of routing tracks needed may vary considerably even among designs with the same amount of logic. For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, FPGA manufacturers try to provide just enough tracks so that most designs that will fit in terms of LUTs and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs.

In general, a logic block (CLB or LAB) consists of a few logical cells (called ALMs, LEs, Slices, etc.). A typical cell consists of a 4-input lookup table (LUT), a full adder (FA) and a D-type flip-flop, as shown below. In this figure, the LUT is split into two 3-input LUTs. In normal mode those are combined into a 4-input LUT through the left mux. In arithmetic mode, their outputs are fed to the FA. The mode selection is programmed into the middle mux. The output can be either synchronous or asynchronous, depending on the programming of the mux to the right in the figure example. In practice, all or part of the FA is folded into the LUTs as functions in order to save space.

Simplified example illustration of a logic cell
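
The following C++ sketch models the behavior of the cell just described. The structure and field names are assumptions for illustration; real vendor cells differ in detail:

    #include <cstdint>

    // Behavioral sketch of the logic cell above: two 3-input LUTs that either
    // combine into one 4-input LUT (normal mode) or feed a full adder
    // (arithmetic mode), followed by an optional D flip-flop.
    struct LogicCell {
        uint8_t lut_lo = 0, lut_hi = 0; // two 3-input LUT truth tables (8 bits each)
        bool arithmetic_mode = false;   // middle mux: LUT result vs. adder sum
        bool registered = true;         // right mux: synchronous vs. combinational
        bool dff = false;               // state of the D flip-flop

        static bool lut3(uint8_t mask, bool a, bool b, bool c) {
            return (mask >> ((c << 2) | (b << 1) | unsigned(a))) & 1u;
        }

        // Combinational result for inputs a..d and carry-in; carry-out via cout.
        bool eval(bool a, bool b, bool c, bool d, bool cin, bool& cout) const {
            bool lo = lut3(lut_lo, a, b, c);
            bool hi = lut3(lut_hi, a, b, c);
            if (arithmetic_mode) {           // full adder on the two LUT outputs
                cout = (lo & hi) | (cin & (lo ^ hi));
                return lo ^ hi ^ cin;
            }
            cout = false;
            return d ? hi : lo;              // left mux: input d picks the LUT half
        }

        // On a clock edge the flip-flop captures the combinational value;
        // the cell output is either the registered or the raw value.
        bool clock(bool a, bool b, bool c, bool d, bool cin, bool& cout) {
            bool comb = eval(a, b, c, d, cin, cout);
            dff = comb;
            return registered ? dff : comb;
        }
    };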

ALMs and Slices usually contain 2 or 4 structures similar to the example figure, with some shared signals.

CLBs/LABs typically contain a few ALMs/LEs/Slices.

In recent years, manufacturers have started moving to 6-input LUTs in their high performance parts, claiming increased performance.

Since clock signals (and often other high-fanout signals) are normally routed via special-purpose dedicated routing networks in commercial FPGAs, they are managed separately from the general-purpose routing.

For this example architecture, the locations of the FPGA logic block pins are shown below.

Logic Block Pin Locations

Each input is accessible from one side of the logic block, while the output pin can connect to routing wires in both the channel to the right and the channel below the logic block.

Each logic block output pin can connect to any of the wiring segments in the channels adjacent to it.

Similarly, an I/O pad can connect to any one of the wiring segments in the channel adjacent to it. For example, an I/O pad at the top of the chip can connect to any of the W wires (where W is the channel width) in the horizontal channel immediately below it.

Generally, the FPGA routing is unsegmented. That is, each wiring segment spans only one logic block before it terminates in a switch box. By turning on some of the programmable switches within a switch box, longer paths can be constructed. For higher speed interconnect, some FPGA architectures use longer routing lines that span multiple logic blocks.

Whenever a vertical and a horizontal channel intersect, there is a switch box. In this architecture, when a wire enters a switch box, there are three programmable switches that allow it to connect to three other wires in adjacent channel segments. The pattern, or topology, of switches used in this architecture is the planar or domain-based switch box topology. In this switch box topology, a wire in track number one connects only to wires in track number one in adjacent channel segments, wires in track number 2 connect only to other wires in track number 2 and so on. The figure below illustrates the connections in a switch box.

Switch box topology
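
A short C++ sketch of the planar rule described above (the representation is assumed for illustration; real switch boxes vary by architecture):

    #include <vector>

    // One side of a switch box ('N','E','S','W') and a track index there.
    struct Conn { char side; int track; };

    // Planar/domain-based topology: a wire entering on `side` at track `t`
    // has three programmable switches, one to track t on each other side.
    std::vector<Conn> planar_switchbox(char side, int t) {
        std::vector<Conn> out;
        for (char s : {'N', 'E', 'S', 'W'})
            if (s != side)
                out.push_back(Conn{s, t}); // same track number, different side
        return out;
    }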

Modern FPGA families expand upon the above capabilities to include higher level functionality fixed into the silicon. Having these common functions embedded into the silicon reduces the area required and gives those functions increased speed compared to building them from primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high speed IO logic and embedded memories.

FPGAs are also widely used for systems validation including pre-silicon validation, post-silicon validation, and firmware development. This allows chip companies to validate their design before the chip is produced in the factory, reducing the time-to-market.

FPGA design and programming

To define the behavior of the FPGA, the user provides a hardware description language (HDL) or a schematic design. The HDL form is more suited to work with large structures because it’s possible to just specify them numerically rather than having to draw every piece by hand. However, schematic entry can allow for easier visualisation of a design.

Then, using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fitted to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA company’s proprietary place-and-route software. The user will validate the map, place and route results via timing analysis, simulation, and other verification methodologies. Once the design and validation process is complete, the binary file generated (also using the FPGA company’s proprietary software) is used to (re)configure the FPGA.

Going from schematic/HDL source files to actual configuration: the source files are fed to a software suite from the FPGA/CPLD vendor that, through several steps, produces a configuration file. This file is then transferred to the FPGA/CPLD via a serial interface (JTAG) or written to an external memory device such as an EEPROM.

The most common HDLs are VHDL and Verilog, although, in an attempt to reduce the complexity of designing in HDLs (which have been compared to assembly languages), there are moves to raise the abstraction level through the introduction of alternative languages. National Instruments' LabVIEW graphical programming language (sometimes referred to as “G”) has an FPGA add-in module available to target and program FPGA hardware. The LabVIEW approach is claimed to drastically simplify the FPGA programming process.

To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called IP cores, and are available from FPGA vendors and third-party IP suppliers (rarely free, and typically released under proprietary licenses). Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license), and other sources.

In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to exercise the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description, where simulation is repeated to confirm the synthesis proceeded without errors. Finally the design is laid out in the FPGA, at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.

Basic process technology types

  • SRAM – based on static memory technology. In-system programmable and re-programmable. Requires external boot devices. CMOS.
  • Antifuse – One-time programmable. CMOS.
  • PROM – Programmable Read-Only Memory technology. One-time programmable because of plastic packaging.
  • EPROM – Erasable Programmable Read-Only Memory technology. One-time programmable but with window, can be erased with ultraviolet (UV) light. CMOS.
  • EEPROM – Electrically Erasable Programmable Read-Only Memory technology. Can be erased, even in plastic packages. Some but not all EEPROM devices can be in-system programmed. CMOS.
  • Flash – Flash-erase EPROM technology. Can be erased, even in plastic packages. Some but not all flash devices can be in-system programmed. Usually, a flash cell is smaller than an equivalent EEPROM cell and is therefore less expensive to manufacture. CMOS.
  • Fuse – One-time programmable. Bipolar.

Major manufacturers

Xilinx and Altera are the current FPGA market leaders and long-time industry rivals. Together, they control over 80 percent of the market, with Xilinx alone representing over 50 percent.

Both Xilinx and Altera provide free Windows and Linux design software.

Other competitors include Lattice Semiconductor (SRAM-based with integrated configuration flash, instant-on, low power, live reconfiguration), Actel (antifuse, flash-based, mixed-signal), SiliconBlue Technologies (extremely low power SRAM-based FPGAs with optional integrated nonvolatile configuration memory), Achronix (SRAM-based, 1.5 GHz fabric speed), which will be building its chips on Intel's state-of-the-art 22 nm process, and QuickLogic (handheld-focused CSSPs, no general purpose FPGAs).

In March 2010, Tabula announced their new FPGA technology that uses time-multiplexed logic and interconnect for greater potential cost savings for high-density applications.





Installing and Configuring C/C++ Support in NetBeans

29 11 2010

Downloading C/C++ Support

 

If you do not have the NetBeans 6.0 IDE, go to the NetBeans IDE 6.0 Download Page, and download a version of the IDE that contains C/C++ support. If you have a NetBeans IDE 6.0 installation that does not include C/C++ support, complete the following steps to add C/C++ support to the IDE.

  1. If your network uses a proxy, choose Tools > Options from the main menu, select Manual Proxy Settings, enter the HTTP Proxy and Port for your proxy, and click OK.
  2. Choose Tools > Plugins from the main menu.
  3. In the Plugins dialog box, click the Available Plugins tab, and scroll to the C/C++ category.
  4. Select the C/C++ checkbox.
  5. Click Install.

    The NetBeans IDE Installer starts.

  6. In the NetBeans IDE Installer, click Next.
  7. Read the license agreement, then select the I Accept the Terms in All License Agreements radio button, and click Next.
  8. Click Install.
  9. After the installation completes, select either Restart IDE Now or Restart IDE Later and click Finish.

Installing and Setting Up the Compilers and Tools

NetBeans C/C++ pack requires a C compiler, a C++ compiler, a make utility, and the gdb debugger.

Windows

NetBeans C/C++ pack has been tested with the following compilers and tools:

  • Cygwin 1.5.21
  • Cygwin gcc-core (C compiler) 3.4.x
  • Cygwin gcc-c++ (C++ compiler) 3.4.x
  • Cygwin gdb (GNU Debugger) 6.5.50
  • Cygwin make 3.80

If you already have the Cygwin GNU compilers, GNU make, and gdb debugger installed on your Windows system and your path is set up correctly to find them, make sure that you have the correct versions. If you have the correct versions, then no further setup is necessary.

To check the versions of your Cygwin compilers and tools:

  1. Check the version of Cygwin by typing:
    cygcheck -c cygwin
  2. Check the versions of the compilers, make, and gdb by typing:
    gcc --version
    g++ --version
    make --version
    gdb --version

To install the GNU compilers, make, and gdb debugger from cygwin.com:

  1. Download the Cygwin setup.exe program by clicking the Install or Update Now! icon in the middle of the page.
  2. Run the setup.exe program. Accept the defaults until you reach the Select Your Internet Connection page. Select the option on this page that is best for you. Click Next.
  3. On the Choose Download Site page, choose a download site you think might be relatively close to you. Click Next.
  4. On the Select Packages page you select the packages to download. Click the + next to Devel to expand the development tools category. You may want to resize the window so you can see more of it at one time.
  5. Select each package you want to download by clicking the Skip label next to it. At a minimum, select gcc-core: C compiler, gcc-g++: C++ compiler, gdb: The GNU Debugger, and make: the GNU version of the ‘make’ utility.
  6. Now add the Compiler directory to your path:
    1. Open the Control Panel (Start > Settings > Control Panel) and double-click the System program.
    2. Select the Advanced tab and click Environment Variables.
    3. In the System Variables panel of the Environment Variables dialog, select thePath variable and click Edit.
    4. Add the path to the cygwin-directory\bin directory to the Path variable, and click OK. By default, cygwin-directory is C:\cygwin. Directory names must be separated with a semicolon (see the example after these steps).
    5. Click OK in the Environment Variables dialog and the System Properties dialog.
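
For example, with the default install location, the end of the edited Path value might look like this (everything before the final semicolon is whatever was already there):

    ...;C:\cygwin\bin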

Solaris OS

NetBeans C/C++ pack has been tested with the following compilers and tools:

  • Sun Studio 12 C compiler 5.9 and gcc 3.4.3
  • Sun Studio 12 C++ compiler 5.9 and g++ 3.4.3
  • gdb (GNU debugger) 6.2.1
  • Solaris make and gmake 3.80

Sun Studio 12 Compilers

If you want to use the Sun Studio 12 compilers:

  • If you have Sun Studio 12 software installed, ensure that /installation directory/SUNWspro/bin is in your path before you start the NetBeans IDE.
  • If you do not have Sun Studio 12 software installed, you can download it free at http://developers.sun.com/sunstudio/downloads/.

 

To download and install the Sun Studio 12 compilers:

  1. Create a directory for the downloaded file. You must have write permission for this directory.
  2. Download the file for your platform into the download directory.
  3. Go to the download directory, and uncompress and untar the downloaded file.
    bzcat filename | tar xvf -
  4. Follow the instructions in Chapter 2 of the Sun Studio 12 Quick Installation (English, Japanese, Simplified Chinese) guide to install the C compiler, C++ compiler, and required Solaris patches.

Add the path to the Sun Studio software to your PATH before starting the NetBeans IDE.

GNU Compilers and GNU make

If you want to use the GNU compilers and GNU make:

  • If you have a standard installation of the Solaris 10 OS, the compilers and gmake are installed in /usr/sfw/bin. Make sure that this location is in your path before starting the NetBeans IDE.
  • If the compilers and gmake are not installed on your system, you can download them from http://www.sunfreeware.com.

To download and install the GNU compilers and make:

  1. Download gcc 3.4.6 and make 3.81.
  2. If the download zip files are not automatically gunzipped during download, unzip them with gunzip.
  3. Install the packages with the pkgadd command, as in the example below.
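
For instance, assuming package file names like those used by sunfreeware downloads (the exact names depend on the release you fetch, so these are illustrative), the sequence would look like:

    gunzip gcc-3.4.6-sol10-x86-local.gz
    gunzip make-3.81-sol10-x86-local.gz
    pkgadd -d gcc-3.4.6-sol10-x86-local
    pkgadd -d make-3.81-sol10-x86-local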

Make sure to include the GNU compiler directory and the GNU make directory in your path before starting the NetBeans IDE.

gdb Debugger

Whether you use the Sun Studio compilers and Solaris make or the GNU compilers and GNU make, you must have the gdb debugger to debug applications in NetBeans C/C++ Development Pack. You can download gdb 6.2.1 from http://www.sun.com/software/solaris/freeware/s10pkgs_download.xml.

To download and install gdb:

  1. Under “Select a Download” at the bottom of the page, select the Solaris 10 Companion Software download for your Solaris platform.
  2. On the Download page, accept the License Agreement and select the gdb - GNU source level debugger package.
  3. Become root (superuser).
  4. Unzip the file with bunzip2 and install gdb with pkgadd:
    bunzip2 SFWgdb.bz2
    pkgadd -d SFWgdb

Make sure to include the path to gdb in your path before starting the NetBeans IDE.

Linux

NetBeans C/C++ pack has been tested with the following compilers and tools:

  • Sun Studio 12 C compiler, Red Hat Fedora Core 3 gcc, and Ubuntu 6.10 and 7.04 gcc
  • Sun Studio 12 C++ compiler, Red Hat Fedora Core 3 g++, and Ubuntu 6.10 and 7.04 g++
  • Red Hat Fedora Core 3 gdb and Ubuntu 6.10 and 7.04 gdb
  • Red Hat Fedora Core make and Ubuntu 6.10 and 7.04 make

 

To download and install the Sun Studio 12 compilers:

  1. Create a directory for the downloaded file. You must have write permission for this directory.
  2. Download the file for your platform into the download directory.
  3. Go to the download directory, and uncompress and untar the downloaded file.
    bzcat filename | tar xvf -
  4. Follow the instructions in Chapter 2 of the Sun Studio 12 Quick Installation (English, Japanese, Simplified Chinese) guide to install the C compiler, C++ compiler, and required Solaris patches.

Add the path to the Sun Studio software to your PATH before starting the NetBeans IDE.

Macintosh OS X

NetBeans C/C++ pack has been tested with the following compilers and tools:

  • gcc 4.0.1 compilers
  • gdb (GNU debugger) 6.1

Install the following packages that are provided with your Macintosh OS X:

  • Xcode
  • X11

Verifying the Installation

To verify that the installation is correct, start the NetBeans IDE, build a sample project, and run it in the gdb debugger.

Windows

To start the IDE on Microsoft Windows machines, do one of the following:

  • Double-click the NetBeans IDE icon on your desktop.
  • From the Start menu, select Programs > NetBeans 6.0 > NetBeans IDE.

To build a sample project and run it in the debugger:

  1. Open the New Project wizard by choosing File > New Project.
  2. In the Categories panel on the Choose Project page of the wizard, expand the Samples category and the C/C++ Development subcategory, and select the C/C++ subcategory.
  3. In the Projects panel, select the Welcome project. Click Next.
  4. On the Project Name and Location page, click Finish.
  5. In the Projects tab of the IDE, right-click the Welcome_1 project and choose Build Project. If your compilers and make utility are installed correctly and the path to them is set, build output is displayed in the Output window and the project builds successfully.
  6. Double-click the welcome.cc file to open it in the Source Editor.
  7. Right-click in the left margin of the Source Editor window and choose Show Line Numbers.
  8. Set a breakpoint by clicking in the left margin of the Source Editor window next to line 33.
  9. Right-click the project and choose Debug Project. If the gdb debugger is installed correctly and the path to it is set, gdb starts up, the Debugger tabs are displayed, and the Welcome application runs and stops at the breakpoint.
  10. Choose Run > Continue to run the application to completion.

Solaris OS

To start the NetBeans IDE on Solaris systems:

  1. Navigate to the bin subdirectory of your installation.
  2. Execute the launcher script by typing ./netbeans.

To build a sample project and run it in the debugger:

  1. Open the New Project wizard by choosing File > New Project.
  2. In the Categories panel on the Choose Project page of the wizard, expand the Samples category and the C/C++ Development subcategory, and select the C/C++ subcategory.
  3. In the Projects panel, select the Welcome project. Click Next.
  4. On the Project Name and Location page, click Finish.
  5. In the Projects tab of the IDE, right-click the Welcome_1 project and choose Properties.
  6. In the Project Properties dialog box, set the Compiler Collection property to the compiler collection you want to validate and click OK.
  7. In the Projects tab, right-click the project and choose Build Project. If your compilers and make utility are installed correctly and the path to them is set, build output is displayed in the Output window and the project builds successfully.
  8. Double-click the welcome.cc file to open it in the Source Editor.
  9. Right-click in the left margin of the Source Editor window and choose Show Line Numbers.
  10. Set a breakpoint by clicking in the left margin of the Source Editor window next to line 33.
  11. Right-click the project and choose Debug Project. If the gdb debugger is installed correctly and the path to it is set, gdb starts up, the Debugger tabs are displayed, and the Welcome application runs and stops at the breakpoint.
  12. Choose Run > Continue to run the application to completion.

Linux

To start the NetBeans IDE on Linux systems:

  1. Navigate to the bin subdirectory of your installation.
  2. Execute the launcher script by typing ./netbeans.

To build a sample project and run it in the debugger:

  1. Open the New Project wizard by choosing File > New Project.
  2. In the Categories panel on the Choose Project page of the wizard, expand the Samples category and the C/C++ Development subcategory, and select the C/C++ subcategory.
  3. In the Projects panel, select the Welcome project. Click Next.
  4. On the Project Name and Location page, click Finish.
  5. In the Projects tab of the IDE, right-click the Welcome_1 project and choose Properties.
  6. In the Project Properties dialog box, set the Compiler Collection property to the compiler collection you want to validate and click OK.
  7. In the Projects tab, right-click the project and choose Build Project. If your compilers and make utility are installed correctly and the path to them is set, build output is displayed in the Output window and the project builds successfully.
  8. Double-click the welcome.cc file to open it in the Source Editor.
  9. Right-click in the left margin of the Source Editor window and choose Show Line Numbers.
  10. Set a breakpoint by clicking in the left margin of the Source Editor window next to line 33.
  11. Right-click the project and choose Debug Project. If the gdb debugger is installed correctly and the path to it is set, gdb starts up, the Debugger tabs are displayed, and the Welcome application runs and stops at the breakpoint.
  12. Choose Run > Continue to run the application to completion.

Mac OS X

To start the IDE on Macintosh machines, double-click the NetBeans icon on your desktop.

To build a sample project and run it in the debugger:

  1. Open the New Project wizard by choosing File > New Project.
  2. In the Categories panel on the Choose Project page of the wizard, expand the Samples category and the C/C++ Development subcategory, and select the C/C++ subcategory.
  3. In the Projects panel, select the Welcome project. Click Next.
  4. On the Project Name and Location page, click Finish.
  5. In the Projects tab of the IDE, right-click the Welcome_1 project and choose Build Project. If your compilers and make utility are installed correctly and the path to them is set, build output is displayed in the Output window and the project builds successfully.
  6. Double-click the welcome.cc file to open it in the Source Editor.
  7. Right-click the project and choose Debug Project. If the gdb debugger is installed correctly and the path to it is set, gdb starts up and the Debugger tabs are displayed.

 

 





Printed Circuit Board (PCB / Motherboard)

29 11 2010

A printed circuit board, or PCB, is used to mechanically support and electrically connect electronic components using conductive pathways, tracks, or signal traces etched from copper sheets laminated onto a non-conductive substrate. It is also referred to as a printed wiring board (PWB) or etched wiring board. A PCB populated with electronic components is a printed circuit assembly (PCA), also known as a printed circuit board assembly (PCBA).

PCBs are inexpensive and can be highly reliable. They require more layout effort and a higher initial cost than either wire-wrap or point-to-point construction, but they are much cheaper and faster for high-volume production. Many of the electronics industry's PCB design, assembly, and quality-control requirements are set by standards published by the IPC organization.

Above all, there are at least seven things to pay attention to on a motherboard. The seven components are:

  1. Chipset
  2. CPU type
  3. Memory slots and memory type
  4. Cache memory
  5. BIOS system
  6. Expansion slots
  7. I/O ports

 

It is from these that problems in a PC system can actually be traced or detected. Failures outside these seven components are rare. Alternatively, if all seven appear to be fine, it is reasonable to suspect that the problem lies in the architecture of the motherboard itself, whether in its circuitry or in the components it uses.

Chipset: the commander of data and processing

It is called a chipset because it usually comes as a pair of chips that control the processor and all the hardware features on the motherboard. This pair of chips, one called the North Bridge chip and the other the South Bridge chip, can fairly be called the supreme commander of the system known as the motherboard. Today there are many motherboards with different chipsets. The chipset used on a motherboard determines several things, among them:

  • The processor types that can be used
  • The memory types the PC system can support, and their maximum capacity
  • The I/O facilities that can be provided
  • The display adapter types that can be used
  • The data width the motherboard can support
  • The availability of extra features (for example onboard LAN, sound card, or modem).

CPU Type

There are three CPU families in wide circulation: CPUs from Intel Corporation, AMD CPUs from Advanced Micro Devices, and the Cyrix or VIA C3 from VIA Technologies Corporation. Processors from VIA generally follow the technology platform set by Intel, meaning that each processor series VIA releases is generally compatible with the corresponding Intel series. AMD, by contrast, uses a technology platform different from Intel's, even though its process technology also follows Intel's lead. Because of this platform difference, AMD processors use different sockets or slots from Intel's: where Intel had Slot 1, AMD had Slot A. For socketed processors, AMD has lately been relatively consistent about its socket type, sticking with the 462-pin Socket A, which is compatible across all its speed grades. Compare that with Intel, which kept changing, from a 370-pin socket to 423 pins and then again to 478 pins; as a result, upgrading to a new-generation Intel processor usually also means replacing the motherboard. Below is a short summary of the history of Intel processors and their clones.

  • Intel's debut began with the MCS-4 series, the forerunner of the i4040 processor. This 4-bit processor was planned to be the brains of a calculator; in the same year (1971), Intel revised it into the i4040. Originally commissioned by a Japanese company for calculator production, the processor turned out to be far more capable than expected, so Intel bought back the usage rights from the Japanese company for further development and research. This was the starting point of the move toward computer processors.
  • Next came the first 8-bit processor, the i8008 (1972), which was somewhat unpopular because it needed multiple supply voltages. Then came the i8080, which brought changes: triple-voltage operation, NMOS technology (no longer PMOS), and the first use of a clock-generator scheme (via an extra chip), packaged in a 40-pin DIP. Other processors followed: the MC6800 from Motorola (1974), the Z80 from Zilog (1976) (two major rivals), and the 6500-series processors from MOS Technology, Rockwell, Hyundai, WDC, NCR, and so on. The Z80 was fully compatible with the i8080 only down to the machine-language level; the assembly languages differed (no software-level compatibility). The i8080 had 8-bit internal registers, an 8-bit external bus, and 16-bit memory addressing (64 KB of memory in total).
  • In 1977 came the 8085, with an on-chip clock generator; it pioneered single +5V supply operation (used up through the 486DX2; from the DX4 onward, +3.3V and lower).
  • The i8086: a processor with 16-bit registers, a 16-bit external data bus, and 20-bit memory addressing (1 MB of addressable memory), running in REAL mode. It was released in 1978 using HMOS technology; 16-bit bus support components were very scarce at the time, which made the chip very expensive.
  • To answer market demand, the i8088 appeared: a 16-bit internal bus with an 8-bit external bus, so the i8088 could use the 8-bit peripheral components left over from the i8080 era. IBM chose this chip for the IBM PC because it was cheaper than the i8086. Had IBM's CEO at the time not declared the PC a mere side project, IBM would surely dominate the PC market totally today. The IBM PC, first released in August 1981, came in three versions: the IBM PC, the IBM PC-Jr, and the IBM PC-XT (eXtended Technology). The i8088 was so popular that NEC launched chips built to its pin specification, named the V20 and V30. The NEC V20 and V30 were compatible with Intel down to the assembly-language (software) level.

The 8088 and 8086 were largely compatible with programs written for the 8080, although some programs written for the 8086 did not work on the 8088 (a consequence of the bus-width difference).

  • Then came the i80186 and i80188. From the i80186 on, processors began to be packaged in PLCC, LCC, and 68-pin PGA packages: the i80186 was physically square, with 17 pins per side (PLCC/LCC) or two rows of pins per side (PGA), and starting with the i80186 the DMA and interrupt controllers were integrated into the processor. From the 286 on, IBM computers used the designation IBM PC-AT (Advanced Technology), the term Personal System (PS/1) came into use, and the 16-bit ISA slot, developed from the 8-bit ISA slot, became common. Cloners began to appear in force: AMD, Harris, and MOS, all fully Intel-compatible. The 286 also introduced Protected Virtual Address Mode, which made time-shared multitasking possible (via hardware resetting).

In 1986 IBM built the first 32-bit RISC processor for the PC class. Because of the scarcity of software, however, the IBM RT PC flopped. In the enterprise class RISC developed much faster; there were many vendors, mutually incompatible.

  • Then, to recover the momentum lost since the i8086, Intel built the i80286: a processor with 16-bit registers, a 16-bit external bus, and a limited protected mode known as STANDARD mode, using 24-bit memory addressing capable of reaching at most 16 MB of memory. The 80286 was of course fully compatible with the earlier 808x-series chips, with several new instructions added. Unfortunately the chip had several bugs in its hardware design, so it failed to gather a following.
  • In 1985, Intel launched an entirely new processor design: the i80386. A 32-bit processor, in the sense of having 32-bit registers and a 32-bit external data bus, it kept compatibility with the previous processor generations while introducing 32-bit PROTECTED mode for 32-bit memory addressing, able to reach a maximum of 4 GB, along with several new instructions. This chip began to be packaged as a PGA (Pin Grid Array).

Intel processors up to this point had no internal FPU; for FPU support, Intel offered the 80x87 series. From the 386 onward the processor cloners appeared: AMD, Cyrix, NexGen, TI, IIT, IBM (Blue Lightning), and so on. The variants included:

  • i80386 DX (full 32-bit)
  • i80386 SX (cheaper, 16-bit external bus)
  • i80486 DX (integrated 487 FPU)
  • i80486 SX (487 disabled)
  • Cx486 DLC (uses 386DX motherboards, like the others below)
  • Cx486 SLC (uses 386SX motherboards)
  • i80486DX2
  • i80486DX2 ODP
  • Cx486DLC2 (386 motherboard architecture)
  • Cx486SLC2 (386 motherboard architecture)
  • i80486DX4
  • i80486DX4 ODP
  • i80486SX2
  • Pentium
  • Pentium ODP

  • Around 1989 Intel launched the i80486DX, an understandably popular series. Its improvements over the 80386 were speed, internal FPU support, and a clock-multiplier scheme (the i486DX2 and iDX4 series), with no new instructions. Responding to public demand for a cheap processor, Intel then launched the i80486SX, which was nothing more than an i80486DX with its FPU circuitry disabled. As expected, the i80486DX was fully compatible with the instruction sets of the earlier chips.
  • AMD and Cyrix then bought the i80386 and i80486DX processor designs to make Intel-compatible processors, and they proved very successful. In my opinion this is what 'cloning' really means, just like the NEC V20 and V30 story: AMD and Cyrix did not do vertical design (deriving from one of their own earlier chips), but instead worked from an existing design to build a chip of the same class.
  • In 1993, Intel launched the Pentium processor. Its improvements over the i80486: a larger PGA package, higher clock speeds, and pipelining, with NO new instructions. There was nothing special about the chip beyond the fact that the VLB standard built for the i80486 no longer fit (a mechanical mismatch rather than an incompatibility), so chipset makers were forced to redesign their products to support PCI. Intel used the Pentium name to hold back its rivals; from the Pentium on, the cloners began to drop away, leaving only AMD and Cyrix. Intel chose the name Pentium because it had lost a patent case in court: numbers cannot be patented, so Intel released the Pentium name as a trademark. AMD and Cyrix, not wanting to be left behind, introduced the Pentium Rating (PR) standard. Earlier, in 1992, Intel had briefly collaborated with Sun, but the effort failed and Intel was sued by Sun for allegedly copying Sun's designs. Starting with the Pentium, Intel applied the pipelining normally found only in RISC processors (RISC such as SunSPARC). The 32-bit VESA Local Bus was a development of the 16-bit ISA architecture using a fixed clock, since it had its own clock generator (usually above 33 MHz), while PCI was a new architecture whose clock follows the processor clock (usually at half the processor speed); so a PCI VGA card's effective speed differs across processors with different clock rates: the faster the processor, the faster its PCI bus.
  • 1995 brought the Pentium Pro. The innovation of packaging cache memory together with the processor demanded the new Socket 8. The chip's pins were divided into two groups: one group for the cache memory and one for the processor itself, the latter little more than the Pentium's pins rearranged. The design allowed higher efficiency when handling 32-bit instructions, but if a 16-bit instruction appeared within a 32-bit instruction cycle the processor flushed its cache, slowing execution. Only one instruction was added: CMOV (Conditional MOVe).
  • 1996: the Pentium MMX. In truth it was no more than a Pentium with an extra unit and an extra instruction set, namely MMX. To this day Intel has never given a clear definition of the term MMX; 'MultiMedia eXtension' is the expansion AMD used. The chip had a design limitation: because the MMX module was simply bolted onto the Pentium design without a rework, Intel had to make the MMX unit and the FPU share resources, meaning that while the FPU was active MMX was inactive, and vice versa. As a result, a Pentium MMX running in MMX mode was not compatible with a plain Pentium.

What about the AMD K5? The AMD K5-PR75 was actually an i80486DX 'clone' with a 133 MHz internal speed and a 33 MHz bus clock. The Pentium specifications that AMD obtained when designing the later K5 versions, and that Cyrix obtained when designing the 6x86, were limited to the Pentium's pin specification; they were given no access to the original design. Even IBM could not make Intel budge (Cyrix had a binding contract with IBM until 2005). As for the AMD K6 design: did you know that the K6 was actually a NexGen design? When Intel announced it was building an MMX unit, AMD went looking for an MMX design and added it to the K6. Unfortunately the MMX specification AMD obtained was apparently not the one Intel used, since the K6 turned out to have many MMX instruction incompatibilities with the Pentium MMX.

  • In 1997, Intel launched the Pentium II, a Pentium Pro with MMX technology and two innovations. First, the cache memory was no longer combined with the processor core as in the Pentium Pro, but placed outside the core on the processor module, still running at a speed tied to the processor; this innovation removed the Pentium Pro's weakness (the cache-flushing problem). The second innovation was the SEC (Single Edge Cartridge) package; a Pentium Pro could even be mounted in the SEC slot with a special adapter. A further note: because the Pentium Pro's L2 cache was on-package, its cache speed equaled the processor speed, whereas the PII's cache was 'outside' (on the processor module), so it ran at half the processor speed. Several reasons are given for the use of Slot 1 on the PII:

First, it widened the data path (many pins, which was also the reason for Socket 8), and processing on the PPro and PII could run in parallel; that is why Slot 1 was really at its strongest in multithreading / multiprocessor setups. (Unfortunately operating systems offered little support at the time; even ZDBench ran its dual-processor PII benchmarks under Win95 rather than NT.) Second, it allowed Slot 1 upgrades without eating up motherboard space; without it, a ZIF Socket 9 could have been as large as the board's own form factor. This space-saving idea had been around since the 8088. Why did the SIMM specification appear back with the 286? Partly for space efficiency and a simpler physical form.

Third, it allowed a more efficient cache module running at a high speed, balanced against the processor speed, again without taking up much room, unlike AMD / Cyrix, who were 'forced' to double their L1 caches to rival the PII's speed (because their L2 was slow). The conclusion: the AMD K6 and Cyrix 6x86 were fast not in the processor but in cache hits! Under the Socket 7 specification, L2 cache speed was limited to the data-bus speed at best, and slower whenever the data bus was busy, whereas the PII was designed to run at 100 MHz (no longer 66 MHz). This point is one of the reasons Intel moved its chipsets from the 430 to the 440 series, which in turn also meant replacing the motherboard.





Roswell Incident

29 11 2010

The Roswell UFO Incident was the alleged recovery of extra-terrestrial debris, including alien corpses, from an object that crashed near Roswell, New Mexico, in June or July 1947. Since the late 1970s the incident has been the subject of intense controversy and of conspiracy theories about the true nature of the object that crashed. The United States military maintains that what was actually recovered was debris from an experimental high-altitude surveillance balloon belonging to a classified program named “Mogul”; many UFO proponents, however, maintain that a crashed alien craft and bodies were recovered, and that the military then engaged in a cover-up. The incident has turned into a widely known pop culture phenomenon, making the name Roswell synonymous with UFOs. It ranks as one of the most publicized and controversial alleged UFO incidents.

On July 8, 1947, the Roswell Army Air Field (RAAF) public information office in Roswell, New Mexico, issued a press release stating that personnel from the field's 509th Bomb Group had recovered a crashed “flying disc” from a ranch near Roswell, sparking intense media interest. The following day, the press reported that the Commanding General of the Eighth Air Force stated that, in fact, a radar-tracking balloon had been recovered by the RAAF personnel, not a “flying disc.” A subsequent press conference was called, featuring debris from the crashed object, which seemed to confirm the weather balloon description.

This case was quickly forgotten and almost completely ignored, even by UFO researchers, for more than 30 years. Then, in 1978, physicist and ufologist Stanton T. Friedman interviewed Major Jesse Marcel who was involved with the original recovery of the debris in 1947. Marcel expressed his belief that the military had covered up the recovery of an alien spacecraft. His story spread through UFO circles, being featured in some UFO documentaries at the time. In February 1980, The National Enquirer ran its own interview with Marcel, garnering national and worldwide attention for the Roswell incident.

Additional witnesses added significant new details, including claims of a huge military operation dedicated to recovering alien craft and aliens themselves, at as many as 11 crash sites, and alleged witness intimidation. In 1989, former mortician Glenn Dennis put forth a detailed personal account, wherein he claimed that alien autopsies were carried out at the Roswell base.

In response to these reports, and after congressional inquiries, the General Accounting Office launched an inquiry and directed the Office of the Secretary of the Air Force to conduct an internal investigation. The result was summarized in two reports. The first, released in 1995, concluded that the material recovered in 1947 was likely debris from a secret government program called Project Mogul, which involved high-altitude balloons meant to detect sound waves generated by Soviet atomic bomb tests and ballistic missiles. The second report, released in 1997, concluded that reports of recovered alien bodies were likely a combination of innocently transformed memories of military accidents involving injured or killed personnel, innocently transformed memories of the recovery of anthropomorphic dummies in military programs like Project High Dive conducted in the 1950s, and hoaxes perpetrated by various witnesses and UFO proponents. The psychological effects of time compression and confusion about when events occurred explained the discrepancy with the years in question. These reports were dismissed by UFO proponents as being either disinformation or simply implausible, though some prominent UFO researchers themselves discount the probability that the incident had anything to do with aliens.

Contemporary accounts of materials found

On June 14, 1947, William Ware “Mack” or “Mac” Brazel noticed some strange clusters of debris while working on the Foster homestead, where he was foreman, some 30 miles (50 km) north of Roswell. This date (or “about three weeks” before July 8) appeared in later stories featuring Brazel, but the initial press release from the Roswell Army Air Field said the find was “sometime last week,” suggesting Brazel found the debris in early July. Brazel told the Roswell Daily Record that he and his son saw a “large area of bright wreckage made up of rubber strips, tinfoil, a rather tough paper and sticks.” He paid little attention to it but returned on July 4 with his son, wife and daughter to gather up the material. Some accounts have described Brazel as having gathered some of the material earlier, rolling it together and stashing it under some brush. The next day, Brazel heard reports about “flying discs” and wondered if that was what he had picked up. On July 7, Brazel saw Sheriff Wilcox and “whispered kinda confidential like” that he may have found a flying disc. Another account quotes Wilcox as saying that Brazel reported the object on July 6.

Sheriff Wilcox called Roswell Army Air Field. Major Jesse Marcel and a “man in plainclothes” accompanied Brazel back to the ranch where more pieces were picked up. “[We] spent a couple of hours Monday afternoon [July 7] looking for any more parts of the weather device”, said Marcel. “We found a few more patches of tinfoil and rubber.”

As described in the July 9, 1947, edition of the Roswell Daily Record,

“The balloon which held it up, if that was how it worked, must have been 12 feet long, [Brazel] felt, measuring the distance by the size of the room in which he sat. The rubber was smoky gray in color and scattered over an area about 200 yards in diameter. When the debris was gathered up, the tinfoil, paper, tape, and sticks made a bundle about three feet long and 7 or 8 inches thick, while the rubber made a bundle about 18 or 20 inches long and about 8 inches thick. In all, he estimated, the entire lot would have weighed maybe five pounds. There was no sign of any metal in the area which might have been used for an engine, and no sign of any propellers of any kind, although at least one paper fin had been glued onto some of the tinfoil. There were no words to be found anywhere on the instrument, although there were letters on some of the parts. Considerable Scotch tape and some tape with flowers printed upon it had been used in the construction. No strings or wires were to be found but there were some eyelets in the paper to indicate that some sort of attachment may have been used.”

A telex sent to an FBI office from their office in Dallas, Texas, quoted a major from the Eighth Air Force on July 8:

“THE DISC IS HEXAGONAL IN SHAPE AND WAS SUSPENDED FROM A BALLON [sic] BY CABLE, WHICH BALLON [sic] WAS APPROXIMATELY TWENTY FEET IN DIAMETER. MAJOR CURTAN FURTHER ADVISED THAT THE OBJECT FOUND RESEMBLES A HIGH ALTITUDE WEATHER BALLOON WITH A RADAR REFLECTOR, BUT THAT TELEPHONIC CONVERSATION BETWEEN THEIR OFFICE AND WRIGHT FIELD HAD NOT [unintelligible] BORNE OUT THIS BELIEF.”

Early on Tuesday, July 8, the Roswell Army Air Field issued a press release which was immediately picked up by numerous news outlets:

“The many rumors regarding the flying disc became a reality yesterday when the intelligence office of the 509th Bomb group of the Eighth Air Force, Roswell Army Air Field, was fortunate enough to gain possession of a disc through the cooperation of one of the local ranchers and the sheriff’s office of Chaves County. The flying object landed on a ranch near Roswell sometime last week. Not having phone facilities, the rancher stored the disc until such time as he was able to contact the sheriff’s office, who in turn notified Maj. Jesse A. Marcel of the 509th Bomb Group Intelligence Office. Action was immediately taken and the disc was picked up at the rancher’s home. It was inspected at the Roswell Army Air Field and subsequently loaned by Major Marcel to higher headquarters.”

Colonel William H. Blanchard, commanding officer of the 509th, contacted General Roger M. Ramey of the Eighth Air Force in Fort Worth, Texas, and Ramey ordered the object be flown to Fort Worth Army Air Field. At the base, Warrant Officer Irving Newton confirmed Ramey’s preliminary opinion, identifying the object as a weather balloon and its “kite,” a nickname for a radar reflector used to track the balloons from the ground. Another news release was issued, this time from the Fort Worth base, describing the object as a “weather balloon.”

In Fort Worth, several news photographs were taken that day of debris said to be from the object.

 

Witness accounts emerge

In 1978, nuclear physicist and author Stanton T. Friedman interviewed Jesse Marcel, the only person known to have accompanied the Roswell debris from where it was recovered to Fort Worth. Over the next few years, the accounts he and others gave elevated Roswell from a forgotten incident to perhaps the most famous UFO case of all time.

By the early 1990s, UFO researchers such as Friedman, William Moore, Karl T. Pflock, and the team of Kevin D. Randle and Donald R. Schmitt had interviewed several hundred people who had, or claimed to have had, a connection with the events at Roswell in 1947. Additionally, hundreds of documents were obtained via Freedom of Information Act requests, as were some apparently leaked by insiders, such as the disputed “Majestic 12” documents.

Their conclusions were that at least one alien craft had crashed in the Roswell vicinity, that aliens, some possibly still alive, were recovered, and that a massive cover-up of any knowledge of the incident was put in place.

Numerous books, articles, television specials and even a made-for-TV movie brought the 1947 incident fame and notoriety so that by the mid-1990s, strong majorities in polls, such as a 1997 CNN/Time poll, believed that aliens had visited earth and specifically that aliens had landed at Roswell and the government was covering up the fact.

A new narrative emerged which was at strong odds with what was reported in 1947. This narrative evolved over the years from the time the first book on Roswell was published in 1980 as many new witnesses and accounts emerged, drawn out in part by publicity on the incident. Though skeptics had many objections to the plausibility of these accounts, it was not until 1994 and the publication of the first Air Force report on the incident that a strong counter-argument to the presence of aliens was widely publicized.

Numerous scenarios emerged from these authors as to what they felt were the true sequence of events, depending on which witness accounts were embraced or dismissed, and what the documentary evidence suggested. This was especially true in regards to the various claimed crash and recovery sites of alien craft, as various authors had different witnesses and different locations for these events.

However, the following general outline from UFO Crash at Roswell (1991) by Kevin D. Randle and Donald R. Schmitt is common to most of these accounts:

A UFO crashed northwest of Roswell, New Mexico, in the summer of 1947. The military acted quickly and efficiently to recover the debris after its existence was reported by a ranch hand. The debris, unlike anything these highly trained men had ever seen, was flown without delay to at least three government installations. A cover story was concocted to explain away the debris and the flurry of activity. It was explained that a weather balloon, one with a new radiosonde target device, had been found and temporarily confused the personnel of the 509th Bomb Group. Government officials took reporters’ notes from their desks and warned a radio reporter not to play a recorded interview with the ranch hand. The men who took part in the recovery were told never to talk about the incident. And with a whimper, not a bang, the Roswell event faded quickly from public view and press scrutiny.

What follows are accounts of the sequence of events according to some of the major books published on the subject.

The Roswell Incident (1980)

The first book on the subject, The Roswell Incident by Charles Berlitz and William L. Moore, was published in 1980. The authors at the time said they had interviewed more than ninety witnesses. Though uncredited, Stanton Friedman did substantial research for the book. The book featured accounts of debris described by Jesse Marcel as “nothing made on this earth.” Additional accounts suggested that the material Marcel recovered had super-strength and other attributes not associated with anything known of terrestrial origin, and certainly not anything associated with a “weather balloon”. The book also introduced the contention that debris recovered by Marcel at the Foster ranch (visible in photographs showing Marcel posing with the debris) was substituted for debris from a weather device (visible in pictures with Gen. Ramey, Marcel and others) as part of a cover-up. The actual debris recovered from the ranch – which, the authors claimed, was from a crashed UFO – was not permitted a close inspection by the press. Also described were efforts by the military to discredit and “counteract the growing hysteria towards flying saucers”. Additionally, various accounts of witness intimidation were included, in particular reports of the incarceration of Mac Brazel, who reported the debris in the first place.

A report of Roswell residents Dan Wilmot and his wife seeing an object “like two inverted saucers faced mouth to mouth” passing overhead on the evening of July 2 was included, as were other reports of mysterious objects seen flying overhead. The book also introduced an alien account attributed to Barney Barnett, who had died years earlier. Friends said he had on numerous occasions described the crash of a flying saucer and the recovery of alien corpses in the Socorro area, about 150 miles (240 km) west of the Foster ranch. He and a group of archaeologists who happened to be in the vicinity had stumbled upon an alien craft and its occupants on the morning of July 3, only to be led away by military personnel. Further accounts suggested that these aliens and their craft were shipped to Edwards Air Force Base (known then as Muroc Army Air Field) in California. The book suggested that either there were two craft which crashed, or debris from the vehicle Barnett had described had landed on the Foster ranch after an explosion.

Marcel said he “heard about it on July 7” when the sheriff whom Brazel had contacted called him, but also said that “[on] Sunday, July 6, Brazel decided he had better go into town and report this to someone,” who in turn called Marcel, suggesting, though not stating, that he was contacted on July 6. In 1947, Marcel was quoted as saying he visited the ranch on Monday, July 7.

Marcel described returning to Roswell the evening of July 7 to find that news of the discovery of a flying disc had leaked out. Calls were made to his house, including a visit from a reporter, but he could not confirm this. “The next morning, that written press release went out, and after that things really hit the fan.”

The book suggested that the military orchestrated Brazel’s testimony to make it appear a mundane object had landed on the ranch, though the book did not explicitly say that the military instructed Brazel to give a mid-June date for his discovery. “Brazel… [went] to great pains to tell the newspaper people exactly what the Air Force had instructed him to say regarding how he had come to discover the wreckage and what it looked like …”

UFO Crash at Roswell (1991)

In 1991, with the benefit of a decade of publicity on the incident and numerous new witness interviews, Kevin D. Randle and Donald R. Schmitt published UFO Crash at Roswell.

Timelines were slightly altered. The date that Brazel reported the debris and Marcel went to the ranch was said to be Sunday, July 6, not the next day as some of the original accounts suggested, and The Roswell Incident had left unclear. Additionally, Marcel and an unidentified counter-intelligence agent spent the night at the ranch, something not mentioned previously. They gathered material on Monday, then Marcel dropped by his house on the way to the Roswell base in the early hours of Tuesday, July 8.

Significant new details emerged, including accounts of a “gouge… that extended four or five hundred feet” at the ranch and descriptions of an elaborate cordon and recovery operation. (Several witnesses in The Roswell Incident described being turned back from the Foster ranch by armed military police, but more extensive descriptions were lacking.)

The Barnett accounts were mentioned, though the dates and locations were changed from the accounts found in The Roswell Incident. In this new account, Brazel is described as leading the Army to a second crash site on the ranch, where the Army was “horrified to find civilians [including Barnett] there already.”

New witness accounts added substantially to the reports of aliens and their recovery. Glenn Dennis had emerged as an important witness after calling the hotline when an episode of “Unsolved Mysteries” featured the Roswell incident in 1989. His descriptions of Roswell alien autopsies were the first to place alien corpses at the Roswell Army Air Base.

No mention, except in passing, was made of the claim found in The Roswell Incident that the Roswell aliens and their craft were shipped to Edwards Air Force Base. The book established a chain of events with alien corpses seen at a crash site, their bodies shipped to the Roswell base as witnessed by Dennis, and then flown to Fort Worth and finally to Wright Field in Dayton, Ohio, the last known location of the bodies (accounts assembled in part from the testimony of Frank Kaufmann and Capt. O. W. Henderson).

The book also introduced an account from General Arthur E. Exon, an officer stationed at the alleged final resting place of the recovered material. He stated there was a shadowy group which he called the Unholy Thirteen who controlled and had access to whatever was recovered. He later stated:

“In the ’55 time period (when Exon was at the Pentagon), there was also the story that whatever happened, whatever was found at Roswell was still closely held and probably would be held until these fellows I mentioned had died so they wouldn’t be embarrassed or they wouldn’t have to explain why they covered it up. … until the original thirteen died off and I don’t think anyone is going to release anything [until] the last one’s gone.”

Crash at Corona (1992)

In 1992, Crash at Corona, written by Stanton Friedman and Don Berliner, suggested a high-level cover-up of a UFO recovery, based on documents they obtained such as the Majestic 12 archive. These documents were anonymously dropped off at a UFO researcher’s house in 1984 and purported to be 1952 briefing papers for incoming President Dwight Eisenhower, describing a high-level government agency whose purpose was to investigate aliens recovered at Roswell and to keep such information hidden from public view. Friedman had done much of the research for The Roswell Incident with William Moore, and Crash at Corona built on that research. The title names Corona instead of Roswell because Corona is geographically closer to the Foster ranch crash site.

The timeline is largely the same as before, with Marcel and counter-intelligence agent Cavitt visiting the ranch on Sunday, July 6. But the book says that Brazel was “taken into custody for about a week” and escorted into the offices of the Roswell Daily Record on July 10, where he gave an account he was told to give by the government.

A sign of the disputes between various researchers is on display as Friedman and Berliner move the Barnett account back to near Socorro and introduce a new eyewitness account of the site from Gerald Anderson, who provided vivid descriptions of both a downed alien craft and four aliens, of which at least one was alive. The authors note that much of their evidence had been dismissed by UFO Crash at Roswell “without a solid basis” and that “a personality conflict between Anderson and Randle” meant that Friedman was the author who investigated his claim. The book, however, largely embraces the sequence of events from UFO Crash at Roswell, where aliens are seen at the Roswell Army Air Field, based on the Dennis account, and then shipped off to Fort Worth and then Wright Field.

The book suggests as many as eight alien corpses were recovered from two crash sites: three dead and perhaps one alive from the Foster ranch, and three dead and one living from the Socorro site.

The Truth about the UFO Crash at Roswell (1994)

In 1994, Randle and Schmitt published a second book, The Truth about the UFO Crash at Roswell. While restating much of the case laid out in their earlier book, it included new and expanded accounts of aliens and detailed a new location for the recovery of aliens. Additionally, an almost completely new scenario for the sequence of events was laid out.

For the first time, the object was said to have crashed on the evening of Friday, July 4, instead of Wednesday, July 2, the date in all the previous books. Another important difference was the assertion that the alien recovery was well under way before Brazel went into Roswell with his news about debris on the Foster ranch. Indeed, several objects had been tracked by radar for a few days in the vicinity before one crashed. In all previous accounts, the military was made aware of the alleged alien crash only when Brazel came forward. Additionally, Brazel was said to have given his press conference on July 9, and both it and the initial news release announcing the discovery of a “flying disc” were part of an elaborate ruse to shift attention away from the “true” crash site.

The book featured a new witness account describing an alien craft and aliens from Jim Ragsdale, at a new location just north of Roswell, instead of closer to Corona on the Foster ranch. Corroboration was given by accounts from a group of archaeologists. Five alien corpses were seen. While the Foster ranch was a source of debris as well, no bodies were recovered there.

Expanded accounts came from Dennis and Kaufmann, and a new account from Ruben Anaya described New Mexico Lieutenant Governor Joseph Montoya’s claim that he saw alien corpses at the Roswell base.

More disagreement between Roswell researchers is on display in the book. A full chapter is devoted to dismissing the Barnett and Anderson accounts from Socorro, a central part of Crash at Corona and The Roswell Incident. “…Barnett’s story, and in fact, the Plains [of San Augustin, near Socorro] scenario, must be discarded”, say the authors. An appendix is devoted to describing the Majestic 12 documents, another central part of Crash at Corona, as a hoax.

The two Randle and Schmitt books remain highly influential in the UFO community, their interviews and conclusions widely reproduced on websites. Randle and Schmitt claimed to have “conducted more than two thousand interviews with more than five hundred people” during their Roswell investigations.

UFO community schism

By the publication of The Truth About the UFO Crash at Roswell in 1994, a serious split had emerged within the UFO community as to the true sequence of the events and the locations of the alleged alien crash sites. The Center for UFO Studies (CUFOS) and the Mutual UFO Network (MUFON), two leading UFO societies, were at odds over the various scenarios presented by Randle/Schmitt and Friedman/Berliner, so much so that several conferences were held to try to resolve the differences. One of the issues under discussion was where, precisely, Barnett was when he saw the alien craft he was said to have encountered. A 1992 conference tried to achieve a consensus among the various scenarios as portrayed in Crash at Corona and UFO Crash at Roswell, but the publication of The Truth About the UFO Crash at Roswell in 1994 “resolved” the Barnett problem by simply ignoring him and citing a new location for the alien craft recovery, including a new group of archaeologists not connected to the ones the Barnett story cited.

This fundamental disagreement over the location of the alleged crash sites still exists within the UFO community today.

“Alien autopsy” footage

Film footage claimed to have been taken by a U.S. military official shortly after the Roswell incident, and purportedly showing an alien autopsy, was produced in 1995 by Ray Santilli, a London-based video entrepreneur. The footage caused an international sensation when it aired on television networks around the world. In 2006, Santilli admitted that the film was mostly a reconstruction but continued to claim that it was based on genuine footage now lost, and that some frames from the original remained. The view of many, however, is that the film was a hoax in its entirety. The story was retold in the comedy film Alien Autopsy, released in 2006.

Air Force and skeptics respond to alien reports

Air Force reports on the Roswell UFO incident

In the mid-1990s, the United States Air Force issued two reports which, they said, accounted for the debris found and reported on in 1947, and which also accounted for the later reports of alien recoveries. The reports identified the debris as coming from a top secret government experiment called Project Mogul, which tested the feasibility of detecting Soviet nuclear tests and ballistic missiles with equipment on high-altitude balloons. Accounts of aliens were explained as resulting from misidentified military experiments which used anthropomorphic dummies, accidents involving injured or killed military personnel, and hoaxes perpetrated by various witnesses and UFO proponents.

The Air Force report formed a basis for a skeptical response to the claims many authors were making about the recovery of aliens, though skeptical researchers such as Philip J. Klass and Robert Todd had already been publishing articles for several years raising doubts about alien accounts before the Air Force issued its conclusions.

While books published into the 1990s suggested there was much more to the Roswell incident than the mere recovery of a weather balloon, skeptics and even some social anthropologists instead saw the increasingly elaborate accounts as evidence of a myth being constructed. After the release of the Air Force reports in the mid-1990s, several books, such as Kal K. Korff’s The Roswell UFO Crash: What They Don’t Want You To Know, published in 1997, built on the evidence presented in the reports to conclude “there is no credible evidence that the remains of an extraterrestrial spacecraft was involved.”

Critics identified several reasons for their contention that the Roswell incident had nothing to do with aliens:

Problems with witness accounts

Hundreds of witnesses were interviewed by the various researchers, a seemingly impressive figure, but critics point out that comparatively few were true “witnesses” who claimed to have actually seen debris or aliens. Most “witnesses” were in fact repeating the claims of others, and their testimony would be inadmissible hearsay in an American court, says Korff. Of the 90 witnesses claimed to have been interviewed for The Roswell Incident, says Korff, the testimony of only 25 appears in the book, and only seven of those actually saw the debris. Of these, five handled the debris.

Karl T. Pflock, in his 2001 book Roswell: Inconvenient Facts and the Will to Believe, makes a similar point about Randle and Schmitt’s UFO Crash at Roswell. Some 271 people are listed in the book who were “contacted and interviewed” for the book, and this number does not include those who chose to remain anonymous, etc., meaning more than 300 “witnesses” were interviewed, a figure Pflock said the authors frequently cited. Of these 300-plus individuals, said Pflock, only 41 can be “considered genuine first- or second-hand witnesses to the events in and around Roswell or at the Fort Worth Army Air Field,” and only 23 can be “reasonably thought to have seen physical evidence, debris recovered from the Foster Ranch.” Of these, said Pflock, only seven have asserted anything suggestive of otherworldly origins for the debris.

As for the several accounts from those who claimed to have seen aliens, critics identified problems with these accounts ranging from the reliability of second-hand accounts (Pappy Henderson, General Exon, etc.), to serious credibility problems with witnesses making demonstrably false claims or multiple, contradictory accounts (Gerald Anderson, Glenn Dennis, Frank Kaufmann, Jim Ragsdale), to dubious death-bed “confessions” or accounts from elderly and easily confused witnesses (Maj. Edwin Easley, Lewis Rickett).

Pflock, writing in 2001, noted that only four people with firsthand knowledge of alien bodies were interviewed and identified by Roswell authors: Frank Kaufmann; Jim Ragsdale; Lt. Col. Albert Lovejoy Duran; Gerald Anderson. Duran is mentioned in a brief footnote in The Truth About the UFO Crash at Roswell and never again, while the other three all have serious credibility problems, said Pflock.

A basic problem with all the witness accounts, charge critics, is that they all came a minimum of 31 years after the events in question, and in many cases were recounted more than 40 years after the fact. Not only are memories this old of dubious reliability, say the critics, they were also subject to contamination from other accounts they may have heard.

Finally, the shifting claims of Jesse Marcel, whose suspicions that what he recovered in 1947 was “not of this world” sparked interest in the incident in the first place, cast serious doubt on the reliability of what he claimed, critics charge.

In The Roswell Incident, Marcel stated: “Actually, this material may have looked like tinfoil and balsa wood, but the resemblance ended there.” And, “They took one picture of me on the floor holding up some of the less-interesting metallic debris…The stuff in that one photo was pieces of the actual stuff we found. It was not a staged photo.” Timothy Printy points out that the material Marcel positively identified as being part of what he recovered is material which skeptics and UFO advocates agree is debris from a balloon device.

After that fact was pointed out to him, Marcel changed his story to say that the material in that photo was not what he recovered. Skeptics like Robert G. Todd argue that Marcel had a history of embellishment and exaggeration, such as claiming to have been a pilot and to have received five Air Medals for shooting down enemy planes, claims which were found to be false, and that his evolving Roswell story was another instance of this.

Contradictory conclusions, questionable research, Roswell as a myth

Critics point out that the large variety of claimed crash flights suggest events spanning many years have been incorporated into a single event and that many authors uncritically embrace anything that suggests aliens, even when accounts contradict each other. Said Karl Pflock, a one-time advocate of an alien incident at Roswell: “[T]he case for Roswell is a classic example of the triumph of quantity over quality. The advocates of the crashed-saucer tale… simply shovel everything that seems to support their view into the box marked ‘Evidence’ and say, ‘See? Look at all this stuff. We must be right.’ [emphasis in original] Never mind the contradictions. Never mind the lack of independent supporting fact. Never mind the blatant absurdities.”

Kal Korff suggests there are clear incentives for some to promote the idea of aliens at Roswell, while many researchers are not doing competent work: “[The] UFO field is comprised of people who are willing to take advantage of the gullibility of others, especially the paying public. Let’s not pull any punches here: The Roswell UFO myth has been very good business for UFO groups, publishers, for Hollywood, the town of Roswell, the media, and UFOlogy … [The] number of researchers who employ science and its disciplined methodology is appallingly small.”

Gildenberg and others said that, when added up, there were as many as 11 reported alien recovery sites and these recoveries bore only a marginal resemblance to the event as initially reported in 1947 or recounted later by the initial witnesses. Some of these new accounts could have been confused accounts of the several known recoveries of injured and dead from four military plane crashes which occurred in the vicinity from 1948–50. Others could have been recoveries of test dummies, as suggested by the Air Force in their reports.

Charles Ziegler argued that the Roswell story has all the hallmarks of a traditional folk narrative. He identified six distinct narratives, starting with The Roswell Incident (1980), and a process of transmission via storytellers, with a core story created from various witness accounts and shaped and molded by those who carry on the group’s (the UFO community’s) tradition. Others were sought out to expand the core narrative, with those whose accounts did not fit the core beliefs repudiated or omitted by the “gatekeepers.” Others retold the narratives in new forms, and the process would repeat.

Recent developments

Pro-UFO advocates dismiss Roswell incident

One of the immediate outcomes of the Air Force reports on the Roswell UFO incident was the decision by some prominent UFO researchers to view the Roswell incident as not involving any alien craft.

While the initial Air Force report was a chief reason for this, another was the release of secret documents from 1948 which showed that top Air Force officials did not know what the UFO objects being reported in the media were, and that they suspected the objects might be Soviet spy vehicles.

In January 1997, Karl T. Pflock, one of the more prominent pro-UFO researchers, said “Based on my research and that of others, I’m as certain as it’s possible to be without absolute proof that no flying saucer or saucers crashed in the general vicinity of Roswell or on the Plains of San Agustin in 1947. The debris found by Mac Brazel…was the remains of something very earthly, all but certainly something from the Top Secret Project Mogul….The formerly highly classified record of correspondence and discussions among top Air Force officials who were responsible for cracking the flying saucer mystery from the mid-1940s through the early 1950s makes it crystal clear that they didn’t have any crashed saucer wreckage or bodies of saucer crews, but they were desperate to have such evidence …”

Kent Jeffrey, who organized petitions to ask President Bill Clinton to issue an Executive Order to declassify any government information on the Roswell incident, similarly concluded that no such aliens were likely involved.

William L. Moore, one of the earliest proponents of the Roswell incident, said this in 1997: “After deep and careful consideration of recent developments concerning Roswell…I am no longer of the opinion that the extraterrestrial explanation is the best explanation for this event.” Moore was co-author of the first book on Roswell, The Roswell Incident.

Shoddy research revealed; witnesses suspected of hoaxes

Around the same time, a serious rift between two prominent Roswell authors emerged. Kevin D. Randle and Donald R. Schmitt had co-authored several books on the subject and were generally acknowledged, along with Stanton Friedman, as the leading researchers into the Roswell incident. The Air Force reports on the incident suggested that basic research claimed to have been carried out was not carried out, a fact verified in a 1995 Omni magazine article. Additionally, Schmitt claimed he had a bachelor’s degree, a master’s degree and was in the midst of pursuing a doctorate in criminology. He also claimed to be a medical illustrator. When checked, it was revealed he was in fact a letter carrier in Hartford, Wisconsin, and had no known academic credentials. At the same time, Randle publicly distanced himself from Schmitt and his research. Referring to Schmitt’s investigation of witness Dennis’s accounts of a missing nurse at the Roswell base, he said: “The search for the nurses proves that he [Schmitt] will lie about anything. He will lie to anyone … He has revealed himself as a pathological liar … I will have nothing more to do with him.”

Additionally, several prominent witnesses were shown to be perpetrating hoaxes, or suspected of doing so. Frank Kaufmann, a major source of alien reports in the 1994 Randle and Schmitt book The Truth About the UFO Crash at Roswell and a witness whose testimony, it was charged, was “ignored” by the Air Force when compiling their reports, was shown, after his 2001 death, to have been forging documents and inflating his role at Roswell. Randle and Mark Rodeghier repudiated Kaufmann’s credibility in two 2002 articles.

Glenn Dennis, who testified that Roswell alien autopsies were carried out at the Roswell base and that he and others were the subjects of threats, was deemed one of the “least credible” Roswell witnesses by Randle in 1998. In Randle and Schmitt’s 1991 book “UFO Crash at Roswell,” Dennis’s story was featured prominently. Randle said Dennis was not credible “for changing the name of the nurse once we had proved she didn’t exist.” Dennis’s accounts were also doubted by researcher Pflock.

Photo analysis; documentaries; new claims

UFO researcher David Rudiak, and others before him, claimed that a telegram which appears in one of the 1947 photos of balloon debris in Ramey’s office contains text confirming that bodies and a “disc” were recovered. Rudiak and some other examiners claim that when enlarged, the text on the paper General Ramey is holding in his hand includes the key phrases “the victims of the wreck” and “in/on the ‘disc’” plus other phrases seemingly in the context of a crashed vehicle recovery. However, pro-UFO interpretations of this document are disputed by independent photo analyses, such as one facilitated by researcher James Houran, Ph.D., which suggest the letters and words are indistinct. Other objections question the plausibility of a general allowing himself to be photographed holding such a document, raise issues with the format of the memo, and ponder the logic of Ramey having in his possession a document he, as Rudiak argued, had sent, which says “…the wreck you forwarded…” yet is supposedly addressed to the Headquarters of the Army Air Force in Washington, not the Roswell Army Air Field.

In 2002, the Sci-Fi Channel sponsored an excavation at the Brazel site in the hopes of uncovering any missed debris that the military failed to collect. Although these results have so far turned out to be negative, the University of New Mexico archaeological team did verify recent soil disruption at the exact location where some witnesses said they saw a long, linear impact groove. Gov. Bill Richardson of New Mexico, who headed the United States Department of Energy under President Clinton, apparently found the results provocative. In 2004, he wrote in a foreword to The Roswell Dig Diaries that “the mystery surrounding this crash has never been adequately explained—not by independent investigators, and not by the U.S. government.”

On October 26, 2007, Richardson (at the time a candidate for the Democratic Party nomination for U.S. President) elaborated when asked about releasing government files on Roswell. Richardson responded that when he was a Congressman, he attempted to get information on behalf of his New Mexico constituents, but was told by both the Department of Defense and Los Alamos Labs that the information was classified. “That ticked me off,” he said. “The government doesn’t tell the truth as much as it should on a lot of issues.” He promised to work on opening the files if elected President.

In October 2002, before airing its Roswell documentary, the Sci-Fi Channel also hosted a Washington UFO news conference. John Podesta, President Clinton’s former chief of staff, appeared as a member of the public relations firm hired by Sci-Fi to help get the government to open up documents on the subject. Podesta stated, “It is time for the government to declassify records that are more than 25 years old and to provide scientists with data that will assist in determining the true nature of the phenomena.”

In February 2005, the ABC TV network aired a UFO special hosted by news anchor Peter Jennings. Jennings lambasted the Roswell case as a “myth … without a shred of evidence.” ABC endorsed the Air Force’s explanation that the incident resulted solely from the crash of a Project Mogul balloon.

Top Secret/Majic (2005 edition)

Stanton T. Friedman continues to defend his view that the Majestic 12 documents which describe a secret government agency hiding information on recovered aliens are authentic. In an afterword dated April 2005 to a new edition of his book Top Secret/Majic (first published in 1996), he responds to more recent questions on their validity and concludes “I am still convinced Roswell really happened, [and] that the Eisenhower Briefing Document [i.e., Majestic 12] … [and others] are the most important classified documents ever leaked to the public.”

While the bulk of the book discusses the documents in detail, mention is made of the Barnett alien site near Socorro, and as recently as 2003 Friedman self-published articles defending his view that aliens were recovered there and at a second site at the Foster ranch, seemingly the same sites detailed in Crash at Corona.

Witness to Roswell (2007)

In June 2007, Donald Schmitt and his investigation partner Tom Carey published their first book together, Witness to Roswell. In this book, they claim a “continuously growing roster of more than 600 people directly or indirectly associated with the events at Roswell who support the first account – that initial claim of the flying saucer recovery.” New accounts of aliens or alien recoveries were described, including from Walter Haut who wrote the initial press release in 1947.

A new date was suggested for a crash of a mysterious object—the evening of Thursday, July 3, 1947. Also, unlike previous accounts, Brazel took debris to Corona, where he showed fragments to local residents in the local bar, hardware store and elsewhere, and to Capitan to the south, while portions of the object ended up at a 4 July rodeo. Numerous people are described as visiting the debris field and taking souvenirs before Brazel finally went to Roswell to report the find on July 6. Once the military was alerted to the debris, extensive efforts were undertaken to retrieve those souvenirs: “Ranch houses were and [sic] ransacked. The wooden floors of livestock sheds were pried loose plank by plank and underground cold storage fruit cellars were emptied of all their contents.”

The subsequent events are related as per the sequence in previous books, except for a second recovery site of an alien body at the Foster ranch. This recovery near the debris field is the same site mentioned in 1991’s UFO Crash at Roswell. The authors suggest that Brazel discovered the second site some days after finding the debris field, and this prompted him to travel to Roswell and report his find to the authorities.

Neither Barnett nor the archaeologists are present at this body site. While noting the earlier “major problems” with Barnett’s account which caused Schmitt and previous partner Randle to omit Barnett’s claim in 1994’s The Truth about the UFO Crash at Roswell, the new book further notes another site mentioned in the 1994 publication. This site closer to Roswell “turned out to be bogus, as it was based upon the testimony of a single, alleged eyewitness [Frank Kaufmann] who himself was later discovered to have been a purveyor of false information.” Jim Ragsdale, whose alien account opened that book and who was claimed to have been present along with some archaeologists, is not mentioned in the new book.

The book includes claims that Major Marcel saw alien bodies, a claim not present in the previous books mentioned. Two witnesses are cited who say Marcel briefly mentioned seeing bodies, one a relative and another a tech sergeant who worked with Marcel’s intelligence team.

Much additional new testimony is presented to support notions that alien bodies were found at the Foster ranch and at another main crash site along with a craft, then processed at the base in a hangar and at the hospital, and finally flown out in containers, all under very tight security. The book suggests Brazel found “two or three alien bodies” about two miles east of the debris field and describes the rest of a stricken alien craft along with the remainder of the crew remaining airborne for some 30 more miles before crashing at another site about 40 miles north/northwest of Roswell (but not the same site described by Kaufmann). The authors claim to have located this final crash site in 2005 where “an additional two or three dead aliens and one live one were discovered by civilian archaeologists,” but offer no more information about the new site.

Walter Haut, as the Roswell Army Air Field public affairs officer, had drafted the initial press release that went out over the news wires on the afternoon of July 8, 1947, announcing a “flying disc”. This was the only direct involvement Haut had previously described in public statements and signed affidavits. The book presents a new affidavit that Haut signed in 2002 in which he claims much greater personal knowledge and involvement, including seeing alien corpses and craft, and involvement in a cover-up. Haut died in 2005.

Another new first-hand account from MP Elias Benjamin describes how he guarded aliens on gurneys taken to the Roswell base hospital from the same hangar. Similarly, family members of Miriam Bush, secretary to the chief medical officer at Roswell base, told of having been led into an examination room where alien corpses were laid out on gurneys. In both accounts, one of the aliens was said to be still alive. The book also recounted earlier testimony of the Anaya family about picking up New Mexico Lt. Governor Joseph Montoya at the base, and a badly shaken Montoya relating that he saw four alien bodies at the base hangar, one of them alive. Benjamin’s and Bush’s accounts, like a few lesser ones, again place aliens at the Roswell base hospital, as had the Glenn Dennis story from almost 20 years before. The book notes that Dennis had been found to have told lies, and therefore is a supplier of unreliable testimony, but had nevertheless told others of incidents at the Roswell base long before it became associated with aliens in the late 1970s.

Walter Haut controversy

The publishing of the Walter Haut affidavit in Witness to Roswell, wherein Haut described a cover-up and seeing alien corpses, ignited a controversy in UFO circles. While many embraced his accounts as confirmation of the presence of aliens from a person who was known to have been on the base in 1947, others raised questions about the credibility of the accounts.

UFO researcher Dennis G. Balthaser, who along with fellow researcher Wendy Connors interviewed Haut on camera in 2000, doubted that the same man he interviewed could have written the affidavit he signed. “[The 2000 video] shows a man that couldn’t remember where he took basic training, names, dates, etc., while the 2002 affidavit is very detailed and precise with information Haut couldn’t accurately remember 2 years after he was video taped.” Witness to Roswell co-author Don Schmitt, he notes, admitted that the affidavit was not written by Haut, but prepared for him to sign, based on statements Haut had made privately to Schmitt and co-author Tom Carey over a period of years. Further, notes Balthaser, neither Schmitt nor Carey was present when Haut signed the affidavit, and the witness’s name has not been revealed, casting doubt on the circumstances of the signing.

He had further questions about what he saw as problems with the 2002 account. If the cover-up was decided at a meeting at Roswell, he asked, “why was it necessary for Major Marcel to fly debris from Roswell to General Ramey’s office in Ft Worth, since they had all handled the debris in the meeting and apparently set up the cover-up operation?” He also wondered which Haut statements were true, a 1993 affidavit he signed, the 2000 video interview, or the 2002 affidavit.

Bill Birnes, writing for UFO Magazine, summarizes that whatever disagreements there are about the 2000 video and the 2002 affidavit, “I think Walter Haut’s 2002 affidavit really says it all and agrees, on its material facts, with Walter’s 2000 interview with Dennis Balthaser and Wendy Connors. Dennis said he agrees with me, too, on this point.”

A comparison of the affidavit and interview shows that in both Haut said he saw a craft and at least one body in a base hangar and also attended a Roswell staff meeting where General Ramey was present and where Ramey put a cover-up into place.

Birnes also says that Carey told him that while Haut may not have written the affidavit, “his statements were typed, shown to him for his review and agreement, and then affirmed by him in the presence of a witness… The fact that a notary was present and sealed the document should end any doubt as to the reality of its existence.”

Julie Shuster, Haut’s daughter and Director of the International UFO Museum in Roswell, said that Schmitt had written the affidavit based on years of conversations he and Carey had had with Haut. Writing in the September 2007 MUFON newsletter, she said she and Haut reviewed the document, that “he did not want to make any changes,” and that in the presence of two witnesses, a notary public from the museum and a visitor, both unidentified, he signed the affidavit.





Battle of Los Angeles

29 11 2010

The Battle of Los Angeles, also known as The Great Los Angeles Air Raid, is the name given by contemporary sources to the rumored enemy attack and subsequent anti-aircraft artillery barrage which took place from late February 24 to early February 25, 1942, over Los Angeles, California. The incident occurred less than three months after America’s entry into World War II as a result of the Japanese Imperial Navy’s attack on Pearl Harbor.

Initially, the target of the aerial barrage was thought to be an attacking force from Japan, but speaking at a press conference shortly afterward, Secretary of the Navy Frank Knox called the incident a “false alarm.” Newspapers of the time published a number of sensational reports and speculations of a cover-up. A small number of modern-day UFOlogists have suggested the targets were extraterrestrial spacecraft. When documenting the incident in 1983, the U.S. Office of Air Force History attributed the event to a case of “war nerves” likely triggered by a lost weather balloon and exacerbated by stray flares and shell bursts from adjoining batteries.

Alarms raised

Air raid sirens were sounded throughout Los Angeles County on the night of 24–25 February 1942. A total blackout was ordered and thousands of air raid wardens were summoned to their positions. At 3:16 a.m. the 37th Coast Artillery Brigade began firing 12.8-pound anti-aircraft shells into the air at reported aircraft; over 1,400 shells would eventually be fired. Pilots of the 4th Interceptor Command were alerted but their aircraft remained grounded. The artillery fire continued sporadically until 4:14 a.m. The “all clear” was sounded and the blackout order lifted at 7:21 a.m.

In addition to several buildings damaged by friendly fire, three civilians were killed by the anti-aircraft fire, and another three died of heart attacks attributed to the stress of the hour-long bombardment. The incident was front-page news along the U.S. Pacific coast, and earned some mass media coverage throughout the nation.

Press response

Within hours of the end of the air raid, Secretary of the Navy Frank Knox held a press conference, saying the entire incident was a false alarm due to anxiety and “war nerves”. Knox’s comments were followed by statements from the Army the next day that reflected General George C. Marshall’s belief that the incident might have been caused by commercial airplanes used as a psychological warfare campaign to generate panic.

Some contemporary press outlets suspected a cover-up. An editorial in the Long Beach Independent observed, “There is a mysterious reticence about the whole affair and it appears that some form of censorship is trying to halt discussion on the matter.” Speculation was rampant as to the identity of the invading airplanes and their bases. Theories included a secret base in northern Mexico as well as Japanese submarines stationed offshore with the capability of carrying planes. Others speculated that the incident was either staged or exaggerated to give coastal defense industries an excuse to move further inland.

Representative Leland Ford of Santa Monica called for a Congressional investigation, saying, “…none of the explanations so far offered removed the episode from the category of ‘complete mystification’ … this was either a practice raid, or a raid to throw a scare into 2,000,000 people, or a mistaken identity raid, or a raid to lay a political foundation to take away Southern California’s war industries.”

Attribution

In 1983, the Office of Air Force History concluded that an analysis of the evidence points to meteorological balloons as the cause of the initial alarm:

“The Battle of Los Angeles”
During the night of 24/25 February 1942, unidentified objects caused a succession of alerts in southern California. On the 24th, a warning issued by naval intelligence indicated that an attack could be expected within the next ten hours. That evening a large number of flares and blinking lights were reported from the vicinity of defense plants. An alert called at 1918 [7:18 p.m., Pacific time] was lifted at 2223, and the tension temporarily relaxed. But early in the morning of the 25th renewed activity began. Radars picked up an unidentified target 120 miles west of Los Angeles. Antiaircraft batteries were alerted at 0215 and were put on Green Alert—ready to fire—a few minutes later. The AAF kept its pursuit planes on the ground, preferring to await indications of the scale and direction of any attack before committing its limited fighter force. Radars tracked the approaching target to within a few miles of the coast, and at 0221 the regional controller ordered a blackout. Thereafter the information center was flooded with reports of “enemy planes,” even though the mysterious object tracked in from sea seems to have vanished. At 0243, planes were reported near Long Beach, and a few minutes later a coast artillery colonel spotted “about 25 planes at 12,000 feet” over Los Angeles. At 0306 a balloon carrying a red flare was seen over Santa Monica and four batteries of anti-aircraft artillery opened fire, whereupon “the air over Los Angeles erupted like a volcano.” From this point on reports were hopelessly at variance.
Probably much of the confusion came from the fact that anti-aircraft shell bursts, caught by the searchlights, were themselves mistaken for enemy planes. In any case, the next three hours produced some of the most imaginative reporting of the war: “swarms” of planes (or, sometimes, balloons) of all possible sizes, numbering from one to several hundred, traveling at altitudes which ranged from a few thousand feet to more than 20,000 and flying at speeds which were said to have varied from “very slow” to over 200 miles per hour, were observed to parade across the skies. These mysterious forces dropped no bombs and, despite the fact that 1,440 rounds of anti-aircraft ammunition were directed against them, suffered no losses. There were reports, to be sure, that four enemy planes had been shot down, and one was supposed to have landed in flames at a Hollywood intersection. Residents in a forty-mile arc along the coast watched from hills or rooftops as the play of guns and searchlights provided the first real drama of the war for citizens of the mainland. The dawn, which ended the shooting and the fantasy, also proved that the only damage which resulted to the city was such as had been caused by the excitement (there was at least one death from heart failure), by traffic accidents in the blacked-out streets, or by shell fragments from the artillery barrage.
Attempts to arrive at an explanation of the incident quickly became as involved and mysterious as the “battle” itself. The Navy immediately insisted that there was no evidence of the presence of enemy planes, and Secretary [of the Navy, Frank] Knox announced at a press conference on 25 February that the raid was just a false alarm. At the same conference he admitted that attacks were always possible and indicated that vital industries located along the coast ought to be moved inland. The Army had a hard time making up its mind on the cause of the alert. A report to Washington, made by the Western Defense Command shortly after the raid had ended, indicated that the credibility of reports of an attack had begun to be shaken before the blackout was lifted. This message predicted that developments would prove “that most previous reports had been greatly exaggerated.” The Fourth Air Force had indicated its belief that there were no planes over Los Angeles. But the Army did not publish these initial conclusions. Instead, it waited a day, until after a thorough examination of witnesses had been finished. On the basis of these hearings, local commanders altered their verdict and indicated a belief that from one to five unidentified airplanes had been over Los Angeles. Secretary Stimson announced this conclusion as the War Department version of the incident, and he advanced two theories to account for the mysterious craft: either they were commercial planes operated by an enemy from secret fields in California or Mexico, or they were light planes launched from Japanese submarines. In either case, the enemy’s purpose must have been to locate anti-aircraft defenses in the area or to deliver a blow at civilian morale.
The divergence of views between the War and Navy departments, and the unsatisfying conjectures advanced by the Army to explain the affair, touched off a vigorous public discussion. The Los Angeles Times, in a first-page editorial on 26 February, announced that “the considerable public excitement and confusion” caused by the alert, as well as its “spectacular official accompaniments,” demanded a careful explanation. Fears were expressed lest a few phony raids undermine the confidence of civilian volunteers in the aircraft warning service. In Congress, Representative Leland Ford wanted to know whether the incident was “a practice raid, or a raid to throw a scare into 2,000,000 people, or a mistaken identity raid, or a raid to take away Southern California’s war industries.” Wendell Willkie, speaking in Los Angeles on 26 February, assured Californians on the basis of his experiences in England that when a real air raid began “you won’t have to argue about it—you’ll just know.” He conceded that military authorities had been correct in calling a precautionary alert but deplored the lack of agreement between the Army and Navy. A strong editorial in the Washington Post on 27 February called the handling of the Los Angeles episode a “recipe for jitters,” and censured the military authorities for what it called “stubborn silence” in the face of widespread uncertainty. The editorial suggested that the Army’s theory that commercial planes might have caused the alert “explains everything except where the planes came from, whither they were going, and why no American planes were sent in pursuit of them.” The New York Times on 28 February expressed a belief that the more the incident was studied, the more incredible it became: “If the batteries were firing on nothing at all, as Secretary Knox implies, it is a sign of expensive incompetence and jitters. If the batteries were firing on real planes, some of them as low as 9,000 feet, as Secretary Stimson declares, why were they completely ineffective? Why did no American planes go up to engage them, or even to identify them?… What would have happened if this had been a real air raid?” These questions were appropriate, but for the War Department to have answered them in full frankness would have involved an even more complete revelation of the weakness of our air defenses.
At the end of the war, the Japanese stated that they did not send planes over the area at the time of this alert, although submarine-launched aircraft were subsequently used over Seattle. A careful study of the evidence suggests that meteorological balloons—known to have been released over Los Angeles—may well have caused the initial alarm. This theory is supported by the fact that anti-aircraft artillery units were officially criticized for having wasted ammunition on targets which moved too slowly to have been airplanes. After the firing started, careful observation was difficult because of drifting smoke from shell bursts. The acting commander of the anti-aircraft artillery brigade in the area testified that he had first been convinced that he had seen fifteen planes in the air, but had quickly decided that he was seeing smoke. Competent correspondents like Ernie Pyle and Bill Henry witnessed the shooting and wrote that they were never able to make out an airplane. It is hard to see, in any event, what enemy purpose would have been served by an attack in which no bombs were dropped, unless perhaps, as Mr. Stimson suggested, the purpose had been reconnaissance.

– The Army Air Forces in World War II, ed. Wesley Frank Craven and James Lea Cate, vol. 1, pp. 277–286. Washington, D.C.: Office of Air Force History, 1983