Evolving the next generation of PC/104 technology

Editor's note: We like to take an annual look at the PC/104 Consortium's roadmap, and this year brings a new specification sure to create excitement in the community. Jim Blazer and Matthias Huber represent the PC/104 Embedded Consortium Technical Committee in this discussion, introducing the thinking behind PCI/104-Express.

The PC/104 Consortium is a technical organization dedicated to the creation, maintenance, and distribution of specifications that support the stackable PC/104 architecture. By following trends in the desktop PC world and adapting them to the stackable embedded space, the Consortium's specifications leverage the large base of available devices and chipsets, enabling a vast selection of products and applications to be created quickly and efficiently.

The Consortium started with PC/104, defining a bus architecture and form factor. Over the years, the Consortium added the PC/104-Plus and PCI-104 bus architectures and the EBX and EPIC form factors to the set of standards supporting the stackable PC architecture. Some of these specifications were created by the Consortium and others adopted, all with the same purpose.

The Consortium's Technical Committee is the working group responsible for new specifications. With industry-leading members including silicon and connector vendors and SBC and I/O manufacturers, the committee consists of experts who know the embedded computing space.

Leading the evolution

Tasked with bringing PCI Express capabilities into the PC/104 architecture, the Technical Committee had to identify and resolve issues related to stacking PCI Express, including mechanical, electrical, and bus features. This past year, the Consortium's Technical Committee met twice a month to achieve this goal. The group's experience and expertise have been the key to creating the PCI/104-Express specification the Consortium is poised to release.

When the Consortium started considering the next generation for PC/104, PCI Express was the logical solution. It is the bus offered in desktop PCs, and those who are acquainted with the history of PC/104 know that the Consortium follows the desktop PC. Early PCs had the ISA bus only, so the Consortium started PC/104 with an ISA bus. After the short-lived battle between PCI and the VL bus, PCI and ISA became standard in all desktops, and thus PC/104-Plus was created. As ISA was dropped from most CPU chipsets, leaving only the PCI bus, PCI-104 was formalized in its own specification. PC/104-Plus and PCI-104 did not replace or kill PC/104, but simply provided additional capabilities.

The continued use and design-in of PC/104 and PC/104-Plus, in spite of processor manufacturers removing the ISA bus, is a testament to these form factors' concept, design, and need in the embedded marketplace. While CPU chip manufacturers keep making room for newer, advanced general PC technology, the embedded community continues to embrace the right technologies for the right application through PCI-to-ISA bridge chips and FPGA cores.

Selecting PCI Express was easy, but there are many options to choose from. Links come as x1, x2, x4, x8, and x16. Which option should be supported? A single x1 link has more bandwidth than a 32-bit, 33 MHz PCI bus. This is more than adequate for many embedded requirements, and one could argue that no one will ever need more bandwidth in an embedded system. But this brings to mind the infamous statement: "No one will ever need more than 640K of memory." The industry needs a specification that is flexible enough to cover today's requirements, as well as what may emerge tomorrow.
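The bandwidth comparison above can be checked with simple arithmetic. The sketch below (plain Python, using the nominal PCI Express Gen 1 and PCI figures) compares the usable per-direction bandwidth of a single x1 link against the peak bandwidth of a shared 32-bit, 33 MHz PCI bus:

```python
# Nominal figures: PCI Express Gen 1 signals at 2.5 GT/s per lane and
# uses 8b/10b line coding, so only 8 of every 10 bits carry payload.
PCIE_GEN1_RATE = 2.5e9         # bits transferred per second, per lane
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b coding overhead

def pcie_x1_bandwidth_bytes():
    """Usable bandwidth of one x1 link, per direction, in bytes/s."""
    return PCIE_GEN1_RATE * ENCODING_EFFICIENCY / 8

def pci_bandwidth_bytes(width_bits=32, clock_hz=33e6):
    """Peak bandwidth of a shared parallel PCI bus in bytes/s."""
    return width_bits / 8 * clock_hz

print(pcie_x1_bandwidth_bytes() / 1e6)  # 250.0 MB/s, per direction
print(pci_bandwidth_bytes() / 1e6)      # 132.0 MB/s, shared by all devices
```

Note that the PCI figure is shared among every device on the bus, while each PCI Express link gets its full bandwidth in each direction, so the practical advantage of even one x1 link is larger than the raw numbers suggest.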

With PCI Express in the connector, the question then becomes: Should the bus contain other signals, and if so, what? If 10 people are asked this question, they will likely supply 10 different answers depending on their acronym of choice. Low Pin Count (LPC), Serial Peripheral Interface (SPI), System Management Bus (SMB), USB, SATA, PATA, PCI, and ISA are a few signals that come to mind. The merit of adding one or more of these other buses must be considered and weighed against connector density, stacking signal integrity, and future chipset capabilities with increased bandwidth requirements. Additionally, since PC/104 was a pure ISA bus and PCI-104 was a pure PCI bus, keeping PCI/104-Express a pure PCI Express bus is consistent with these former specifications.

Challenges in stacking PCI Express

PCI Express is a point-to-point connection similar to RS-232, running at a 2.5 Gbps signaling rate. Stacking such an architecture poses questions as to how designers should:

  • Route signals with a 1.25 GHz bandwidth to all boards in a system
  • Add multiple devices
  • Enable an add-in board to know which PCI Express link to use
  • Allow add-in boards to stack above or below the host

Getting signals with 1.25 GHz bandwidth to a stack of six or so boards requires a special connector. It first must meet the fundamental PC/104 mechanical requirement of 0.600" ± 0.005" board-to-board spacing. Then the connector must support the bandwidth, insertion loss, and return loss required by PCI Express. Finally, it must meet all these electrical requirements when stacked six boards high.

Connector selection and specification are critical, and a modified version of the high-density Samtec Q2 connector has been selected for the job. Examples of the connector in use are shown in Figure 1, which depicts a PCI/104-Express CPU host and Ethernet device, courtesy of Digital-Logic. Figure 2 shows a PCI/104-Express data acquisition card, courtesy of RTD Embedded Technologies.

Figure 1

Figure 2

System considerations

Stacking multiple boards in a system that uses point-to-point connections requires a method for each add-in board to select a unique PCI Express link. This is done in one of two ways: with switches, or by having each board that uses a PCI Express link shift all other links. Switches allow any link on the bus to be selected, but they require system configuration and place a fast switch/multiplexer on every PCI Express link, which degrades signal integrity. With link shifting, each add-in board is designed to use the first link and to shift link 2 to link 1, link 3 to link 2, and so on. This option provides automatic PCI Express link assignment.
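The link-shifting scheme can be pictured in a few lines. The sketch below is a hypothetical Python model (not part of the specification): each add-in board consumes the first link presented at its connector and passes the remaining links on, shifted down by one, so every board in the stack receives a unique link with no configuration:

```python
def assign_links(host_links, num_boards):
    """Simulate link shifting: each board takes the first link it sees
    and forwards the rest, shifted down one position."""
    assignments = []
    available = list(host_links)
    for board in range(num_boards):
        if not available:
            raise RuntimeError(f"board {board}: no PCI Express link left")
        assignments.append(available[0])  # every board uses "link 1" locally
        available = available[1:]         # link 2 -> link 1, link 3 -> link 2, ...
    return assignments

# A host offering four x1 links; three stacked add-in boards each
# automatically receive a distinct link.
print(assign_links([1, 2, 3, 4], 3))  # [1, 2, 3]
```

The design trade-off mirrors the text: link shifting needs no switch silicon in the signal path, but the number of boards a stack can support is bounded by the number of links the host provides.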

In a stacked architecture like PC/104, there are two ways to build a system. PC/104 form factor systems typically have the CPU host at the top to allow space for a cooling device, followed by add-in boards and power supply on the bottom. EPIC and EBX systems will have the CPU host on the bottom and add-in cards stacking up. In the case of stack-down, the PCI Express links come in from the top of the card; in a stack-up system, the PCI Express links come in from the bottom of the card. This means one would have to build two different cards depending on the placement relative to the CPU. The ultimate solution is to have universal add-in cards that can be stacked either above or below the CPU and automatically know which link to use.
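One way to picture the universal-card behavior is as a connector-selection rule: the card takes its link from whichever connector faces the host. The sketch below is purely illustrative Python (the function name and detection mechanism are assumptions for illustration, not the specification's actual method):

```python
def select_connector(top_has_link, bottom_has_link):
    """Illustrative rule for a universal add-in card: use whichever
    connector the host's PCI Express link arrives on."""
    if top_has_link and bottom_has_link:
        raise ValueError("links on both connectors: ambiguous stack")
    if top_has_link:
        return "top"      # stack-down system: host sits above this card
    if bottom_has_link:
        return "bottom"   # stack-up system: host sits below this card
    raise RuntimeError("no PCI Express link detected on either connector")

print(select_connector(top_has_link=True, bottom_has_link=False))   # top
print(select_connector(top_has_link=False, bottom_has_link=True))   # bottom
```

The point of the model is simply that one card design can serve both stack-up and stack-down systems, rather than requiring two variants.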

Evolution becoming reality

PC/104 is famous for being a rugged, easy-to-use system. The next generation must retain these qualities as PCI Express is adapted to a stacking architecture. These requirements have been at the forefront of the Technical Committee's agenda for the past year.

PCI/104-Express gives PC/104 a path to the future. It has the bandwidth to support high-speed applications such as 1 and 10 GbE, high-end graphics, and custom FPGA and DSP requirements. It has the expandability to support I/O-intensive applications. Just as PC/104-Plus did not replace PC/104, PCI/104-Express will not replace either PC/104-Plus or PC/104. PCI/104-Express adds a logical extension to the PC/104 family as the high-speed bus for the next generation of embedded computing.

Jim Blazer is the vice chairman and Chief Technical Officer of RTD Embedded Technologies, Inc., in State College, Pennsylvania, where he is responsible for managing intelligent data acquisition system and embedded PC designs. He currently serves as chairman of the PC/104 Embedded Consortium's Technical Committee. Jim has a BSEE from Penn State University.

Matthias Huber is general manager of the Embedded Modules Division, leading the Kontron America team at the company's facility in Silicon Valley. He currently serves as European VP of the PC/104 Embedded Consortium. His previous experience includes serving as a member of the R&D team of Boards (now Kontron) and general manager of Jumptec Inc. (now Kontron). He has a degree in Precision and Micro-Engineering from the University of Applied Sciences in Munich, Germany.

RTD Embedded Technologies

Kontron America – Silicon Valley